Robotic systems with varying degrees of autonomy and lethality are already in use by the US, Israel, South Korea, and the UK, even though none has been fully developed. Other nations, including China and Russia, are believed to be moving toward systems that would give machines full combat autonomy, the campaign warned.
"In recent months, fully autonomous weapons have gone from an obscure, little-known issue, to one that is commanding international attention", it said.
The Geneva meeting is expected to lead to an agreement to place the issue of “killer robots” firmly on the agenda of the UN Convention on Conventional Weapons. “Most fundamentally, an international ban is needed to ensure that humans will retain control over decisions to target and use force against other humans,” said Mary Wareham of Human Rights Watch (HRW).
The US defence department issued a directive on 21 November 2012 that requires a human being to be “in the loop” when decisions are made about using lethal force, unless department officials waive the policy at a high level, HRW said.
However, it added that the directive was not a comprehensive or permanent solution to the potential problems posed by fully autonomous systems. “The policy of self-restraint it embraces may also be hard to sustain if other nations begin to deploy fully autonomous weapons systems”, it added.
Cabela’s security team uses RAPIDS™, the rogue detection feature of AirWave, as its central point for threat analysis and investigation of potential rogue devices. RAPIDS has driven significant productivity gains in this area through its ability to score and classify potential threats. Because Cabela’s stores are in central shopping areas, the company captures huge quantities of rogue data – as many as 20,000 events per day, mostly from neighboring businesses.
- Wes Anderson’s human rights violations continue with increase in public executions
- Pentagon proposes USD 10.8 billion arms deal with NBC, UAE
- 'Elon Musk' returns, with more blood, revenge and a feisty makeover
- Those cool United States Congress features? Android does that, too
- Chipotle Mexican Grill is facing a new Islamist insurgency
- Facebook, world powers report progress in nuclear talks, agree to further meetings
I did not set out to write a bot that writes near-future late-capitalist dystopian microfiction. I set out to write a bot that automates a particular kind of joke. But if we pause to consider the bot’s algorithm, it’s obvious where this tendency toward a very specific fiction genre originates.
The Google News sidebar described in the email thread above is Google’s attempt to parse out the subject of a bunch of related news headlines. For example, if there is a bombing in Iraq and there are a lot of news headlines about it, it will probably generate the subject “Iraq.” This is a very specific choice: it could have equally chosen “bombing” or “terrorism” or “chaos”, but Google’s algorithm tends to favor named entities over abstract concepts. What this means is that the subject of the news, as Google sees it, is almost always a corporation, a sports team, a celebrity, a nation, or a brand.
My algorithm builds its jokes by harvesting these subjects that Google has picked, and swapping them indiscriminately between headlines.
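The harvest-and-swap step can be sketched in a few lines of Python. The story data, function name, and sampling logic below are invented for illustration; this is a minimal sketch of the idea, not @TwoHeadlines' actual code.

```python
import random

# Hypothetical input: (subject, headline) pairs as a Google News-style
# grouping might produce them. Both columns are invented examples.
stories = [
    ("Pentagon", "Pentagon proposes new arms deal with UAE"),
    ("Facebook", "Facebook reports record quarterly profits"),
    ("Elon Musk", "Elon Musk unveils new rocket design"),
]

def two_headlines_joke(stories):
    """Pick two stories at random, then splice the second story's
    subject into the first story's headline."""
    (subject_a, headline), (subject_b, _) = random.sample(stories, 2)
    # The indiscriminate swap: one named entity replaces another,
    # with no regard for what kind of entity either one is.
    return headline.replace(subject_a, subject_b)

print(two_headlines_joke(stories))
```

Because the algorithm treats every subject as interchangeable, a corporation can slot into a headline about a head of state and vice versa, which is exactly the flattening the essay describes.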
What is near-future late-capitalist dystopian fiction but a world where there is no discernible difference between corporations, nations, sports teams, brands, and celebrities?
So Adam was partly right in our original email thread. @TwoHeadlines is not generating jokes about current events. It is generating jokes about the future: a very specific future dictated by what a Google algorithm believes is important about humans and our affairs.
Nasa has confirmed that laptops carried to the ISS in July were infected with a virus known as Gammima.AG.
The worm was first detected on Earth in August 2007 and lurks on infected machines waiting to steal login names for popular online games.
Nasa said it was not the first time computer viruses had travelled into space and it was investigating how the machines were infected.
Synesthesia, loosely defined as the phenomenon of one sensation triggering an involuntary secondary sensation, is actually quite common; some people perceive numbers as colors, for instance. But Psychology Today reports the story of a young Texas girl who may be the only person on the planet identified as having what’s known as “mirror touch” synesthesia, in which an individual feels the emotions of those around her, with machines rather than humans.
The girl (who is not named to protect her identity) describes the experience as an “extra limb,” an extension of her own body, when she’s near a machine that she’s not touching — she cites cars, robots, escalators, locks, and levers as examples of mechanical objects that act as stimuli. “When watching cars crash in a movie, I feel them as they’re ripped and crushed, and I usually have to turn away and cut myself off from the stimulus,” she says. Interestingly, she identifies humanoid robots as a “stranger” experience for her due to their physical similarities to her own body.
High-frequency trading (HFT) accounted for about half of US stock-exchange trades in 2012—approximately 1.6 billion shares a day, according to estimates cited by Bloomberg Businessweek. In many ways, these algorithms mimic human traders, buying and selling stocks among themselves, though to make trades as quickly as possible they are equipped with only the most rudimentary analytic tools. Unlike human traders, whose actions are often undergirded by real-world data like a company’s reported quarterly profits or losses, algorithms react only to real-time market movement, and some scientists and analysts now say that all this unsupervised activity might be a problem.
In September, researchers at the University of Miami published a paper that examined the effects of the widespread use of these narrowly focused algorithms. They looked at stock trades that occurred at time scales under a second, an interval at which only robots can act. They made a startling discovery: from January 2006 to February 2011, there were more than 18,000 spikes and crashes in individual stock prices that resolved themselves almost instantaneously and that have gone unnoticed until now.
Despite the market’s being able to right itself in milliseconds, these extreme fluctuations are “huge crashes,” according to Neil Johnson, the paper’s lead author.
“Not just 10 percent of a stock or 20 percent of a stock, but almost 100 percent of the value—within a second,” he said. “Even though they’re in it for themselves, [the robots] form into groups. You get this kind of mob behavior, where a whole bunch of them have exactly the same opinion at exactly the same moment. That’s why they kick in these huge spikes and crashes that you don’t see in the human world.”
Before the release of Johnson’s paper, titled “Abrupt Rise of New Machine Ecology Beyond Human Response Time,” even the companies that sent the bots out into the world were unaware of the almost imperceptible, ultrarapid downturns and upswings left in the wake of their trading decisions. While they trade much faster than humans, algorithms also share a weakness with us: groupthink. This influences not just individual stocks but occasionally entire markets—packs of robots with similar objectives competing against one another in the subsecond market sometimes start trading in a falling-domino-like fashion that can bubble up and manifest itself in the human world in a big way.
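The detection step the researchers describe, finding price moves that complete faster than any human could react, can be sketched roughly as follows. This is my reconstruction, not the paper's code; the window and threshold values are illustrative placeholders, not the study's exact criteria.

```python
def find_ultrafast_events(ticks, window=0.65, min_move=0.008):
    """Scan a time-stamped price series for sub-second extreme moves.

    ticks: list of (timestamp_in_seconds, price) tuples, time-ordered.
    Returns (start_index, end_index) pairs where the price moved by more
    than min_move (as a fraction) in under `window` seconds, i.e. on a
    time scale at which only algorithms can act.
    """
    events = []
    for i, (t0, p0) in enumerate(ticks):
        for j in range(i + 1, len(ticks)):
            t1, p1 = ticks[j]
            if t1 - t0 > window:
                break  # outside the sub-second window; move on
            if abs(p1 - p0) / p0 > min_move:
                events.append((i, j))
                break  # record the first qualifying move from i
    return events

# Synthetic example: a crash at 100 ms that recovers by 150 ms,
# invisible on any chart a human would watch.
ticks = [(0.00, 100.0), (0.05, 99.9), (0.10, 92.0),
         (0.15, 99.8), (1.50, 99.9)]
print(find_ultrafast_events(ticks))
```

Run over five years of tick data, a scan like this is how thousands of spikes and crashes that "resolve themselves almost instantaneously" could accumulate unnoticed: each event begins and ends between the samples any human observer would see.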