Sheepdogs could lose their jobs to robots after scientists learned the secret of their herding ability.
Rounding up sheep successfully is a simple process involving just two basic mathematical rules, a study found.
One causes a sheepdog to close any gaps it sees between dispersing sheep. The other results in sheep being driven forward once the gaps have sufficiently closed.
A computer simulation showed that obeying these two rules alone allowed a single shepherd – or sheepdog – to control a flock of more than 100 animals.
The discovery has implications for human crowd control as well as the development of robots that can gather and herd livestock, the scientists said. […]
To conduct the study, the researchers fitted a flock of sheep and a sheepdog with backpacks containing highly accurate GPS trackers.
Movement-tracking data from the devices was fed into computer simulations to develop the mathematical shepherding model.
Writing in the Journal of the Royal Society Interface, the researchers concluded: “Our approach should support efficient designs for herding autonomous, interacting agents in a variety of contexts.
“Obvious cases are robot-assisted herding of livestock, and keeping animals away from sensitive areas, but applications range from control of flocking robots, cleaning up of environments and human crowd control.”
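The two rules described above can be turned into a toy simulation. The sketch below is not the researchers' published model: the parameter values, the flee behaviour, and the cohesion term are simplified assumptions chosen for illustration. It shows only the core switching logic, where the dog collects when a sheep strays and drives when the flock is compact.

```python
import math

def centroid(pts):
    """Mean position of a list of (x, y) points."""
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def step_toward(p, target, speed):
    """Move p by `speed` toward target (away from it if speed is negative)."""
    d = dist(p, target)
    if d < 1e-9:
        return p
    return (p[0] + (target[0] - p[0]) / d * speed,
            p[1] + (target[1] - p[1]) / d * speed)

def herd_step(sheep, dog, goal, spread=4.0, flee_zone=6.0,
              dog_speed=1.5, sheep_speed=1.0):
    """One tick of a two-rule shepherd; all parameter values are guesses."""
    c = centroid(sheep)
    stray = max(sheep, key=lambda s: dist(s, c))
    if dist(stray, c) > spread:
        # Rule 1 (collect): aim behind the furthest stray, relative to the
        # flock centre, so the dog pushes it back and closes the gap.
        dog_target = step_toward(stray, c, -spread)
    else:
        # Rule 2 (drive): once the gaps have closed, aim behind the flock
        # centre, relative to the goal, driving the whole group forward.
        dog_target = step_toward(c, goal, -spread)
    new_dog = step_toward(dog, dog_target, dog_speed)
    new_sheep = []
    for s in sheep:
        if dist(s, new_dog) < flee_zone:
            s = step_toward(s, new_dog, -sheep_speed)   # flee the dog
        s = step_toward(s, c, 0.1 * sheep_speed)        # mild flock cohesion
        new_sheep.append(s)
    return new_sheep, new_dog

# Demo: a small flock north-east of the origin, driven toward (0, 0).
sheep = [(18.0, 19.0), (21.0, 22.0), (19.0, 23.0), (22.0, 18.0), (20.0, 20.0)]
dog, goal = (30.0, 30.0), (0.0, 0.0)
for _ in range(200):
    sheep, dog = herd_step(sheep, dog, goal)
# Distance from the flock centre to the goal after 200 ticks.
print(round(dist(centroid(sheep), goal), 1))
```

The key design point is that the dog never targets the sheep themselves, only a position behind a stray (to collect) or behind the flock (to drive); the sheep's own flight response does the rest.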
When I close my laptop, it goes to sleep. It’s a curiously domestic metaphor but it also implies that sleep in humans and other animals is just a kind of low-power standby mode. (Do computers dream of electric sleep?) Last year, Apple announced a twist on this idea: a new feature for the Mac operating system called “Power Nap”. Using Power Nap, your computer can do important things even while asleep, receiving updates and performing backups.
The name Power Nap comes from the term describing the thrusting executive’s purported ability to catch a restorative forty winks in 20 minutes but the functioning of Apple’s feature symbolically implies a yet more ultra-modern and frankly inhuman aspiration: to be “productive” even while dozing. It is the uncanny technological embodiment of the dream most blatantly sold to us by those work-from-home scams online, which promise that you can “make money even while you sleep”.
Sleep, indeed, is a standing affront to capitalism. That is the argument of Jonathan Crary’s provocative and fascinating essay, which takes “24/7” as a spectral umbrella term for round-the-clock consumption and production in today’s world. The human power nap is a macho response to what Crary notes is the alarming shrinkage of sleep in modernity. “The average North American adult now sleeps approximately six and a half hours a night,” he observes, which is “an erosion from eight hours a generation ago” and “ten hours in the early 20th century”.
Back in 1996, Stanley Coren’s book Sleep Thieves blamed insufficient rest for industrial disasters such as the Chernobyl meltdown. Crary is worried about the encroachment on sleep because it represents one of the last remaining zones of dissidence, of anti-productivity and even of solidarity. Isn’t it quite disgusting that, as he notices, public benches are now deliberately engineered to prevent human beings from sleeping on them?
While Apple-branded machines that take working Power Naps are figured as a more efficient species of people, people themselves are increasingly represented as apparatuses to be acted on by machines. Take the popular internet parlance of getting “eyeballs”, which means reaching an audience. “The term ‘eyeballs’ for the site of control,” Crary writes, “repositions human vision as a motor activity that can be subjected to external direction or stimuli … The eye is dislodged from the realm of optics and made into an intermediary element of a circuit whose end result is always a motor response of the body to electronic solicitation.”
You can’t get more “eyeballs” if the people to whose brains the eyeballs are physically connected are asleep. Hence the interest – currently military; before long surely commercial, too – in removing our need for sleep with drugs or other modifications. Then we would be more like efficient machines, able to “interact” with (or labour among) electronic media all day and all night. (It is strange, once you think about it, that the phrase “He’s a machine” is now supposed to be a compliment in the sporting arena and the workplace.)
We live in the exoskeleton of the Internet.
Many of us cannot help looking because of what Susan Sontag has called “the perennial seductiveness of war.” It is a kind of rubbernecking, staring at the bloody aftermath of something that is not an act of God but of man. The effect, as Ms. Sontag pointed out in an essay in The New Yorker in 2002, is anything but certain.
“Making suffering loom larger, by globalizing it, may spur people to feel they ought to ‘care’ more,” she wrote. “It also invites them to feel that the sufferings and misfortunes are too vast, too irrevocable, too epic to be much changed by any local, political intervention.”
So now that war comes to us in real time, do we feel helpless or empowered? Do we care more, or will the ubiquity of images and information desensitize us to the point where human suffering loses meaning when it is part of a scroll that includes a video of your niece twerking? Oh, we say as our index finger navigates to the next item, another one of those.
As war becomes a more remote, mechanized activity, posts and images from the target area have significant value. When a trigger gets pulled or bombs explode, real people are often on the wrong end of it. And bearing witness to the consequences gives meaning to what we see.
So, what’s the trade-off here? In general, we are safer (automation makes airline flying safer, for example) except in the long tail: pilots are losing both tacit knowledge of flying and some of its mechanics. More broadly, we, as humans, have less and less understanding of our machines. We are compartmentalized, looking at a tiny corner of a very complex system beyond our individual comprehension. Increasing numbers of our systems, from finance to electricity to cybersecurity to medicine, are heading in this direction. We are losing control and understanding, which seems fine until it’s not. We will certainly, and unfortunately, find out what this really means, because sooner or later one of these systems will fail in a way we don’t understand.
We’ve seen some less radical attempts to destroy technology in the real world in recent months, mainly in the form of attacks on people wearing Glass or flying drones, or on the drone itself (by hockey fans who reportedly and incorrectly thought it belonged to the LAPD). As in the movie, the destroyers haven’t been identified or punished, with one exception: Andrea Mears, 23, was charged with third-degree assault for attacking a teen boy, Austin Haughwout, 17, who was flying a drone on a Connecticut beach. She got probation this week, as noted by comprehensive drone chronicler Greg McNeal. It’s easy to call these people Luddites, after the British workers who set about destroying machines — and in some cases killing the people who owned them — in the early 1800s in a futile attempt to turn back the tide of mechanization. It led Britain to pass a law making machine-wrecking punishable by death. But the new machine destroyers’ motivations are different. The original Luddites were worried machines would take their jobs; the Neo-Luddites fear machines will steal their privacy.