Popular Science magazine asks, "Are we giving our military machines too much power?" Even as we imagine the day when robots finally turn against us, writes Ben Austen, scientists are at work on how best to control them. For this article, Austen interviewed a number of key players in this area, including Wendell Wallach, Patrick Lin, Noel Sharkey, Ronald Arkin, P. W. Singer and many others.
We are surprisingly far along in this radical reordering of the military’s ranks, yet neither the U.S. nor any other country has fashioned anything like a robot doctrine or even a clear policy on military machines. As quickly as countries build these systems, they want to deploy them, says Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield in England: “There’s been absolutely no international discussion. It’s all going forward without anyone talking to one another.” In his recent book Wired for War: The Robotics Revolution and Conflict in the 21st Century, Brookings Institution fellow P.W. Singer argues that robots and remotely operated weapons are transforming wars and the wider world in much the way gunpowder, mechanization and the atomic bomb did in previous generations. But Singer sees significant differences as well. “We’re experiencing Moore’s Law,” he told me, citing the axiom that computer processing power will double every two years, “but we haven’t got past Murphy’s Law.” Robots will come to possess far greater intelligence, with more ability to reason and self-adapt, and they will also of course acquire ever greater destructive power. So what does it mean when whatever can go wrong with these military machines, just might?
I asked that question of Werner Dahm, the chief scientist of the Air Force and the lead author on “Technology Horizons.” He dismissed as fanciful the kind of Hollywood-bred fears that informed news stories about the Navy Fire Scout incident. “The biggest danger is not the Terminator scenario everyone imagines, the machines taking over—that’s not how things fail,” Dahm said. His real fear was that we would build powerful military systems that would “take over the large key functions that are done exclusively by humans” and then discover too late that the machines simply aren’t up to the task. “We blink,” he said, “and 10 years later we find out the technology wasn’t far enough along.”
Dahm’s vision, however, suggests another “Terminator scenario,” one more plausible and not without menace. Over the course of dozens of interviews with military officials, robot designers and technology ethicists, I came to understand that we are at work on not one but two major projects, the first to give machines ever greater intelligence and autonomy, and the second to maintain control of those machines. Dahm was worried about the success of the former, but we should be at least as concerned about the failure of the latter. If we make smart machines without equally smart control systems, we face a scenario in which some day, by way of a thousand well-intentioned decisions, each one seemingly sound, the machines do in fact take over all the “key functions” that once were our domain. Then “we blink” and find that the world is one we no longer are able to comprehend or control.
On rule-based prescriptions:
David Woods, a professor at Ohio State University who specializes in human-robot coordination and who works closely with military researchers on automated systems, says that a simple rules-based approach will never be enough to anticipate the myriad physical and ethical challenges that robots will confront on the battlefield. There needs to be instead a system whereby, when a decision becomes too complex, control is quickly sent back either to a human or to another robot on a different loop. “Robots are resources for responsible people. They extend human reach,” he says. “When things break, disturbances cascade, humans need to be able to coordinate and interact with multiple loops.” According to Woods, the laws of robotics could be distilled into a single tenet: “the smooth transfer of control.” We may relinquish control in specific instances, but we must at all times maintain systems that allow us to reclaim it. The more we let go, the more difficult that becomes.
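To make Woods's "smooth transfer of control" idea a bit more concrete, here is a minimal, purely illustrative sketch in Python of how such a handoff rule might look. Every name, class and threshold below is hypothetical, invented for the example rather than drawn from any real military or robotics system, and the real coordination problem is of course far messier than a single function.

# Illustrative sketch only: a minimal "smooth transfer of control" pattern,
# loosely inspired by Woods's description. All names and thresholds here are
# hypothetical, not part of any actual system.

from dataclasses import dataclass
from enum import Enum, auto


class Controller(Enum):
    ROBOT = auto()       # autonomous loop keeps control
    PEER_ROBOT = auto()  # hand off to another robot on a different loop
    HUMAN = auto()       # escalate to a human operator


@dataclass
class Situation:
    decision_confidence: float  # 0.0-1.0, the robot's own estimate
    ethical_flags: int          # count of rules or constraints in tension
    comms_link_ok: bool         # can a human even be reached right now?


def transfer_of_control(s: Situation) -> Controller:
    """Decide who should hold control for the next decision cycle.

    The point of the pattern is that control can always be reclaimed:
    when the situation exceeds what the autonomous loop can handle,
    authority moves outward rather than the robot pressing on alone.
    """
    if s.decision_confidence >= 0.9 and s.ethical_flags == 0:
        return Controller.ROBOT       # routine case: stay autonomous
    if s.comms_link_ok:
        return Controller.HUMAN       # complex or ethically loaded: escalate
    return Controller.PEER_ROBOT      # no human reachable: defer to another loop


# Example: a low-confidence, ethically fraught decision with a working link
print(transfer_of_control(Situation(0.4, ethical_flags=2, comms_link_ok=True)))
# -> Controller.HUMAN

The design choice worth noticing is that the default path is escalation, not autonomy: the autonomous branch is the narrow one, which is one way to read Woods's insistence that we maintain systems for reclaiming control.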
And on human enhancement:
The Air Force is also looking into how humans can become more machinelike, through the use of drugs and various devices, in order to more smoothly interact with machines. The job of a UAV manager often entails eight hours of utter tedium broken up at some unknown time by a couple of minutes of pandemonium. In flight tests, UAV operators tend to “tunnelize,” fixating on one UAV to the exclusion of others, and a NATO study showed that performance levels dropped by half when a person went from monitoring one UAV to just two. Military research into pharmaceuticals that act as calmatives or enhance alertness and acuity is well known. But in the research lab’s Human Effectiveness Directorate, I saw a prototype of a kind of crown rigged with electrode fingers that rested on the scalp and picked up electric signals generated by the brain. Plans are for operators overseeing several UAVs to wear a future version of one of these contraptions and to undergo continuous heart-rate and eye-movement monitoring. In all, these devices would determine when a person is fatigued, angry, excited or overwhelmed. If a UAV operator’s attention waned, he could be cued visually, or a magnetic stimulant could be sent to his frontal lobe. And when a person displayed the telltale signs of panic or stress, a human (or machine) supervisor could simply shift responsibilities away from that person.
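As a purely hypothetical illustration of the kind of supervisory logic such monitoring might feed, the sketch below classifies an operator's state from heart rate, gaze dwell time and an EEG-derived workload score, then decides whether to cue the operator or shift work away. The signal names and thresholds are invented for the example, not drawn from the Human Effectiveness Directorate's research.

# Illustrative sketch only: one way an operator-state monitor like the one
# described above might decide when to shed tasks. The thresholds and signal
# names are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class OperatorState:
    heart_rate_bpm: float
    gaze_dwell_s: float   # seconds fixated on a single UAV feed ("tunnelizing")
    eeg_workload: float   # 0.0-1.0 workload index from the electrode crown


def assess(state: OperatorState) -> str:
    """Classify the operator's state from the monitored signals."""
    if state.eeg_workload > 0.85 or state.heart_rate_bpm > 120:
        return "overwhelmed"
    if state.gaze_dwell_s > 30:
        return "tunnelized"
    if state.eeg_workload < 0.15:
        return "fatigued"
    return "nominal"


def supervise(state: OperatorState) -> str:
    """Supervisory response: cue the operator or shift work away entirely."""
    status = assess(state)
    if status == "overwhelmed":
        return "reassign one or more UAVs to another operator or machine"
    if status in ("tunnelized", "fatigued"):
        return "issue a visual cue to redirect attention"
    return "no action"


print(supervise(OperatorState(heart_rate_bpm=130, gaze_dwell_s=5, eeg_workload=0.9)))
# -> reassign one or more UAVs to another operator or machine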
More.
1 comment:
It so happens I've actually been surveyed by the US Department of Defense regarding the ethics of autonomous/semi-autonomous robots in combat. We're still pretty far from anything really more autonomous than remote-control, but the US military is already thinking about ethical and moral ramifications. I don't think that the process is quite as uncontrolled and unexamined as alarmists might think. I do sometimes wonder which aspects they may be forgetting, but so far I have (strangely and somewhat unsettlingly) found the military has thought more deeply about the morality of robot autonomy than most intelligent civilians with whom I've discussed it.