The phrase “killer robot” instantly conjures images from science fiction, like the ruthless T-1000 Terminator that slowly re-forms to resume its deadly mission after being frozen with liquid nitrogen and blasted to pieces. But a future involving autonomous weapons systems—capable of killing people without anyone’s direction—isn’t so far-fetched.
In fact, as U.S. soldiers sit in bunkers and conduct drone strikes thousands of miles away, the military is testing a computer-operated drone that could be rolled out in a matter of years. That is, unless people can stop it.
Enter the Campaign to Stop Killer Robots, a coalition of NGOs lobbying to preemptively ban lethal, autonomous weapons systems by international treaty. Gathered at a United Nations building in New York on Tuesday, a panel of six campaign members lamented to an equal number of journalists that development of the independent devices is outpacing the progress on reaching a diplomatic deal to ban them.
A central concern of the campaign is that the decision to kill a human may one day be delegated to a machine. As The Intercept’s recently released Drone Papers showed, 9 out of 10 people killed by human-controlled drones during one five-month period in Afghanistan were not the intended targets. The use of autonomous weapons controlled by current artificial intelligence (AI) systems would generate even worse casualty rates, says Toby Walsh, a professor of artificial intelligence at the University of New South Wales and a campaign member. The machines wouldn’t do a better job of differentiating between soldier and civilian or of calculating a more proportionate response, he says. While computer-controlled weapons are imminent, he argues that their artificially intelligent brains are “perhaps 50 or so years away” from the higher-order capabilities of a Terminator-style machine.
Today, some autonomous weapons systems are already in use, though many are aimed at incoming munitions. The Patriot antimissile system, for instance, was singled out by the U.S. during the Iraq war as a high-tech success story, though it was responsible for downing two allied warplanes. And mechanized weapons able to spot targets from miles away now line the demilitarized zone separating North and South Korea. They are technically able to fire without human help, but for now they alert an operator, who makes the final decision on whether to pull the trigger.
“States often argue that…computers are much faster than humans in some functions, so it could give them that split-second advantage,” says Christof Heyns, United Nations special rapporteur on extrajudicial, summary or arbitrary executions. “Some states also argue that such technology can lead in some cases to better targeting.” As a dominant military power, the U.S. is likely leading the research and development of autonomous weapons systems. “The Department of Defense is very focused on autonomy,” says Pentagon spokesman Adrian J.T. Rankine-Galloway, including leveraging existing capabilities in new ways and investing in new game-changing technologies to gain greater operational advantages.
But more efficient killing does not guarantee more humane wars, says Ian Kerr, a campaign member from the International Committee for Robot Arms Control. Walsh argues that because autonomous weapons lower the cost of war, both in soldiers’ lives and in dollars (the technology will only become cheaper to manufacture), wars will become more frequent. And, as Human Rights Watch’s Bonnie Docherty earlier told Newsweek, not only can a machine not be held accountable for a war crime, but under existing law the humans who manufacture, program and command these lethal robots would likely escape liability as well. Without accountability, she said, there can be no retribution for victims, no social condemnation and no deterrence of future violations. “How would gaining a humanitarian end be achieved by removing humans from the equation?” Kerr asked the audience on Tuesday. “Fragility of the human condition is what can make war compassionate.”
The campaign recommends an outright ban rather than regulation of autonomous weapon systems, because once the technology is in existence, states will be tempted to use it. And one stocked arsenal is likely the first step toward proliferation and the beginning of a never-ending arms race. Contributing to the campaign in July through an open letter, Stephen Hawking, Elon Musk, Steve Wozniak and nearly 1,000 other artificial intelligence experts added: “It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.”
Some stakeholders see research restrictions as potentially counterproductive, stifling the development of AI for beneficial applications. Self-driving cars, for instance, could save millions of lives on the road. “Thinking about all the benefits that AI can bring, wouldn’t it be a shame if humans didn’t make good use of this technology?” says Manuela Veloso, professor of computer science at Carnegie Mellon University. She suggests technologies such as drones or AI are morally neutral; rather, it is the way the technology is used that can be dangerous. And, regardless of restrictions, people will develop AI and abuse it, she says. “What is going to happen with AI is up to humanity,” says Veloso. “This should be a wake-up call for people to make good uses of AI, but I am not sure how you’d prevent the bad guys from doing whatever they want.”
Heyns believes necessary nuance is missing from the conversation on banning killer robots. “There is a continuum of autonomy,” he says. “I am not against lower levels of autonomy, it can indeed be a good thing if it helps humans to take better decisions. But technology must remain tools in the hands of humans, not the other way around, especially where life and death choices are concerned.”
After two years of informal, multilateral talks, the Campaign to Stop Killer Robots has seen little progress. A notable exception is the U.S., which in 2012 released a Department of Defense policy directive requiring “appropriate” levels of human judgment over the use of force. “This is about allowing machines to help human decision-makers make decisions at the campaign and tactical level which will be either faster or better than the adversaries,” Deputy Secretary of Defense Bob Work said in September before the Royal United Services Institute in London, clarifying the U.S.’s intentions with the technology. But a critical make-or-break moment for a more concrete ban is quickly approaching.
On November 13, at the next annual meeting of the Convention on Conventional Weapons (CCW) in Geneva, 120 signatory countries will make a consensus decision as to whether they want the talks to continue. While it’s “quite clear they will continue discussions,” says Stephen Goose, executive director of Human Rights Watch’s arms division, the activists want more. “We’re increasingly concerned that the informal U.N. talks…are aiming too low and going too slow,” said Mary Wareham, also of Human Rights Watch and a coordinator of the campaign, adding that she hopes countries commit to more formal discussions that lead to negotiations of a new international treaty.
But the CCW is notorious for inactivity, says Goose, pointing to only one success: the 1995 preemptive ban on blinding lasers. The other option would be for the group to work outside the U.N. body and search for state sponsors to introduce a treaty. But this approach may also prove problematic: While no countries have come out against a treaty, Goose says, none is championing the cause either, and pretty soon it will be too late.
“We’d cross a moral line we’ve never crossed before,” Kerr says of autonomous weapons. “It needs international attention.”