On killer robots and human control

By Branka Marijan | Published in The Ploughshares Monitor, Volume 37, Issue 2, Summer 2016

Debating the sophistication of artificial intelligence in lethal autonomous weapons systems

Lethal autonomous weapons systems (LAWS) are more commonly known as killer robots.

LAWS would select, target, and kill without any human input. While these weapons systems do not yet exist, leading scientists agree that they soon will.

Some feel that debating their development now is too abstract. Still, the 2016 World Economic Forum (WEF) Annual Meeting, held January 20-23 in Davos, Switzerland, devoted one session to what would happen if robots went to war. The session posed two particularly interesting questions to its Davos audience, each capturing a different role for artificial intelligence (AI) in warfare.

Robot friends and enemies

The first: If your country was suddenly at war, would you rather be defended by the sons and daughters of your community, or an autonomous AI weapons system?

About 12 per cent of the audience in the room said that they would prefer human soldiers, while 88 per cent preferred AI. When the question was opened up to a wider audience, 55 per cent preferred AI systems. While the swing away from AI systems is noteworthy, the continuing majority preference indicates confidence in AI warfare systems.

The second question: If your country was suddenly at war, would you rather be invaded by the sons and daughters of your enemy, or an autonomous AI system?

While a technical glitch (!) meant that statistics for this question were not available during the recorded session, panel participants suggested that the in-room response was almost the inverse of the first: most people preferred to face human enemies.

The preference for using AI systems as friends is perhaps not surprising. Not wanting to put “the sons and daughters of your community” at risk is understandable. If machines can do the dangerous work, let them. Robots don’t get tired and do what they’re programmed to do.

But one of the experts at the WEF session, engineer and roboethicist Alan Winfield, called such confidence “misplaced.” In his view, the response “betrays extraordinary confidence in the sophistication of artificial intelligence. The state of the art, now and in the near future, isn’t that high” (World Economic Forum 2016). Winfield pointed out that robots do not perform well in complex environments and “make mistakes.”

Others argue that giving machines the ability to select, target, and kill humans crosses a moral and ethical line. Do we really want to let machines decide which humans die?

Soon we might not have much of a choice. Leading scientists warn that, without a ban on fully autonomous weapons, a global robotic arms race will develop, marking the “third revolution in warfare, after gunpowder and nuclear arms” (Future of Life Institute 2015). The systems that are only a few years away won’t feature Terminator-like figures; they will be more autonomous versions of existing technologies.

Meaningful human control

Unmanned systems, such as drones, already have a significant impact on warfare. The U.S. military, for example, can carry out targeted attacks in Yemen from a base in Nevada. These drones still involve human operators.

Recently, however, the U.S. military revealed that it is testing an unmanned “surface vessel—a self-driving, 132-foot ship designed to travel thousands of miles out at sea without a single crew member on board” (Watson 2016). Developed by U.S. defence contractor Leidos and the Defense Advanced Research Projects Agency (DARPA), the research arm of the U.S. Department of Defense, the vessel is still operated by humans in the testing phase. However, the goal is to send the vessel out on its own. Unlike a drone, it will not be remotely controlled; it will receive its mission commands and then be driven by its software. At present, the vessel is unarmed. Will it be armed in the future? We don’t know.

The United Nations has been focusing on questions relating to human control of weapons. Meaningful human control of autonomous weapons systems was the focus of the third informal meeting of experts held at the UN Convention on Certain Conventional Weapons (CCW) from April 11-15 in Geneva, Switzerland. In attendance were 94 states, including 82 high-contracting parties, one CCW signatory, and 11 non-signatories (Campaign to Stop Killer Robots 2016). Also attending were international experts and nongovernmental organizations, including Human Rights Watch, the International Committee of the Red Cross, Mines Action Canada, and Project Ploughshares.

In the end, states recommended that an open-ended Group of Governmental Experts (GGE) continue to explore this issue. This recommendation will be decided on at the CCW’s Fifth Review Conference this December.

Civil society organizations that participate in the global Campaign to Stop Killer Robots, including Project Ploughshares, have called on the CCW to support a preemptive ban on fully autonomous lethal weapons systems. Fourteen states are behind this call. Other countries, including Canada, suggest that it is too early for a ban that they feel might limit technological developments. Most are undecided, though no country has expressed support for fully autonomous weapons systems.

Still, a consensus on the need for human control appears to be emerging, with many countries stating that the decision to kill humans should not be given to machines. Experts, governments, and civil society view meaningful human control as “a threshold of human control that is considered necessary” (Roff & Moyes 2016, p. 1). But the term still needs to be clarified.

At the CCW meeting, Canada urged the GGE to elucidate what is meant by “meaningful human control.” Some countries, such as the United States, refer instead to “appropriate levels of human judgment.” While this term also seems to indicate the need for human involvement in LAWS, the notion of control is stronger, ensuring, for example, that humans can override a machine’s decision to target a potential victim (Human Rights Watch 2016, p. 9). “Meaningful human control” remains the term most widely used at the CCW (Human Rights Watch 2016, p. 10).

Most analysts agree on basic features of meaningful human control. Heather Roff, Senior Research Fellow at Oxford University, and Richard Moyes, Managing Partner at UK-based NGO Article 36 (2016, p. 1), explain two premises: “that a machine applying force and operating without any human control whatsoever is broadly considered unacceptable” and that “a human simply pressing a ‘fire’ button in response to indications from a computer, without cognitive clarity or awareness, is not sufficient to be considered ‘human control’ in a substantive sense.”

Meaningless control

Existing weapons systems, not considered fully autonomous, already push the limits of “meaningful” human control (Roff 2014). Roff points to the case of the Long-Range Anti-Ship Missile (LRASM). The LRASM, manufactured by Lockheed Martin, was developed in collaboration with DARPA and the U.S. Office of Naval Research (Osborn 2016). Human beings tell it where to go and which target to strike, but the weapon can make a series of decisions on its own. It can also autonomously select its target, based on a stored database, and change course. LRASMs can cooperate with each other to coordinate an attack, essentially working as a swarm.

Is it useful to ask what is meant by meaningful human control when weapons are essentially coordinating the attack without human input? Roff (2014) proposes that we discuss instead “meaningless” human control. Evidence of meaningless control would be to “launch a weapon system without undertaking any consideration of the targets, the likely consequences, and the presence of civilian objects or persons” or to have a weapon that “perpetually patrols.” However, as Roff herself acknowledges, this is not really a proactive response to the concerns raised by lethal autonomous weapons systems and certainly does not satisfy those calling for a ban.

Accountability gap

Weapons systems with greater autonomy are evolving, and ongoing discussions about human control are critical. As the recent Human Rights Watch (2016, p. 16) report on autonomous weapons notes, “mandating meaningful human control of weapons would help protect human dignity in war, ensure compliance with international humanitarian and human rights law, and avoid creating an accountability gap for the unlawful acts of a weapon.”

The issue of accountability points to yet another reason for preserving human control over weapons systems. As the International Committee of the Red Cross (2014) points out, a fully autonomous weapons system could not be held responsible under existing international humanitarian law. It is possible that responsibility would be assigned to the engineers or designers of the weapons, but with so many people involved in designing sophisticated weapons systems, it’s likely that no one would be held accountable. The lack of accountability is dangerous because it undermines the role of international humanitarian law in governing armed violence and protecting non-combatants. Even now, there is no accurate count of civilian deaths caused by newer technologies such as drones. How much more worrying, then, is the possibility of weapons for which accountability would be difficult to establish?

Ban needed now

The questions posed at the WEF session suggest that most humans don’t want to put their fellow citizens in harm’s way. But we must also understand that allowing machines to select, target, and kill humans is wrong.

Banning so-called killer robots does not preclude the development of technology that improves the safety of soldiers and first-responders. All that would be banned are weapons that act independently of human control. Still, while agreeing to this idea in principle, some states, such as Canada, are worried that it would be difficult to separate the “good” technology from the “bad.” At the April CCW meeting, Canada shared its concern about the growing autonomy of weapons systems, but also indicated that it does not believe that a ban is good for the development of dual-use technology.

Some security analysts are concerned that a ban would preclude the development of technology that would make weapons systems more accurate and potentially lead to fewer civilian casualties (Horowitz & Scharre 2015). Proponents of this perspective want to focus on the way in which the weapons are used. In other words, the issue is not technological advancement, but responsibility for how technology is used.

There are no simple answers to questions about dual-use technology and placing constraints on technology. But it is still important to start a conversation on how to draw some red lines. Governments already impose regulations to control exports of civilian technology that might be used by certain regimes in nefarious ways. Surely we don’t believe that we cannot build robotics systems or make other technological advancements without also creating a “kill” function.

Technological developments, even in civilian arenas, are fast outpacing our abilities to regulate them. Consider the inability of many city councils around the globe to respond to Uber and Airbnb. The stakes are much higher with autonomous weapons systems. Without a ban, we face a global arms race in deadly weapons over which humans have little control. We need a ban—now.

References

Campaign to Stop Killer Robots. 2016. Ban support grows, process goes slow. April 15.
Future of Life Institute. 2015. Autonomous Weapons: An Open Letter from AI & Robotics Researchers.
Horowitz, Michael C. & Paul Scharre. 2015. The morality of robotic war. The New York Times, May 26.
Human Rights Watch. 2016. Killer Robots and the Concept of Meaningful Human Control: Memorandum to Convention on Conventional Weapons (CCW) Delegates. April.
International Committee of the Red Cross. 2014. Autonomous weapon systems—Q & A. November 12.
Osborn, Kris. 2016. Navy LRASM missile destroys enemy targets semi-autonomously; Lockheed tests ship-fired variant. Scout.com, January 19.
Roff, Heather. 2014. Meaningful or meaningless control. Duck of Minerva, November 25.
Roff, Heather & Richard Moyes. 2016. Meaningful Human Control, Artificial Intelligence and Autonomous Weapons. Briefing paper prepared for the Informal Meeting of Experts on Lethal Autonomous Weapons Systems, UN Convention on Certain Conventional Weapons. April.
Watson, Julie. 2016. Military tests unmanned ship designed to cross oceans. Military.com, May 2.
World Economic Forum. 2016. What if robots go to war?
