On killer police robots and weaponization of autonomous systems

Branka Marijan

Recent developments in autonomous systems, such as self-driving cars and mobile robotics, are exciting. Proponents promise that they will reduce the number of casualties in car accidents and allow machines to do dangerous jobs. But advances in autonomous technology also raise concerns, one of which relates to weaponization.

In early July, Dallas police used a bomb-disposal robot to kill a sniper who had killed five police officers and wounded seven others who were policing a peaceful demonstration. The robot was deemed necessary to prevent further casualties. This was the first time police had used a robotic system in a “deliberately lethal manner.” The robot was neither sophisticated nor autonomous; it was remotely operated. However, its use does seem to mark a “tipping point” and to highlight our society’s growing acceptance of robots as killers.

Supporters and critics of autonomous weapons

Until recently, much of the discussion of killer robots referenced warfare. For supporters of autonomous weapons, the appeal of these systems is their purported benefits for the military and for civilians. They suggest that such systems would protect soldiers and potentially save civilian lives in war zones, because robots, unlike some human soldiers, would not commit crimes. Killer robots don’t need rest and operate exactly as they are programmed to act. Perhaps they could even be programmed to follow international humanitarian law and norms. From this perspective, even if machines made mistakes, the resulting casualties would likely be no worse than those caused by human soldiers. If this is so, why shouldn’t humans develop killer robots?

However, critic Bonnie Docherty, a Lecturer on Law at Harvard University, notes that if fully autonomous weapons systems were used, “the risk of disproportionate harm or erroneous targeting of civilians would increase.” This view is also supported by leading engineering and robotics experts who suggest that humans place too much trust in the ability of machines to navigate complex environments such as war zones. For now and the foreseeable future, robots perform best in highly controlled environments in which scenarios can be accurately anticipated. In war zones, such predictability is not possible. A machine’s mistake could have dramatic and tragic effects.

Some countries, including the United States, want to maintain a degree of human involvement in weapons systems. But how much? Do human operators press the “kill” button after the target has been identified and selected by an autonomous system? Is this sufficient human control? Or is more human involvement needed in deciding on the target? At what point is it clear that identifiable humans can be held responsible for that decision?

Warning! Automation bias

Automation bias is the tendency of humans to rely too much on available automation, even when their own senses tell them that machines aren’t making the right choices.

Consider recent incidents involving self-driving cars. A Tesla Model S car operating on Autopilot crashed into a van that had stopped in the left lane with its hazard lights on. The driver in the Tesla was quoted as saying, “Yes, I could have reacted sooner, but when the car slows down correctly 1,000 times, you trust it to do it the next time too. My bad.” Although the driver saw the van, he did not react because he trusted the system to act appropriately. And he persisted in this belief, despite advice from Tesla that drivers stay alert and use their own judgment.

The stakes are much higher when humans rely too much on autonomous weapons systems. As Heko Scheltema, Assistant Professor at the University of Amsterdam, points out, reliance on automated decision-making is even greater in highly stressful situations, such as armed conflict. Soldiers often have to make quick decisions and could rely on an automated system to analyze context and choose a course of action. In 2003, for example, a US Patriot missile battery shot down a British fighter jet, killing the two pilots on board. The officer in charge of the US battery gave the order to fire after the jet was incorrectly identified as an anti-radiation missile (ARM). The unit was not trained to deal with aircraft or ARMs, but the officer decided to engage because the system had identified the object as a threat. This is why experts and political analysts argue that target selection must involve meaningful human control.

The human control of weapons systems must be meaningful control. Defining meaningful control is part of an ongoing discussion. It’s not even clear that “meaningful control” fully captures the kind of control that analysts believe humans should have over weapons systems. But “meaningful” surely indicates more than a human finger pushing a “kill” button after computer systems select targets. Human decision-makers should be present and responsible for all actions taken by a weapons system.

A need for regulation

How does this discussion relate to the police use of robots? Some critics suggest that bomb-disposal robots will not be commonly used by police in this way. Expense is one deterrent: the robot used in Dallas cost approximately $100,000. But with more R&D, costs will come down. And, as Dallas showed us, existing robotic systems can already be weaponized and used in non-combat situations.

Tomorrow is coming quickly. Multiple levels of government need to consider policy responses to this emerging technology, both for domestic law enforcement and in military engagements.
