The case for the Petrov rule


On September 26, 1983, Stanislav Petrov, a 44-year-old lieutenant-colonel in the Soviet Air Defence Forces, made a critical decision (Chan 2017). Petrov was on duty at Serpukhov-15, a control centre outside Moscow that monitored the Soviet Union’s Oko early-warning satellite system, which detected launches of ballistic missiles, primarily from the United States. The Soviet Union and then Russia used these satellites until 2015. Petrov had worked with them from their inception in the 1970s.

Published in The Ploughshares Monitor, Volume 38, Issue 4, Winter 2017, by Branka Marijan

That night in 1983, the alarms went off and the system indicated that five intercontinental ballistic missiles had been launched from a base in the United States. Relations between the Soviets and Americans had been tense for weeks. Soviet leader Yuri V. Andropov feared an American attack.

Petrov had to decide whether to act on the information provided by the computer systems or to regard the alarm as a system malfunction. His decision was pivotal. His report would go to his superiors, who would inform the general staff of the Soviet military, who would then consult with Andropov on launching a retaliatory attack.

In response to a “gut instinct,” Petrov reported a system malfunction (Bennetts 2017). He was right. The United States had not launched any missiles that night. Had Petrov reported a ballistic missile launch, the result could have been nuclear war.

Petrov died earlier this year at the age of 77. He became known to the general public only in 1998, when the former commander of Soviet missile defence, General Yury Votintsev, published his memoirs (Bennetts 2017).

The Petrov rule

Petrov’s story is a Cold War tale that clearly illustrates the potential dangers of a world with nuclear weapons, and of one in which decision-making is relinquished to machines. To crystallize the critical lesson, we can formulate the Petrov rule: Only a human being can make the decision to use a weapons system to launch an attack. This rule rests on the simple recognition that technology is fallible and that crucial human characteristics, such as Petrov’s “gut instinct,” cannot be replicated by machines.

Elsa B. Kania (2017), an adjunct fellow with the Technology and National Security Program at the Center for a New American Security, writes, “Petrov’s decision should serve as a potent reminder of the risks of reliance on complex systems in which errors and malfunctions are not only probable, but probably inevitable.” Kania is aware that militaries could be tempted to trust algorithms, which can process massive amounts of information in the blink of an eye, over human beings. Indeed, automation bias—the over-reliance on automated systems that results in errors—is a particular concern as humans interact with ever-improving technologies.

Human control and autonomous weapons systems

Petrov was mentioned only once at a side event to the 2017 meeting of the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems at the Convention on Certain Conventional Weapons (CCW), held November 13-17 at the UN in Geneva, Switzerland. However, the notion that there is something essentially human, such as Petrov’s gut instinct, that machines cannot replicate was a focal point for debate.

The Campaign to Stop Killer Robots, to which Project Ploughshares belongs, has called for a preemptive ban on the development of autonomous weapons systems that do not have meaningful human control—systems that could select, target, and kill people on their own. So far, 22 countries support an outright ban, with Brazil, Iraq, and Uganda signing on during the GGE meetings (Campaign to Stop Killer Robots 2017).

Most of the countries that spoke at the CCW, including Canada, supported some form of human control over weapons systems. But the nature of that control becomes unclear when we examine how critical decisions such as selecting, targeting, and killing are made. Are humans ultimately in charge of the decision, or are they merely pushing buttons in response to prompts from the system?

When is human control “meaningful”? What does “meaningful” mean? This adjective was the subject of much debate. In a recent paper, Article 36 (2017), a Campaign member and British civil society organization, stated, “Other terms, such as sufficient-, adequate-, necessary-human control, could be chosen instead of the term ‘meaningful’. The choice of wording has certain subtle implications—for example, the term ‘meaningful’ arguably draws in broader concerns regarding the right to dignity, whereas a term like ‘sufficient’ implies a minimal requirement.” What is clear is the importance of ensuring that any international agreement retains a robust view of the central role that a human must play in decision-making.

The human backup

In 1983, the Oko system was still new; it had been placed on combat duty only in 1982. Stanislav Petrov knew that it contained flaws and acted accordingly when it sounded a false alarm.

How has our view of technology changed in more than 30 years? Is the same skepticism still alive? Kania (2017) asks us to consider how current generations, whose daily lives are filled with interactions with new technologies, might respond to a similar situation. Certainly, military officers using systems with artificial intelligence will have to be trained to recognize their possible fallibility or dysfunction.

In 2010, Petrov declared, “We are wiser than computers. We created them” (Bennetts 2017). Some may point out that machine learning will change that conclusion. Already computers are performing tasks that are beyond human capability. However, the Petrov story should remind us that machines are still machines. Sometimes, no matter what the technology dictates, it is necessary for humans to trust their guts. And their compassion. Their restraint. Their sense of fellow feeling with other humans. So far, no one has been able to create machines with these qualities.

Branka Marijan represented Project Ploughshares at the CCW events.

References

Article 36. 2017. Autonomous weapon systems: Evaluating the capacity for ‘meaningful human control’ in weapon review processes. Discussion paper, November.

Bennetts, Marc. 2017. Soviet officer who averted cold war nuclear disaster dies aged 77. The Guardian, September 18.

Campaign to Stop Killer Robots. 2017. Support builds for new international law on killer robots. November 17.

Chan, Sewell. 2017. Stanislav Petrov, Soviet officer who helped avert nuclear war, is dead at 77. The New York Times, September 18.

Kania, Elsa B. 2017. The critical human element in the machine age of warfare. Bulletin of the Atomic Scientists, November 15.
