The military, AI, and why it’s time to worry


By Branka Marijan

Published in The Ploughshares Monitor Volume 40 Issue 1 Spring 2019

In February 2019, the U.S. Department of Defense released Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance Our Security and Prosperity. The DoD’s first publicly released document on AI calls for greater application of AI technologies across the U.S. military. Everything from decision-making to identifying potential malfunctions of hardware will be shaped or monitored by AI systems.

There is no mention of fully autonomous weapons systems—still seemingly years away from development. During a briefing following the release of the summary, Air Force Lt. Gen. Jack Shanahan, the director of the Pentagon’s new Joint Artificial Intelligence Center, stated, “We are nowhere close to the full autonomy question that most people seem to leap to a conclusion on when they think about DoD and AI.”

So, what are we worried about?

Shanahan’s comments implied that critics are only worried about the most sophisticated weapons imaginable. Not so. As Mary Wareham, the advocacy director at Human Rights Watch who heads up the Campaign to Stop Killer Robots, noted, “We are not talking about walking, talking terminator robots that are about to take over the world; what we are worried about is much more looming: conventional weapons systems with autonomy. They are beginning to sneak in.”

The summary clearly states: “By improving the accuracy of military assessments and enhancing mission precision, AI can reduce the risk of civilian casualties and other collateral damage.” It is this kind of talk that is worrying.

Why? Because, as Paul Scharre, a former U.S. Army Ranger and analyst at the Center for a New American Security, warns in his book, Army of None: Autonomous Weapons and the Future of War, there is a risk that military leaders will be “seduced by the allure of machines—their speed, their seeming perfection, their cold precision.”

The promise and reality of precision weapons

As Antoine Bousquet points out in his new book, The Eye of War: Military Perception from the Telescope to the Drone, in the last few decades, military leaders and defence firms have focused on the “surgical strikes” and pinpoint accuracy that modern weaponry is supposed to make possible. In his view, such descriptions fuel “delusional fantasies of frictionless exercises of power through military force,” while “the high precision of modern weapon systems is often questionably invoked to assert ethical superiority by the side that uses them.”

This claim of ethical superiority has surfaced in United Nations discussions on autonomous systems. The United States and some other countries have suggested that autonomous weapons, in their hands, would protect civilian lives.

But, as Bousquet illustrates in his book, promises of accuracy and precision often do not reflect the reality. So, while weapons are getting more precise over time, they are still not able to prevent the harming of noncombatants.

More importantly, autonomous weapons will be used in contexts determined by governments and militaries with political and strategic agendas. Sophisticated weapons can still be used to commit atrocities. In 2018, for example, the Saudi-led coalition chose to bomb a school bus in Yemen. These choices will not go away with the introduction of autonomous weapons. Nothing about these weapons can stop the powers that own them from disregarding the possibility of collateral damage—or even deliberately targeting civilians.

Reframing our view of autonomous weapons

The U.S. DoD’s strategy on AI reflects current efforts by some to reframe the discussion on autonomous systems in terms of their possible benefits, especially for noncombatants. But this isn’t the right focus. And we should not be fooled by the allure of rhetoric suggesting that the use of AI in weapons will necessarily lead to better protection of civilians.

The countries and groups that genuinely want to focus on reducing risk for noncombatants are not relying on technical solutions to what are ultimately political and ethical problems. Instead, they look to strengthen international laws and norms regulating warfare, including a ban on certain weapons that harm civilians. Given technical advancements and the appeal that the new technology holds for militaries, there is more than a little urgency to this task.

The greater the precision and accuracy of contemporary weapons, the more we need to question how targets are determined and when collateral damage is deemed legitimate. And there are other critical questions. Who exactly are the civilians and the combatants in a given conflict (in other words, who are the appropriate humans to target)? What distinguishes peace from conflict (or, in what situations can weapons be legitimately employed)?

There are significant gaps in the international legal regime governing military drones, which are already used by a number of countries and are under consideration by others, including Canada. When is it acceptable to employ armed drones, for example? And what accountability measures are in place to address the use of drones in non-conflict regions?

AI is being used and will be used by militaries, but it must be used within internationally recognized limits. The best way to ensure the beneficial uses of AI is to have legally binding instruments in place that prohibit the use of weapons that are not under human control.
