By Wendy Stocker
Published in The Ploughshares Monitor, Volume 42, Issue 3, Autumn 2021
I think I finally REALLY get it. I’ve been reading analysis of autonomous weapons and AI-powered tech by Ploughshares Senior Researcher Branka Marijan for years, but I’ve never completely understood why so many individuals, organizations, and even countries are totally against weapons that can target and kill humans without humans as part of the decision-making process.
I saw humans as part of the problem. They were already maiming and killing fellow humans. Would handing the killing over to machines really make anything worse?
Now I believe, yes, it will.
I recently attended a Waterloo Artificial Intelligence Institute (Waterloo.ai) webinar, The morality of artificial intelligence in warfare, moderated by Branka, with panelists Laura Nolan and Jack Poulson. These two software experts, now advocates for non-tech solutions to human conflict, convinced me that turning warfare over to machines would be truly horrifying. Indeed, in some ways, it already is. As Jack Poulson says, “the future is here, just not evenly distributed.”
Weapons are still under some measure of human control for at least part of their deployment. And we should not forget that humans still create these weapons and all their components. But equipping weapons with artificial intelligence is raising every imaginable red flag.
The development of AI is the first problem. The software creators are not trained to fully comprehend the ethical dilemmas that arise when weapons are deployed in complex, ever-changing conflict situations. Moreover, they bring their own biases and preconceptions to the work, which can find their way into the software.
Another flaw: AI likes sameness, not constant change. It learns best in a stable environment, where it can draw conclusions from patterns. Armed conflict, particularly between asymmetrical opponents, is constantly changing and totally unpredictable. Deliberately unpredictable. Discover that the enemy (possibly a machine) is tracking your movements and you change your routines.
Target selection is another HUGE problem for AI. How does it decide who the enemy combatants are, especially in guerrilla or low-level conflict that engages a lot of weekend warriors, who return to their lives as bakers and farmers and itinerant workers between battles? In fact, AI does a poor job of targeting. It is still not past the stage of deciding that all males of a certain age are combatants. Not a very nuanced judgement.
One of the main claims in favour of autonomous weapons is that they can and will decrease the number of civilian deaths—or collateral damage. So, I was stunned to learn that the number of such deaths generally considered acceptable in any action is 30, according to a “collateral damage estimation tool.” If the estimate comes in under 30, the action counts as proportional; if it comes in over 30, you “tweak the parameters” until you get the desired number.
Can a killing machine incorporate ethics when choosing a target or deciding whether or not to fire? Again, the prospects are not bright. While it might be technically possible, for example, to feed a machine everything ever written about international humanitarian law, the fact is that a lot about IHL is “fuzzy.” Interpretations differ. Definitions aren’t constant or universal. How is a machine that bases decisions on consistent patterns supposed to react?
One lesson I learned from the webinar is that, even if you can teach a machine ethical principles, you can’t tell it what those principles will look like in action, EVERY SINGLE TIME. As Laura Nolan explains, you can tell a machine not to attack someone who is surrendering, but you can’t prepare it to recognize every display of surrender. Raising your arms above your head means surrender. But what if you’re injured or tied up and can’t lift your arms? Does that mean you are still an active combatant? As Laura says, it is hard to sense the environment accurately. This is a problem for experienced humans; it’s virtually impossible for machines.
Other problems were raised, but let’s go back to the first one—human software developers, who are, after all, only human. It turns out that these developers need to be experts in international law, acute observers of natural and built environments, skilled analysts of human behaviour, and advanced students of every culture on the planet that has ever engaged in military conflict.
That’s if they’re actually told what the software they’re developing will be used for. But they generally aren’t.
Both panelists worked for big tech and both left because they didn’t believe the work was ethical. They no longer believe that machines can be used to solve conflict. Humans cause conflict and humans must resolve conflict with the same tried-and-true human methods in use for millennia: communication, political leadership, compromise, diplomacy.
To which I say, AMEN. Branka, you were right all along.