The dilemma of dual-use AI

September 18, 2023

By Branka Marijan and Rebekah Pullen

Published in The Ploughshares Monitor Volume 44 Issue 3 Autumn 2023

Six hours is all it took for an artificial intelligence (AI) model to suggest 40,000 possible new biochemical weapons. The chaos and humanitarian devastation that would result from the realization of any of these ideas, which include extremely toxic nerve agents, are unfathomable.

Co-opting civilian tech for war

The AI model that offered these blueprints is used by scientists to discover new drugs. This demonstration of a benign technology’s potential for destruction underscores both the inherent dual-use nature of AI and the ease with which beneficent tools can be perverted.

Both militaries and the arms control community are familiar with dual-use technologies. Militaries have likely been adapting civilian technologies for combat for as long as militaries have existed. Previous arms control efforts, such as the Chemical Weapons Convention, have had to contend with the misuse of common household items. Recent conflicts, particularly the war in Ukraine, have shown how a variety of commercial technologies can be used on battlefields with, it seems, minimal alteration.

The incorporation of AI technologies into conflict contexts blurs the line between civilian and military use even more. Current military operations can employ a range of AI-assisted tech, from facial recognition technology that identifies possible enemies and war dead to sensors and navigation aids used in selecting targets. In Ukraine, computer vision technology, which allows AI to interpret information from images or video, is being used to scan surveillance drone footage. In this way, Ukraine can track Russian troop movements and identify suspected war criminals.

It seems certain that AI technologies developed initially for civilian purposes will continue to be adapted in future conflicts.

The quandary of civilian-led advances

Large investments in civilian AI technologies and mounting pressure to release updated models mean that most significant AI advances are now happening in the commercial sector. In the previous generation, technological spinoffs such as GPS flowed in the opposite direction: commercial goods developed from defence innovations.

Experts disagree on how easily and effectively civilian applications of technologies, including AI, can be adapted for defence purposes, but the challenge facing the arms control community certainly grows as AI advances. As a system enabler that can be readily applied to civilian or military tech, AI poses unique challenges for arms control.

For example, it can be difficult to capture the full extent of a given technology’s dual-use applications. And private-sector developers do not consider possible military uses when they design their products, making it harder for arms control experts to anticipate potential misuses. Quite simply, while civilian developers know that their products might have dual uses, they are not designing with dual uses in mind.

As a system enabler, AI tech is hard to contain. As well, as Haruki Ueno notes in the 2023 publication “Artificial intelligence as dual-use technology,” “since AI is a form of software, outcomes can easily leak or get stolen through the Internet.”

The active engagement of defence research agencies in exploring potential defence uses of civilian technologies adds further complexity to arms control efforts. China’s military-civil fusion strategy has perhaps received the most attention for its expansive integration of the broader economy and the defence sector, but China is not the only country focused on civilian technologies for defence. The United States first unveiled a dual-use strategy in 1995; the Pentagon’s defence research arm, the Defense Advanced Research Projects Agency, continues to explore the adoption of civilian technologies.

Indeed, most militaries devote research and development resources to adapting civilian technologies. Ueno describes the potential transfer of developments or know-how from this research to “terrorist or hostile countries” as AI’s dual-use dilemma. But the dilemma extends beyond defence research and development. Arms control efforts must address a nimble technology with multiple attack vectors, without being overly restrictive or limiting legitimate uses.

The use of adapted civilian AI technologies in a combat zone could expose military operators to unexpected vulnerabilities, including cyberattacks. No one can predict how technology that was not designed for such a dynamic and safety-critical context will perform; the chance of causing greater harm than anticipated creates operator distrust of the technology. Distrust can only grow with research findings showing that even the most advanced current models cannot be reliably secured against malicious attacks. These weaknesses, which adversaries will seek to exploit, could undermine any advantage the technology offered.

Weaponizing AI not easy

In the end, we can take comfort in the fact that developing AI weapons, particularly more sophisticated weapons, is still not simple, whether for militaries or non-state groups. As well, arms control mechanisms already in place, such as those covering chemical warfare agents, would limit or prevent some weapons development, including the biochemical weapons mentioned earlier.

Running or developing advanced systems requires specialized hardware that is increasingly difficult to obtain, particularly for non-state groups. For example, new export controls restrict access to the sophisticated chips needed to run more advanced models. And much of the emerging tech is controlled by a few companies with the expertise to protect their products.

Still, it is inevitable that legitimate and malicious actors will find ways to access AI-assisted technologies, which they will then weaponize in some form. Ensuring that a regulatory framework emerges to guide these developments and prevent misuse or abuse will be critical.

Recognizing and limiting the damage

Arms control mechanisms must be modernized to address all these challenges.

The 2020 UNIDIR report Modernizing Arms Control: Exploring responses to the use of AI in military decision-making considers some ways to modernize arms control so that the dual-use nature of AI is addressed. The authors focus on export controls but see the opportunity to update other national policies that relate to the responsible uses of technology by militaries. More international discussion is needed to better understand what the regulatory toolkit needs to contain and to develop global norms that will prevent the most egregious misuses of technology.

It is in the interest of all states to develop a better understanding of potential dual-use applications, to control access to certain technologies, to put in place safeguards and regulations to prevent misuse, and to anticipate the unintended consequences of premature integration of commercial AI technologies into conflict contexts.
