How should militaries use AI?

September 28, 2020


By Erin Yantzi

Published in The Ploughshares Monitor Volume 41 Issue 3 Autumn 2020

Militaries are expanding their research and development of artificial intelligence (AI) and looking to implement AI systems. In early August of this year, the U.S. Defense Advanced Research Projects Agency announced that later in the month a human fighter pilot would face off against an AI algorithm in virtual combat.

While some of the claims about how AI will soon revolutionize warfare are certainly exaggerated, there is still reason for concern. Questions about the international security implications of military AI are being asked more often by governments, multilateral and non-profit organizations, academic institutions, and think tanks.

The current landscape

Today’s AI demonstrates some of the abilities of human intelligence—recognition, learning, reasoning, and judgement—but never gets tired, hungry, or bored. However, AI is still a new technology, often described as “narrow,” “weak,” and “brittle”: it can perform only specific tasks and is prone to failure when pushed beyond its programming and training or when operating in new environments.

Paul Scharre, author of Army of None: Autonomous Weapons and the Future of War, notes that, like earlier general-purpose technologies, AI will be militarized. Militaries could use AI to enable more “everyday” military operations; for example, AI could serve as a support tool for data analysis to aid operational decision-making, as with Project Maven (see below).

AI allows for the processing of vast amounts of data. Using AI to identify objects or reveal patterns in a battlespace could allow militaries to make better and faster decisions. In addition, AI would allow for increased integration and autonomy of military systems such as sensors, weapons, robotics, and biometric and information systems.

Project Maven

Today’s militaries are overwhelmed by the data collected during operations. The U.S. Department of Defense (DoD) alone operates more than 11,000 drones, collecting hundreds of thousands of hours of video footage every year.

The Pentagon’s Project Maven, also known as the Algorithmic Warfare Cross-Function Team, was launched in April 2017 “to turn the enormous volume of data available to DoD into actionable intelligence and insights.” An AI-enabled surveillance platform that analyzes drone footage will allow the military to track and monitor targets. In future, it is possible that Project Maven could be integrated into weapons systems to fire on those targets.

Google won the original contract to develop Project Maven. However, in April 2018, thousands of Google employees signed a letter to Google CEO Sundar Pichai, demanding an end to Google’s involvement and calling for a policy that Google would not build “warfare technology.” In June of that year, Google announced that it would not renew the contract, which was set to expire in March 2019. In December 2019, Business Insider reported that software company Palantir, which specializes in big-data analytics, would take over Project Maven.

On August 10, 2020, FedScoop reported that Project Maven will transition into the Advanced Battle Management System (ABMS) as part of a broader repositioning of traditional back-end information-technology capabilities to support warfighting functions. The United States Air Force will use Maven’s AI capabilities to analyze and combine data from different sensors used in battle. Will Roper, the Air Force’s assistant secretary for acquisition, technology, and logistics, claimed, “There is no distinction between development systems and warfighting systems anymore in IT. ABMS and Maven are to start blurring that line in September.”

AI and warfare: Reasons for concern

AI is changing warfare. The potential exists to remove humans from the decision to “pull the trigger.” AI will also increase the speed of military operations and responses, leaving less time for human judgement. Jennifer Spindel, a political science professor at the University of New Hampshire, warns that “militaries will need to balance their desire for a speedy response with the presence of circuit breakers to limit the potential consequences of actions.”

Human control and responsibility over AI

Spindel believes that, “whether it is used for combat robots or analyzing data, artificial intelligence has the potential to decrease human involvement in war.” Meanwhile, Melanie Sisson, a fellow at the Stimson Center, and Scharre fear that, as AI systems become increasingly complex, they will become harder for humans to understand and less transparent. According to Sisson, this could lead to blind human trust in AI systems rather than human action to ensure that the workings of AI are transparent. Scharre believes that maintaining meaningful human control of AI systems through a “centaur command-and-control model” of human–AI teams is key to mitigating the risks of military AI.

AI arms race

AI could give rise to a new arms race as states strive for the most powerful AI-controlled weapons systems. According to Sisson, such a race would mean high rates of investment, lack of transparency, mutual suspicion and fear, and a perceived incentive to deploy first. In its 2019 report The State of AI, peace organization PAX asserts that an AI arms race would have negative economic, political, and societal consequences, while endangering international peace and security.

Scharre highlights two more dangers of such a race. “An AI-accelerated operational tempo” could reduce human control on the battlefield. And the push to produce AI military systems quickly could lead to cutting corners on their safe development, testing, and evaluation. Without such safeguards, seriously flawed systems could be put into operation.

Influencing the future

Various groups are already taking steps to control military AI.

By mid-August of this year, approximately 4,500 AI and robotics researchers had signed an open letter calling for a ban on the development of offensive autonomous weapons. Another initiative, the Safe Face Pledge, calls on organizations to pledge to mitigate the abuse of facial-analysis technology; among other commitments, signatories are to refrain “from selling or providing facial analysis technologies to locate or identify targets in operations where lethal force may be used or is contemplated.” As well, the Campaign to Stop Killer Robots, a coalition of 165 nongovernmental organizations, continues to advocate for a ban on fully autonomous weapons and for regulations that ensure meaningful human control over the use of force is retained.

Google and other tech companies have published principles for AI that include declarations that they will not design or use AI in weapons; in technology intended to injure people; or in technology that gathers or uses information for surveillance in violation of internationally accepted norms or in contravention of principles of international law and human rights.

In August 2019, the United Nations Office for Disarmament Affairs, the Stanley Center for Peace and Security, and the Stimson Center sponsored a workshop on “The Militarization of Artificial Intelligence.” The foreword to the workshop summary says, “While revolutionary technologies hold much promise for humanity, when taken up for military uses they can pose risks for international peace and security. The challenge is to build understanding among stakeholders about a technology and develop responsive solutions to mitigate such risks.”

Military AI has the world’s attention. Concerns have been raised and actions are being taken. But much more needs to be done. The technology is advancing and countries must decide now how they will use AI.

Erin Yantzi was a Project Ploughshares Peace Research Intern this summer.

Photo: A materials researcher examines experimental data on the ARES artificial intelligence planner, as part of Project Maven with the U.S. Department of Defense. Handout