The AI-enhanced kill chain

March 15, 2022

Published in The Ploughshares Monitor, Volume 43, Issue 1, Spring 2022

Military research and development in recent years has focused on artificial intelligence (AI) tools that gather and analyze data quickly. Combined with improved sensors, these tools make possible faster and seemingly more accurate targeting of enemy positions.

Now this R&D is being operationalized. According to Secretary of the Air Force Frank Kendall, last September the United States Air Force used AI for the first time to help identify a target or targets in “a live operational kill chain.”

The “kill chain” outlines the attack structure: a target is identified, forces are prepared for the attack, engagement of the target is planned and ordered, the target is destroyed, and, finally, the results of the action are evaluated. In the last two decades, the time needed to complete this process has decreased significantly.

Analysts are fairly certain that the militaries of other countries are working on similar AI capabilities. What is not yet known is how soon they will operationalize such technologies. But, as humans build smarter, AI-enhanced weapons systems, we all need to consider the consequences for civilians and the planet.

Compressing the kill chain

According to U.S. Air Force officer Mike Benitez, in a commentary published on the platform “War on the Rocks” in 2017, in the early 1990s it was “crazy” to think that air power could strike emerging targets on the battlefield in less than 10 minutes. But in the intervening years, new technologies have allowed for much quicker data gathering and processing.

Benitez predicted that it wouldn’t be long before, once a target had been detected, it would be verified through other surveillance tools, such as satellite imagery, and a weapon would be fired. The desire for speed would mean that more platforms, such as drones, would be equipped with software capable of assessing target identification and selection. As the drone approached the target, its sensors would get a fix on the target and signal engagement.

Transmitting all the information picked up by the sensors to human analysts, as is generally done now, still takes time and requires more personnel than might be imagined. According to journalist David Hambling in a Forbes article from last October, a crew of 45 is needed to “handle the returns” of an RQ-4 Global Hawk drone mission.

Artificial and human intelligence: How to determine accountability

Secretary Kendall did not say if the AI-assisted strike in September 2021 employed a drone or a piloted aircraft, nor did he provide any operational information about the role of human operators and analysts. A U.S. Air Force spokesperson did assure Hambling that, even though AI helped with targeting, human intelligence professionals were the ultimate decision-makers.

But are the analysts and operators merely approving the strikes that the AI systems are recommending? How well do they understand the technological contexts and reasons that lead to these recommendations?

And are the humans being given all the information that they need to make sound military decisions? In contemporary conflicts, operators are often a great distance from the actual targets. There might be no friendly forces in the area. In these cases, the operators must rely on surveillance from various platforms that cannot always confirm the targets.

Even if humans make the final decision, the interventions of machines, time, and distance can result in faulty judgements that cost civilian lives. A recent example is the August 29, 2021 U.S. drone strike in Kabul, Afghanistan, which killed 10 civilians, including seven children.

All these questions and observations illustrate the problems involved in using AI in military decision-making, particularly in target selection and engagement. These problems, in turn, raise concerns about accountability. If the human decision-maker does not have sufficient control over the decision-making, it might be difficult to hold them accountable for the resulting actions. This is no small matter when trying to apply international humanitarian law (IHL, or the law of war), which can only regulate the actions of humans, not technologies.

Enhancing human decision-making with AI starts to complicate any determination of what constitutes human control. Research on automation bias has shown that human operators tend to rely too much on technology when it is made available. Human operators can also be subject to automation complacency, expecting the system to function as advertised and not remaining sufficiently alert for aberrations. At the same time, operators must be prepared for automation surprise, when a system acts unexpectedly or when the operator doesn’t understand the cause of a particular machine response.

The spread of technology

Although the development of military technologies is cloaked in secrecy, we can be fairly certain that some of the AI military tools will soon begin to appear in the arsenals of a growing number of armed forces.

Some hurdles will have to be overcome. There are still challenges in applying AI technologies to larger, more sophisticated weapons systems. Further improvements are needed in robotics, sensors, and energy efficiency. Some military personnel will resist technologies that seem to disrupt the structures long employed by military forces. However, the ability to identify and engage targets quickly, which AI tools and better sensors promise, will likely prove irresistible in the end.

Evidence that militaries beyond the United States are keen to leverage these technologies could be found at the October 2021 Scarlet Dragon exercise conducted by U.S. armed forces and observed by representatives from the United Kingdom, Australia, and Canada. According to U.S. Colonel Joseph Boccino, the exercise focused on “using AI to shorten the kill chain.”

In its January 29, 2022 issue, The Economist notes that, in Scarlet Dragon, an “exercise in which a wide range of systems were used to comb a large area for a small target, things were greatly speeded up by allowing satellites to provide estimates of where a target might be in a compact form readable by another sensor or a targeting system, rather than transmitting high-definition pictures of the sort humans look at.” The exercise appears to demonstrate a further move away from meaningful human control over targeting decisions.

In 2017, Benitez wrote, “We have entered an era where more cognitive weapons and levels of autonomy will only be limited by policy.” Yet international policy discussions on autonomous weapons have moved slowly. No progress was made at the December 2021 meetings of the United Nations Convention on Certain Conventional Weapons, the main global forum for such discussions.

Political will is needed to regulate these advancing technologies before it is too late. The development of new, enforceable international policy is the only way to address valid concerns about the deployment of still immature systems that will certainly cause harm to civilians and civilian infrastructure. It is also the only way to bolster international humanitarian law and preserve the principle of human accountability.

