AI targeting in Gaza and beyond

March 26, 2024

By Branka Marijan

Published in The Ploughshares Monitor Spring 2024

Is the world entering an era of warfare in which autonomous or semi-autonomous weapons, augmented by artificial intelligence (AI), become so accurate that military targets are hit with precision, sparing nearby civilian populations? Some experts and countries, including the United States, seem to think so. They are putting their faith in algorithms that, by analyzing vast amounts of relevant data, can supposedly map potential targets across the battlespace, prioritize them, and then identify the best weapon to use against each one.

But current military practice involving AI-assisted systems tells a different story. AI tools that work well in a laboratory or a simulation are far from perfect in a real war, particularly one fought in a densely populated area.

Gaza not a laboratory

Consider the current war in Gaza, which has rapidly escalated from a localized crisis to a catastrophic humanitarian emergency with global impacts. In its quest to eliminate Hamas in Gaza, Israel is making significant use of AI to locate and select military targets. Yet the scale of the devastation is truly shocking.

Onlookers are left with key questions: If military AI systems are so advanced and precise, why have so many civilians been killed and maimed in Gaza? And why so much destruction of civilian infrastructure? There are several possible answers. An increasingly convincing one is that AI-aided technology does not live up to the hype.

Flaws in how AI-assisted technology works in actual combat could already be seen in 2021, when Israel launched Operation Guardian of the Walls against Hamas in Gaza, dubbing the action the “world’s first AI war.” The Israel Defense Forces (IDF) fed data from various sources, including satellite imagery and signals intelligence, into at least three AI decision-support systems developed by the elite Unit 8200 (the Intelligence Corps).

A system named Alchemist analyzed incoming data and alerted troops in the field to possible attacks. Depth of Wisdom mapped out the network of tunnels, including the depth of each tunnel. This information was critical because the network was so extensive. Reports indicate that Hamas controlled an elaborate 300-mile network of tunnels in Gaza – almost half the length of the New York City subway system (hence the term “Gaza metro”).

Israel also employed an AI targeting-recommendation system named Gospel to zero in on Hamas combatants and their weapons, thus supposedly minimizing civilian casualties. The recommendations from the AI tools were then sent to the air force and ground forces through an application called Pillar of Fire.

Even with all these AI systems available, the operation still claimed approximately 243 Palestinian lives and wounded another 1,910. According to Israel, more than 100 of those killed were Hamas operatives; it attributed the other deaths and injuries to rockets fired by Hamas.

The quality of human oversight

The systems in use since the 2021 operation have likely been refined, and new technologies appear to have been developed, including one referred to as Fire Factory. This tool is reportedly capable of calculating munition loads and allocating targets to various combat platforms, including both crewed fighter jets and uncrewed drones. The anticipated outcome is a reduction in civilian casualties and an increase in targeting accuracy.

Gospel and the other AI decision-support tools that Israel uses for targeting are not considered fully autonomous, because human analysts review the recommendations and decide whether to approve each target. We don’t know, however, how such decisions are made or how often the recommendations are rejected.

It is also critical to know the extent to which the individuals who approve target engagement are fully aware of how the AI system arrived at its recommendation. Speaking to the Japan Times, an IDF colonel noted that it can be hard to know how certain decisions were made. He stated, “And then sometimes I’m willing to say I’m satisfied with traceability, not explainability. That is, I want to understand what is critical for me to understand about the process and monitor it, even if I don’t understand what every ‘neuron’ is doing.”

If the humans who approve the targeting do not fully understand how the recommendation was determined, their decision should be judged as not meeting the standards of international humanitarian law (IHL), even if current law does not adequately cover AI targeting.

The Guardian’s investigation of AI targeting by Israel quoted a source who said that a human eye “will go over the targets before each attack, but it need not spend a lot of time on them.” There is a tendency for humans to trust too much in technology – what is known as automation bias.

Yet, Tal Mimran, a Hebrew University of Jerusalem law lecturer who has served in the IDF, has noted that AI targeting tools have limitations. He acknowledged that “there is a point where you need to make a value-based decision.”

International response needed

In a rapidly changing world, militaries are becoming increasingly dependent on AI-enhanced tools for targeting and other operations. The current situation in Gaza is only one case, but it illustrates the immediate need for a robust international response to mitigate the devastating consequences of AI-enabled targeting and to protect civilian populations.

The challenge is that decision-support systems are not viewed as weapons. They therefore fall outside the remit of the Group of Governmental Experts on emerging technologies in the area of lethal autonomous weapons systems, which meets under the Convention on Certain Conventional Weapons (CCW) in Geneva.

In an article posted February 2 on the website of the Lieber Institute at West Point, several academics argued that Gospel – along with similar systems – “should be considered as a means of warfare as it forms a military system, or platform, that is being used to facilitate military operations.” If so considered, these systems would fall under the scope of the CCW, which is part of the body of IHL. Discussion at the CCW could then, in theory, be expanded to include decision-making and decision-support systems. At the very least, states could share views on decision-support systems and how they relate to growing autonomy in weapon systems.

However, the CCW relies on consensus – increasingly taken to mean unanimity – to advance any measure and, given opposition from key states such as Russia, it seems ill-suited to deal effectively with autonomous weapons or decision-support systems. Newer parallel discussions on responsible military AI offer another avenue for addressing the use of AI in targeting.

The existing legal obligations of warring states are designed to ensure the proportionality of attacks and preserve the distinction between combatants and civilians. These standards must not only be maintained but expanded.

The push to use AI systems in warfare is victimizing civilians. Instead of greater precision, it seems that AI is ushering in greater carnage and destruction. Without clear and enforceable rules and norms on the use of all military AI systems, the dangers of escalating violence loom large.

For more on this topic, see Branka’s “How Israel is using AI as a weapon of war” in The Walrus.

Photo: The ruins of Watan Tower destroyed by Israeli airstrikes in Gaza City on October 8, 2023. Credit: Palestinian News & Information Agency (Wafa) in contract with APAimages / CC BY-SA 3.0.
