By Branka Marijan
Published in The Ploughshares Monitor Volume 44 Issue 1 Spring 2023
In February, Branka Marijan presented the LUCIR Lecture: Future Warfare and Civilian Protection at the Leiden University Centre for International Relations in The Hague. The following is an edited excerpt.
The development of military applications of artificial intelligence (AI) has intensified in recent years, especially since the Russian invasion of Ukraine, raising serious concerns about global stability and the protection of civilians in war zones. States have not agreed on a common framework that establishes which systems and uses are permissible and who is accountable for their effects. In a time of geopolitical upheaval, this deficit is worrying. The protection of civilians and the retention of human control over key military functions, such as the selection of targets, must be central considerations in the development of any military application of AI.
Current state of technology
There is evidence of diverse uses of AI by militaries in the war in Ukraine. Some applications assess battle damage and identify targets; others perform more mundane tasks, such as predicting ammunition needs and determining whether various weapons and systems need repairs. Notably, it is not only states but also private companies, such as Palantir, that are stepping in to provide this technology to the Ukrainian government.
Even before the Russian invasion of Ukraine, there were instances of the use of AI in weapons platforms. A notable example is the use of the Turkish-made Kargu-2 loitering munition in Libya. It is not clear if the Kargu-2 can function fully autonomously – independently of human operators – in selecting a target. But the Kargu-2, which has some machine-learning and image-processing capabilities, is a clear example of a current weapon system that incorporates AI. The Kargu-2 also shows that such advanced technology is not restricted to major powers but can be accessed by smaller states.
State of governance
Given the lack of agreement on accountability for increasingly autonomous systems and the potential for proliferation, such technologies need to be regulated. The United Nations Convention on Certain Conventional Weapons (CCW) has been exploring regulation for about nine years but has made little headway, due to the actions of spoilers such as Russia and the lack of political will among states with the most advanced militaries.
Still, interest in achieving regulation could be growing. Several events on responsible military AI or autonomous weapons will take place this year. After the Responsible AI in the Military Domain (REAIM) summit held in The Hague in February, the next is a regional conference in Costa Rica on February 23 and 24, followed by CCW meetings in the first week of March. Luxembourg will host a conference on autonomous weapons on April 25 and 26. Other events are likely later in the year.
What next?
An international framework to bolster international humanitarian law is needed to regulate systems that can select and engage human targets without meaningful control by human operators. The current lack of agreement could encourage some states to deploy and test systems that are not ready for the battlefield, with unpredictable, possibly catastrophic results.
Harm could also arise from seemingly less direct applications of technology, such as the collection of vast amounts of data on civilians in war zones. For example, the Taliban-controlled government in Afghanistan now holds biometric databases of Afghan security and military personnel who assisted Western donor governments.
Currently, technology is outpacing the regulation of military applications of artificial intelligence. However, there is still time to develop an international agreement that ensures the protection of civilians – if states have the political will.