Published in The Ploughshares Monitor Volume 42 Issue 4 Winter 2021
At first glance, it might appear that seven years of international discussions on autonomous weapons have had few concrete results. At the time of writing, the third session of the 2021 United Nations (UN) Group of Governmental Experts on emerging technologies in the area of lethal autonomous weapons systems (LAWS) was scheduled to take place in early December in Geneva, Switzerland. The most that is expected from these meetings is a proposal to continue talking.
But I believe that the discussions that began in 2014 have evolved in important ways. For example, I recall the November 2017 meeting on autonomous weapons. It began like many others. The more powerful countries claimed that weapons that could select and engage targets on their own, with little if any human control, were a concern for the distant future and didn’t need to be regulated yet.
But on this occasion, there was a lunchtime side event at which a short video entitled Slaughterbots was screened. Created by the Future of Life Institute, the video depicted a fictional near-future in which any state, group, or individual could purchase armed autonomous microdrones that would target and kill selected individuals.
Some, such as author and analyst Paul Scharre, dismissed the video for playing up unlikely scenarios. But when the formal meeting reconvened after lunch, I sensed a shift in the mood of the room. The claims that autonomous weapons were not of immediate concern were, I felt, less strident.
THE CURRENT STATE OF TECHNOLOGY AND GOVERNANCE
Fast-forward four years and there is growing evidence to support the concerns that civil society organizations have expressed all along. In the absence of international regulation, autonomous technologies are being developed and used.
A 2021 UN Panel of Experts report on Libya claimed that an autonomous Turkish drone, Kargu-2, “hunted down and remotely engaged” troops that were loyal to Libyan General Khalifa Haftar. Experts have debated whether the drone was truly autonomous, but the manufacturer of the Kargu-2, STM, has touted its machine learning and facial recognition capabilities.
In November 2020, an Iranian nuclear scientist was killed, apparently with a robotic machine gun that used artificial intelligence (AI) to assist in selecting and engaging the target. While not strictly autonomous – human operators were involved in target selection and engagement – the gun illustrates how AI is enabling remote killing.
Terrorists and criminals are becoming bolder in their use of smaller commercial drones, which are managing to evade some air defence and drone-jamming systems. A recent assassination attempt on Iraqi Prime Minister Mustafa al-Kadhimi was apparently conducted with modified, pre-programmed consumer drones armed with explosives. Experts predict that future attacks will be even more sophisticated.
Today, more than 100 countries have drones in their military arsenals. The drones now being developed have improved imaging, sensors, and software; a few lines of code could make them significantly more autonomous. Loitering munitions, or kamikaze drones, are of particular interest because they can autonomously patrol an area and then destroy themselves on impact with a target.
It seems unlikely that all of these governments, some of them unstable, will be able on their own to ensure that the technology is used responsibly and kept out of the hands of terrorist groups. Already, it seems, some countries are supplying or diverting weapons to militias or groups that they support. According to UN experts, Houthi rebels in Yemen have access to weapons, including drones and land-attack cruise missiles, that are similar to those produced by Iran.
As well, leading militaries continue to invest in the research and development of new autonomous weapon systems. Within months of each other, China and the United States revealed how AI could beat human fighter jet pilots in simulated dogfights. There have been recent demonstrations of swarming technologies. In the last few years, first-generation drone swarms, possibly sent by non-state groups, have been used against Russian air bases in Syria and Saudi oil facilities.
CATEGORIZING RISK
Without a global regulatory framework and specific prohibitions on certain autonomous systems, it seems inevitable that increasingly autonomous weapons will soon be developed and employed by many states, and will become readily available to non-state armed groups, posing an unacceptable risk to global stability. Countries need to begin serious talks immediately to avoid these consequences.
The focus of the December LAWS meetings will be on the nature of the mandate of the Group of Governmental Experts going forward. A strong mandate means that the group can start developing regulations; a weaker mandate will still allow discussions to continue.
Either result will leave countries with options. Middle-power countries, including Canada, have largely remained on the sidelines of discussions but would be well placed to propose and advocate innovative, effective approaches.
One approach is risk-based. It focuses on determining which systems pose unacceptable risks and require prohibitions or strict restrictions, and which systems pose lesser risks and require lighter regulation. A risk-based approach has been used effectively in previous arms control agreements, which should be carefully studied as models.
Creating categories of risks can provide a governance framework that associates certain risks with certain responses.
Technologies such as anti-personnel weapons and weapons that function without meaningful human control pose unacceptable risks and require specific prohibitions. As analyst Zachary Kallenborn indicates, this category would also include the chemical, biological, and nuclear weapons that are already prohibited by existing agreements.
The next category would include weapons that pose high risks to the health and safety of civilians. Included in this category would be systems that are less lethal but still capable of immobilizing people or of causing reverberating effects on civilians and the environment. Moderate-risk systems would be regulated to ensure the transparency of their functioning. Low-risk systems would not require international regulation at all.
MANAGING RISK EFFECTIVELY
Much of the emerging technology is dual-use, serving both civilian and military functions. A risk-based approach is well suited to address many of the challenges that dual-use technology poses for regulators. The European Commission, for example, has already proposed a risk-based governance framework for AI-enhanced technologies.
Some basic principles must govern the regulation of high-risk technologies. States must ensure that human control is maintained over critical functions of weapons, including selection and engagement of targets. This key requirement will bolster international humanitarian law, ensure human accountability, and preserve human dignity.
It is also critical that states address the proliferation of these weapons and their transfer to non-state groups and terrorist organizations. All states should be willing to oppose activities that carry a high risk of destabilizing regions or even the entire world. Major states, in particular, should become more receptive to such an approach.
Civil society has been important in keeping various concerns about autonomous weapons in the public domain and needs to continue to advocate concrete proposals that advance effective regulation. Ultimately, however, it is the duty of states to protect all civilians from high-risk technologies. Because the Slaughterbots near-future is already here.
Photo: “Turkish STM Kargu kamikaze drone” by Armyinform.com.ua is licensed under CC BY 4.0