Where does the world stand on killer robots? The view from recent UN meetings on autonomous weapons

Branka Marijan

From March 25 to 29, the Group of Governmental Experts on Lethal Autonomous Weapons Systems met under the United Nations Convention on Certain Conventional Weapons in Geneva, Switzerland.

Three key points of argument

The meetings began with a full room and a clear statement of the position of UN Secretary-General António Guterres: “As I have said on a number of occasions, machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.” He impressed upon the meeting the significance and urgency of its work: “The world is watching, the clock is ticking.”

However, in the discussion that followed, countries with advanced militaries reasserted their own oft-repeated view that there is no need for new regulations. Three points appeared in much of the debate on the way forward.

  • 1. A few countries, including Canada, argued that existing international humanitarian law (IHL) is sufficient to address the challenges posed by autonomous weapons. Most other countries, as well as the International Committee of the Red Cross (ICRC), disagreed.

The ICRC has clearly outlined the reason that extant IHL does not cover autonomous weapons: IHL applies to humans, not machines. Simply put, under IHL, you cannot hold a computer system accountable for its actions.

The opposing view holds that a weapons system operating without meaningful human control is already illegal under IHL, and that there is therefore no need for new regulations.

However, it is not clear that current IHL would account for all the complex circumstances that could develop with the use of autonomous weapons. Is a field commander, who simply activates a system that they do not understand and cannot override, to be held accountable for the subsequent actions of that system? New regulation might bolster IHL and also provide clarity on the development and use of emerging technologies that have some measure of autonomy.

  • 2. Most countries argued that human control over weapons systems should be maintained. The point of contention: where to draw the line.

What level of human control is appropriate or acceptable? Discussions clearly showed that countries interpret control differently and have different comfort levels with machine decisions. The United Kingdom, for example, seemed to suggest that machines may make more accurate decisions than humans. So, some countries want only limited human control, while others believe that more meaningful control is needed.

  • 3. A few powerful countries pushed the benefits of autonomous weapons, citing greater accuracy and predictability.

Russia stated that autonomous weapons would be predictable in their selection of targets. The United States claimed that autonomous weapons could even promote greater protection of civilians, a view shared by Australia. However, several countries, such as Chile, pointed to the need to consider moral and ethical concerns as well, and not simply the purported benefits. Other countries’ statements highlighted the reality that machines are not immune to error or bias.

Where does Canada stand?

Canada remained on the sidelines. On the first day, Canada did not make a single statement. And, while it made several statements on the second day, it said little that was new.

But one new focus was both noteworthy and concerning. Canada stated that it is actively researching autonomy in weapons systems “through academic, industrial and governmental channels.” The Canadian delegation was also clear in not wanting to prejudge the outcomes of this research.

What next?

More talks. Countries will meet in late August for two days. It is not clear how the three points noted above, and the many additional points that must be raised, can be addressed in such a short time.

Major countries seem comfortable with such an approach, which limits the opportunities to do the hard negotiating needed to develop new international law. What remains possible is a non-binding instrument, such as a political declaration, which some might see as a way to placate civil society and others who are concerned about autonomous weapons.

Some countries, including Germany, have suggested that a political declaration could be an interim step to a treaty. But what happens if we kick the can down the road, only to discover that we’ve run out of road?

In November, countries will have another opportunity to determine the mandate for further discussion on autonomous weapons. The Campaign to Stop Killer Robots, of which Project Ploughshares is a member, has called for countries to agree to a negotiating mandate for a treaty to ban autonomous weapons that lack meaningful human control over critical functions, such as the selection and engagement of targets.

Based on the most recent meetings, however, it seems likely that the few countries that clearly oppose a new treaty, including the United States, Russia, Israel, and South Korea, will ensure that the forum produces no substantial outcome. But there are other forums. This issue is simply too important for global security and civilian lives to let regulation fall by the wayside.
