5 Misconceptions about Autonomous Weapons Systems

November 2, 2020

Branka Marijan and Cesar Jaramillo


Over years of discussions on autonomous weapons at the United Nations Convention on Certain Conventional Weapons (CCW), several arguments against their regulation have surfaced. Some seem intentionally misleading, while others are out of touch with the rapid development of emerging technologies and with current trends in academic research and analysis.

Here are 5 common—and incorrect—assumptions:

1. The concern is with complex systems in the distant future.

Concrete action to address the worrying trend toward the erosion of human control over weapons systems is not only urgently needed today, but long overdue. Advances in artificial intelligence (AI) and information technology, among other technical innovations, have already permeated the military strategies of several nations in a rapidly evolving process that is leaving policymakers in the dust.

Unreliable systems that can be assembled today or in the near future are at the core of immediate concerns in discussions on autonomous weapons systems. There is now a clear, demonstrable trend among some militaries toward the development and deployment of weapons systems that operate without significant human control over the critical functions of target selection and engagement.

Automation bias, the human tendency to place excessive trust in technology, is a well-documented problem that will only grow as decisions are made at greater distances in time and space. Growing reliance on technologies such as artificial intelligence, which remain brittle and incapable of understanding context, necessitates a legal requirement of significant human control over the selection and engagement of targets. New regulations are needed to bolster the application of International Humanitarian Law in current and future conflicts.

2. A workable definition of fully autonomous weapons systems cannot be reached.

On the contrary, several states have already clearly defined key elements of autonomous weapons systems. The International Committee of the Red Cross (ICRC) has proposed a general definition: autonomous weapons are “weapons that can independently select and attack targets, i.e. with autonomy in the 'critical functions' of acquiring, tracking, selecting and attacking targets.” The ICRC and other organizations and experts have also provided guidance to aid states in understanding the type and extent of control necessary over weapons systems.

These promising beginnings can be further developed and a final definition negotiated through a diplomatic process by the states that have participated in the CCW dialogue. Because the technology will continue to evolve, states must future-proof any definition to ensure that regulations pertaining to autonomous weapons remain relevant.

3. Autonomous weapons would only be used by militaries in conflict contexts.

Rather, history and current events tell us that technologies are not easily confined to one sphere. If other actors covet these systems, they will find a way to acquire them.

Moreover, some of the technological components of autonomous weapons, such as facial recognition, are already in development and are being used by police forces and national governments on their own citizens. There is no doubt that fully autonomous weapons will be sought out by non-state actors and authoritarian regimes. National law enforcement and border agencies are also likely to gain access to some systems, raising concerns about civilian protection in non-conflict contexts.

Given the diffuse nature of the technological components, it is imperative to ensure that more sophisticated military systems are not developed, because they will inevitably proliferate. To prevent such technology creep, existing international regulations on dual-use technologies can serve as a source of best practices and regulatory mechanisms.

4. Autonomous systems would perform more ethically than humans.

Machines are no more “ethical” than their creators. Their operation is a function of the algorithms and other data that make up their programming—and the data used to train AI systems is biased, inextricably tied to social and political realities. Facial recognition technologies, for example, often misidentify members of minority communities. So far, there is no evidence that more advanced technologies will eliminate concerns about societal bias or the inability of machines to understand context.

Smart machines might, indeed, hit targets more accurately. But the choice of targets could still be wrong. A lack of correct contextual information, such as knowledge of cultural and religious practices, could lead to the selection of inappropriate targets, as has occurred with signature drone strikes.

Soldiers and commanders who commit crimes, even under orders, can be held morally and legally responsible for those crimes. Machines have no moral compass and assume no legal responsibility. If ordered to commit atrocities, they will do so. But this raises the question: who is accountable? Far from resolving ethical concerns, the use of autonomous weapons raises new ones.

5. The benefits of autonomous systems outweigh the risks.

Some states claim that autonomous weapons systems will benefit noncombatants. Certainly, the potential exists for governments to use new technologies to create more precise targeting systems that save civilian lives and reduce the risks to civilian infrastructure. Still, these advances can happen without relinquishing human control over the critical functions of weapons systems.

Yet civilians and civilian structures are already too often the deliberate targets of conventional weapons employed by governments and non-state actors. This will not change when more advanced weapons are used. Indeed, the scale of harm could grow as swarming technologies are developed and multiple systems are deployed at once.

Autonomous weapons can be hacked. They are imperfect and can make mistakes. And, once in operation, they are hard to stop. The risks of accidents and of adversarial attacks on autonomous systems bring further dangers to civilians and pose new challenges for global security. Conflicts could escalate if an autonomous system is involved in an accident or makes a mistake, and militaries would have a difficult time proving that an action resulted from machine error rather than deliberate intent.

Rather than minimizing risks, these systems could cause more civilian harm and escalate conflict if an error is seen as intentional. The benefits clearly do not outweigh the risks.