By Branka Marijan
Published in The Ploughshares Monitor Winter 2024
"What’s in a name?” Shakespeare’s Juliet famously asked, reasoning that a rose would smell as sweet whatever it was called, and so the name was of no importance. However, if we ask “What’s in a term?” and the term is “meaningful human control” (MHC), the answer might be the future of arms control and the regulation of artificial intelligence (AI).
As AI technology advances and developers build systems with greater autonomy, the need grows for robust human oversight of critical decision-making to ensure that legal accountability and ethical standards are upheld. MHC has become a focal point in international discussions on new developments ranging from autonomous weapons to self-driving cars and AI in medicine, reflecting mounting concerns about the relationship between AI systems and human operators – and about who is ultimately accountable for decisions influenced by these systems.
The word “meaningful” in MHC also points to a broad agreement that having a human simply approve decisions suggested by AI is not sufficient, especially when lives are at stake. However, as AI-enabled technologies become increasingly integrated into both civilian and military operations, the demand for meaningful human control is growing more urgent and more complex.
The meaning of “meaningful”
The concept of “meaningful human control” emerged from discussions on lethal autonomous weapons at the United Nations Convention on Certain Conventional Weapons (CCW). The term was coined by Richard Moyes, Director of Article 36, a disarmament nongovernmental organization, and later refined by Moyes and Heather Roff, an academic and researcher. Roff and Moyes encouraged states to contribute to defining the term, particularly by highlighting key factors that enhance human oversight. The following are essential:
- predictable and reliable technology,
- transparent systems,
- users in possession of accurate information,
- the opportunity for timely human action and intervention, and
- mechanisms for accountability.
Indeed, at the CCW discussions that I have attended over the years, Moyes has reiterated that MHC defines a starting point for a commitment by states to some measure of human control over critical functions of autonomous weapons, including selection and engagement of targets, and to proper accountability.
Understanding U.S. opposition
While MHC is a popular idea with many states, several influential countries, particularly the United States, fear that it could invite scrutiny of their existing military systems and weaken their competitive edge, especially against China. They believe that an MHC requirement could impose human checks that would slow response times, when speed is a key selling point of military AI. At the same time, there is a belief in the West that China and Russia will not abide by restrictions, putting the United States and its allies at a military disadvantage.
Roff notes in a recent blog post that MHC has come to mean a level of physical control over weapon systems that is not expected or even possible over weapons in general. She observes that “there are ways we can try to maintain ‘control’ over the use of force, but these too are processes, rules and institutions, and do not in any way require physical control.” Roff expresses concern that the push to keep humans in physical control means that older systems, which can cause more harm, are seen as preferable simply because a human is pushing a button. Finally, Roff points out that, while MHC remains key to many discussions, there is still a great deal of confusion about what it would entail.
The United States has proposed alternative terms such as “appropriate context-informed judgments” and “appropriate care” in the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy that it is spearheading. While these terms aim to convey a degree of human oversight of AI-driven systems, they are vague, lacking specific requirements for implementation.
On appropriate care
As part of this U.S.-led effort, Canada and Portugal are co-chairing a working group on accountability that aims to clarify the meaning of “appropriate care.” In a recent paper for the Centre for International Governance Innovation, Leah West, a lawyer and associate professor at Carleton University, examines the declaration’s requirement that commanders “exercise appropriate care.”
West explains that this term emphasizes the need for commanders and operators to make informed, context-specific decisions about AI systems. Such decisions should be based on the system’s function, their training, their understanding of the target and environment, and the requirements of international humanitarian law (IHL). She argues that the problem is not that autonomous weapons and military AI cannot align with existing IHL principles; rather, the issue is whether commanders are willing to depend on autonomous weapons and AI decision-support systems when doing so could expose them to criminal liability.
To responsibly deploy these systems, military commanders must, according to West, ensure that they adhere to key IHL principles. Thus, AI systems must be predictable, training must go beyond a basic understanding of the technology, and commanders must exercise discipline by showing restraint in deploying systems that might violate IHL, even if not deploying increases risks to their own forces. West intends this proposal to be a starting point that states can build on.
How can the concept of “appropriate care” be strengthened? First, additional constraints are needed on which decisions militaries can delegate to autonomous weapons or allow AI systems to influence. Even with extensive training and a certain level of system predictability in place, states will need to consider how AI could affect human judgment, especially in life-and-death situations that could involve large military operations.
Roff and Moyes have argued that states should also establish processes to check for potential malfunctions or errors, ensuring that human intervention is possible if needed. Drawing from healthcare, where “appropriate care” focuses on the patient, the emphasis here should be on protecting civilians.
What remains to be done
The challenge for states is to ensure that “appropriate care” is not interpreted too loosely. A broader worry is the potential for autonomous weapons and AI systems to escalate conflicts and normalize autonomous warfare, with serious implications for global peace and security.
Once “meaningful human control” or “appropriate care” – or some other term – becomes generally accepted, states must establish robust governance frameworks that prioritize transparency, accountability, and oversight. Transparency builds public trust in AI systems by ensuring that the development and deployment of AI-assisted weapons are subject to both domestic and international scrutiny. Accountability comes from clearly defining the roles and responsibilities of military commanders and from including mechanisms for external review. And comprehensive oversight ensures that AI in military contexts aligns with ethical standards and safeguards against misuse.
Whatever the term chosen, the end product must be a legally binding agreement or treaty that features both prohibitions and regulations. Systems that lack sufficient levels of human control or pose a serious risk to civilian populations should be prohibited. The stakes are too high to settle for anything less.