On December 13, 2019, Canadian Minister of Foreign Affairs François-Philippe Champagne was mandated to “advance international efforts to ban the development and use of fully autonomous weapons systems.”
Despite this, Champagne and other Canadian officials do not appear to have engaged on this issue in 2020. During a week of virtual sessions hosted in September at the Geneva offices of the United Nations Convention on Certain Conventional Weapons, Canada remained silent. Not once in the past year has the Minister raised autonomous weapons when explaining Canada’s foreign policy priorities.
To be sure, 2020 was a challenging year. But the global pandemic did not stop Germany from assuming a leadership role and bringing together countries and experts from around the world for virtual events. One in April focused specifically on autonomous weapons. Another in November dealt more broadly with how new technologies challenge arms control. At neither did Canada make a meaningful contribution.
This almost total lack of engagement stands in stark contrast to Canada’s open commitment to the development of the Global Partnership on Artificial Intelligence (GPAI). Even though the United States was initially opposed, Canada and France pushed forward. Each promoted this initiative, which aims to ensure responsible uses of AI, during its G7 presidency: Canada in 2018 and France in 2019. Canada now chairs the GPAI Council for 2020-2021.
If Champagne and the Canadian government believe that autonomous weapons and responsible uses of AI are separate issues, then they are incorrect. UN discussions on autonomous weapons held since 2014 have revealed core concerns that extend well beyond weapons systems. Distill all the worry and fear and you arrive at this key question: At what point should computer systems NOT be permitted to make autonomous decisions?
This question is relevant to all applications of AI. In many ways, the talks on autonomous weapons were the first to reveal how countries understand the respective roles of human agency and algorithmic decision-making.
The militaries of several countries, including Canadian allies the United States, Australia, and the United Kingdom, are confidently developing autonomous sea vessels and unmanned ground vehicles. They are testing swarms of drones that communicate with each other and independently assess ways to achieve an objective. This confidence persists despite demonstrated limitations in some of these technologies, as well as evidence of bias.
All actors appear to agree that some human control must be retained, but there is no agreement on what “human control” means. In some definitions, humans retain oversight of computer systems; in others, humans must begin an action that the systems can then complete without further human engagement.
These definitions will carry over to commercial settings and the delivery of some government services and healthcare. Organizations, in line with government regulation, will need to determine the point at which a human should be involved in automated decision-making. As well, all involved must come to a consensus on the types of decisions that should remain the domain of humans.
The need to establish limits on autonomous weapons is acute. Systems that could soon be in operation have the potential to cause grave harm.
Without new regulation, countries will likely use autonomous weapons systems as they see fit. Existing international laws do not apply to algorithms. If humans can plausibly assign responsibility to an autonomous system, then no human can be held accountable. In that event, every effort to ensure responsible uses of AI will be undermined.
There is no responsible AI without regulation of autonomous weapons and a ban on fully autonomous systems—those with no significant human control. Canada’s active engagement in the GPAI is an opportunity to show the leadership on autonomous weapons promised more than a year ago.