Will Canada step up its diplomatic efforts on military AI?

February 16, 2023

By Branka Marijan

With the support of the government of South Korea, the government of the Netherlands is hosting the first global summit on responsible military artificial intelligence (AI) at The Hague on February 15 and 16. The normative landscape for military AI is being drawn, and several states are eager to hold the pen. Canada, however, does not seem to be in this group, even though, given its expertise in the responsible development of AI, its engagement on defence applications of AI is long overdue.

 Consider the following:

  • As a champion, with France, of the Global Partnership on Artificial Intelligence, Canada aims to ensure the responsible development of AI and could build on this activity to become a norm setter in military AI.
  • In June 2022, the Canadian government proposed the Artificial Intelligence and Data Act, which aims to “protect individuals against a range of serious risks associated with the use of artificial intelligence systems, including risks of physical or psychological harm or biased output with adverse impacts on individuals.”
  • Canada ranks highly in terms of talent, innovation, and investment in AI technologies. According to the Global AI index of Tortoise Media, Canada ranks 4th overall, behind only the United States, China, and the United Kingdom.

But Canada has made only scant contributions to most international discussions and policy development. At meetings on autonomous weapons held by the United Nations Convention on Conventional Weapons, for example, Canada has been largely silent, very rarely making statements or asking questions of experts.

While the United States recently updated its directive on autonomous systems, initially published in 2012, Canada has never released a formal position on autonomous weapons. It is not clear whether the mandate given in December 2019 to then-Minister of Foreign Affairs François-Philippe Champagne still applies to the current Minister of Foreign Affairs, Mélanie Joly.

The 2019 mandate instructed the Minister of Foreign Affairs to support efforts to ban autonomous weapons. This position would have distinguished Canada from most of its allies, which aim to steer the regulatory discussion to voluntary mechanisms, such as codes of conduct. However, the mandate was never implemented. The subsequent paucity of contributions to international discussions is likely a signal that Canada is now unwilling to lead efforts to develop legally binding instruments.

Such a lack of clarity hinders the work of policy analysts at Global Affairs Canada and the Department of National Defence. Without a clear political direction, it is not surprising that the extent of meaningful Canadian engagement in relevant discussions has been limited.

Some states, including Canada’s principal ally, the United States, have started to employ increasingly autonomous systems in conflict situations. The United States has used AI in a live operational kill chain, which presumably means that the AI was used to select and engage human targets. The conflict in Nagorno-Karabakh saw the significant deployment of loitering munitions. And the war in Ukraine is now offering opportunities to test AI software for targeting, assessing battle damage, and predicting possible attacks. Battlefield use of AI is accelerating.

In this environment, a robust regulatory response is urgently needed. Fortunately, the window of opportunity has not yet closed.

Even though some countries have put significant effort into developing their policies and thinking on the application of AI in warfare, most national policies are not yet fully developed. Thus, the opportunity to contribute to the emergence of norms and regulations still exists. But Canada must act quickly if it wants to help build capacity and shape the discussions and norms that emerge.

I have been observing Canada’s lack of engagement on the military use of AI for eight years. It is frustrating to watch because Canada has the expert community, including academics, human rights researchers, and a willing industry, that want to engage on these issues.

Clearpath Robotics, a Waterloo-based company, was the first in the world to support a ban on autonomous weapons systems that function without meaningful human control. A number of Canadian academics and civil society organizations have continuously engaged on these issues, writing open letters to the government and publishing research and policy recommendations.

Yet Canada seems to lack the political will and a serious commitment of government resources. If these are not forthcoming, Canada will be a follower and lose its chance to shape a diplomatic response to the military use of AI. This missed opportunity could have serious consequences if, in the not-too-distant future, the introduction of AI-enabled systems threatens global stability.
