At first glance, it might appear that seven years of international discussions on autonomous weapons have had few concrete results. At the time of writing, the third session of the 2021 United Nations (UN) Group of Governmental Experts on emerging technologies in the area of lethal autonomous weapons systems (LAWS) was scheduled to take place in early December in Geneva, Switzerland. The most that is expected from these meetings is a proposal to continue talking.
If you are feeling anxious about the state of global affairs, you are not alone. At Project Ploughshares, we are keenly aware of the multiple, overlapping crises facing the world today. So are millions around the world, increasingly concerned about the complexity of the formidable challenges before us – and about our collective ability as an international community to craft credible and effective responses.
Canada is in dire need of a solid diplomatic strategy that responds to the growing nexus between emerging technologies and national security. Newly appointed foreign minister Mélanie Joly would do well to prioritize the development of robust and forward-looking policies to tackle tech-related security concerns, as is increasingly the case in the foreign ministries of many countries, including key Canadian allies and would-be adversaries.
There is a growing global consensus that all AI technology should exhibit the characteristics of transparency, justice and fairness, non-maleficence, and privacy. While a specific blueprint of responsible AI in defence applications has not yet emerged, shared commitments to reliable technologies that operate with an appropriate role for human judgement and experience are increasingly accepted.
I think I finally REALLY get it. I’ve been reading analysis of autonomous weapons and AI-powered tech by Ploughshares Senior Researcher Branka Marijan for years, but I’ve never completely understood why so many individuals, organizations, and even countries are totally against weapons that can target and kill humans without humans as part of the decision-making process.
When I started with Ploughshares in 2015, I did a scan of our work and saw that new technologies were transforming and amplifying existing security concerns across our programs—outer space security, arms control, the abolition of nuclear weapons, the nature and causes of armed conflict.
Over the past few months, experts have been surprised by the media attention given to the Turkish-made Kargu-2 kamikaze drone or loitering munition. Everyone, it seems, wants to know if the use of the Kargu-2 in Libya in March 2020 was the first instance of an autonomous weapon being used in conflict.
Responsible uses of artificial intelligence (AI) have featured prominently in recent national discussions and multilateral forums. According to the Organisation for Economic Co-operation and Development (OECD), 60 countries have multiple initiatives and more than 30 have national AI strategies that consider responsible use. However, these initiatives have generally not yet addressed the use of AI for national defence.
As various ministries of the federal government, as well as relevant ministries at the provincial level, seek to develop policy and procedures on the use of AI, they will need clear guidance on the risks associated with different AI applications and how they should be regulated. So far, no Canadian agency has taken the lead in providing the guidance needed to plan for high-risk AI use, particularly in security and defence applications.
To no one’s surprise, United Nations discussions on the regulation of autonomous weapons have stalled. Last year, the global pandemic caused delays, with only one week of discussions—partly in Geneva, Switzerland, and partly virtual—taking place from September 21-25. November’s annual meeting of the Convention on Certain Conventional Weapons (CCW), at which the 2021 schedule for discussions on autonomous weapons would have been set, was cancelled.