There is a growing global consensus that all AI technology should exhibit the characteristics of transparency, justice and fairness, non-maleficence, and privacy. While a specific blueprint for responsible AI in defence applications has not yet emerged, shared commitments to reliable technologies that operate with an appropriate role for human judgement and experience are increasingly accepted.
I think I finally REALLY get it. I’ve been reading analysis of autonomous weapons and AI-powered tech by Ploughshares Senior Researcher Branka Marijan for years, but I’ve never completely understood why so many individuals, organizations, and even countries are totally against weapons that can target and kill humans without a human as part of the decision-making process.
When I started with Ploughshares in 2015, I did a scan of our work and saw that new technologies were transforming and amplifying existing security concerns across our programs—outer space security, arms control, the abolition of nuclear weapons, the nature and causes of armed conflict.
Over the past few months, experts have been surprised by the media attention given to the Kargu-2, a Turkish-made kamikaze drone or loitering munition. Everyone, it seems, wants to know whether the use of the Kargu-2 in Libya in March 2020 was the first instance of an autonomous weapon being used in conflict.
Responsible uses of artificial intelligence (AI) have featured prominently in recent national discussions and multilateral forums. According to the Organisation for Economic Co-operation and Development (OECD), 60 countries have multiple initiatives and more than 30 have national AI strategies that consider responsible use. However, these efforts have generally not yet addressed the use of AI for national defence.
As various federal ministries, as well as relevant ministries at the provincial level, seek to develop policies and procedures on the use of AI, they will need clear guidance on the risks associated with different AI applications and on how those applications should be regulated. So far, no Canadian agency has taken the lead in providing the guidance needed to plan for high-risk AI use, particularly in security and defence applications.
To no one’s surprise, United Nations discussions on the regulation of autonomous weapons have stalled. Last year, the global pandemic caused delays, with only one week of discussions—partly in Geneva, Switzerland, and partly virtual—taking place from September 21 to 25. November’s annual meeting of the Convention on Certain Conventional Weapons (CCW), at which the 2021 schedule for discussions on autonomous weapons would have been set, was cancelled.
Join us for a free, virtual discussion on the weaponization of artificial intelligence and the technical, ethical, military, and security concerns it raises.
The United States is at the forefront of advancements in autonomous swarming technologies. A U.S. government-appointed panel has even said that the country has a “moral imperative” to develop weapons …
According to a recent report by Canada’s privacy commissioner Daniel Therrien and three provincial counterparts, Clearview AI has broken Canada’s privacy laws. Therrien told reporters that the company’s technology and …