There is a growing global consensus that all AI technology should exhibit the characteristics of transparency, justice and fairness, non-maleficence, and privacy. While a specific blueprint of responsible AI in defence applications has not yet emerged, shared commitments to reliable technologies that operate with an appropriate role for human judgement and experience are increasingly accepted.
I think I finally REALLY get it. I’ve been reading analysis of autonomous weapons and AI-powered tech by Ploughshares Senior Researcher Branka Marijan for years, but I’ve never completely understood why so many individuals, organizations, and even countries are firmly opposed to weapons that can target and kill people without a human in the decision-making process.
Responsible uses of artificial intelligence (AI) have featured prominently in recent national discussions and multilateral forums. According to the Organisation for Economic Co-operation and Development (OECD), 60 countries have multiple AI initiatives and more than 30 have national AI strategies that consider responsible use. However, these efforts have generally not yet addressed the use of AI for national defence.
As various ministries of the federal government, as well as relevant ministries at the provincial level, seek to develop policy and procedures on the use of AI, they will need clear guidance on the risks associated with different AI applications and how they should be regulated. So far, no Canadian agency has taken the lead in providing the guidance needed to plan for high-risk AI use, particularly in security and defence applications.
Join us for a free, virtual discussion focused on the weaponization of artificial intelligence and technical, ethical, military and security concerns.
The United States is at the forefront of advancements in autonomous swarming technologies. A U.S. government-appointed panel has even said that the country has a “moral imperative” to develop weapons …
During a week of virtual sessions hosted in September at the Geneva offices of the United Nations Convention on Certain Conventional Weapons, Canada remained silent. Not once in the last year has Canada’s Minister of Foreign Affairs focused on autonomous weapons when explaining Canada’s foreign policy priorities.
Over several years of discussions on autonomous weapons at the United Nations Convention on Certain Conventional Weapons (CCW), a number of arguments against their regulation have surfaced. Some seem intentionally misleading, while others are out of touch with the rapid development of emerging technologies and with current trends in academic research and analysis.
Militaries are expanding their research and development of artificial intelligence (AI) and are looking to deploy AI systems. In early August of this year, the U.S. Defense Advanced Research Projects Agency announced that later in the month a human fighter pilot would face off against an AI algorithm in virtual combat.
The pandemic has brought into sharper focus the choices that are made about where resources are allocated, which technologies are developed, and for what purposes. Such choices are, and will remain, particularly important when it comes to applications of AI for national and global security.