Military research and development in recent years has focused on artificial intelligence (AI) tools that gather and analyze data quickly. Combined with improved sensors, these tools make possible faster and seemingly more accurate targeting of enemy positions. Now this R&D is being operationalized. Last September, according to Secretary of the Air Force Frank Kendall, the United States Air Force used AI for the first time to help identify a target or targets in “a live operational kill chain.”
Two titans from the Cold War era seem set to go another round, this time over the prospect of Ukraine’s membership in the North Atlantic Treaty Organization (NATO), which the United States calls a sovereign Ukrainian decision and Russia opposes vehemently. Whatever the outcome of the current standoff, another confrontation between the United States and Russia that merits closer attention is brewing — one that may fundamentally reshape the US-Russia security relationship in the not-so-distant future.
At first glance, it might appear that seven years of international discussions on autonomous weapons have had few concrete results. At the time of writing, the third session of the 2021 United Nations (UN) Group of Governmental Experts on emerging technologies in the area of lethal autonomous weapons systems (LAWS) was scheduled to take place in early December in Geneva, Switzerland. The most that is expected from these meetings is a proposal to continue talking.
There is a growing global consensus that all AI technology should exhibit the characteristics of transparency, justice and fairness, non-maleficence, and privacy. While a specific blueprint of responsible AI in defence applications has not yet emerged, shared commitments to reliable technologies that operate with an appropriate role for human judgement and experience are increasingly accepted.
I think I finally REALLY get it. I’ve been reading analysis of autonomous weapons and AI-powered tech by Ploughshares Senior Researcher Branka Marijan for years, but I’ve never completely understood why so many individuals, organizations, and even countries are totally against weapons that can target and kill humans without humans as part of the decision-making process.
Responsible uses of artificial intelligence (AI) have featured prominently in recent national discussions and multilateral forums. According to the Organisation for Economic Co-operation and Development (OECD), 60 countries have multiple AI initiatives and more than 30 have national AI strategies that consider responsible use. However, the use of AI for national defence has generally not yet been tackled.
As various ministries of the federal government, as well as relevant ministries at the provincial level, seek to develop policy and procedures on the use of AI, they will need clear guidance on the risks associated with different AI applications and how they should be regulated. So far, no Canadian agency has taken the lead in providing the guidance needed to plan for high-risk AI use, particularly in security and defence applications.
Join us for a free, virtual discussion focused on the weaponization of artificial intelligence and technical, ethical, military and security concerns.
The United States is at the forefront of advancements in autonomous swarming technologies. A U.S. government-appointed panel has even said that the country has a “moral imperative” to develop weapons …
During a week of virtual sessions hosted in September at the Geneva offices of the United Nations Convention on Certain Conventional Weapons, Canada remained silent. Not once in the past year has Canada’s Minister of Foreign Affairs mentioned autonomous weapons when explaining Canada’s foreign policy priorities.