The United States has relaxed its drone export policy, calling into question the relevance of the existing arrangement guiding drone exports, the Missile Technology Control Regime (MTCR). Countries that were not allowed to purchase some U.S. drones under the previous understanding of MTCR guidelines now face fewer restrictions. While the United States is not likely to export lethal drones …
Why a blog about a recently released novel? Because, as many readers know, fiction is often a compelling way to capture our concerns about the future—and the present. In the techno-thriller Burn-In: A Novel of the Real Robotic Revolution, authors P.W. Singer and August Cole examine the many roles that technology can assume in 21st-century national and global security and in …
“Security theatre” is now being used to describe security measures that provide a false sense of safety by only seeming to address specific concerns. Typically expensive, such measures make life more difficult for ordinary people and actually decrease overall security by taking resources and attention away from effective responses. Moreover, some of these measures, like social profiling and the targeting of specific minority populations, can even pose dangers to human security if they are abused.
By early June, 38 countries had turned to a variety of technologies, including smartphone applications, location data analytics, wearable technologies, and even drones and unmanned ground vehicles to monitor the spread of COVID-19 and to control the behaviour of citizens during the pandemic. The use of some of this tech has raised concerns among civil libertarians.
This pandemic has in fact brought into sharper focus the choices that are made about where resources are allocated, which technologies are developed, and for what purposes. Such choices are, and will remain, particularly important when it comes to applications of AI for national and global security.
As global anxiety grows about the profound impact of the COVID-19 crisis, it may seem that no stone should be left unturned to resolve it. But governments’ use of technology presents clear risks of misuse and abuse. As the crisis unfolds, the methods used by states to tackle it will demand careful public scrutiny, rooted in legitimate expectations of enhanced transparency.
As more surveillance technologies are deployed in this fight, a broader conversation has begun on the need to balance the demands of public health with the preservation of privacy and human rights.
Disarmament and arms control have not featured prominently, if at all, in mandate letters to Canada’s foreign ministers in many years. But at the end of 2019, Canadian Foreign Minister François-Philippe Champagne was given a new mandate to “advance international efforts to ban the development and use of fully autonomous weapons systems.”
It sounds like the stuff of science fiction. Enabled by significant advances in artificial intelligence (AI) and robotics, fully autonomous weapons systems with the ability to select targets and employ lethal force with no human involvement—also known as killer robots—may soon emerge.
Police forces were not forthcoming about their use of Clearview AI and facial-recognition technology in general, until a February report revealed that Canada was the largest market for Clearview AI technology outside the United States. The technology seems to have spread quietly, sometimes without the knowledge of those in charge.