When Trust Breeds Vulnerability
AI, Disinformation, and Security in the Grey Zone
On a morning in May 2023, an AI-generated image depicting an explosion near the Pentagon circulated widely online, prompting brief confusion and a momentary dip in U.S. stock markets before being debunked by U.S. authorities and major media outlets. Although entirely fake, the image appeared credible enough to demonstrate how quickly synthetic content can trigger real-world reactions before verification mechanisms can respond.
The incident lasted only minutes, but it exposed a deeper national security vulnerability—one that matters not only to the United States but to all open, digitally connected democracies, including Canada. AI-generated falsehoods can spread doubt among citizens faster than institutions can verify, correct, or respond. In today’s security landscape, speed matters: it lets influence operations thrive in the space between peace and war—often described as the grey zone—where actors pursue advantage through ambiguity, deniability, and manipulation rather than overt force.
AI-driven mis- and disinformation intensify these dynamics. By generating realistic text, images, audio, or video at scale, AI enables deceptive content to spread faster than responsibility can be assigned or trust restored. For Canada, this means that threats to national security increasingly emerge not through direct attack, but through the erosion of trust during moments of uncertainty.
A Defence Issue: Influence, Speed, and Instability
In Canada and around the world, mis- and disinformation are no longer merely social media nuisances. They have become security threats, undermining public trust precisely when institutions need it most. When people cannot distinguish verified information from intentional manipulation, trust in elections, emergency alerts, scientific advice, and public leadership erodes, creating fertile ground for grey-zone competition in which uncertainty itself becomes a strategic weapon.
Misinformation and disinformation have long been part of international competition, from Cold War-era propaganda broadcasts to covert “active measures” campaigns. What has changed is the speed and scale. Artificial intelligence has dramatically reduced the cost, time, and effort required to conduct influence operations, enabling hostile actors to overwhelm information environments faster than institutions can respond. Canada’s intelligence agencies, including the Communications Security Establishment, have repeatedly warned that hostile states employ digital influence operations to undermine democratic processes, a concern reflected in recent national cyber threat assessments.
Generative AI models can rapidly produce synthetic videos, voices, or written materials that mimic official news or policy announcements, generating thousands of tailored variants in the time it takes institutions to issue a single correction. During the early stages of Russia’s war against Ukraine, a widely reported deepfake video falsely showed President Volodymyr Zelensky calling on Ukrainian troops to surrender, illustrating how future operations may seek to blur the line between truth and fiction rather than relying solely on force.
For national defence, the risk lies not only in deception itself but in the instability it creates. Mis- and disinformation distort the information environment precisely when decisions must be made quickly, information is incomplete, and the costs of error are high. Fake alerts, fabricated military movements, or deepfake government statements can delay decision-making, complicate coordination with allies, and increase the risk of miscalculation.
A Human Security Issue: Environmental Crises and the Information Fog
AI-driven mis- and disinformation also pose significant threats to human security during environmental crises. The United Nations Development Programme has cautioned that artificial intelligence accelerates climate-related mis- and disinformation, undermining public confidence in scientific consensus and distorting public discussions on mitigation and adaptation.
Environmental crises are inherently disorienting, and opportunistic actors are quick to exploit that uncertainty. Government reporting emphasizes that deepfakes, synthetic audio, and fabricated expert commentary make it harder for the public to judge what is credible, undermining not only immediate safety but longer-term trust in environmental governance. As climate impacts intensify, AI-driven disinformation compounds risk not by harming ecosystems directly, but by weakening social and institutional resilience.
Canada is already confronting environmental disasters on a scale once thought exceptional. Recent wildfire seasons in Canada have triggered mass evacuations, degraded air quality across North America, and strained emergency services, underscoring the importance of trusted public information during crises.
An effective response during these moments relies on trusted information: evacuation notices, fire maps, air-quality alerts, and official guidance. When AI-generated falsehoods circulate alongside legitimate communications, public confidence erodes, compliance declines, and protective action may be delayed.
A Technology Governance Issue: Falling Behind the Pace of Change
While AI accelerates the spread of mis- and disinformation, governance responses remain slow and fragmented. Canada lacks a specific federal framework for AI-generated political content, and the proposed Artificial Intelligence and Data Act (AIDA) stalled when Parliament was prorogued in early 2025. Meanwhile, generative tools continue to evolve at a rapid pace.
Public concern continues to rise. Analysis by Policy Horizons Canada highlights growing public anxiety about AI misuse and skepticism about existing safeguards. At the same time, both government and civil society lack uniform standards for verifying AI-generated content, responding to deepfake incidents, or coordinating across jurisdictions. Detection tools lag behind increasingly sophisticated generative systems, widening the gap between technological capability and institutional response.
When Trust Becomes a Vulnerability
Canada’s vulnerability lies less in institutional weakness than in institutional trust. High public confidence in government, science, and media means that false information designed to appear authoritative can spread quickly and cause harm before it is corrected. In a small, highly connected information environment—with responsibilities divided across multiple levels of government—uncertainty can slow response and complicate attribution. Canada’s careful, evidence-based approach to public action, normally a strength, can in this context give disinformation campaigns time to take hold, particularly during crises when speed and clarity matter most.
Trust as the New Battleground
AI has not created misinformation, but it has transformed its strategic impact by accelerating its speed, expanding its reach, and intensifying its psychological effects. Across national defence, emergency response, and public decision-making, one lesson stands out: trust now erodes faster than institutions can verify, correct, or restore it.
This dynamic lies at the heart of grey-zone competition. AI-driven mis- and disinformation exploit ambiguity and doubt not only in geopolitical rivalries abroad, but also during domestic crises at home. The result is a weakening of public trust in elections, emergency response, and government decision-making, all without the use of open force.
In this landscape, the battleground is not territory but trust. Canada faces a choice: allow AI-driven grey-zone tactics to outpace institutional adaptation, or recognize that safeguarding trust is now a core component of national security. The challenge is not only defending facts, but maintaining the public’s capacity to discern—and believe—when something is true.
Ishmael Philip Carrey was a Research Intern at Project Ploughshares through the Technology Governance Initiative at the Balsillie School of International Affairs, with support from Mitacs. This article forms part of a broader research collaboration supported by the Department of National Defence’s MINDS (Mobilizing Insights in National Defence and Security) program, in partnership with the University of Guelph, focused on public engagement at the intersection of environmental risk and national security in the grey zone.
Published in The Ploughshares Monitor Spring 2026
