The Machines Aren’t Waiting

Why efforts to rein in autonomous weapons have stalled – and why 2026 is the time to act

On 1 June this year, 117 Ukrainian drones swarmed deep into Russian territory, striking airbases that house the Kremlin’s nuclear-capable long-range bombers. The audacious attack, dubbed Operation Spiderweb, captured global attention, not only for its ingenuity but for what it revealed about the changing character of modern warfare.

Amid the headlines, one detail deserved far more notice: the drones’ “terminal guidance”, the autonomous last-mile capability that allows a system to finish the job even if its human operator cannot. Humans still chose the targets. But once a drone locked on, it could keep tracking and strike independently, even if communications were jammed or severed. In other words, autonomy is no longer speculative; it is already shaping the battlefield.

Diplomacy, however, has not kept pace.

The Missing Political Will

For more than a decade, governments, experts, and campaigners have met in Geneva to debate how machines capable of selecting and engaging targets without human intervention should be governed. Yet after countless working papers and earnest interventions, there is still no treaty, no ban, and only a fragile semblance of consensus on even the most basic definitions.

The failure is not for lack of foresight, or even of nuanced understanding of the issues at hand. Governments and civil-society groups have long recognised that AI-enabled weapons pose profound ethical and strategic challenges. Leading technologists, including Stuart Russell, Yoshua Bengio, and even Elon Musk, have warned about the dangers of these systems.

What has been missing is not insight but the political will, and perhaps the imagination, to turn it into rules. As technology races ahead and the mandate of the UN’s Convention on Certain Conventional Weapons (CCW) nears its end, 2026 is shaping up to be a decisive year. Governments will have to decide whether to persist with a forum that has likely exhausted its usefulness, shift the debate to a new institutional home inside or outside the UN, or fall back on a patchwork of voluntary pledges. None of these routes offers much certainty. The only certainty is that technological innovation, and the battlefield, will not pause for diplomacy.

The Stalled Diplomacy

The first serious discussions began in 2014 under the CCW, the same forum that produced the protocols on landmines and blinding laser weapons. At the time, the notion of “killer robots” felt speculative, closer to science fiction than to an imminent military reality. Delegates debated definitions, with some states pledging not to pursue more autonomous systems and many still uncertain about the utility of AI in military applications. Large language models, the technologies now shaping global debates, were still years away. The level of autonomy achievable in the near future also remained highly uncertain.

A decade later, the reality is far less theoretical: loitering munitions capable of identifying and striking targets autonomously have already appeared in battle. AI decision-support systems now assist with targeting, and military leaders even turn to familiar consumer AI tools. U.S. Major General William “Hank” Taylor has revealed that he has a close relationship with ChatGPT, OpenAI’s conversational AI chatbot, which he uses to help inform his decision-making.

Yet progress at the CCW has been glacial. Its consensus rule has, in effect, handed every delegation a veto—one that major military powers such as Russia have used enthusiastically, while others have quietly welcomed the cover. The result is well-intentioned paralysis: reports are drafted, chairs are praised, and diplomats thank one another for their constructive spirit.

Outside the formal chamber, side events brim with substantive debate, as experts and governments grapple with the technology’s complexity and its limits. But little of that candour ever reaches the official proceedings. To be fair, the CCW has served a useful purpose: as an incubator, it has helped shape a shared understanding among states that a two-tier approach is needed—one tier for clear prohibitions, and another for systems that require varying levels of regulation. But it has not produced the consensus needed to open a negotiating pathway.

Civil society, led by the Campaign to Stop Killer Robots, has filled some of the void, pressing for a legally binding ban. The Campaign, which Ploughshares joined in 2015, built on previous successful civil society efforts, such as the 1990s landmine campaign that produced the Ottawa Treaty. But the audience, and the issues, are far more challenging this time. The technology is no longer confined to a handful of weapon systems; it now permeates everything from sensors and drones to decision-support tools and command networks. Few states are prepared to prohibit capabilities they increasingly regard as the future of warfare. Civil society, too, will need to think creatively about how to advance new proposals.

The CCW’s inability to deliver concrete results has spurred a search for alternatives. Regional forums, from Latin America to Africa, have begun issuing joint statements calling for regulation. The United Nations General Assembly, which operates with greater flexibility than the CCW, provides an additional venue for moral and political pressure and has adopted several resolutions encouraging further dialogue on autonomous weapons.

The Shifting Battlefield

The urgency of the issue is being driven not by negotiators but by soldiers, coders, and private companies. As noted at the outset, the war in Ukraine has become a grim laboratory for increasingly autonomous systems. Both sides have employed AI-assisted drones for surveillance, targeting, and even attack. Speaking to The Guardian, Mykhailo Fedorov, the 34-year-old deputy prime minister of Ukraine and minister of digital transformation, put it plainly: “We strive for full autonomy.”

And this is only part of the story. The United States and China are widely believed to possess far more advanced capabilities and the capacity to develop more sophisticated autonomous systems at far greater scale. Consider the U.S. Replicator initiative, launched in 2023, which aims to field thousands of uncrewed systems to offset China’s advantage in mass, whether in personnel or equipment.

Questions remain about whether Replicator has met its initial targets: by August of this year, the Pentagon was expected to have fielded thousands of systems, yet reports suggest the number delivered is in the hundreds. Still, the programme is increasingly seen within the U.S. military as a prototype: a test bed for the rapid development, deployment, and operational integration of autonomous systems at scale.

For its part, China has developed the Jiu Tian, or Nine Heavens, a mothership drone with a range of 6,400 kilometres that can carry six tonnes of munitions and release up to 100 autonomous drones. China has also tested multiple swarm systems and unveiled new uncrewed platforms during its 3 September military parade marking the anniversary of victory in the Second World War. It has even experimented with “drone swarms and robot wolves” in simulated urban-warfare exercises, pairing humans and machines in integrated teams.

This diffusion is not limited to states. Cheap sensors, open-source software, and off-the-shelf drones have drastically lowered the barrier to entry. As small, and even military-grade, drones become more accessible, non-state armed groups are acquiring capabilities once reserved for national militaries. At least nine African countries have already seen such groups deploy drones in conflict.

The Road to 2026

If 2025 was a year of muddling through, 2026 will be a year of decisions. It will also reveal what remains of the patchwork of governance initiatives built over the past decade. The United States and several allies are now ambivalent about the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, a non-binding document outlining principles of human oversight, reliability, and accountability. More than 60 countries have endorsed it, but there is no sign that the current U.S. administration intends to support it.

Recent UN General Assembly resolutions on autonomous weapons and responsible military AI were noticeably more muted in 2025 than in the previous year. These issues are also being raised in other UN forums, including human rights bodies, but those venues are likely to have limited impact given that defence ministries “own” the agenda. No REAIM summit was held in 2025; the next has been rescheduled for February 2026 and will be hosted by Spain. With the dialogue expected to shift into a more informal format in Geneva after the summit, diplomats are now grappling with a larger question: what comes next, and what is the most effective pathway forward?

At the same time, strategic competition between the United States and China continues, and the current U.S. administration remains unpredictable in its approach. China, for its part, has oscillated between supporting certain regulatory measures and withholding its endorsement of the 2024 non-binding “blueprint for action” adopted at the REAIM Summit in Seoul. China’s stance at the 2026 REAIM Summit will be an important indicator of how it views the future of this process.

Meanwhile, countries from the Global South are pushing for a legally binding framework, something closer to an arms-control treaty than a declaration of intent. Their argument is straightforward: voluntary principles rarely constrain powerful states. As one diplomat observed years ago during CCW discussions, these technologies will most likely be tested and deployed in countries of the Global South. This divide between voluntary principles and binding rules will shape the diplomatic agenda in the year ahead.

Outside formal negotiations, a new ecosystem of norms is taking shape. Financial firms committed to responsible investment are adopting guidelines for defence-technology investments. AI researchers are calling for “fail-safe” mechanisms and testing standards. The International Committee of the Red Cross is developing guidance on how existing humanitarian law applies to autonomous systems. These efforts may not amount to treaties, but they are building the scaffolding of governance.

Still, time is short. The diffusion of AI in warfare is not waiting for diplomats to agree on commas. Algorithms are already embedded in target-recognition systems, logistics planning, and threat assessment. And the next leap in autonomous technologies is likely to come from the estimated 17,600 startups across NATO countries working on dual-use technologies—innovations with both civilian and military applications.

The political challenge lies in reconciling three competing imperatives. Militaries want operational advantage; technologists want freedom to innovate, though some want clearer guidelines than others; and societies want assurance that machines will not decide questions of life and death. None will get exactly what they want. The best that 2026 might deliver is a framework that keeps humans legally and ethically responsible, even as their control grows thinner.

For Canada and other middle powers, the coming year offers a chance to shape that conversation. With fewer vested interests in AI-enabled warfare and a history of bridging divides, such countries can help translate broad principles into practical commitments. It is a narrow but vital diplomatic space, the sort where moral leadership, not military might, carries weight.

Whether that opportunity is seized will depend on how policymakers read the moment. The world has stumbled into every major arms race assuming there was still time to negotiate. It would be a tragic irony if, in the age of intelligent machines, humanity’s problem were not ignorance, but delay.

Published in The Ploughshares Monitor Winter 2025