The Autonomous Conductor?

AI and the Future of Command

Few people outside defence circles spend much time thinking about military command and control, or C2. Yet the way new technologies are transforming C2 should concern a much wider audience than defence analysts alone.

To avoid a jargon-heavy explanation of C2, consider the analogy of an orchestra. The conductor acts as the commander, setting the tempo and style of a piece, while relying on the musicians—the units—to execute specific notes. Successful performance requires both centralized direction and decentralized skill. At times, if conditions allow, the conductor may delegate greater autonomy, enabling musicians to innovate within the broader framework the conductor sets. Unlike a concert hall, however, military contexts involve life-and-death decisions, and increasingly, countries are directly or indirectly introducing artificial intelligence (AI) into front-line decision-making.

As these technologies migrate from research and development into active theatres of operation, the “orchestra” of command faces a profound systemic shift. The drive for algorithmic speed—the ability to process sensor data and designate targets in seconds—threatens to compress the space available for human judgment and moral deliberation. While military planners argue that AI is necessary to manage the data-saturated “hyperwar” of the future, this evolution risks decoupling the chain of command from the chain of accountability and may heighten escalatory dynamics.

In the future battlespace, if the conductor is increasingly sidelined, they risk becoming a spectator to a lethal process they can no longer effectively lead or interrupt. For the international community, the challenge is therefore not purely technical. It requires deeper understanding of how decision-making, human agency, and contextual human judgment are transformed when AI-enabled tools and decision-support systems are integrated into military operations. As international discussions on responsible military AI and autonomous weapons struggle amid geopolitical uncertainty, the need for a broader suite of governance responses grows more urgent.

Russia’s “Svod” System

Part of the urgency surrounding international discussions stems from the rapid evolution of battlefield management systems in contemporary conflict zones, most notably Ukraine. Analysts have pointed to Russia’s reported rollout of the “Svod” Tactical Situational Awareness Complex as an illustration of how digital integration is reshaping command structures in practice.

Svod appears designed to bring together data from multiple sensors, drones, reconnaissance assets, and battlefield reports into a more unified operational picture for commanders and forward units. Russia is not a pioneer in this area. Indeed, the United States and several other advanced militaries are already using and developing even more sophisticated systems.

In theory, such systems promise faster targeting cycles and improved coordination across dispersed forces; proponents also often promise greater civilian protection and more accurate targeting. In practice, however, they reflect a broader shift toward more distributed and digitally mediated forms of command. As analyst Kateryna Bondar notes, Svod is less centralized than some of the systems the United States and other Western countries have sought to build under so-called joint concepts, that is, efforts to connect all military branches into a unified AI-enabled network for mission execution.

This evolution is partly adaptive. Russian forces have faced significant battlefield pressures that exposed weaknesses in traditional, highly centralized command models. Notably, military analysts point to Russian officers’ inability to make timely decisions in individual engagements as one of the factors motivating the turn to technical solutions. Systems like Svod aim to shorten the sensor-to-shooter loop and enable faster tactical responsiveness at lower echelons. Yet the same features that enhance speed and responsiveness also introduce new risks.

First, greater automation and data fusion can create automation bias, in which human operators over-trust machine-generated outputs, particularly under time pressure. Researchers Marta Bo and Jessica Dorsey also point to a cognitive action bias, that is, the “human tendency to take action, even when inaction would logically result in better outcome.” Second, the fragmentation of decision-making authority across networked units can blur lines of responsibility when AI-enabled tools shape targeting or operational recommendations. Third, highly networked command systems may create new vulnerabilities to spoofing, cyber intrusion, or data manipulation, risks that are amplified in contested information environments. In short, systems like Svod do not merely make command faster; they make it structurally different.

Moreover, the Ukrainian context continues to demonstrate the accelerating role of technology in warfare, with new systems being tested and deployed even as efforts to end the conflict, which has devastated the country and its population, repeatedly falter. In such a context, the sustained operational tempo and cognitive load placed on commanders, often over years of continuous conflict, introduce additional human and organizational challenges that are driving the search for technical solutions to the pressures commanders carry.

The Human Role Under Pressure

Much of the policy debate on autonomous weapons and military AI has focused on whether machines will make lethal decisions independently. This remains a critical concern. However, AI-enabled decision-support systems such as those discussed above add a subtler layer: they influence, rather than formally replace, human decision-makers.

Decision-support tools can narrow perceived options, prioritize certain targets, or shape situational awareness in ways that materially affect the human commander’s judgment. As researchers at the Center for War Studies at the University of Southern Denmark note, “when a system presents a human with one option of a set of limited options, it makes it challenging to choose other pathways.” Over time, this can produce what might be described as a form of cognitive compression, where the human remains nominally in charge but operates within increasingly machine-shaped parameters.

Reporting from Gaza has illustrated the risk of humans rubber-stamping the outputs of AI-enabled decision-support systems. Some targeting decisions were reportedly made in as little as 20 seconds, with human review in practice reduced to confirming that the identified individual was male.

The orchestra analogy again becomes instructive. The risk is not only that the conductor disappears, but that they continue to stand on the podium while the tempo, score, and cues are increasingly set elsewhere.

For militaries operating under intense time pressure and information overload, the appeal of AI-enabled systems is clear. But the cumulative effect may be to erode human agency not only through a single handover to autonomy, but through incremental shifts in how decisions are framed, accelerated, and executed.

Implications for International Security

These developments carry significant implications for strategic stability and responsible military AI governance.

Faster targeting cycles and compressed decision timelines can increase the risk of inadvertent escalation, particularly in crises where ambiguity is high and human verification windows shrink. Distributed, AI-enabled command systems may also complicate traditional accountability frameworks under international humanitarian law, especially where responsibility becomes diffused across human-machine teams.

For middle powers such as Canada, states that often emphasize responsible technology use, the challenge is twofold. First, they must understand the operational realities driving AI adoption in military contexts and how various systems interact, especially in coalition environments. Second, they must help shape norms, confidence-building measures, and governance frameworks that preserve meaningful human agency in increasingly automated battlespaces. The latter is particularly necessary given that, for now, the United States and China appear disengaged from many of the governance efforts aimed at establishing guardrails on the expanding role of AI in military contexts.

Keeping the Human Conductor in Control

The future of military command is unlikely to be fully autonomous, nor will it remain comfortably human-centric. Instead, it will be defined by increasingly complex human–machine partnerships.

The central policy question, therefore, is not whether AI will enter command and control (it already has), but whether governance frameworks, operational doctrine, and system design will evolve quickly enough to ensure that human judgment remains both meaningful and accountable.

If the orchestra is already playing faster, the task ahead is to ensure the conductor can still shape the music and, when necessary, bring it to a halt. It is also to consider whether the change in tempo is actually necessary in a particular context, or is the result of what Professor Zena Assaad calls a “fabricated fear of falling behind.” As research on orchestras has found, conductors who exert greater control also produce superior results. Ultimately, it is government decisions that determine the score, and decision-makers must be clear-eyed about how their choices will shape societal outcomes.

Published in The Ploughshares Monitor Spring 2026