Q&A: AI in decision support systems
Dr. Anna Nadibaidze is a postdoctoral researcher in international politics at the Centre for War Studies, University of Southern Denmark. Her work explores the military applications of artificial intelligence (AI) and their implications for international security. She holds a Ph.D. in political science from the University of Southern Denmark, an M.Sc. in International Relations from the London School of Economics, and a B.A. from McGill University.
This exchange focuses on AI-enabled decision support systems (DSS), emerging governance challenges, and the shifting terrain of military AI. It was prompted by discussions at meetings attended by both Anna and Branka, including the Responsible AI in the Military Domain (REAIM) conferences held in The Hague, Netherlands, and Seoul, Republic of Korea. It has been edited for clarity and conciseness.
Branka Marijan (BM): Anna, when did your interest in the military uses of artificial intelligence begin?
Anna Nadibaidze (AN): My work on AI in the military domain really began when I joined the European Research Council-funded project AutoNorms (short for “Weaponised AI, Norms and Order”) in March 2021. My Ph.D. research as part of AutoNorms focused on Russia’s practices in developing weaponized AI and monitoring the global governance debate on autonomous weapon systems.
Just as I started my Ph.D., a United Nations (UN) report stated that fully autonomous weapon systems were used for the first time in an armed conflict (the Libyan civil war). Since then, there has been a lot going on, including, of course, Russia’s full-scale invasion of Ukraine and the Responsible AI in the Military Domain Summits. With all these ongoing empirical and theoretical developments, my interest in the topic has only grown.
BM: Since you began working on this topic, what changes — political, technological, or conceptual — have stood out?
AN: First, the debate has been broadening to consider different uses of technologies labeled as AI in warfare, beyond the existing extensive focus on autonomous weapon systems. Many researchers and analysts, myself included, are now more interested in how human actors intend to work with AI technologies to perform military-related tasks, especially those involving force, than in how so-called “killer robots” will replace humans. This includes the use of AI DSS to inform military targeting decisions.
Second, in recent years relatively more information has become available on empirical developments in this area, not least because of the technological innovation happening as part of Ukraine’s defence against Russia’s invasion, but also as part of broader military-technological trends around the world.
Third, the governance debate has visibly shifted away from pursuing a potential path to global arms control toward the “responsible AI” framework, which prioritizes non-legally binding measures such as sets of principles and standards.
Finally, I would like to note a general tendency around the world to push for the integration of AI into various aspects of the economy, politics, and daily life, without always engaging in an assessment of where it is appropriate to use AI. Technologies are often seen as magical solutions to complex phenomena — including warfare.
BM: AI-based decision support systems are less visible than autonomous weapons but no less influential. What challenges do they pose for international governance?
AN: The major challenge for me is that AI DSS can be used as part of various tasks and steps in the complex and multidimensional military decision-making process. But their exact role is not always easy to track because, while they are officially meant to be tools, they can inform military personnel’s decision-making both directly and indirectly.
While humans officially remain the ultimate decision-makers in the use of force, they might (over)rely on AI DSS or (over)trust the algorithmic output. If something goes wrong, how can we ensure the accountability and responsibility required by the laws of armed conflict? Proper guidance on the use of AI DSS, guidance that gives humans the opportunity to exercise agency, is key but difficult to ensure in practice.
BM: From Ukraine to Gaza, how are recent conflicts shaping our understanding of how emerging technologies are used and misused in war?
AN: We have more information and reporting about how AI and other emerging technologies are used in recent and ongoing armed conflicts. While we should be careful with information we cannot always fully verify, empirical developments from these battlefields can reveal some general trends, such as concerns about the increasing speed of decision-making and over-trust in AI DSS outputs in ways that are legally and ethically unsound, and not necessarily strategically beneficial either.
At the same time, in my research I do not see technologies as some “outside” force or influence inevitably affecting humans; I try to consider the societal, political, and institutional contexts within which AI systems are developed and used. So, considering the differences between conflicts and their broader contexts is key for developing an assessment of appropriate and inappropriate uses of AI in the military, in my view.
BM: Given deepening geopolitical rivalries, are we seeing any realistic pathways for normative frameworks of arms control when it comes to military AI?
AN: Currently, the chances for a new, legally binding instrument seem slim, unless such a measure is adopted by a restricted group of states, e.g., those that support prohibitions on fully autonomous weapons and/or restrictions on other uses of AI and autonomy in the military domain. One potential way for those states to push for such an instrument would be via the UN General Assembly, although negotiations might take some years, judging by the experience of the Treaty on the Prohibition of Nuclear Weapons. What seems more realistic in the short term is a set of non-legally binding initiatives such as the “responsible AI” framework, sets of standards, best-practice guides, and political declarations, especially among like-minded groups of states.
BM: Much of the focus is on state actors, but how do non-state groups and private industry factor into the military AI landscape?
AN: Civil society and nongovernmental organizations such as the International Committee of the Red Cross or Human Rights Watch have been playing a key role in the debate for many years — for example, by providing expertise and data that inform many states’ positions in governance debates at the UN.
But what we’ve also seen in recent years is the increasingly influential role of less “traditional” defence actors: not the big defence contractors but tech and software companies, both Big Tech and startups, that develop and supply military AI technologies. Some of these non-state actors, such as Palantir and Anduril, explicitly position themselves as defence tech providers and promote political narratives that, in my view, should be examined more critically to understand their growing influence on global security and warfare.
BM: Indeed, Anna. The role of private technology firms in shaping modern warfare remains poorly understood. Companies such as Palantir are influencing the conduct and character of conflict in ways that merit far greater scrutiny. Thank you for your insightful overview of AI-enabled decision support systems and for highlighting the broader trends reshaping the future of war.
Photo: This screenshot is from a video of Dr. Anna Nadibaidze participating in the International Conference Beyond Europe – Artificial Intelligence in International Relations and Communication: Opportunities and Challenges, held at the Faculty of Political Science and Journalism on 12–13 December 2024.
Published in The Ploughshares Monitor Summer 2025