By Branka Marijan
There is a growing global consensus that all AI technology should exhibit transparency, justice and fairness, non-maleficence, and respect for privacy. While a specific blueprint for responsible AI in defence applications has not yet emerged, shared commitments to reliable technologies that operate with an appropriate role for human judgement and experience are increasingly accepted.
Government experts from different countries express a range of views: some push for faster and more widespread adoption of AI in defence, while others approach it more cautiously. Critically, balancing normative commitments with security interests—in particular, disclosing the capabilities and functioning of systems—will need to be thoughtfully addressed. Finally, ethical concerns about the use of AI feature prominently both in discussions among democratic countries and in legislation governing various domestic applications.
Ultimately, the real test of expressed commitments will be the behaviour that follows and the engagement of countries considered adversaries. A more globally oriented process would give a greater number of countries ownership of, and a stake in, norm development.