Autonomous collaborative weapons

March 9, 2021

The United States is at the forefront of advancements in autonomous swarming technologies. A U.S. government-appointed panel has even said that the country has a “moral imperative” to develop weapons driven by artificial intelligence (AI). While the morality of this imperative can be debated, there can be no doubt that the developments of the world’s most advanced military warrant attention and monitoring.

Consider, for example, recent tests by the U.S. Air Force Research Laboratory (AFRL) that illustrate a step toward autonomous collaborative weapons—in this case, weapons systems with pre-defined Rules of Engagement (ROE) that can communicate with each other and identify targets through pre-programmed algorithms, independently of the human pilot. The project reflects the push among more advanced militaries for greater autonomy in weapons systems and a blurring of the lines around the level of human control required. It is also happening in a legal vacuum, with international talks on autonomous weapons stalled.

THE GOLDEN HORDE VANGUARD PROGRAM

The February 19 test by the U.S. Air Force’s Golden Horde Vanguard program showed that collaborative autonomous weapons are not a far-in-the-future issue. In the test, four Small Diameter Bombs (SDBs) modified with networked swarming technology appeared to have hit the intended target. The test marked an improvement over a December 2020 test, in which two modified SDBs failed to engage the correct target because of software issues. The final test of the collaborative SDBs is scheduled for this spring.

Even before all the information from the second test is clear and the third test has been conducted, the Golden Horde program is pivoting away from testing its own swarming weapons to providing a simulation program called the Colosseum. AFRL explains that “the Colosseum will be a fully integrated simulation environment with weapon digital twins, or a real-world weapon and a virtual clone, to more rapidly test, demonstrate, improve and transition collaborative autonomous networked technologies.” Simply put, the Colosseum is a tool for the digital testing of weapons that would bring in a greater number of vendors and a wider variety of weapons types. Brigadier General Heather Pringle, commander of the AFRL, stated, “This government-owned reference architecture is really going to be an environment where more players can come and compete their own versions of what autonomous collaborative weapons should be.”
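
AFRL has not published the Colosseum’s internals, but the digital-twin concept it describes can be illustrated in rough terms: a software clone of a real weapon is run through simulated engagements so that collaborative behaviors can be tested without live flights. The sketch below is purely illustrative; the class names, parameters, and vendors are hypothetical, not the actual Colosseum architecture.

```python
# Illustrative sketch only: a "digital twin" as a software clone of a
# real-world weapon, exercised in a shared simulation environment.
# All names and numbers here are hypothetical.

from dataclasses import dataclass

@dataclass
class WeaponTwin:
    """A virtual clone parameterized to mirror a real-world weapon."""
    vendor: str
    max_range_km: float

    def simulate_engagement(self, target_range_km: float) -> bool:
        """Crude stand-in for a full physics-and-behavior simulation."""
        return target_range_km <= self.max_range_km

# Different vendors could register their own twins against the same
# government-owned reference architecture and compare results.
twins = [
    WeaponTwin(vendor="vendor_a", max_range_km=110.0),
    WeaponTwin(vendor="vendor_b", max_range_km=95.0),
]

for twin in twins:
    hit = twin.simulate_engagement(target_range_km=100.0)
    print(f"{twin.vendor}: {'engaged' if hit else 'out of range'}")
```

The appeal for AFRL is speed: a change to a collaborative behavior can be tested against a twin in a simulated scenario far faster than it could be through a live flight test.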

Behind this exploration of what an autonomous collaborative weapon should be is a backdrop of stalled talks on autonomous weapons at the United Nations Convention on Certain Conventional Weapons (CCW). Some countries were against continuing discussions in the hybrid virtual/in-person format dictated by the ongoing pandemic. There is some indication that efforts to resume CCW talks are ongoing, but so far little information has been made public.

HOW AUTONOMOUS?

The AFRL tests, as well as the increasing use of AI in loitering munitions and robot tanks, raise questions about the level of human control over target selection.

In its January 2021 press release on the December test, the AFRL was clear that it sees collaborative weapons as semi-autonomous. AFRL describes the actions taken by the systems as based on “play calling”—functioning like a quarterback calling a play in football. AFRL explains that “a ‘play’ is an established behavior that groups of collaborative weapons, or swarms, can enable (or disable) when they meet certain predefined conditions.” In AFRL’s view, because the collaborative weapons operate under pre-determined rules of engagement and an approved list of plays, they are only semi-autonomous.

Interestingly, AFRL has claimed that the Golden Horde program does not use AI to make independent decisions on target selection and engagement. It stresses that the targets, the ‘plays’, and the ROE are all pre-determined, and that AI or machine learning is not used to select or change targets.
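
To make the distinction concrete, the kind of rule-based logic AFRL describes might look something like the minimal sketch below. The play names, conditions, and state fields are hypothetical illustrations, not the Golden Horde software; the point is that behavior selection is a fixed lookup over a pre-approved playbook, with no learned model choosing or changing targets.

```python
# Illustrative sketch only: rule-based "play calling" with no machine
# learning. Plays, conditions, and state fields are hypothetical.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SwarmState:
    """Hypothetical shared sensor picture for the swarm."""
    weapons_remaining: int
    targets_detected: int
    comms_link_ok: bool

@dataclass
class Play:
    """A pre-approved behavior, enabled only when its condition holds."""
    name: str
    condition: Callable[[SwarmState], bool]

# The approved playbook is fixed before launch; nothing in it is
# learned or modified in flight.
PLAYBOOK: List[Play] = [
    Play("hold_formation", lambda s: not s.comms_link_ok),
    Play("distribute_targets",
         lambda s: s.comms_link_ok and s.targets_detected >= s.weapons_remaining),
    Play("converge_single_target",
         lambda s: s.comms_link_ok and s.targets_detected == 1),
]

def call_play(state: SwarmState) -> str:
    """Return the first pre-approved play whose condition is met."""
    for play in PLAYBOOK:
        if play.condition(state):
            return play.name
    return "default_safe_behavior"  # fall back to a pre-defined default

if __name__ == "__main__":
    state = SwarmState(weapons_remaining=4, targets_detected=4, comms_link_ok=True)
    print(call_play(state))  # -> distribute_targets
```

Under this framing, human control lives entirely in the authorship of the playbook before launch.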

Still, as new digital testing with the Colosseum tool moves forward, we can ask whether the systems tested will adapt ‘plays’ in ways that the operators did not anticipate. It is also unclear how much control humans will retain if the ultimate aim is to allow the weapons to change targets when their own sensor data so indicates.

Much will depend on how the technology advances and how autonomous these systems can truly become. Still, such determinations clearly cannot be made without guidance on the level of human control required over target selection and engagement. The CCW talks on autonomous weapons had started to pull together the key elements of the human involvement needed. Restarting these international talks is necessary to clarify exactly who is calling the plays.

Photo: US Air Force Research Laboratory
