Human-less or human more?

Published in The Ploughshares Monitor Volume 39 Issue 2 Summer 2018 by Branka Marijan

Recently, Lockheed Martin, the U.S. aerospace and defence company, released a promotional video of its work on autonomous systems. The main message: “The future of autonomy isn’t human-less. It’s human more.”

Doublethink lives on

This formulation brings to mind George Orwell. In his novel 1984, Newspeak, the language of the fictional society Oceania, is full of “doublethink”—slogans such as “War Is Peace; Freedom Is Slavery; Ignorance Is Strength.” In the appendix, “The Principles of Newspeak,” Orwell explores the power of the structures and vocabulary of language to alter consciousness, to make “heretical thoughts” “literally unthinkable.” In this way, he tells us that we must all be vigilant in seeking the truth behind obfuscation and euphemism.

Lockheed Martin clearly does not want its autonomous systems to be seen as diminishing human control and thereby raising fears about a lesser role for human work, human brain power, and human ethical concerns. Instead, the video shows how these systems will INCREASE human abilities.

In adopting this play on “human-less” and “human more,” Lockheed Martin is coopting the language of the civil society organizations, academics, and researchers who have spearheaded concern about the development of autonomy in weapons systems. For example, participants in the Campaign to Stop Killer Robots have been calling for a commitment to meaningful human control to ensure that decisions to end life are taken by humans, not machines. Relinquishing human control over such key decisions to machines—making control “human-less”—could even lower the threshold for going to war.

This coopting of vocabulary is not a new tactic. There are plenty of examples in which the language of civil society has been used—and abused—by industry and government. In this case, Lockheed Martin is aiming to moderate concerns. The focus is on human safety: new technology will “keep humans out of harm’s way.”

Words matter

If Orwell was concerned about the abuse of language in the 1940s, how much more so must we be now, in a world bombarded with words. We need to be alert to the uses, abuses, and coopting of language. We need to examine how an issue is framed and whose security and interests receive most attention.

In “Sex and Death in the Rational World of Defense Intellectuals,” Carol Cohn explores the jargon, which she terms “technostrategic language,” used by defence analysts during the Cold War. Such language was “abstract, sanitized, full of euphemisms.” Discussions about weapons that would have catastrophic consequences were stripped of their power to evoke strong human emotion. Cohn points to defence experts who referred to “countervalue attacks,” which in translation meant incinerating cities.

Cohn’s insights can help us to understand the divide between defence experts who discuss nuclear weapons at international conferences and ordinary people who contemplate a bomb falling on their city. She writes, “Anyone who has seen pictures of Hiroshima burn victims or tried to imagine the pain of hundreds of glass shards blasted into the flesh may find it perverse beyond imagination to hear a class of nuclear devices matter-of-factly referred to as ‘clean bombs.’” The rhetoric clearly does not match the event.

Today, we are used to hearing about “collateral damage” when innocent civilians are killed in armed conflict. Does it seem like “secondary” or “subordinate” damage to the families of victims?

The new autonomous weapons systems, based on advanced technological knowledge, are described in esoteric technical language that I’ll call DefenceTech speak. DefenceTech speak blends concepts from artificial intelligence and machine learning with military terminology. For sure, such language is sometimes—maybe often—deliberately used to minimize public comprehension. So, a discussion about automating target identification, selection, and engagement is actually about choosing whom to kill. Such language is more than a barrier to understanding for anyone who is not a computer scientist or combat engineer. Such specialized conversations are also being used to prevent the robust regulation of these weapons.

At the UN

When autonomous weapons are discussed at the United Nations, questions about jargon, definition, and technology constantly surface. In mid-April, the Group of Governmental Experts (GGE) met at the UN Convention on Certain Conventional Weapons (CCW) in Geneva, Switzerland. During the week-long meeting it became clear that some progress had been made in defining what autonomous weapons systems are in terms of critical functions, that is, their ability to select and engage targets (choose whom to kill).

But many countries, including France and Germany, are still seeking more clarification when weapons systems of various levels of sophistication come up for discussion. For some countries, autonomous weapons are years away from deployment and will likely include technology that has not yet been developed. For others, the technology—including sensors, facial recognition software, and robotics—already exists. Definitions are about setting parameters for the types of weapons that would be banned; for some countries, future technological advancements would address some of the current concerns.

A few countries use the complexity of technology and rapid advancements in artificial intelligence as reasons for not developing any significant regulations at the global level. The United States stated at the April meetings, “The issues presented by LAWS [lethal autonomous weapons systems] are complex and evolving, as new technologies and their applications continue to be developed. We therefore must be cautious not to make hasty judgments about the value or likely effects of emerging or future technologies.”

The United States released a working paper ahead of the CCW meeting highlighting the humanitarian benefits of more autonomous systems. The main point appears to be that more precise weapons would be more humane—less “collateral damage.” As well, these systems can assist the military in performing functions other than killing, such as search-and-rescue and information gathering.

But few of us object to the use of artificial intelligence and emerging technology in life-saving operations. The working paper avoids the chief objection, that machines could be making the decision to kill. How can THIS capability be considered humanitarian? Not only what is said, but what is left unsaid, must be carefully examined.

Beyond words

Discussions of language are ultimately about which voices are privileged and which are not. It’s not surprising that discussions about future weapons technology amplify military voices, which shape the general perception of these systems. Speaking at the GGE in April, Duke University professor Missy Cummings, a former U.S. Navy fighter pilot, provided a thoughtful critique of the state of the technology and expressed concern about how autonomous technology would navigate in conflict.

Still, when asked about the potential of future technology, she was very supportive. She even suggested that fighter pilots would no longer suffer from post-traumatic stress disorder if fully autonomous weapons were in operation. This focus on the pilots was interesting. Or perhaps what was interesting was the absence of any focus on the civilian victims. The lack of any concern for those who face the consequences of military decisions was a stark example of the silencing of humanitarian voices.

Military-centric discourse has an impact on how autonomous weapons systems are valued and viewed by the world at large. And indeed, to date, military strategy and technological concerns have received most of the attention at international discussions on these systems. This focus must change. It is not only the military, engineers, and computer scientists who have something valuable to say about how emerging technologies should be used in warfare. Or, indeed, IF they should be used.

Ultimately, the debate on autonomous weapons is an ethical, moral, and political one. At the heart of the debate is this question: “Should machines be given the right to kill people?” If we put people—especially potential victims—first, the answer should be a resounding “NO”!
