Autonomous Kalashnikovs?

Branka Marijan
Published in The Ploughshares Monitor, Volume 38, Issue 3, Autumn 2017

The Russian manufacturer is developing weapons that can identify targets using artificial intelligence

Russian arms manufacturer Kalashnikov is best known as the producer of the infamous assault rifle, the AK-47. The AK-47, as journalist C.J. Chivers (2010) notes, has “fundamentally rewritten the rules of modern warfare,” mainly due to the “ease of use, cost, reliability, and readily available parts and ammunition.” With these features, what Chivers calls “the everyman gun” became the most widely available weapon in the world. According to some estimates, there are approximately 200 million AK-47s, or roughly one for every 35 people (Laville et al. 2015).

Now it appears that Kalashnikov is looking to the future, developing weapons equipped with artificial intelligence (AI) that remove humans from such critical aspects of decision-making as the decision to kill. In July 2017, the company announced that it had developed weapons capable of identifying, selecting, and killing targets with the use of AI. Moreover, these are only the first of the neural-network weapons, essentially computer-directed systems, that the manufacturer plans to develop.

These are precisely the types of weapons that the Campaign to Stop Killer Robots, to which Project Ploughshares belongs, has sought to ban.

How worried should we be? Kalashnikov has provided little detail, and political analysts and technology experts doubt that the company can deliver on its promises.

Still, the announcement again raised a crucial issue: without a ban on the development of fully autonomous weapons, we could see a race to create them. In an open letter, leading scientists and tech entrepreneurs, including theoretical physicist Stephen Hawking and Tesla and SpaceX CEO Elon Musk, highlighted the danger of military applications of artificial intelligence and, specifically, the concern regarding autonomous weapons. Most notably, they stated, “If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow” (Future of Life Institute 2015).

Past lessons and new developments

The general view is that the new Kalashnikov weapons will likely be capable of performing certain functions on their own, but will have humans controlling such key functions as engaging the target. Stephanie Carvin, professor of international security at Carleton University, notes that the Russian arms maker is better known for low-tech products. And it is true that Russia has lagged behind other key military powers in the development and deployment of autonomous systems. For example, the United States Air Force has been deploying drones since the mid-1990s and began arming them in the early 2000s (Axe 2017). The Russians have some drones, but have not yet armed them.

Peter W. Singer, strategist and senior fellow at the New America Foundation in Washington, D.C., believes that Russia is working on more autonomous systems, but does not believe Kalashnikov’s claims of full autonomy (Axe 2017). Patrick Lin, a roboticist at California Polytechnic State University, calls the Kalashnikov claims “vague” (Axe 2017); he asks, “What exactly is [the weapon] learning to do? It could be as simple as recognizing humans, or as complicated as recognizing an adversary who’s carrying a weapon.”

For almost a decade, militaries have been using neural network technology—essentially computer systems that learn from analyzing data. The more data the system is given, the better the system becomes at discerning certain items or objects (Gilbert 2017).
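To see the idea in miniature, consider the sketch below. It is purely illustrative (a small network trained on a standard dataset of handwritten digits, not a reconstruction of any military system), but it shows the core dynamic: the same network, given more examples, gets better at recognizing what it is shown.

```python
# Illustrative sketch only: a small neural network improves as it is
# given more training data. Uses scikit-learn's built-in handwritten-
# digits dataset; nothing here models any actual weapons system.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 1,797 small images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.5, random_state=0
)

# Train the same network on progressively larger slices of the data
# and measure how well it recognizes digits it has never seen.
for n in (100, 400, len(X_train)):
    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
    net.fit(X_train[:n], y_train[:n])
    print(f"trained on {n:4d} examples -> "
          f"test accuracy {net.score(X_test, y_test):.2f}")
```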

So far, neural networks have been used to recognize targets and map infrastructure, as well as in search-and-rescue missions. Facebook uses this technology to recognize faces in posted photos. However, neural networks can make mistakes and can be manipulated (Metz 2017).
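The manipulation problem can be shown in the same miniature setting. The sketch below is again illustrative only, and it uses a simple linear classifier rather than a deep network, but it demonstrates the underlying weakness that adversarial attacks exploit: nudging an input’s pixels in a carefully chosen direction flips the model’s prediction.

```python
# Illustrative sketch only: a trained classifier is "manipulated" by
# stepping an image's pixels toward another class's decision region
# until the predicted label flips.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

digits = load_digits()
model = LogisticRegression(max_iter=5000).fit(digits.data, digits.target)

x = digits.data[0].copy()          # an image the model classifies correctly
original = int(model.predict([x])[0])
target = (original + 1) % 10       # any other class will do

# The pixel direction that most favours the target class over the original.
step = model.coef_[target] - model.coef_[original]
step /= np.linalg.norm(step)

x_adv = x.copy()
while int(model.predict([x_adv])[0]) == original:
    x_adv += step                  # keep nudging until the label flips

print(f"prediction flipped from {original} to {int(model.predict([x_adv])[0])}; "
      f"perturbation size {np.linalg.norm(x_adv - x):.1f} "
      f"vs. image size {np.linalg.norm(x):.1f}")
```

Research on adversarial examples has shown that, against deep networks, such flips can often be induced with changes too subtle for a human to notice.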

Moreover, neural-network technologies learn from example and use past mistakes to improve future decision-making. Deploying these weapons in war zones is deeply problematic because the mistakes are “permanent and irreversible” (Mizokami 2017). As journalist Kyle Mizokami points out, a robot that mistakenly kills a civilian because it misidentifies the pitchfork the person is holding as a rocket launcher might learn from that mistake. But the civilian is dead.

Still, the technology is advancing. Trends such as the move to open-source software, powerful processors, and cheap sensors allow small teams of builders to create more autonomous machines (Brewster 2016). And more money is going into robotics. According to some estimates, the world will spend $135.4 billion on robotics and related services in 2019, up from $71 billion in 2015 (Brewster 2016).

The chief concern is not that robots will take over, but that humans will put a flawed technology into dangerous systems (McKay 2017). In military applications, the results could be catastrophic.

On regulation

At the summer meeting of the National Governors Association in the United States, Elon Musk re-emphasized the need for proactive regulation, because “by the time we are reactive in AI regulation, it’s too late” (Gibbs 2017).

So far, discussions at the global level, particularly at the Convention on Certain Conventional Weapons (CCW), have often become mired in disputes over definitions, including what “meaningful human control” means. Russia has called discussions on lethal autonomous weapons “premature” (Emery 2017). Some governments are concerned that a preemptive ban could stifle the development of technology with civilian as well as military applications. (Such a concern is unwarranted, according to a local robotics expert.)

The Canadian government has not indicated that the issue of lethal autonomous weapons systems is a pressing one. However, according to a 2017 Ipsos poll, 55 per cent of Canadians oppose autonomous weapons and another 25 per cent are uncertain; only 5 per cent express support. The Canadian government should heed the concerns of the Canadian public and develop a national position that rejects the development of autonomous weapons.

Leading military powers are in a race to develop new high-tech weapons that will transform warfare. The Kalashnikov announcement reinforces our belief that the time to ban fully autonomous weapons is now.


References

Axe, David. 2017. How worried should you be about the AK-47 company’s new killer robots? Motherboard, July 17.
Brewster, Signe. 2016. The age of autonomous robots is upon us. Fortune, May 29.
Chivers, C.J. 2010. How the AK-47 rewrote the rules of modern warfare. Wired, January 11.
Emery, David. 2017. Robots with guns: The rise of autonomous weapons systems. Snopes.com, April 25.
Future of Life Institute. 2015. Autonomous weapons: An open letter from AI & robotics researchers, July 28.
Gibbs, Samuel. 2017. Elon Musk: Regulate AI to combat “existential threat” before it’s too late. The Guardian, July 17.
Gilbert, David. 2017. Russian weapons maker Kalashnikov developing killer AI robots. Vice News, July 13.
Laville, Sandra et al. 2015. Why has the AK-47 become the jihadi terrorist weapon of choice? The Guardian, December 29.
McKay, Tom. 2017. No, Facebook did not panic and shut down an AI program that was getting dangerously smart. Gizmodo Australia, August 1.
Metz, Cade. 2017. Uncle Sam wants your deep neural networks. The New York Times, June 22.
Mizokami, Kyle. 2017. Kalashnikov will make an A.I.-powered killer robot. Popular Mechanics, July 19.
