Are emerging technologies making us safer?


Technology companies, governments, and militaries all want the rest of us to trust new systems and emerging technologies that employ artificial intelligence (AI) to provide services, assist in policing, and even wage war more effectively and efficiently. They want us to feel “safer” with emerging technologies and AI. Do we? Should we?

Published in The Ploughshares Monitor, Volume 39, Issue 3, Autumn 2018, by Branka Marijan

The trust deficit

The notion of a “trust deficit” in emerging tech and AI is getting a lot of attention these days. According to a 2018 study by Proof Inc., only 37 per cent of Canadians trust that AI will improve their experience with consumer goods and only 39 per cent trust that AI will be good for the economy.

The figures are even lower among Canadian women: 34 and 36 per cent, respectively. Higher distrust could be a response to evidence that some emerging technologies discriminate against women.

In response, tech companies are developing principles and practices intended to ensure that their products contribute to the public good. To address concerns about the type of content that is promoted on some platforms, tech companies are pushing for more AI, rather than human, oversight. For example, YouTube is increasingly using algorithms to remove problematic content before anyone can view it.

Facebook is giving users a “trustworthiness” score, which is designed to curb the spread of “fake news” and the reporting of information as fake simply because the user disagrees with it. To determine the credibility of an individual, Facebook, too, is relying on algorithms. We don’t know how these algorithms work, because Facebook claims that revealing more information would give malicious actors the means to thwart these measures.

Many critics are worried about the increasing use of algorithms to make decisions and provide scores about individual behaviour. They are most concerned about the ability of security and defence technologies to both dehumanize and surveil individuals.

Security and defence

Militaries are investing heavily in autonomous systems that employ artificial intelligence and robotics. According to The Guardian, at least 381 partly autonomous systems have been deployed or are being developed in 12 countries, including the United States and Russia. The most notable public response has been revulsion, often referred to as the “ick factor.”

The number and sophistication of autonomous systems are bound to increase in the next few years. Already, countries are working to address the “trust deficit” of their citizens, so that they will come to accept and, yes, trust these new military and security applications as safe and beneficial.

The Canadian context

The Canadian government is among those interested in developing autonomous systems for security and defence applications. The Department of National Defence (DND) is already focusing on solving the issue of public trust. Recently, DND put out a call for proposals on the theme of “Autonomous Systems for Defence and Security: Trust and Barriers to Adoption.” The call defines autonomous systems as “systems with the capability to independently compose and select among various courses of action to accomplish goals based on its [information] and understanding of the world, itself, and the situation.” Essentially, these are systems that would be able to act without human input.

This diminishing human control over military systems is exactly what tech experts and international civil society organizations are flagging. But members of the military are also wary of using these systems, which rely on programming instead of hard-earned human expertise and experience—and empathy.

So, it is yet another cause of concern for both civil society and the military that the DND call for proposals appears to be aimed not at addressing the real weaknesses and problems of autonomous weapons systems, but at achieving public acceptance of them. It states, “Gaining trust in autonomous systems is problematic and requires solutions to encourage acceptance by the general public and defence and security sectors alike. Finding ways to maintain that trust is equally important.” The focus is not on building trust, but on gaining it.

This focus is telling. It suggests that the decision to commit to these new systems has already been made; the only problem left to consider is how to make the public go along with it.

Recognizing ethical concerns

The DND call for proposals not only disregards the numerous concerns related to emerging technologies and autonomous weapons, but seeks to circumvent them. These attempts to gain trust, rather than to fully respond to the reasons for distrust, only add to the deficit column when calculating public trust.

Canada is making large investments in research on artificial intelligence. It is in the interests of us all to ensure that diverse stakeholders are involved in discussions regarding governance and application of these technologies. Recognizing ethical concerns and funding research to better understand how autonomous systems would impact diverse individuals, contexts, and environments is crucial.

Canada has largely remained silent on the topic of autonomous weapons at global forums. If the call for proposals is any indication, Canada is failing to address the numerous concerns—humanitarian, technological, governance—that have been brought forward by leading Canadian tech experts, civil society groups, and concerned individuals.

Instead, the mantra seems to be “in algorithms we trust,” with the expectation that public opinion can be swayed to accept the idea of machines killing people. But how safe will we really be?
