Seven key insights from RightsCon 2018


And the inevitable questions

RightsCon, one of the world’s largest annual meetings of digital rights activists, was held in Toronto from May 16 to 18. Access Now, which organized the conference, decided to hold this year’s event in Toronto instead of San Francisco, partly because of visa restrictions in the United States, but also because of Toronto’s growing reputation as a tech hub. Three days with 450 sessions offered a host of important insights on the human-rights implications of emerging technologies.

Published in The Ploughshares Monitor, Volume 39, Issue 2, Summer 2018, by Branka Marijan

Here are seven key insights I gained from panels on phone surveillance, cybersecurity governance, robot wars, and Canada’s position on human rights and artificial intelligence (AI).

1. Computers don’t make logical, unbiased decisions.

As the experts showed, bias becomes embedded in computer models. Bias can be intentional, when the writer of code deliberately includes skewed perspectives, or unintentional, when the machine-learning system reproduces bias already present in its training data. As these systems are trained on historical data, the continued presence of bias is no surprise.
One machine-learning system was provided with information from newspapers to make associations. Thus, it learned that “man” is to “doctor” as “woman” is to “nurse.”
Computer systems also make errors, and those errors shape the decisions the systems produce. Bottom line: computer systems are not infallible.
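
To make the newspaper example concrete, here is a minimal sketch of how such associations can be probed, assuming the publicly available word2vec vectors trained on Google News text (an illustration only; the panel did not name the specific system):

    import gensim.downloader as api

    # Load pretrained word vectors learned from news text (a large download on first use).
    vectors = api.load("word2vec-google-news-300")

    # Ask the analogy: "man" is to "doctor" as "woman" is to ... ?
    for word, score in vectors.most_similar(positive=["doctor", "woman"],
                                             negative=["man"], topn=3):
        print(word, round(score, 3))

    # On these vectors, gendered occupation terms such as "nurse" tend to rank near the top,
    # showing how associations in the source text carry over into the model.

No one wrote a rule saying "associate women with nursing"; the association is simply absorbed from the text the model was trained on.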

2. Context matters when using data.

Tech companies collect an incredible amount of information about users on their platforms or in their applications. Some, such as Facebook, also track the user’s activity on other websites to offer more targeted advertising. Now, fear is growing that data collected in one sector of activity will be used in another.

Google’s original motto, “Don’t be evil,” inspired trust, but do we trust Google to follow its newer motto to “do the right thing”? Like Amazon and Microsoft, Google collected personal data to give the user a more efficient experience, while also selling information to advertisers in what was claimed to be a win-win scenario. Recently, Google has been subjected to serious backlash from its own engineers for participating in a U.S. military drone-imaging project. Other tech companies are involved in military projects; Amazon and Microsoft are providing their AI systems to law enforcement.

Is data collected for one purpose now seeping into security and crime-fighting sectors? Who knows for certain?

3. Own your data.

Without individual ownership of personal data, it is hard to control what happens to the data and how it is used. Citizens should feel empowered to ask why certain data is needed and to say no to unreasonable requests for information. We all need to know what information is used by our government and other institutions in making decisions that affect us.

New data-protection rules are now emerging. For example, telecommunications companies, such as Rogers, have privacy responsibilities to their customers and cannot provide law enforcement with data without proper authorization.

The Canadian government is developing an AI strategy that highlights human-rights concerns, with a particular focus on privacy, freedom of expression, and equity and bias.

4. Algorithmic accountability and transparency are essential.

Algorithms are increasingly being used to make decisions on everything from lending rates to social security benefits. Often, however, organizations that employ these algorithms claim proprietary rights and will not disclose them. This situation is not sustainable, particularly as governments start using algorithms to determine the levels and types of services available to citizens. We all need to know how the algorithm makes a decision and how it can be changed if errors are found.

5. Going dark is a right.

To the police, “going dark” means moving communication to a private or encrypted channel. Often, “going dark” is seen as evidence of illegal activity. But to civil society, “going dark” represents an individual’s right to privacy, safe from constant surveillance by government or anyone else.

Now, with many of us purchasing smart home systems and virtual assistants, such as Alexa, we are willingly, if unwittingly, installing potential surveillance systems that could be used by law enforcement and other agencies.

But giving law enforcement such access is problematic. Certain populations could be targeted. Where do we draw the line on the creeping advancement of surveillance?

6. We must preserve evidence while removing harmful content.

Digital information can easily be broadcast and deleted. It can promote justice or incite hate-based activities. How do we preserve evidence of war crimes, for example, while trying to decrease the amount of dangerous material online?

Witnesses have taken videos of atrocities in Syria and Myanmar, which have appeared on social media platforms, only to be removed soon after. While some organizations work to preserve content that might be useful in international criminal investigations, much of it is lost before anyone sees it, because algorithms have flagged it as sensitive. In 75 per cent of such cases, a human analyst never sees the material before it is removed.

Customers are demanding the removal of harmful content. Security experts want extremist content deleted. The European Union wants harmful content down within two hours. Such demands will only increase the use of algorithms.

But there are no simple solutions. Hate speech can alert peacekeepers to flash points before a crisis spreads. Or it can incite violence. Regulatory controls can be used by governments that are themselves perpetrators of human-rights violations. Where do we draw the line? And who gets to draw it?

7. Hacking is a major security concern.

Many systems, including those that are becoming commonplace in our homes, are easy targets for hackers. A Roomba vacuum connected wirelessly to an app allows the owner to turn on the vacuum remotely and to monitor activity in the house; it can also let hackers learn the layout of that home. Off-the-shelf drones can be hacked.

Hacking is most dangerous when aimed at critical infrastructure or when it is state-sponsored. Weapons systems are vulnerable to cyberattacks.

Currently, there is no mechanism to hold countries accountable for cyberattacks that they carry out or support. One idea is to create an international organization that would monitor cyber activity and hold countries accountable for their actions. Great work is being done by university-based organizations, such as Citizen Lab at the University of Toronto, which has been able to trace different cyberattacks. More university centres are clearly needed.

Civil society and digital rights activists have much to say about these concerns. But the questions raised are important to all citizens of our connected world. So, join the conversation!
