Algorithms are not impartial


As new technology is shaped by old biases, stereotypes, and prejudices, users must remain vigilant

Joy Buolamwini, now a PhD student at MIT’s Center for Civic Media, was an undergraduate when she first encountered a problem with facial recognition software. She was trying to teach a robot to play peek-a-boo, but the robot’s facial recognition software detected her colleagues and not her (Couch 2017). Buolamwini needed the help of a roommate to finish her assignment (Buolamwini 2016).

Published in The Ploughshares Monitor Volume 39 Issue 1 Spring 2018 by Branka Marijan

Discriminatory data

As a graduate student several years later, Buolamwini, who is African-American, encountered the problem again. She decided to test the software by putting on a white mask. Then the software recognized her. Buolamwini realized that she was the victim of algorithmic bias: the math-based process or set of rules (the algorithm) used in this machine-learning system reflected implicit human values.

By then, facial recognition technology was entering the mainstream (Lohr 2018) and Buolamwini knew that she had to speak out. She has been at the forefront of discussions on how algorithms can lead to discriminatory practices and why the data used in new technologies must be transparent.

In Buolamwini’s case, the software’s dataset was predominantly white and male. This is not uncommon. One widely used facial recognition dataset is more than 75 per cent male and more than 80 per cent white (Lohr 2018). In her research, Buolamwini finds that facial recognition software achieves 99 per cent accuracy when the subject is a white man, but errs as much as 35 per cent of the time when the subject is a woman with darker skin.
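To make the arithmetic concrete, here is a minimal sketch in Python of the kind of subgroup accuracy comparison such an audit involves. It assumes a labelled benchmark in which every record carries a demographic tag; the group names, field names, and sample records are purely illustrative, not data or code from Buolamwini’s study.

```python
# A minimal sketch, not Buolamwini's actual methodology: given a labelled
# benchmark with a demographic tag per record, compare a classifier's
# accuracy across groups. All names and numbers here are illustrative.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of dicts with keys 'group', 'label', 'prediction'."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        if r["prediction"] == r["label"]:
            correct[r["group"]] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical results from some gender-classification system.
sample = [
    {"group": "lighter-skinned male", "label": "male", "prediction": "male"},
    {"group": "darker-skinned female", "label": "female", "prediction": "male"},
    {"group": "darker-skinned female", "label": "female", "prediction": "female"},
]

for group, acc in accuracy_by_group(sample).items():
    print(f"{group}: {acc:.0%} accurate")
```

The point of the sketch is simply that overall accuracy can look excellent while accuracy for a particular group is far lower; the gap only becomes visible when results are broken out by group.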

Facial recognition software illustrates only some of the possible problems of biased machine learning systems. A system using a historical dataset, in which certain groups were excluded or particularly targeted, will replicate these biases. Biases can be compounded if the teams doing the coding are not diverse and fail to consider how the software could be used against different members of society.
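As a deliberately simplified toy example of that mechanism (not a description of any deployed system), consider a model that learns nothing more than how often each neighbourhood was flagged in its historical training data; whatever over-policing shaped that history is carried straight into its predictions.

```python
# A toy illustration: a "model" trained on historical decisions simply learns,
# for each neighbourhood, how often it was flagged in the past - so a
# neighbourhood that was over-policed historically keeps being flagged,
# regardless of the true underlying rate. All figures are hypothetical.
from collections import Counter

# Hypothetical historical records: (neighbourhood, was_flagged)
history = (
    [("A", False)] * 90 + [("A", True)] * 10 +   # neighbourhood A: flagged 10% of the time
    [("B", False)] * 50 + [("B", True)] * 50     # neighbourhood B: historically targeted far more
)

flags = Counter(n for n, flagged in history if flagged)
totals = Counter(n for n, _ in history)
learned_rate = {n: flags[n] / totals[n] for n in totals}

print(learned_rate)  # {'A': 0.1, 'B': 0.5} - the old pattern is reproduced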

Consider this: police in the United States are making more use of facial recognition software that was originally used by the military in war zones and to combat terrorism abroad.

Why bias matters

Experts are telling us that the data and mathematical models on which innovative and disruptive technologies are based are not neutral, but are shaped by the views of their creators. Included in these views are some very old prejudices, stereotypes, and structural inequalities.

As mathematician Cathy O’Neil argues in her book Weapons of Math Destruction, we trust mathematical models too much and are afraid to question the math because we believe we lack the requisite expertise (Chalabi 2016). O’Neil notes that some of the algorithms affecting people’s lives are secret and the interests they reflect are hidden. She urges everyone to question how decisions are made and the ways in which they impact certain populations.

Prof. Laura Forlano of the Illinois Institute of Technology points out that algorithms are not impartial. “Rather, algorithms are always the product of social, technical, and political decisions, negotiations and tradeoffs that occur throughout their development and implementation. And, this is where biases, values, and discrimination disappear into the black box behind the computational curtain” (Forlano 2018).

In her 2018 book, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, Virginia Eubanks traces how new algorithms are further embedding biases about the poor and putting these vulnerable populations in an ever more precarious position. The political and socioeconomic forces long at play are reinforced by new technologies.

The effects on our security

Bias in justice, military, and security applications is particularly worrisome.

Some U.S. judges use an algorithmic risk-assessment system to help them determine if parole should be granted. Already disadvantaged people are being given longer sentences because an algorithm indicates that they have a higher chance of reoffending, yet an investigation into one such system found that it may be biased against minorities (Knight 2017). Similarly, predictive policing, which uses algorithms to forecast where and when certain criminal activity will occur, has been criticized as reproducing racially biased patterns of policing.

Algorithms and machine learning appeal to developers of new military technologies and weapons. These developers claim that autonomous weapons systems, built on purportedly neutral and impartial data, will be more responsible and accountable than human soldiers. The argument is that such systems, coded to respect international humanitarian law and protect non-combatants, will improve security for civilians. But no developer should be allowed to hide behind supposedly objective models, and few companies or governments currently appear willing to confront algorithmic bias (Knight 2017).

We also can’t simply adopt the view of Google AI chief John Giannandrea, who has suggested that algorithmic bias and not killer robots should be of greatest concern to the public (Knight 2017). We know that militaries are interested in developing autonomous systems, and we have no reason to believe that they are dedicated to removing bias. As a society, we can’t know precisely how algorithmic bias will be encoded in new weapons systems, but we can be reasonably certain that bias will be present.

How to get algorithmic accountability

Some people are pressing for algorithmic accountability. Buolamwini founded the Algorithmic Justice League to involve the tech community and engaged citizens in identifying bias in different technologies.

Some governments are starting to consider the implications of the latest tech. The Canadian government has conducted several consultations on using AI in governance. But more must be done.

Tech companies need to be attuned to bias and held accountable. Yes, it can be difficult for all parties to understand how certain algorithms work and how machine learning systems make certain determinations. But ignorance cannot be used as an excuse—the fallout from a lack of consideration could be too great.

Much can be done to ensure that checks are in place to prevent bias or a badly designed algorithm from being used to make decisions and determinations that impact people’s lives. As O’Neil points out, all models can be interrogated for accuracy. Just as we audit and evaluate other products and systems, we must be able to do the same with emerging technologies.
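One hedged sketch, in Python, of what such an audit check might look like, assuming per-group accuracy figures are already available (for instance, from a comparison like the one sketched earlier); the five-percentage-point threshold is an arbitrary illustration, not an established standard.

```python
# A minimal sketch of one possible audit check, assuming per-group accuracy
# figures are already in hand: flag the model if the gap between the best- and
# worst-served groups is too wide. The threshold and figures are illustrative.
def audit_accuracy_gap(per_group_accuracy, max_gap=0.05):
    worst = min(per_group_accuracy, key=per_group_accuracy.get)
    best = max(per_group_accuracy, key=per_group_accuracy.get)
    gap = per_group_accuracy[best] - per_group_accuracy[worst]
    return gap <= max_gap, best, worst, gap

# Hypothetical audit of a facial-analysis model.
results = {"lighter-skinned male": 0.99, "darker-skinned female": 0.65}
ok, best, worst, gap = audit_accuracy_gap(results)
print(f"audit {'passed' if ok else 'FAILED'}: "
      f"{gap:.0%} gap between {best} and {worst}")
```

The value of even a crude check like this is that it turns "interrogate the model" into a concrete, repeatable test that can be run before a system is deployed.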

Civil society organizations must pay closer attention to AI use in their respective fields. Ordinary citizens need to be more informed about how decisions that impact their lives are being made. They should have the right to demand that businesses be more transparent about the types of data and algorithms that they use.

And, as security and military uses of artificial intelligence increase, all of us will need to become even more vigilant—about the uses of AI and machine learning and about the existence of bias in new technology applications.

There is still much we can and must do to counter bias, and to regulate and control the new technology. In cases involving weapons systems, a minimal requirement should be that humans control critical decisions, such as the decision to kill. This is a clear moral and ethical imperative.

 

References

Buolamwini, Joy. 2016. How I’m fighting bias in algorithms. TED Talk, November.

Chalabi, Mona. 2016. Weapons of Math Destruction: Cathy O’Neil adds up the damage of algorithms. The Guardian, October 27.

Couch, Christina. 2017. Ghosts in the Machine. PBS, October 25.

Eubanks, Virginia. 2018. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.

Forlano, Laura. 2018. Invisible algorithms, invisible politics. Public Books, February 1.

Knight, Will. 2017. Google’s AI chief says forget Elon Musk’s killer robots, and worry about bias in AI systems instead. MIT Technology Review, October 3.

Lohr, Steve. 2018. Facial recognition is accurate, if you’re a white guy. The New York Times, February 9.
