The “plutonium of AI”: Facial-recognition technology

July 3, 2019

By Branka Marijan

Published in The Ploughshares Monitor Volume 40 Issue 2 Summer 2019

In May, San Francisco became the first city in the United States to ban law-enforcement and government agencies from using facial-recognition technology, which identifies individuals by facial features. Civil liberties advocates hope other cities and countries will soon produce their own versions of the “Stop Secret Surveillance” ordinance.

Others see legitimate uses of the software by law enforcement—as in finding a missing child—and want regulation but no ban. In what follows, we’ll consider some key concerns and possible future actions.

BAD FOR OUR HEALTH?

In a recent essay, Luke Stark, a postdoctoral researcher at Microsoft Research Montreal, described facial-recognition technology as the “plutonium of artificial intelligence,” and “anathema to the health of human society,” calling for it to be “heavily restricted.”

Why? Because the technology is flawed. As Joy Buolamwini and others have shown, it is fairly accurate at identifying white men, but error rates are much higher for people with darker skin and for those who are not male.
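
To make the disparity concrete, an auditor can compute error rates separately for each demographic group, which is essentially what Buolamwini's Gender Shades audit did. The sketch below is illustrative only: the function, the group labels, and the sample data are hypothetical placeholders, not the researchers' actual code or results.

```python
# Hypothetical sketch of a disaggregated accuracy audit.
# Each record is (demographic_group, predicted_identity, true_identity).
from collections import defaultdict

def error_rates_by_group(predictions):
    """Return the misidentification rate for each demographic group."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in predictions:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Invented audit data for illustration.
sample = [
    ("lighter-skinned male", "id_1", "id_1"),
    ("lighter-skinned male", "id_2", "id_2"),
    ("darker-skinned female", "id_3", "id_7"),  # a misidentification
    ("darker-skinned female", "id_4", "id_4"),
]
print(error_rates_by_group(sample))
# {'lighter-skinned male': 0.0, 'darker-skinned female': 0.5}
```

A single overall accuracy figure would hide exactly this kind of gap, which is why disaggregated reporting matters.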

These error rates are bound up with the very act of classifying human features, a practice that entrenches long-disproven racial theories. Stark writes, “Reducing humans into sets of legible, manipulable signs has been a hallmark of racializing scientific and administrative techniques going back several hundred years.”

But experts don’t think that making these technologies more accurate will solve all problems. Technology does not exist in a vacuum. Products of a biased and discriminatory society reflect and even perpetuate bias and discrimination. In fact, greater accuracy can make matters worse for groups already subjected to surveillance. More accurate tools could lead to even closer law-enforcement scrutiny of already overly policed, often minority, communities.

STATE USE OF SURVEILLANCE

Authoritarian regimes are using surveillance technologies—which may employ facial-recognition technology—to control minority populations. There are numerous media reports that China is doing exactly this to the Uyghur population, a minority Muslim group in the northwest. The state collects information in many ways: physical searches, surveillance software that must be installed on phones, and numerous surveillance cameras that are never turned off. The New York Times calls this level of surveillance “automated authoritarianism.”

Many Uyghurs are reportedly sent to reeducation or indoctrination camps for any transgressions discovered through such surveillance. Moreover, the collected video footage and images are fed to Chinese technology companies to improve the accuracy of the facial-recognition software. A vicious virtual circle of repression.

China isn’t alone. Democratic governments are also using and developing facial-recognition tools. Police in the United Kingdom see such technology as crucial in protecting society. According to a BBC Click investigation, police are already running live facial-recognition trials.

Some critics worry that facial-recognition technologies, coupled with the UK’s extensive network of video cameras, could identify a vast number of individuals, creating a database of Orwellian proportions. An existing database used by UK police includes information not only on criminals, but also on ordinary citizens without even a parking ticket on their records. Innocent individuals can request to have their information removed, but first they must know that they are in the database, and it is not clear how they would find that out.

EFFICIENCY RULES

London’s Metropolitan Police claim that facial-recognition technology will make policing more effective and efficient. Efficiency is what modern tech offers and promotes. As Vox reporter Sigal Samuel explains, global companies that develop lucrative facial-recognition technology are pushing for its widespread adoption.

Efficiency is gained. But individual privacy is lost. This is reason enough for some opponents to call for a ban on such technology.

As Samuel notes, part of the problem is that facial-recognition technology is being marketed to ordinary people as a convenient tool, with a veneer of futuristic sleekness. At a kiosk in a Chinese airport, a face scan can pull up your flight status. In some U.S. airports, JetBlue is using facial-recognition technology to make the boarding pass unnecessary.

Some consumers might not consider the implications should such technology become ubiquitous.

But the public’s uncritical acceptance of some uses of the technology does not mean that regulations, or even bans, are unnecessary. Yes, cellphones have cameras and many people disclose personal information on social media platforms, so there is no universal presumption of privacy. But the right remains. And the harm of surveillance grows.

Scholarly studies have extensively documented the impacts of surveillance on society and its chilling effects on democracy. And in nondemocratic societies, the ability to navigate everyday life without constant surveillance is crucial for citizens to survive and thrive.

BAN OR REGULATE?

Some tech companies seem unconcerned about the implications of facial-recognition technology and, with few rules in place, have started to market their technology to law enforcement. For example, Amazon has sold facial-recognition technology to U.S. police forces, even though there is evidence that the product is inaccurate and has other weaknesses, particularly related to bias. Microsoft and others have called for better regulations.

But will regulation be enough? A new report from the Georgetown Law Center on Privacy and Technology, Garbage In, Garbage Out: Face Recognition on Flawed Data, reveals that U.S. police forces have altered images, uploaded forensic sketches, and edited computer-generated images to “increase the likelihood that the system returns possible matches.”

According to this report, it is difficult to know how widespread these practices are, but they will surely increase as more police and security services obtain the technology.

We must remain aware that the technology is imperfect. Efforts to improve accuracy have not addressed the original problem of the biased data that is fed into the facial-recognition algorithms. Nor have vulnerabilities to hacking and other cyberattacks been eliminated or minimized.
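
One reason accuracy gains do not remove bias is that the gains are measured against the same skewed data. As a purely hypothetical sketch, assuming nothing about any vendor’s actual pipeline, a pre-training audit might simply tally how a training set is distributed across demographic groups:

```python
# Hypothetical sketch: auditing the demographic composition of a
# training set before it is fed to a face-recognition model.
from collections import Counter

def composition(dataset):
    """dataset: iterable of (image_path, demographic_label) pairs."""
    counts = Counter(label for _, label in dataset)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

# Invented, deliberately skewed training set.
training_set = [(f"img_{i}.jpg", "lighter-skinned male") for i in range(80)]
training_set += [(f"img_{i}.jpg", "darker-skinned female") for i in range(80, 100)]
print(composition(training_set))
# {'lighter-skinned male': 0.8, 'darker-skinned female': 0.2}
```

A model trained on such a skewed set can post a high headline accuracy while still failing the underrepresented group.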

Finally, we must consider the possible adoption of facial-recognition technology by the military. Countries that are developing autonomous weapons argue that facial-recognition and similar technologies will distinguish civilians from combatants, lessening collateral damage. But minorities and other innocents under threat fear that they will find it even harder to hide from persecution.

Cities, countries, and the global community must acknowledge the different and possibly harmful ways in which various regimes could use this technology. We must all contemplate how it might change warfare and affect civilians in conflict zones.

After such acknowledgement and contemplation, we might all determine that, in the end, only a ban will protect ordinary civilians from an unprecedented degree of surveillance, which could result in the loss of privacy and freedom and even life itself.