Clearview AI and the public use of facial-recognition technology

March 10, 2020

Over the past few weeks, the Royal Canadian Mounted Police, the Ontario Provincial Police, and dozens of municipal police forces have acknowledged their use of facial-recognition technology—particularly the technology produced by startup Clearview AI, based in New York City. Experts have responded by calling for greater privacy protection and, in some cases, an outright ban on the use of facial-recognition technology. With the use of this technology spreading—even shopping malls in Calgary have used it to identify shoppers—the need for regulation that safeguards the privacy of ordinary Canadians is evident.

Police forces were not forthcoming about their use of Clearview AI, or of facial-recognition technology in general, until a February report revealed that Canada was Clearview AI’s largest market outside the United States. The technology seems to have spread quietly, sometimes without the knowledge of those in charge.

Toronto Chief of Police Mark Saunders was apparently unaware that his officers were using Clearview AI technology. He has since halted its use while he reviews the technology in conjunction with the Crown Attorneys’ Offices and the Information and Privacy Commissioner of Ontario.

Little information has emerged about how different police services plan to use facial-recognition technology in the future. The RCMP expects to continue to use Clearview AI technology for “exigent circumstances for victim identification in child sexual exploitation investigations, or in circumstances where threat to life or grievous bodily harm may be imminent.”

But while such aims seem reasonable, concern about false positives remains. The risk that the technology will misidentify innocent people is real: researchers have shown that accuracy rates for many facial-recognition tools are much lower when they are used to identify members of minority groups and racialized communities.
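To make the false-positive concern concrete, here is a minimal sketch, using entirely hypothetical scores, groups, and threshold, of how an audit might measure a per-group false-match rate, that is, how often a system wrongly declares two different people to be the same person:

```python
# Illustrative sketch only: hypothetical data, not any vendor's real system.
# Each record: (demographic group, ground truth: same person?, match score).

THRESHOLD = 0.80  # hypothetical decision threshold

records = [
    ("group_a", False, 0.62), ("group_a", False, 0.55), ("group_a", True, 0.91),
    ("group_a", False, 0.48), ("group_b", False, 0.83), ("group_b", False, 0.79),
    ("group_b", True, 0.88), ("group_b", False, 0.86),
]

def false_match_rate(rows):
    """Share of non-matching pairs the system wrongly declares a match."""
    non_match_scores = [score for _, same, score in rows if not same]
    false_matches = [s for s in non_match_scores if s >= THRESHOLD]
    return len(false_matches) / len(non_match_scores) if non_match_scores else 0.0

for group in sorted({g for g, _, _ in records}):
    group_rows = [r for r in records if r[0] == group]
    print(f"{group}: false-match rate = {false_match_rate(group_rows):.2f}")
```

In this toy data, the same threshold yields no false matches for one group and a two-in-three false-match rate for the other, which is precisely the kind of disparity the research describes.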

In January, Clearview AI made headlines in a New York Times story on the privacy implications of its technology. Later reports explained that Clearview had scraped public and social-media websites to create a database of more than 3 billion images. Anyone with a photo on Facebook, Twitter, or Instagram is in that database.

The backlash that followed these revelations was based on a concern that the images had been taken from private accounts. But even where images were publicly posted, many objected to their collection into a database for use by law-enforcement agencies. The fact remained that individuals neither knew about, nor gave consent to, the collection, storage, and provision of their images to police forces. And while some experts suggest that individuals be more cautious about posting pictures on social-media platforms, the problem does not lie at their feet.

The Times even found that Clearview AI technology was being used not only by law enforcement but also by private individuals. Investors, potential investors, clients, and friends of the tool’s creators were apparently given access to the app, which became a “plaything of the rich.”

Obviously, the collection of biometric data is not well regulated, and new laws are needed to govern new technologies. One option is an outright ban on the use of facial-recognition technology; short of that, regulations should dictate how data may be collected, how long it may be retained, and what happens to data obtained and stored in different countries. Individuals who do not consent to the use of this technology must not be denied essential services.

The view that “if you have nothing to hide, you have nothing to worry about” is untrue, particularly when dealing with new technologies. According to Professor Chris Gilliard, “data [can] falsely implicate you.” Gilliard was responding to a news report about an application that tracked one man’s bike route, placing him at the scene of a crime and making him a top suspect for local police. While the individual was able to clear his name, the case illustrates the burden that technological mistakes can impose on innocent people.

Would more accurate technology solve all our problems? Not really. Technology that could track individuals and groups without making mistakes could just as effectively monitor protesters, chilling legal demonstrations and innocent gatherings. Such concerns are felt most keenly by over-policed groups and minority communities.

As the Canadian government carries out its parliamentary review on the uses and abuses of facial-recognition technology, these and other vital concerns must be addressed. Having nothing to hide does not remove the need to worry.
