Banning Police Use of Facial Recognition Would Undercut Public Safety

July 30, 2018

Last week, the American Civil Liberties Union (ACLU) posted an article on its blog claiming that Rekognition, Amazon’s facial recognition technology, falsely matched 28 members of Congress with the mugshots of individuals who had been arrested for crimes. The ACLU has used its article as justification for calling on Congress to place a moratorium on law enforcement use of all facial recognition technology—not just Rekognition—arguing that it is “flawed, biased, and dangerous.” However, banning law enforcement from using facial recognition technology would be wholly inappropriate based on the evidence to date and would limit many beneficial uses of the technology for public safety.

Amazon Rekognition is a powerful and easy-to-use cloud-based image analytics service that developers can use to detect objects, read text, and identify unsafe content in photos and videos. The software can identify faces in images and describe key attributes of those faces, such as the presence of a smile, beard, or sunglasses, and it can predict attributes such as gender and emotion. Rekognition can also perform facial comparison, which measures how similar two faces are to one another, and facial recognition, which searches for a given face in a database of other faces. For example, Sky News recently used Amazon’s “celebrity recognition” feature to identify guests at the royal wedding of Prince Harry and Meghan Markle.
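
For readers unfamiliar with the service, the following is a minimal sketch of how a developer might request facial attributes through the AWS SDK for Python (boto3). The file name and region are placeholders, and the snippet assumes AWS credentials are already configured:

    import boto3

    # Create a Rekognition client (assumes AWS credentials are configured).
    client = boto3.client("rekognition", region_name="us-east-1")

    # Analyze a local photo; "photo.jpg" is a hypothetical file name.
    with open("photo.jpg", "rb") as image:
        response = client.detect_faces(
            Image={"Bytes": image.read()},
            Attributes=["ALL"],  # return all attribute predictions, not just the defaults
        )

    # Each detected face includes attribute predictions with confidence scores.
    for face in response["FaceDetails"]:
        print("Smile:", face["Smile"]["Value"], f"({face['Smile']['Confidence']:.1f}%)")
        print("Gender:", face["Gender"]["Value"], f"({face['Gender']['Confidence']:.1f}%)")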

The ACLU used Rekognition to compare photos of all 535 members of Congress against a database it built of 25,000 publicly available mugshots. The ACLU claims that Rekognition falsely matched the faces of 28 members of Congress with arrest photos, and that Rekognition disproportionately made false matches with people of color. These claims have received widespread attention, and already Reps. John Lewis (D-GA) and Jimmy Gomez (D-CA) have requested to meet with Amazon CEO Jeff Bezos to discuss the technology, and Sen. Ed Markey (D-MA), Rep. Luis Gutiérrez (D-IL), and Rep. Mark DeSaulnier (D-CA) have written a letter to Bezos questioning whether the technology should be sold to law enforcement.

These claims are just the latest shots the ACLU has fired in its decades-long war against facial recognition technology. After the September 11 terrorist attacks, the ACLU mobilized to oppose the use of facial recognition technology in airports. It has consistently opposed the technology since then, but more recently, the ACLU has directed most of its fire at Amazon. In May, the ACLU joined other groups in sending a letter to Bezos asking him to not sell Rekognition to the government. And in June, the ACLU sent a letter to Orlando’s mayor and city council demanding the city shut down the Orlando Police Department’s pilot program to test whether the software can reliably recognize officers when they appear in the city’s surveillance cameras.

While the ACLU’s most recent claims about Rekognition have received considerable media attention, they have not yet been substantiated. Ironically, the ACLU, an organization that regularly advocates for transparency, has yet to release the code or the data it used in its test, even though it said this analysis was done with publicly available data. This information is necessary so that others can independently verify its results. Good research should be replicable and reproducible.

The ACLU may be reluctant to release its code because it appears to have poorly implemented the software. The ACLU has admitted that it used a confidence threshold of only 80 percent. This threshold is suitable for some uses, such as tagging friends on social media, but Amazon recommends using a 99 percent threshold when higher levels of accuracy are necessary, such as in law enforcement. Indeed, Dr. Matt Wood, who oversees machine learning at Amazon Web Services, says that the ACLU’s reported error rate would drop from 5 percent to 0 percent at this higher confidence level.
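
To make the threshold concrete, here is a minimal sketch of a face search at the 99 percent setting, assuming a face collection has already been created and indexed (the collection name “mugshots” and the file name are hypothetical). The FaceMatchThreshold parameter is where the 80-versus-99 choice is made:

    import boto3

    client = boto3.client("rekognition", region_name="us-east-1")

    # Search a pre-built face collection for matches to a probe photo.
    # "mugshots" and "probe.jpg" are hypothetical names for illustration.
    with open("probe.jpg", "rb") as image:
        response = client.search_faces_by_image(
            CollectionId="mugshots",
            Image={"Bytes": image.read()},
            FaceMatchThreshold=99,  # Amazon's recommendation for law enforcement uses
            MaxFaces=5,
        )

    # At 99, far fewer candidate matches are returned than at the ACLU's 80 setting.
    for match in response["FaceMatches"]:
        print(match["Face"]["FaceId"], f"{match['Similarity']:.1f}%")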

Similarly, the ACLU may be reluctant to release its data because doing so could undermine its claims of bias. The ACLU says that Rekognition is biased because nearly 40 percent of the incorrect matches were people of color, while only 20 percent of members of Congress are people of color. Indeed, at first glance, this may seem like bias. But is that the right metric? What if the database of mugshots consisted of 60 percent people of color? Whether intentionally or not, if racial minorities were disproportionately represented in the mugshot database, the ACLU would have made it more likely for people of color to be falsely matched, regardless of how the algorithm itself performs.
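
To see why the database’s composition, not Congress’s, sets the baseline, consider a back-of-the-envelope simulation with hypothetical numbers (the ACLU has not released the actual demographic breakdown of its 25,000 mugshots):

    import random

    # Hypothetical assumption: 60 percent of the mugshot database are people
    # of color. This figure is illustrative; the real breakdown is unreleased.
    SHARE_PEOPLE_OF_COLOR = 0.60
    NUM_FALSE_MATCHES = 28  # the number of false matches the ACLU reported

    # Simulate an unbiased matcher: each false match hits a database entry
    # uniformly at random, with no regard to race.
    random.seed(0)
    hits = [random.random() < SHARE_PEOPLE_OF_COLOR for _ in range(NUM_FALSE_MATCHES)]

    share = sum(hits) / NUM_FALSE_MATCHES
    print(f"Share of false matches that are people of color: {share:.0%}")
    # Under these assumptions, an unbiased matcher still yields roughly 60
    # percent, three times Congress's 20 percent share, purely from base rates.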

Certainly, it is possible that Rekognition performs better on lighter-skinned individuals compared to darker-skinned ones. Indeed, the National Institute of Standards and Technology (NIST) regularly tests many facial recognition algorithms and has documented how error rates vary by race and gender, and companies like Microsoft and IBM have announced initiatives to address these discrepancies. However, the ACLU was selective in claiming bias: It failed to publicize the fact that less than 4 percent of the false matches were female even though women hold 20 percent of the seats in Congress.

It is unclear what error rate and level of bias groups such as the ACLU are willing to accept. The standard should not be perfection, but rather better than the rates humans achieve today. And by that metric, facial recognition technology is clearly a positive step forward. Not only does it dramatically outperform humans in matching speed (and cost), but it also beats humans on accuracy. And while different algorithms vary in accuracy across race and gender, these differences generally pale in comparison to well-documented human biases.

The ACLU’s most recent gimmick is unfortunately another setback for legitimate uses of facial recognition by law enforcement, which include not only identifying suspects, as in the Capital Gazette shooting, but also aiding victims of human trafficking and child exploitation. Moreover, it is a distraction from legitimate efforts to improve facial recognition, as well as from necessary conversations about how to address serious instances of abuse, misconduct, and bias in policing.
