Don’t Demonize Facial Recognition Technology, Establish Rules and Norms for Its Use

May 24, 2018

Alarmed by Amazon’s decision to sell facial recognition technology to law enforcement agencies, several privacy advocates have sent a letter to the company demanding that it stop selling the technology to the government. Unfortunately, rather than seek commonsense rules or norms around the use of facial recognition, these groups are trying to use the court of public opinion to demonize the technology and force a popular company into not selling it.

Facial recognition technology is a form of automated image recognition that uses computer algorithms to uniquely identify an individual in a database based on a photo. Amazon’s take on the technology, called Rekognition, uses artificial intelligence to identify objects and people from photos or video. For example, Rekognition was recently used to identify famous guests at the royal wedding. In 2016, Amazon started selling the technology to police departments to assist in emergency responses or keep track of certain people, such as the mayor of the city.

Advocates worry about facial recognition technology being used with police body cameras to create a “municipal surveillance system.” This kind of hand-wringing is not new. Concerned with the growing accuracy of facial recognition and its proliferation, some privacy advocates have argued that the technology is a threat to privacy and public anonymity and have sought to limit many different public- and private-sector uses of it.

However, facial recognition technology has many beneficial uses for society, even when used by the government. For example, some organizations have partnered with law enforcement to use facial recognition and artificial intelligence technologies to help find victims of human trafficking. Even the use case that privacy advocates are crying wolf over, facial recognition systems integrated with police body cameras, can benefit society. The technology can immediately identify people with arrest warrants by searching databases containing arrest information and facial data, or it can be used with a city’s cameras to quickly locate “persons of interest” in the event of a terrorist attack. For example, the Boston Police used CCTV camera evidence to identify suspects in the Boston Marathon bombing, but this search could have been much more efficient and effective with technology like Rekognition.

This week’s letter cites concerns about over-policing and surveillance, especially in communities of color, as the reason not to sell this technology to the government. However, the widespread deployment of body cameras can help reduce this concern. Body cameras can help protect communities of color by holding law enforcement accountable. Indeed, indisputable video evidence can offer a better record of an altercation, reducing uncertainty over events and helping bring about justice for all involved. This is why even the ACLU calls on-body police cameras a “win for all” as long as the right protections are in place. There is no reason to suggest the same cannot be true for facial recognition technology.

This is where we get to the heart of why the letter to Amazon is troubling. These organizations are not arguing for “the right protections” to be put into place. Nor are they trying to have a conversation about what rules or norms should govern the use of facial recognition technology. Instead, they are demonizing the technology and trying to stop a company from selling it. But facial recognition, like any technology, is just a tool that can be used for good or ill. The goal should be to protect people from harms that result from the abuse of the technology, not to stop its use altogether. After all, will not another company simply take Amazon’s place and sell technology with similar capabilities to law enforcement?

As Daniel Castro and I have written before, the rhetoric around emerging technologies is often alarmist and can cause lawmakers to overreact to perceived fears. Rather than approach the debate over how to govern facial recognition technologies from a place of fear and demonization, policymakers and advocates should have a constructive conversation over what harms are caused by this technology and tailor regulations to prevent those harms.

Working with the government and making government functions more efficient and effective should be seen as patriotic. Only by de-stigmatizing the technology can we have a constructive conversation and adequately capture its benefits.
