Is Mona Lisa Happy? EU Would Ban AI That Could Answer This Question

August 21, 2023

As the development and adoption of artificial intelligence (AI) continue to advance, technology critics keep finding new sources of concern and outrage. One of their latest targets is emotion recognition technology: the use of AI to identify human emotions from facial expressions, voice inflections, body language, and other physical signals. Unfortunately, the EU appears poised to crack down on this technology, which would be a mistake, since most of the criticism directed toward it is misguided and fails to consider its potential benefits.

The recent rise in opposition to emotion recognition technology has been fueled by the concerted efforts of anti-tech groups seeking to impose sweeping bans or restrictions on AI. The civil liberties group Article 19, after pushing EU lawmakers to ban facial recognition technology, has expanded its advocacy to call for a ban on emotion recognition technology in the AI Act as well. A coalition of anti-tech groups, including Access Now, European Digital Rights (EDRi), and Bits of Freedom, has also issued a statement arguing for “a prohibition on emotion recognition in the AI Act” and calling these systems “one of the greatest threats” from AI in the EU.

Unfortunately, some policymakers have embraced their concerns. The European Parliament recently adopted a new draft of the AI Act that would prohibit the use of emotion recognition technology in law enforcement, border management, the workplace, and education. Even in the United States, some policymakers have been critical of the technology, with Sen. Ron Wyden (D-OR) labeling emotion recognition technology as “bunk science” and cheering on the potential EU ban. But most of the critiques of emotion recognition technology do not stand up to scrutiny.

Some critics point out that China uses emotion recognition technology as part of its domestic surveillance activities, most notably against its Uyghur population, and they argue that democratic nations should not use the same technology. They fear a slippery slope wherein Western governments might exploit emotion recognition technology for nefarious purposes that trample on citizens’ basic rights. However, the United States and the EU have strong protections for civil liberties and human rights that do not exist in China. The appropriate response to concerns about law enforcement in democratic nations is not to hamstring agencies by cutting them off from the latest technology but to maintain proper oversight and accountability over their activities.

The reality is that most of these groups’ objections have less to do with technology and AI and more to do with their long-standing opposition to law enforcement. For example, in the United States, privacy and civil liberties groups have consistently opposed programs like the Transportation Security Administration’s (TSA) Behavioral Detection program (previously called the Screening of Passengers by Observation Techniques, or SPOT, program), which tasks agents with identifying passengers acting suspiciously. While there are legitimate concerns about the costs and effectiveness of some of these government programs, the groups opposing emotion recognition technology today are doing so less because of any inherent concerns about the technology and more because they see an opportunity to re-litigate past policy debates.

Critics of this technology often make contradictory arguments. Sometimes they argue that the technology is shockingly invasive, effectively reading people’s inner thoughts and feelings without their permission. Other times they argue that the technology does not work, claiming that AI cannot effectively detect human emotions, that attempts to do so are pseudoscience, and that companies offering this technology are peddling snake oil. These arguments cannot both be true: if the technology is not effective, then critics are wrong to claim it violates people’s privacy, and if it violates people’s privacy, then critics are wrong to claim it is not effective.

The reality is much simpler: emotion recognition technology frequently works, but it has real limitations. Critics may point to these limitations as a reason for caution, but such critiques overlook the inherent complexity of human emotions. The technology’s imperfections stem less from flawed engineering than from the intricate nature of emotion itself. A smile, for instance, can suggest a myriad of emotions, from genuine joy to veiled skepticism. Moreover, how people physically express and interpret emotions varies across cultures, personalities, and abilities and disabilities.
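This ambiguity is visible in the technology’s own output: modern systems typically return a probability distribution over candidate emotions rather than a single verdict. Below is a minimal sketch using the open-source DeepFace Python library; the image file name is hypothetical, and the exact return format may vary across library versions.

```python
# A minimal sketch, assuming the open-source DeepFace library
# (pip install deepface). The image path is hypothetical.
from deepface import DeepFace

# Analyze a single face image; recent versions of the library return a
# list with one result dictionary per detected face.
results = DeepFace.analyze(img_path="mona_lisa.jpg", actions=["emotion"])

# The "emotion" field holds a score for each candidate emotion rather
# than a single answer, e.g. {"happy": 62.1, "neutral": 24.3, ...}.
scores = results[0]["emotion"]
for emotion, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{emotion:>10}: {score:5.1f}")

# A faint smile often splits its score between "happy" and "neutral"
# rather than yielding a confident verdict, reflecting the ambiguity of
# the expression rather than a defect in the model.
```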

Despite these complexities, identifying emotions has enormous value in real-world scenarios. Indeed, emotion recognition technology would be useful for people who have difficulty identifying and distinguishing emotions on their own (a condition, known as alexithymia, that affects an estimated 10 percent of the population), as well as for people who are blind or have low vision. Moreover, in many industries, including retail, hospitality, health care, education, and law enforcement, frontline workers routinely adapt to the emotions of customers and clients. For example, a teacher might approach a seemingly sad student differently than one who appears scared, and a sales clerk might treat a customer who appears angry differently than one who looks happy. Employers often seek employees with emotional intelligence because such employees can adapt to different situations and provide better service to customers. As AI enables businesses to digitize more services, those services will need to understand human emotions to continue providing a personalized experience.

Policymakers should not succumb to irrational fears that stifle innovation. Emotion recognition technology represents a significant advancement toward a more efficient, responsive, and empathetic world. Rather than ban or otherwise restrict the technology, policymakers should direct government labs, such as the National Institute of Standards and Technology (NIST), to conduct third-party assessments of the technology or hold open competitions that encourage vendors to share benchmarks on its effectiveness.
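To illustrate what such third-party assessments could report, here is a minimal sketch of a benchmark computation: scoring a system’s predicted labels against human-annotated ground truth and reporting per-emotion accuracy. All of the data below is hypothetical.

```python
# A minimal sketch of a third-party benchmark of the kind suggested
# above. All labels and predictions here are hypothetical stand-ins
# for real annotated test data and real system output.
from collections import Counter

ground_truth = ["happy", "sad", "happy", "angry", "neutral", "happy"]
predictions  = ["happy", "sad", "neutral", "angry", "neutral", "happy"]

correct, total = Counter(), Counter()
for truth, pred in zip(ground_truth, predictions):
    total[truth] += 1
    correct[truth] += (truth == pred)

# Report per-emotion accuracy, since an aggregate number can hide
# systematic failures on particular emotions or populations.
for emotion in sorted(total):
    print(f"{emotion:>8}: {correct[emotion]}/{total[emotion]} correct")

overall = sum(correct.values()) / len(ground_truth)
print(f" overall: {overall:.0%}")
```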
