IRS Was Wrong to Give In to Hysteria and Drop Use of Facial Verification to Fight Fraud and Protect Consumers

February 8, 2022

Last year, the Internal Revenue Service (IRS) announced that it would begin using facial recognition to improve its identity verification process when taxpayers access its online tools (e.g., to get a copy of their tax records), in order to increase consumer security and reduce fraud. To do this, taxpayers would upload a copy of their government ID, along with a video selfie, to verify their identity. Not surprisingly, anti-facial recognition activists rallied their forces, claiming that the technology was too intrusive, too biased, and too risky. Unfortunately, their campaign was successful: the agency announced this week that it would transition away from using the technology. This announcement is disappointing not only because it represents a step backwards for digital transformation in government but also because it shows how attacks against facial recognition can win out even when they are not supported by facts or evidence.

The primary reason detractors give for opposing the IRS’s use of facial recognition is their “serious concerns about privacy,” although the details of those privacy concerns are a bit murky. After all, the IRS maintains extensive records about taxpayers’ most sensitive financial information, so the idea of the agency also having access to a database of selfies does not seem particularly risky. Some objected to the IRS using a private company, ID.me, to operate its facial verification system, arguing that the company might misuse this information. But again, the IRS routinely uses contractors, including to process sensitive taxpayer information, and requires them to adhere to strict privacy controls and subjects them to penalties for violations, so there is no particular reason why facial recognition presents unique privacy risks.

Critics also claim that the IRS should not use facial recognition because “research shows people of color are more likely to be misidentified.” Here too, the evidence does not support the claims, as independent testing by the National Institute of Standards and Technology (NIST) has shown that the best performing facial recognition algorithms have high accuracy rates across most demographics. In addition, the specific algorithm used by ID.me has performed very well in these tests, with little variation based on demographics.

Moreover, the implication of these incorrect claims about facial recognition’s “bias” seems to be that the IRS would underserve communities of color by locking them out of important government services, which shows just how little the critics understand the technology. It is important to remember that there are two types of errors—false positives (i.e., the system says two photos are of the same person but they are not) and false negatives (i.e., the system says two photos are not of the same person but they are). Higher false-positive rates do not decrease access to services because they do not stop anyone. Higher false-negative rates could potentially decrease access to services, but as NIST notes in one of its recent reports, “false negatives can often be remedied by making second attempts.” In other words, in the relatively rare instances when someone’s photo doesn’t match their government ID, such as because of poor lighting, they can probably just take a new selfie.
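The distinction between the two error types can be made concrete with a short sketch. The similarity scores and the 0.80 threshold below are hypothetical illustrations, not values from ID.me or any real system; verification systems generally compare a similarity score against some tuned threshold in roughly this way.

```python
# Illustrative sketch of the two error types in face verification.
# All scores and the threshold are hypothetical, for illustration only.

THRESHOLD = 0.80  # scores at or above this count as a "match"

def verify(similarity_score: float) -> bool:
    """Return True if the system declares the two photos a match."""
    return similarity_score >= THRESHOLD

# False positive: an impostor's photo scores above the threshold,
# so the system wrongly accepts a non-matching pair of photos.
impostor_score = 0.85
false_positive = verify(impostor_score)

# False negative: a genuine user's first selfie scores low
# (e.g., because of poor lighting), so a true match is rejected...
first_attempt_score = 0.70
rejected_first_try = not verify(first_attempt_score)

# ...but, as NIST observes, a second attempt often remedies it.
second_attempt_score = 0.92
accepted_on_retry = verify(second_attempt_score)

print(false_positive, rejected_first_try, accepted_on_retry)
```

Note that only the false-negative case affects access to the service, and a retry resolves it; the false-positive case is a fraud risk, not an access barrier, which is why the "lockout" framing misreads how these systems fail.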

Finally, some critics fall back on the claim that the technology presents too much of a security risk to people. For example, critics have argued that if “hackers were able to obtain the ID.me selfie records, it could be especially damaging, with potential uses ranging from committing fraud and identity theft to blackmailing people.” But a person’s face is not a secret, as anyone who has ever gone out in public can attest. The purpose of using facial recognition to enhance user authentication is not that the information itself is unknown or unobtainable by anyone else, like a password or PIN, but that it is difficult for hackers to impersonate. After all, most facial recognition verification systems (including the one the IRS was using) use a “liveness check” to ensure that the selfie is genuine and not just a photo downloaded off the Internet.

As tax season gets under way, it is disappointing to see that the IRS has succumbed to the concerted attacks by advocacy groups opposed to any and all forms of facial recognition. Every year, the IRS attempts to stop billions of dollars of refund fraud, identity theft, and other financial crimes that hurt everyday Americans, and greater use of facial recognition would have been a step in the right direction. Moreover, with constrained budgets and staffing challenges, not to mention steadily increasing demands on the agency, the IRS can barely keep up with its workload.

The only viable solution to this problem is greater use of automation and analytics to increase agency productivity and better use of customer-facing IT. Indeed, the IRS has already embarked on a multiyear IT modernization initiative that will require it to invest billions of dollars in technology upgrades to increase its operational efficiency, enhance the taxpayer experience, and strengthen cybersecurity. However, the IRS is destined to fail if policymakers do not give the agency sufficient latitude to embrace best-in-class services available from the private sector, including the use of facial recognition and other biometrics.
