Podcast: A Doorman for the Masses—Debunking Attacks on Facial Recognition, With Daniel Castro

July 12, 2021


Facial recognition technology has faced widespread allegations of discrimination in recent years, leading some cities to restrict its use—but exactly how valid are these claims? Rob and Jackie sit down with ITIF’s vice president and director of the Center for Data Innovation, Daniel Castro, to discuss why many of the claims are misleading, and how facial recognition can make public and private services more accessible, efficient, and useful.

Rob Atkinson: Welcome to Innovation Files. I’m Rob Atkinson, Founder and President of the Information Technology and Innovation Foundation. And we’re a DC-based think tank that works on technology policy.

Jackie Whisman: And I’m Jackie Whisman. I handle outreach at ITIF, which I’m proud to say is the world’s top-ranked think tank for science and technology policy.

Rob Atkinson: And this podcast is about the kinds of issues we cover at ITIF, from broad economics of innovation to specific policy and regulatory questions about new technologies. And today we’re talking about a pretty new technology, facial recognition.

Jackie Whisman: And our guest today is Daniel Castro, who is Vice President at ITIF and Director of ITIF’s Center for Data Innovation. He writes and speaks on a variety of issues related to information technology and internet policy, including privacy, security, intellectual property, internet governance, e-government, and accessibility for people with disabilities. He’s a busy guy. Thanks for being here, Daniel.

Daniel Castro: Thanks for having me on.

Jackie Whisman: So maybe we should start easy. Can you let our audience know what we’re talking about when we say the words facial recognition and maybe quickly describe the technology itself?

Daniel Castro: Sure. So facial recognition basically just means using a computer to compare images of faces. Facial recognition is generally used to answer one of two different questions. One question can be: is the face in this image the same as the face in another image? And this is typically what you use to confirm someone’s identity. So for example, if you go to the airport and the Border Patrol agent asks to see your passport, they’re matching you against the photo in your passport. You can also have a computer do the same thing.

That’s also the same type of question you’d be answering if you’re using, for example, your iPhone to unlock the phone with your face. The second question you could ask is: does the face in this image appear in a database of images? So basically, who is this person? And this is what you do when you want to know somebody’s identity and you’re trying to figure out who they are. So for example, when you upload a photo on Facebook or other social media, and then you have the website look to see if anyone in the photo is one of your friends, that’s facial recognition.
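The two questions described above correspond to the two standard modes of face matching, often called 1:1 verification and 1:N identification. Here is a minimal illustrative sketch, assuming faces have already been converted into numeric embedding vectors; the function names, vectors, and threshold are hypothetical, not any vendor’s actual API:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify(probe, reference, threshold=0.8):
    """1:1 verification: is this the same person as the reference photo?"""
    return cosine_similarity(probe, reference) >= threshold

def identify(probe, database, threshold=0.8):
    """1:N identification: who in the database best matches this face, if anyone?"""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name  # None means "no confident match"
```

Note that identification returns nothing at all when no stored face clears the threshold, which is why, as discussed later in the episode, the threshold chosen matters so much.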

Jackie Whisman: And how did it become such a hot topic? Why are people so angry about it? That seems to be what we’re hearing these days.

Rob Atkinson: Well, some people are angry about it.

Jackie Whisman: We’re not.

Rob Atkinson: Some people are in love with facial recognition.

Daniel Castro: Well, so one of the issues that came up is that people started conflating a lot of different technologies together. So we just talked about facial recognition. There’s also face detection, where the simple question is: is there a face in this photo? And then there’s facial analysis, where you’re asking questions about the characteristics of a face. So does this face look like somebody who is young or old? Does this face have the eyes open or closed? Is there a beard or no beard? And generally this works pretty well, but it’s not perfect. And in particular, it’s not very good at predicting things like someone’s gender, for some fairly obvious reasons. You can think about something like hair length. Long hair might be a good indicator that someone’s not a man maybe 75% of the time, but certainly not always. And the developers of these systems knew that. So the systems themselves don’t make conclusive predictions; instead, what they give are confidence scores.

So a facial analysis system might say it is 85% confident that this photo shows someone smiling. But there was a study that came out of the MIT Media Lab, which found that these commercial facial analysis tools were worse at predicting gender on darker and female faces. Now, this is a problem, but the problem wasn’t as big as the study made it out to be, because they ignored these confidence levels. So if a system said a face was 75% likely to be male, they treated that the same as if the system had said it was 95% likely to be male, even though the developers specifically say that if you’re using this in an application where confidence levels matter, you need to use 99% confidence levels. And in many cases, of course, accuracy doesn’t really matter. A store, for example, might use something like this to figure out roughly what percentage of its customers are male or female.
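To see why ignoring confidence scores matters, consider a toy calculation. The predictions and numbers below are invented for illustration, not taken from the study: treating every low-confidence guess as conclusive inflates the apparent error rate, while applying the vendor-recommended threshold filters those guesses out.

```python
# Hypothetical output of a facial analysis tool:
# (predicted_label, confidence, true_label)
predictions = [
    ("male",   0.99, "male"),
    ("male",   0.75, "female"),  # low-confidence guess, wrong
    ("female", 0.97, "female"),
    ("male",   0.62, "female"),  # low-confidence guess, wrong
]

def error_rate(preds, min_confidence=0.0):
    """Error rate counting only predictions at or above min_confidence."""
    accepted = [(p, t) for p, c, t in preds if c >= min_confidence]
    if not accepted:
        return 0.0
    return sum(p != t for p, t in accepted) / len(accepted)

# Treating every prediction as conclusive, as the study effectively did:
naive = error_rate(predictions)         # 2 wrong out of 4 accepted
# Applying the recommended 99% confidence threshold:
strict = error_rate(predictions, 0.99)  # only confident predictions count
```

With these made-up numbers, the naive reading reports a 50% error rate while the thresholded reading reports 0%, which is the gap between how the study scored the tools and how the developers said they should be used.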

So this study came out and the media ignored all the nuances here. And you saw lots of articles basically taking the claim and saying facial analysis is the same thing as facial recognition. And so suddenly what we were seeing in the news was this claim that facial recognition is sexist and racist, even though there was absolutely no study out there to support that claim. And so eventually people started to kind of figure this out, but then the ACLU entered the picture. Now, the ACLU has long opposed any form of public surveillance. They’re against public cameras. They don’t like anything that looks like surveillance. So they decided to make hay out of this racial bias issue, and they released a paper claiming that Amazon’s facial recognition technology falsely matched 28 members of Congress against a database of mugshots, and that it disproportionately matched black lawmakers.

And so there were three big problems with the ACLU study. First of all, the ACLU never let anyone publicly validate its results. That’s just not scientific research if you can’t replicate it. Second, the ACLU never shared details about the images they used. And that really matters, because what you’re matching against can significantly impact the results. And finally, when people did try to replicate it as best they could, it appeared that what the ACLU had done was intentionally use a very low confidence threshold, 80%, rather than the 99% Amazon recommends, to create the appearance of problems. And for the most part, it’s worked. That is the new media narrative that we see.

Rob Atkinson: You mentioned MIT and the ACLU. And when people hear that they think, "Oh, well, objectivity." And I think what’s important to recognize is that the MIT researchers who did this are not objective; they’re basically academic activists who have made it a mission to oppose facial recognition. And surely the ACLU is not objective. They want to do everything possible to paint this technology in the worst possible light. And one of the ways they do it is by conflating facial recognition, which never triggers anything unless there’s a match, with sort of full-time Soviet or Chinese government surveillance. Now, the last thing in the world we would want would be government cameras that record everywhere you go, but that’s not what this is. No facial recognition system in the United States that’s ever even been considered for deployment would do that.

What they would look for would be, "Okay, we now know that there’s a child abductor out there. We have the child abductor’s face. Can we then set up an automatic trigger? If any of the cameras were to see that face, they would report it to law enforcement so we could catch this person." That’s really what we’re talking about here. But groups like the ACLU want to spin a narrative that says, "No, no, no. When Dan or Jackie are walking or driving down the street, the government’s going to track them, and they’ll be in a database for the rest of their lives."

Daniel Castro: Yeah. And that’s the thing: if all I knew about facial recognition was what I heard from the ACLU, I’d want to ban it too. But as you said, that’s not what’s happening here. It’s not real-time surveillance, and it’s not trying to create this Chinese-style database of where everyone goes. And so what we’ve seen is a lot of government officials reacting to hearing only the MIT and ACLU side of the story. And so they created a number of reactionary laws. They’ve also, of course, been motivated by the last year and concerns about racial justice and policing. And those two issues have now kind of come together and been conflated. But over the last year, we’ve basically seen a dozen cities, from San Francisco to Oakland, Boston, Jackson, Mississippi, and Portland, create some type of ban on the technology. And some have banned it just for law enforcement; some have banned it across the whole government.

And a few, like Portland and now Baltimore, are trying to ban it even for the private sector. And the police bans don’t make sense. The technology can be used to find, as you said, an older adult who’s missing, or a missing child. It can be used not even in this kind of real-time surveillance, but simply when you have a photo of, for example, a victim or a witness of a crime, and you can use it to figure out who that person is and proceed with an investigation. Facial recognition was used in the Capital Gazette shooting in Annapolis a few years ago to quickly track down the shooter. I think what’s even more troubling is that we’re seeing some of these bans being extended to the private sector. And that’s where the technology has so many benign uses, uses that are good for consumers and good for businesses.

It can be used in the hospitality industry, so when you go to your hotel, you can automatically get checked in or checked into a loyalty program. It can be used in retail to greet customers or spot shoplifters. You can use it to get into your gym or your apartment building. And it also has a lot of really useful applications for people with disabilities. So if you’re blind, if you have memory loss, or if you have what’s called face blindness, where you can’t recognize people’s faces, you can use this technology to better navigate your life. But these laws would actually restrict even some of these personal uses.

Rob Atkinson: Of all the terrible things about COVID, one of the good things is that I tend to watch a lot of streaming BBC with my wife now. And we’ve gone through, I think we’re on, our fifth different crime series. The BBC does the best crime series. The one we’re on now is called DCI Banks. But anyway, they all usually end up solving some sort of crime, or helping solve some sort of crime, by looking at the CCTV, the closed-circuit TV cameras that are all around the UK. And what’s striking about that is there’s not somebody looking at the CCTV every day and tracking where people go; rather, there’s a crime on a street and they’ll look at the CCTV to see whether they can find the car that was related to it.

And oftentimes they do. But imagine if they could use facial recognition. There was an episode that we watched last night where there was a picture of the car and a picture of the person, but they didn’t know who the person was. It took them a lot of detective work to find that person, and that person was a criminal whose face was, I assume, in a database. Imagine if they’d had facial recognition and could use it. They could say, "Oh, well, that criminal who just committed a murder, we know who that person is." And again, that’s very different from having... and we can talk later about how we deploy this in a way that allows the good uses and not the bad ones. But I think people oftentimes see it as a black or white thing. It’s either a ban or it’s a Chinese total surveillance system.

Daniel Castro: Yeah. And I think when you ask people the question, "What do you think about facial recognition being used to surveil people?" they don’t like it. But when you ask them, "Hey, if you have a photo or a video of somebody stealing your packages off your doorstep, would you like the police to be able to identify who that criminal is?" they’d say, "Yes." In fact, that’s what we’re seeing around the country: as people get these new video doorbells and capture these images, they actually take them to the police. And police are saying, "We can’t do this manually. If we want to be responsive to the community, we need a better way. And it can’t be sitting here manually looking through mugshot photos. We need to be able to quickly do these types of searches if we want to be effective."

Jackie Whisman: And actually having a single police officer or a single detective searching through mugshots and comparing them to Ring cam footage is probably a lot more biased than having an algorithm do it for you.

Daniel Castro: Absolutely. I mean, what’s interesting here is the technology itself is very accurate. Advances in AI have sped up improvements in the technology tremendously over the past decade, so most of this has been driven by advances in AI. Because we use machine learning, we’re able to automate the learning process and figure out the best ways of matching images together. And what we found is that NIST, the National Institute of Standards and Technology, compared algorithms, and they found that the algorithms in 2019 were a hundred times more accurate than a decade before. And the most accurate algorithms now have error rates of only 0.1%. And as we get higher quality cameras, the accuracy rates will only go up.

Rob Atkinson: Could you also talk about the... There’s a well-reported NIST study that, I have to say, virtually every media story got wrong. And I’ve been on panels where it’s reported, "Well, NIST found that algorithms are racially biased." Can you talk about that study and what the right interpretation of it was?

Daniel Castro: Yeah. So as I said at the beginning, there are kind of two main things facial recognition can do: it can verify someone or it can identify someone. And so NIST went in and did this testing, and they looked at all the different algorithms they tested, and they wanted to see: are there demographic differences in how these algorithms perform? And they found that the most accurate identification algorithms have undetectable differences between demographic groups. And when it comes to verification, the most accurate verification algorithms have low false positive and false negative rates across most demographic groups. So we’re talking really low numbers here, like 0.1% or 0.3% false positive rates. And this is incredibly good. Now the problem is, again, the way it was interpreted. They tested maybe 200 algorithms. The top 15 performed great. These are the ones that are going to be used at government agencies.

Daniel Castro: These are the ones that are being adopted by the private sector. The ones at the very bottom of the list, I mean, some of these were coming from Chinese labs that are even prohibited from doing business in the United States, so these aren’t getting deployed. But what the reporters would say is, "Well, on average," taking all 200 algorithms and just averaging them together, "there were demographic differences." Now, what does that tell you? I mean, it’s like looking at all possible planes that can be sold, not just the Boeing ones that we all fly on, but also the experimental kits that someone can build, and saying, "On average, this many planes crashed per year." That tells you nothing about what actually happens when you’re flying on a commercial airline. And so that’s where, again, the details really matter.
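The averaging fallacy described above can be shown with a toy calculation. All error rates below are invented for illustration, not NIST’s actual figures: a long tail of poor, never-deployed algorithms drags the overall average far away from what users of production systems actually experience.

```python
# Illustrative error rates (as fractions) for a field of tested algorithms:
# a handful of accurate, widely deployed ones and a long tail of poor ones.
deployed = [0.001, 0.002, 0.003]      # top-tier, production systems
long_tail = [0.15, 0.25, 0.40, 0.35]  # experimental, never deployed

def mean(xs):
    """Arithmetic mean of a list of numbers."""
    return sum(xs) / len(xs)

overall_average = mean(deployed + long_tail)  # dominated by the long tail
deployed_average = mean(deployed)             # what users actually encounter
```

With these made-up numbers, the "average algorithm" has an error rate nearly a hundred times worse than any algorithm anyone would actually deploy, which is exactly why headline averages over all 200 tested algorithms were misleading.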

Jackie Whisman: And this narrative really has far-reaching implications in the policy discussion. You spend a lot of time tracking state and local legislation on these issues, so I’d love for you to get into the kinds of laws that we’ve seen in response to this fear.

Daniel Castro: Well, the number one response we’ve seen is bans. You just have some of these local governments trying to ban the technology across the board. And sometimes these are very reactionary laws that are not well thought through. So for example, when San Francisco passed its ban, it later had to go back and amend it, because it had inadvertently banned government employees from using an iPhone, which has facial recognition on it. And that’s the kind of troubling, quick, not-well-thought-through response we’re seeing. In other cities, they’re already using the technology, and so they’re just trying to be a little more transparent about how they’re using it. They’re trying to put in place better notice for citizens about what they’re doing, maybe hold more meetings about it. They’re trying to disclose when they’re using it. There, they’re just trying to build public awareness about what’s happening.

Daniel Castro: And then at the federal level, we have the airports. Now, I know a lot of people haven’t flown over the past year, but once you start flying again, you’re going to see that many airports have adopted this technology, especially for immigration and customs. And that’s because there’s really no difference in privacy to you between handing over your passport to a person and having that Border Patrol officer verify that your face matches what’s in your passport, and having a computer do it. And when you have a computer do it, it’s definitely doing it more accurately and with less bias than when a human does.

Jackie Whisman: So where do we go from here on facial recognition?

Daniel Castro: So there are a few things that I think need to happen next. One is, we need federal privacy legislation. Right now, we have all these local governments, cities, and counties passing laws. We have states jumping into this with biometric laws. We don’t want to have this patchwork of privacy laws that inhibits use of this technology. It’s not productive. It doesn’t set clear rules of the road for businesses or consumers or law enforcement. And that’s not where we should be putting our focus. So we really need a federal privacy bill with state preemption that sets rules on how different types of biometric data can be collected, of which facial recognition data is one.

The second is, we need to address concerns about law enforcement use of facial recognition with targeted federal policies. So one thing we need to do, for example, is create minimum performance requirements for this technology. We should be making sure that law enforcement is actually getting the best systems, that they implement best practices, and that they’re using safeguards to provide oversight and accountability. We should also make sure we’re limiting any potential abuse by law enforcement. So for example, we should require that law enforcement get a search warrant anytime they’re using technology to surveil people over an extended period of time, whether it’s with facial recognition technology or anything else. We should also think about limiting the use of the technology in sensitive environments, like at protests, which is where I think some people have legitimate concerns.

And then across the board, of course, we should also be making sure that law enforcement agencies are always upholding the standard of probable cause for any arrest, whether facial recognition is involved or not. And the last area is, of course, we just want to see the continued deployment of this technology. Rob, as you mentioned at the beginning, there are a lot of people who are scared of this technology, but there are a lot of people who are also supporters of it and see a lot of real value here. And so the government can help by being a lead adopter of this technology in many areas, especially where we want to have a more contact-free environment post-COVID. So more facial recognition to enter buildings, using it for payments, using it for other types of applications in public spaces. I think there’s a big opportunity here, and hopefully government can help lead the way.

Rob Atkinson: I also want to make a point I think people sometimes forget. You did a great event about a year ago with a bunch of law enforcement experts and officials. And I remember one of the takeaways from that is that, whether it’s a rule or a practice, you don’t ever convict anybody on the basis of an algorithmic match; it’s an input into the entire process of determining whether somebody is guilty of a crime. Is that something we could enact into some kind of rules? Or is it just something we do?

Daniel Castro: Well, I mean, it’s definitely true that facial recognition is an investigative tool, and no algorithm is sending people to jail. There’s always a human in the loop, always a human involved. Now, that said, humans make errors. They make errors whether you’re using facial recognition or not. And so that’s where the focus should be: how do we make sure that those types of errors don’t occur? Narrowly focusing on facial recognition, as if that will fix injustice in the criminal justice and court system, is a huge mistake, because we’d be missing the bigger picture. And so my hope is that people who are interested in that issue keep focused on, again, the bigger picture, because that’s where real impact can be made.

Rob Atkinson: I want to close on that last optimistic note. I don’t drive my car to work very much, but I did yesterday and imagine sort of a typical day where I would drive to work and I go into the building and there’s the parking thing. I have to press some button, I have to get a ticket. And sometimes I forget where the ticket is. I got to go back up to my office when I check out and then I got to get my credit card. Imagine it just takes an image of my face. It’s in the database with my credit card. And then when I pull out, it just looks at my face again, the little gate goes up, and off I go. And then on the way home, I go to the gym. I don’t need to look around for my gym thing or whatever, it just looks at me.

When I go to work, in the elevator, we have these elevators where you have to hold a little fob up to get to your floor. Oh, you’ve got to do it, you’ve got to find the key. Imagine it just looks at your face and says, "Oh yeah, you’re on the sixth floor, up you go." Going into banks, there are so many areas where it could just make our lives so much more convenient. I mean, I already love it on my phone right now, because I can automatically log into my bank account. I don’t need to figure out the password and all, it just looks at me, and yeah, there you go. So I really think the technology makes our lives so much more convenient and provides all these other societal benefits.

Daniel Castro: Absolutely. And I think what’s interesting is, I mean, you think about a luxury apartment building that has a doorman. The doorman is there to recognize you by face. So facial recognition has long existed; we just used humans for it. And with the technology, we can automate a lot more of this and make it more accessible and widely available in lots of applications. And once people start using it, they really like it. I mean, that’s what we’ve seen with the iPhone. That’s what we’ve seen with Windows Hello to unlock your computer. It works really well, and it will only get better in the future. So I think as people become more familiar with it, they’re going to start embracing it too.

Rob Atkinson: Yeah. I like that framing. This is a doorman for the masses.

Daniel Castro: That’s right.

Rob Atkinson: All right. Well, Daniel, thanks so much for being here. This was a really great discussion of facial recognition, and I hope that view at least gets heard more widely, because the debate is oftentimes so one-sided, so hyperbolic, and driven by anti-facial-recognition activists. So really, really important.

Daniel Castro: Thanks for having me on.

Jackie Whisman: That’s it for this week. If you liked it, please be sure to rate us and subscribe. Feel free to email show ideas or questions to [email protected]. You can find the show notes and sign up for our weekly email newsletter on our website, itif.org and follow us on Twitter, Facebook and LinkedIn at @ITIFdc.

Rob Atkinson: We have more episodes and great guests lined up. New episodes will drop every other Monday. We hope you continue to tune in.
