(Ed. Note: The “Innovation Fact of the Week” appears as a regular feature in each edition of ITIF’s weekly email newsletter.)
The Office of the Australian Information Commissioner (OAIC) has issued two major rulings in the past month arguing that companies using facial recognition have violated the privacy of Australians under the country’s Privacy Act 1988. If these rulings stand, they will make it risky for organizations to deploy not only facial recognition but most other forms of biometrics in the future. If the OAIC is unwilling to adopt a narrower interpretation of this law, then Australian lawmakers should intervene.
The first case involved the convenience store 7-Eleven. Between June 2020 and August 2021, the company deployed tablets throughout its retail locations to allow customers to complete a voluntary survey about their in-store experience. The company used facial recognition to prevent customers from completing multiple responses to the survey in the same location on the same day, and it deleted all of the images it collected after seven days. However, Angelene Falk, the Australian Information Commissioner and Privacy Commissioner, said 7-Eleven violated the Privacy Act because “any benefits to the business in collecting this biometric information were not proportional to the impact on privacy.” In other words, she’s saying that the risks to consumers outweigh the benefits to the company.
The Commissioner is wrong on two counts. First, using facial recognition does not create an inherent risk to consumers. The Commissioner’s argument is that any biometric information presents a serious privacy risk because “should this kind of information be misused or compromised…it cannot be reissued or cancelled like other forms of compromised identification information.” However, as security professionals know, biometrics are not private—unlike passwords or PINs, they are not secrets. For example, anyone can create a biometric identifier (i.e., a faceprint) of another person using a photo of them. Treating faceprints as private information fundamentally misunderstands the nature of biometrics and misleadingly suggests that organizations should never use them because merely collecting this data presents a significant privacy risk to consumers.
Second, the Commissioner has created a completely arbitrary standard for judging these tradeoffs. How does the Commissioner know that the risks outweigh the benefits? Simply because she says they do. The irony in this case is that customers completed the survey in 7-Eleven stores, and all of these stores have security cameras that monitor customers. Indeed, one common 7-Eleven store sign reads: “Site is under constant video surveillance” and “By entering the store you consent to facial recognition cameras capturing and storing your image.” While the OAIC has not objected to these cameras (yet), the Commissioner argues that biometrics were not necessary to prevent fraudulent responses to the survey because 7-Eleven could have instead simply “included additional survey questions asking the customer whether they had already responded to the survey in the same store in the last 20 hours.” Maybe 7-Eleven should also eliminate its security cameras and instead just ask shoplifters to leave their contact information before stealing things. That would minimize the impact on privacy too.
The second case involved Clearview AI, a U.S. company that offers a service to law enforcement agencies that allows them to search for someone in public images on the Internet, including social media, news media, and mugshot websites. The Commissioner argued that Clearview’s processing of images is “unreasonably intrusive and unfair” because Australians “don’t expect their facial images to be collected without their consent by a commercial entity to create biometric templates for completely unrelated identification purposes.” Once again, the OAIC’s complaint lacks merit.
Consider the four ways the OAIC alleges Clearview is harming consumers. First, it suggests that using Clearview may result in the “misidentification of a person of interest by law enforcement.” While there is always a risk of misidentification in law enforcement, facial recognition technology is highly accurate and can help law enforcement agencies avoid human errors in identification. Second, the OAIC warns of the “risk of identity fraud that may flow from a data breach involving immutable biometric information.” The risk of a data breach exists with any data and should never be the sole basis for preventing data collection, but because Clearview uses images that are publicly available on the Internet, there is virtually no additional risk from a data breach since the images are already public. Third, the OAIC notes that there could be “misuse of the Facial Recognition Tool for purposes other than law enforcement.” The only reason the Commissioner included this potential harm is that Clearview has a patent application that discusses non-law-enforcement uses of its technology. However, Clearview only makes its tool available to law enforcement, so this risk is currently a non-issue. If the company were to use the tool in Australia for purposes other than law enforcement, the OAIC could intervene at that point.
Finally, the OAIC warns that Clearview may “adversely impact” the “personal freedoms” of Australians. But this argument is particularly weak because the marginal privacy impact of scraping and analyzing photos that are publicly available on the Internet is minimal. Again, these images are already public. To the extent there is a privacy impact, it occurs first when someone takes a photo of another person and again when someone posts that photo to the public Internet. Indeed, the core complaint of the OAIC seems to be no different from the ones made over a century ago about Kodak “fiends” who made it impossible for people to walk down the street without someone potentially taking a photograph of them. However, society accepts today that, in general, people do not have an expectation of privacy in public. And if the mere act of downloading a photo that is publicly available on the Internet creates a privacy harm, then the OAIC is going to need to ramp up its enforcement and start shutting down not only a variety of Internet services that transfer or cache photos but also individual users who exchange photos showing the faces of their friends and family.
The OAIC has ordered Clearview to cease collecting facial images from individuals in Australia and to destroy all images and faceprints it has already collected. It is unclear how the company can comply with such an order—while the company might be able to reasonably assume anyone riding a kangaroo or wrestling a crocodile is Australian, it will have a hard time identifying the rest. Moreover, the OAIC will quickly find it lacks jurisdiction to enforce its rules outside Australia. For example, the OAIC will likely find it is powerless to stop Chinese companies from scraping Australian photos into foreign databases with no legal consequences.
Taken together, these two rulings show the OAIC has taken an extreme position on facial recognition that has serious implications for any organization using biometrics. They also send a message to the rest of the world that Australia does not welcome emerging technologies or the companies that use them. The OAIC should reconsider these orders, and if it refuses, Australian legislators should consider revising the Privacy Act to more narrowly tailor its restrictions on biometrics and avoid limiting useful deployments of this technology in the future.