Why Facebook’s Project Aria Makes the Case for Light-Touch Privacy Regulation

Ellysse Dick October 26, 2020

(Ed. Note: The “Innovation Fact of the Week” appears as a regular feature in each edition of ITIF’s weekly email newsletter.)


At Facebook’s Connect developer conference, the company unveiled Project Aria, an initiative “to help understand what hardware and software are needed to build AR glasses.” According to Facebook Reality Labs, a small set of trained employees representing different genders, abilities, and ethnicities will wear a headset in their homes and on Facebook campuses, which will gather data using external-facing cameras, as well as various sensors.

The reactionary response would be to limit or even halt this type of project in the name of privacy. But overregulation of augmented reality (AR) research will stifle innovation, hold back U.S. companies from developing these technologies, and ultimately make consumers vulnerable to privacy threats in the future.

Augmented reality is an immersive technology that overlays digitally rendered content on the user’s surrounding environment. Whether it’s showing users the best walking route, allowing them to virtually try out new products, or giving them information about an object they look at, AR combines virtual elements with the real world in real time. To complete these tasks, AR devices need significant prior knowledge of the world around them – and of how users interact with that world – before users even turn them on. Wearable AR devices in particular face a complex challenge: they must be able to recognize objects, their cameras need to be positioned just right to see what the wearer sees, and their sensors should understand behavioral cues, such as hand or eye movements, to respond to a user’s needs.

To solve these challenges and build a robust AR ecosystem, companies need to collect a large amount of training data about locations, movements, and mannerisms. Without making these investments now, the full possibilities of AR will remain a distant goal.

Unfortunately, a muddled regulatory environment threatens to hold back this type of experimentation. Already, the unclear compliance requirements of biometric data laws such as those in Illinois and Texas are complicating this kind of research. While AR providers must deal with important privacy questions, such as securing biometric data and protecting bystanders’ privacy, state and federal policymakers should resist the urge to impose new regulations on AR. High compliance costs and regulatory uncertainty could discourage companies from making the necessary investments in AR, leaving U.S. companies behind in developing and deploying these technologies.

Because AR is still developing, it is virtually impossible to predict all the potential uses and risks at the outset. To uncover both, companies should have the freedom to innovate not only the technologies themselves, but also solutions to the challenges of privacy and security. Facebook’s approach already points to beneficial practices that other companies developing new AR products may adopt, such as using trained volunteers, considering privacy and security from the outset, and soliciting input from civil society.

An enabling regulatory environment will create opportunities to build on these approaches and further develop industry-wide best practices and voluntary codes of conduct. The United States’ historically light-touch approach to privacy regulation has the potential to allow U.S. companies to emerge as leaders in AR while building on hard-learned lessons about how best to protect consumer privacy. Creating stricter privacy regulations, whether aimed intentionally or unintentionally at AR, would set back efforts to develop and adopt these technologies, leaving U.S. companies at a competitive disadvantage and consumers worse off.