Hypothetical Risks Should Not Get in the Way of the Rollout of Eye-Tracking in AR/VR

September 15, 2022

Eye tracking in augmented and virtual reality (AR/VR) devices has become a hot topic in the field. Proponents believe it will enhance device performance and create more realistic virtual interactions, but detractors argue it will open the door to invasive online advertising that threatens consumer privacy. Because the technology is so new, however, many of the threats attributed to eye tracking remain hypothetical, while some of its benefits are already tangible. Fears of harm are common with new technologies, but it is important to focus on demonstrable risks. Prematurely restricting eye tracking based on speculative concerns would deprive users of potentially valuable features, slowing the development of the industry while yielding no demonstrable consumer benefit.

Most of the concerns about eye tracking stem from fears about what AR/VR devices might be able to infer about users. In theory, eye tracking technology could measure physical responses, such as pupil dilation and gaze direction, that provide platforms with data about an individual’s emotional response to the content displayed on-screen. Those wary of the technology argue that this data could allow third parties to infer things about users that they do not want to share. For example, a metric like pupil dilation (studies show pupil size correlates with emotional states like arousal) could allow a platform to make assumptions about a user’s sexual preferences. Additionally, the involuntary nature of these bodily responses limits users’ ability to control how much of this data platforms collect: users cannot control their pupil dilation the way they can control whether to “like” a post. Privacy activists are concerned about the possibility that AR/VR devices might share such data with third parties, including advertisers.

While the concerns about eye tracking are theoretically plausible, they may never materialize. For example, studies that use individuals’ pupillary responses to predict their sexual interests have shown mixed results, and such inferences are unlikely to ever be very accurate at the individual level. That low reliability makes individual-level data of little value to platforms and advertisers. Instead, platforms are more likely to use aggregated and anonymized metrics, such as average gaze time, which would avoid most privacy concerns.
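
As a rough illustration of the aggregation approach, the Python sketch below reports only an average gaze time per content item and suppresses items seen by too few distinct users. The event format, field names, and minimum-user threshold are hypothetical assumptions, not a description of any platform’s actual pipeline.

    from collections import defaultdict
    from statistics import mean

    # Hypothetical raw gaze events: (user_id, content_id, gaze_seconds).
    events = [
        ("u1", "ad_42", 1.8),
        ("u2", "ad_42", 0.6),
        ("u3", "ad_42", 2.4),
        ("u1", "ad_77", 0.3),
    ]

    MIN_USERS = 3  # suppress metrics for items seen by too few distinct users

    # Group the per-user measurements by content item.
    by_content = defaultdict(list)
    for user_id, content_id, seconds in events:
        by_content[content_id].append((user_id, seconds))

    # Report only aggregate averages; no individual-level records leave this step.
    report = {}
    for content_id, rows in by_content.items():
        if len({user_id for user_id, _ in rows}) >= MIN_USERS:
            report[content_id] = round(mean(seconds for _, seconds in rows), 2)

    print(report)  # {'ad_42': 1.6} -- ad_77 is suppressed because too few users saw it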

There are many potential benefits to implementing eye tracking technology. For example, eye tracking can optimize the performance of AR/VR devices through foveated rendering, a technique in which the device renders at full resolution only the part of the image the user is actively looking at, while rendering the periphery at lower detail. Foveated rendering has become a highly sought-after feature for virtual reality devices because it makes more efficient use of the device’s processing power, allowing it to display higher-resolution images and extend battery life. At the moment, the feature is available only in higher-end devices such as the HTC Vive Pro Eye, but future devices, such as Pico’s Neo 4 Pro and Meta’s Cambria, will likely implement the technology.
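
To make the idea concrete, the Python sketch below shows the core logic of a gaze-contingent shading decision: tiles near the gaze point are rendered at full detail, while the periphery is rendered more coarsely. The region thresholds and shading fractions are illustrative assumptions, not the parameters any headset actually uses.

    import math

    # Gaze point and tile centers are given in normalized screen coordinates (0..1).
    def shading_rate(tile_x, tile_y, gaze_x, gaze_y):
        """Return the fraction of full resolution at which to shade a screen tile."""
        dist = math.hypot(tile_x - gaze_x, tile_y - gaze_y)
        if dist < 0.10:   # foveal region: render at full detail
            return 1.0
        if dist < 0.30:   # near periphery: render at half detail
            return 0.5
        return 0.25       # far periphery: render at quarter detail

    # With the gaze at the center of the screen, a corner tile is shaded at a
    # quarter of full resolution, freeing processing power for the foveal region.
    print(shading_rate(0.95, 0.95, 0.5, 0.5))  # 0.25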

Privacy activists have used their concerns about the implementation of eye-tracking technologies and the related collection of biometric data to justify more stringent privacy legislation that would limit platforms’ ability to collect this data or to restrict access to their services if a user opts out. But broad restrictions on biometric data collection could limit device functionality or encourage free riding on free-to-use, ad-supported platforms, where users who opt in to data collection effectively pay for those who opt out yet still receive full functionality.

Activists have also expressed concerns about the applicability of current privacy legislation: existing laws governing biometric data focus on personal identifiers, and it is not clear whether they would cover inferred data. Some of these concerns could be addressed by broadening the scope of privacy laws to include sensitive personally identifiable information inferred from biometric data. Additionally, adjusting data security requirements for inferred data could address some of the concerns about unauthorized third-party access in a manner that balances consumer privacy and device functionality.

In addition, government agencies such as the Department of Health and Human Services and the Federal Trade Commission should provide guidelines that illustrate how the collection and processing of information inferred from biometric data will be governed by existing privacy legislation, such as the Health Insurance Portability and Accountability Act (HIPAA) and the Children's Online Privacy Protection Act (COPPA). Doing so will provide clarity for users and developers, particularly when these devices are used for educational and health care services.

The rollout of eye tracking technologies in AR/VR devices has sparked a conversation about the risks and benefits of their implementation. But because the technology is still in its infancy, most of the arguments on the matter remain hypothetical. Acting prematurely on fears that, while plausible in a vacuum, may never materialize harms innovation and limits beneficial uses of new technologies. Policymakers should instead focus on observed and demonstrable threats, such as the legal uncertainty surrounding information inferred from biometric data, and provide guidance on how existing privacy legislation applies to immersive technologies.
