
User Safety in AR/VR: Protecting Teens

February 5, 2024

Teens are among the most enthusiastic early adopters of augmented and virtual reality devices and the metaverse. Their relative immaturity and naivete make them more susceptible to safety threats than adults. Yet current policy proposals are unlikely to make AR/VR safer and would make online experiences worse overall for both teens and adults.

KEY TAKEAWAYS

Teens are one of the most enthusiastic demographics of AR/VR users. But they may be more susceptible to safety threats than adults. Ensuring the metaverse is safe for teenagers should, therefore, be a priority for industry and policymakers.
However, a variety of factors shape teens’ decision-making, risk aversion, and relationship with tech, making each teen’s experience unique; no single trust and safety practice or policy prescription is likely to be a silver bullet.
Current policy approaches—mostly focusing on ID-based age-verification mandates—are unlikely to solve the issue and could erode users’ privacy, chill free speech, and stifle the development of the metaverse and AR/VR tech at large.
Congress should instead require that device makers and online platforms hosting age-restricted content establish a “child flag” system that allows platforms to safely assume everyone is an adult unless they have been marked as a child.
Device makers should have to build this child flag system into their operating systems’ parental controls, and apps and websites that serve age-restricted content should have to check for its signal before serving age-restricted content.
Policymakers also should expand digital literacy curricula in schools. This could give teens the information and resources to have more agency online and help them take steps to make their online experiences safer.

Contents

Key Takeaways
Introduction
The Unique Context of Teen AR/VR Use
The Current Stage of Adoption of AR/VR
Teenage Years Are Usually When Independent and Unsupervised Use of Tech Increases
The Overlap in Teens’ Online and Offline Social Graphs
Teenage Years Are Usually a Time of Self-Discovery, Curiosity, and Socialization but Also Naivete
The Teen Demographic Has a High Variance of Maturity Levels
The Role of Parents in Teen Tech Use
The “Model Household” or “Model Teen” Does Not Exist
Overview of Current Threats for All Users of AR/VR Technology
Unique Threats for Teenagers in AR/VR Technology
Sexual Predation
Cyberbullying and Virtual Harassment
Exposure to Inappropriate Content
Unhealthy Overuse of Technology
Gambling Addiction From Gambling-Like Mechanics
Recommendations
Legislators Should Not Require Age Verification for Online Services
Congress Should Require That Device Makers and Online Platforms Hosting Age-Restricted Content Establish a “Child Flag” System
Policymakers Should Direct Educators to Add AR/VR Literacy to Digital Literacy Curricula
Conclusion
Endnotes

Introduction

From the Walkman to the iPod and Facebook to TikTok, teenagers are usually an influential demographic for the adoption of novel consumer technologies because they tend to be enthusiastic about new technologies, keep up to date with novel trends, and have fewer of the ingrained consumer habits of older generations.[1] For emerging consumer technologies, success usually depends on convincing teenagers to acquire their products, especially as teens often influence other demographics to adopt these new devices.[2] Today’s teens are some of the early adopters of augmented reality (AR) and virtual reality (VR) devices and the metaverse, and the industry has taken further steps to cater to this demographic.[3] However, given teens’ relative immaturity and naivete, they are potentially more susceptible to safety threats than adults. Ensuring the metaverse is safe for teenagers should, therefore, be a priority for industry and policymakers.

As AR/VR technologies enter the mainstream, it is important to ensure AR/VR products and services do not cause injury, harm, or loss to users. Many safety threats in AR/VR, such as distracted driving and cyberbullying, were also present in technological predecessors such as smartphones, social media, and video games. In some cases, the level of immersion offered by AR/VR technology can increase the magnitude and complexity of these risks. AR/VR technologies differ from many other digital technologies in that they can provoke strong physical and psychological sensations. In addition to physical harm, there is also the potential for psychological and reputational risks.

Addressing these risks will require a mix of market innovation and thoughtful policymaking. Companies’ design decisions, content moderation practices, parental control tools, and trust and safety strategies will largely shape the safety environment in the metaverse. However, there are safety threats that can only be tackled through public policy interventions. Policymakers are already responding to teen safety concerns on “2D” platforms and devices, which has led to legislative proposals aiming to regulate teen use of tech that will likely affect AR/VR tech as well. But, before enacting these regulations, policymakers should consider the ongoing effort by AR/VR developers to provide user safety tools and ensure they do not erode the capacity of these tools to provide a safe experience. In cases where these safety tools prove insufficient or incapable of addressing a specific threat, policymakers should prioritize targeted policy interventions that aim to tackle proven harm instead of hypothetical risks.

This report is part of a series that explores user safety challenges in AR/VR technologies among three specific demographics: adults (ages 18 and older), teens (ages 13 to 17), and children (ages below 13). These divisions correspond to differences in how regulations treat each of these demographics and the unique risks they face.

This report reaches the following conclusions:

Creating a safe experience for teen users of AR/VR technology is a complex challenge. Various factors shape a teen’s decision-making, risk aversion, and relationship with tech, making each teen’s experiences unique.

Policymakers’ current approach—mostly focusing on establishing ID-based age-verification mandates—is unlikely to make teens safer and would make the online experience worse overall for both teens and adults.

The report then offers the following policy recommendations:

Legislators should not require age verification for online services. Doing so could significantly erode users’ privacy, have chilling effects on free speech, and stifle the development of the metaverse and AR/VR tech at large.

Congress should require that device makers and online platforms hosting age-restricted content establish a “child flag” system—so everyone is assumed to be an adult unless they are marked as a child. Developers would have to incorporate a standard “child flag” feature into their operating systems’ parental controls, and apps and websites would have to check for its signal before serving age-restricted content. Separately, the National Institute of Standards and Technology (NIST) can promote safe-by-design products by creating guidance for AR/VR developers to properly assess safety risks.

Policymakers should direct educators to add AR/VR literacy to digital literacy curricula by insisting that public schools help teens understand how to engage in safe online behaviors, identify common dangers online, and better understand the dynamics and social norms of the online world.

The Unique Context of Teen AR/VR Use

Multiple factors in teenagers’ use of technology shape their AR/VR experience. Many teenagers are tech savvy and enthusiastic about new technologies, which leads to high adoption and use of emerging tech such as AR/VR. About half of U.S. teens have played a game in VR, and nearly one in three owns an AR/VR device.[4] Teenagers exhibit seemingly contradictory behaviors online. On one hand, whom they interact with online overlaps significantly with whom they interact with offline.[5] As a result, online interactions tend to follow teenagers into the offline world. On the other hand, at this stage of their physical and psychological development, teens are defining much of their identity, sexuality, and interests.[6] This process leads to a desire to connect with new online communities and subgroups they might not have known in the offline world. Teens’ relative inexperience in handling conflict and certain types of online interaction leaves them without the tools to navigate situations such as harassment or cyberstalking and makes them vulnerable to sexual predation.[7] This complex behavioral pattern creates a specific context that gives rise to a unique set of safety threats for teenagers. Studying and understanding this context is necessary to know how safety threats emerge and how to tackle them.

The Current Stage of Adoption of AR/VR

As previous Information Technology and Innovation Foundation (ITIF) research has highlighted, the popularity of AR/VR devices is an important factor to consider when evaluating user safety risks.[8] A more popular, ubiquitous technology usually means that each household owns multiple devices, with each household member owning a device for personal use. This personal use often translates into users storing more personal information on a device and having a larger digital footprint, accompanied by a higher expectation of privacy from users. On the other hand, when a technology is not popular enough to warrant everyone having their own device, households will often have a single device shared by different members of the household, which usually results in users sharing less personal information and being more wary about what sites and applications they use, as they may not be comfortable sharing such information with others.

While there are important uses in industries such as manufacturing, medicine, and education, AR/VR tech is primarily used for gaming, and most consumers have not integrated the technology into their day-to-day routines.[9] Compared with past technologies, AR/VR devices are in the “family computer” stage, in which households own a single device shared by all members.[10] Thus, it is highly likely that an adult sets up these devices, and that adult’s account tends to be the primary account linked to the device, which other household members then use regardless of age. Unless the adult has diligently created alternate accounts for other household members and ensured that everyone actually uses their assigned account, anyone using the device will likely be able to access all the content available to the adult’s primary account.[11] The prevalence of account sharing on AR/VR devices makes age-gating AR/VR content significantly harder without resorting to facial recognition or other age-validation tools.

Account sharing also erodes the effectiveness of parental controls offered by hardware and software developers. For example, parents might set screen time limits, restrict access to apps, or restrict teens’ ability to add someone to their friends list without prior authorization. However, if teenagers have access to the primary adult account, they can quickly alter parental controls, unless these are password protected.

Teenage Years Are Usually When Independent and Unsupervised Use of Tech Increases

The teenage years are a stage in which a kid’s relationship with tech and social media drastically changes. As kids become teenagers, there are substantial increases in tech adoption, and parental controls tend to loosen up, leading to more independent, unsupervised use of technology.[12]

According to a 2021 study by Common Sense Media, kids tend to ramp up their adoption of smartphones during the tween years—between 8 and 12 years old—as 43 percent of tweens reported owning a smartphone. The survey finds that about 88 percent of teens reported owning a smartphone, with the biggest jumps in adoption happening at 12 and 14 years old.[13] Teenagers are also likely to have or have access to personal computers and video game consoles. A 2022 survey by the Pew Research Center highlights that most teenagers—defined as kids between 13 and 17 years old—reported having access to a smartphone, a personal computer, and a gaming console.[14] In other words, these statistics show that the teenage years are when kids acquire their first personal use devices, which increases independent, unsupervised use of technology, especially as teens mature and grow closer to adulthood.

Another survey from Pew in 2016 shows that parental supervision of technology use and the use of parental controls tends to wane as teenagers age. For example, 68 percent of the parents of teenagers between 13 and 14 years old reported checking their kids’ browsing history, versus 56 percent of the parents of teens between 15 and 17 years old. Similarly, 46 percent of parents of teens between 13 and 14 reported using parental control tools on their kids’ devices, compared with 34 percent of parents of teens between 15 and 17 years old.[15] The survey results reflect that almost half of kids in their late teens have a relatively independent, unsupervised interaction with technology devices.

This newfound independence can have various benefits: Teens can start taking steps to define their identity and personality, dive deeper into their interests and find like-minded communities online, and, in some cases, access valuable information that schools or families might not provide, such as on sexuality or religion, or find resources for victims of abuse. But it can also mean that kids will have to deal with any negative experience on their own unless they are able or willing to ask for help. As previous studies have shown, teenage victims of bullying or harassment are usually hesitant to seek help from parents, guardians, teachers, or even their friends.[16] They can also fall prey to schemes or scams, especially if they have not seen them before. In other words, the teenage years are a stage of increased independence and responsibility for kids, but also a stage in which they are exposed to more risks than before.

The Overlap in Teens’ Online and Offline Social Graphs

In certain cases, negative online experiences can impact teens outside the online world. For example, if a teen has an online argument with a classmate, the conflict might resurface the next day in the offline world. In cases of harassment and bullying, the border between the online and the offline world is often blurred, as bullies and harassers might continue their real-world actions through social media.

A 2012 study on social media use by teenagers concludes that teens mostly use social media and other online platforms to interact with friends they have met offline.[17] This can mainly be explained by the fact that teenagers usually do not have a very expansive social graph, as most of their friends or acquaintances are their schoolmates, neighbors, and relatives. Conversely, adults tend to have a more extensive social graph, as they usually have more spaces to socialize with new people, such as higher education institutions or work.

This high overlap between social graphs means that online interactions can quickly follow teenagers to the offline world. As a result, simple solutions such as logging off from social media or deactivating their accounts may not be sufficient responses to harassment or bullying.

Teenage Years Are Usually a Time of Self-Discovery, Curiosity, and Socialization but Also Naivete

The teenage years are a pivotal stage of an individual’s development, as this is usually when kids start to ask themselves questions about their identity, their sexuality, and their role in their community.[18] That drive toward defining their own identity largely shapes their online experience, as the Internet and online platforms can provide them with resources to navigate questions around their identity, help them connect with people with similar questions or interests, and help them tap into new communities and subgroups they might not have known about before.

If their offline social network does not represent or support those newfound interests and identities, teenagers often rely on online forums and social media to extend their network and search for support.[19] But that search for a sense of belonging and community is also usually accompanied by naivete and a lack of experience that are characteristic of the teenage years. Teenagers often engage in reckless and unsafe behavior due to their relative immaturity or lack of self-control, or simply because they have not experienced some of these risky situations before.[20] This propensity for risky behavior makes teens more susceptible to online harms such as cyberbullying, harassment, exposure to inappropriate content, and sexual predation.[21]

The Teen Demographic Has a High Variance of Maturity Levels

While most regulations and Internet platforms define “teens” as individuals between 13 and 17 years old, this age group is far from homogenous in psychological and physical development. The stark differences in maturity and cognitive development between a 13-year-old and a 17-year-old lead to different behavioral patterns, risk assessments, and judgments.[22] These age-based differences can be compounded by sex differences in psychological development, as males and females tend to develop specific behavioral traits differently during adolescence.[23]

These differences in maturity levels and cognitive development also manifest between individuals of the same age. As people mature and develop at different rhythms, age becomes a weaker predictor of behavioral patterns and mental resilience. This high variance of maturity levels makes teenagers a particularly difficult demographic to deal with regarding safety. For example, trust and safety practices and tools—such as teen-only spaces or content moderation tools—might work to make AR/VR experiences safe for some teens but could prove insufficient for others.

The Role of Parents in Teen Tech Use

Teenagers’ use of technology is also highly shaped and conditioned by their parents’ role in their upbringing and general technological use. For example, a teen’s propensity to bully or harass another teen is influenced to some extent by their relationship with their parents and how much parental support they receive.[24] Teenagers behaving problematically in the physical world will likely bring their problematic behaviors online. Parents’ involvement in limiting, controlling, and informing teens’ use of technology is also likely to determine a teen’s propensity to behave problematically or be exposed to harmful content. A recent study explores how parents’ approach to adolescent social media use influences teens’ propensity for problematic behavior, concluding that children are less likely to engage in problematic social media use when parents are tolerant and supportive.[25]

The “Model Household” or “Model Teen” Does Not Exist

Ultimately, because of the nuances that shape teenagers’ use of technology and social media, there is no “model household” or “model teen” that policymakers and engineers can cater to when crafting safety solutions for teens. The previous sections show that while making some general assumptions about teenagers is possible, multiple external factors beyond age shape a teenager’s experience and relationship with technology. While an individual’s age determines some behavioral patterns—such as risk tolerance, mental resilience, or curiosity—it is insufficient to predict an individual’s maturity level or how present their support system is in their life.

The influence of these external factors creates challenges for trust and safety teams designing safety tools for their products and for policymakers looking to address some of these issues through policy. Trust and safety teams can introduce various parental control tools into their products, but parents who are apathetic toward their children’s use of technology are unlikely to use them. Policymakers could craft policy under the assumption that parents are largely alienated from their children’s use of technology and enact policies restricting teen use of social media beyond what technology-enthusiast households would like. Thus, there is no silver-bullet approach to teenage safety in AR/VR. Creating a safe environment will require a mix of targeted policy interventions that tackle express harm, industry best practices and standard setting, and norm-setting by parents and teens.

Overview of Current Threats for All Users of AR/VR Technology

Previous ITIF research has identified and summarized various threats to adult users of AR/VR technology. As that report highlights, various safety threats in AR/VR are inherited from 2D technology. However, the immersive nature of AR/VR technology and its potential pairing with haptic technology may bring new safety challenges that are not present in devices with 2D screens. The table below summarizes the threats to safety identified by the previous report.[26]

Table 1: Summary of known threats and potential responses[27]

Threat: Distracted driving, biking, or walking
Occurs in: Most augmented reality devices
Potential solutions: Car mode; focus mode that reduces notifications from non-navigation apps

Threat: Motion sickness
Occurs in: Virtual reality devices
Potential solutions: Customizable movement and camera settings (camera rotation angle, camera rotation speed, vignette/tunnel vision mode, smooth movement versus teleport movement)

Threat: Obscured or limited field of view
Occurs in: Most extended reality devices
Potential solutions: Establishment of digital boundaries, such as the “guardian” system; alert systems for instances when humans or objects trespass virtual boundaries; pop-ups and notifications before the use of apps

Threat: Stalking and “swatting”[28]
Occurs in: Extended reality devices
Potential solutions: Cybersecurity investment; technological literacy campaigns for users; pop-ups and warnings about sharing sensitive information when livestreaming; privacy and data stewardship standard setting; federal privacy legislation

Threat: Leading users to dangerous or off-limits locations
Occurs in: Augmented reality
Potential solutions: Geofencing; robust reporting system; conscious product design, which is mindful of hazardous areas

Threat: Incitation of violence or vandalism
Occurs in: Multi-user immersive experiences
Potential solutions: Content moderation tools (flagging system, content removal, product throttling)

Threat: Health misinformation
Occurs in: Multi-user immersive experiences
Potential solutions: Content moderation tools (flagging system, content removal, pop-up systems)

Threat: Sexual violence with immersive haptics
Occurs in: Sex-related extended reality haptic devices
Potential solutions: User-driven responses, such as confirming the identity of the other user before using immersive haptics

Threat: Sexual violence
Occurs in: Multi-user immersive experiences
Potential solutions: Personal safety tools for users (mute, block, report, and “safe zone” functionality); establishment of personal boundaries or space bubbles, which prevent other avatars from violating a user’s personal space; content moderation by platforms or community leaders

Threat: Cyberbullying and virtual harassment
Occurs in: Multi-user immersive experiences
Potential solutions: Personal safety tools for users (mute, block, report, and “safe zone” functionality); decentralized, community-driven moderation through world/room moderators; legislation that codifies cyberbullying as a crime

Threat: Harmful content
Occurs in: Multi-user immersive experiences
Potential solutions: Traditional content moderation tools (pop-ups, automated flagging systems); decentralized, community-driven moderation through world/room moderators

Threat: Addiction and psychological impact of virtual socialization
Occurs in: Multi-user immersive experiences
Potential solutions: User-established screen time restrictions and screen time reports; pop-ups notifying users of extended use of a device/platform; content moderation tools; appropriate categorization of content (Entertainment Software Rating Board ratings, mature/adult tags in-platform)

Threat: Gambling
Occurs in: Multi-user immersive experiences, extended reality devices
Potential solutions: ID verification (if not possible on-device, through a companion app); in-app resources for those facing gambling addiction

Threat: Identity theft, fraud, and ransomware
Occurs in: Multi-user immersive experiences
Potential solutions: Two-factor authentication; restricting player-to-player item transfers; integration of password managers; adoption of password-less technology such as Zero-Trust Authentication

Threat: Impersonation and reputational damage
Occurs in: Multi-user immersive experiences
Potential solutions: Impersonation report systems; locking photorealistic avatar functions for non-verified users
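Some of the mitigations in Table 1 are conceptually simple. Personal boundaries or “space bubbles,” for instance, reduce to a continuous distance check between avatars. The following TypeScript sketch illustrates the idea; the types, field names, and radius are illustrative assumptions, not any platform’s actual implementation.

```typescript
// Hypothetical sketch of a "personal boundary" (space bubble) check.
// Types, names, and the radius are illustrative, not any platform's API.
interface Vec3 { x: number; y: number; z: number; }

interface Avatar {
  id: string;
  position: Vec3;
  boundaryRadiusMeters: number; // user-configured personal space radius
}

// Straight-line distance between two avatar positions.
function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

// True if `other` has entered `user`'s personal boundary; a platform
// would then typically fade out or hide `other`'s avatar until it leaves.
function violatesBoundary(user: Avatar, other: Avatar): boolean {
  return distance(user.position, other.position) < user.boundaryRadiusMeters;
}
```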

Since the report’s publication, multiple new AR/VR products have been launched or announced, displaying new ways to use this technology that might introduce new safety challenges that ought to be considered.

For example, new and upcoming headsets such as the Meta Quest 3 and the Apple Vision Pro focus on mixed reality (MR), allowing users to switch between VR and AR quickly. The introduction of consumer-grade MR headsets has resulted in more MR and AR applications and games being developed for these headsets. As some have pointed out, AR/MR applications carry new risks of physical injury, as virtual overlays featuring a fake step or a fake hole could easily lead users to take a wrong step and lose their balance.[29]

These new devices might also magnify previously existing safety challenges, or at least make them more common. The Quest 3’s improved passthrough capacity has led some users to treat it as a standalone AR device. Following the device’s launch, there have been widely publicized cases of users wearing the headset while walking to a coffee shop or attending a crowded convention—despite the manufacturer’s recommendation not to use the device outdoors.[30] However, because these devices’ user interface (UI) was not designed for walking or outdoor use, menus usually obscure significant portions of the field of view, making wearers more likely to hurt themselves or others around them.

The recently released Ray-Ban Meta smart glasses, with improved livestreaming capabilities, could also increase the frequency of doxing, stalking, or accidentally sharing sensitive location data. As mentioned in previous ITIF research, users might inadvertently share their location by watching their surroundings through the device’s cameras while livestreaming.[31] With the new glasses’ livestreaming capabilities and their hands-free, wearable form factor, users might accidentally disclose their location more often than they would with other livestreaming tools such as handheld phones or cameras. Unlike other seemingly “wearable” devices such as a GoPro strapped to a helmet or an article of clothing, smart glasses are designed to be worn all day, even when not in use. This could create a greater sense of comfort with the glasses for users, making them less cautious of where they look or how the glasses might reveal information—a street name, house/apartment number, or a nearby landmark—while livestreaming. While these glasses have external indicators that alert others when the glasses are recording or taking a picture, they do not have any internal indicators that might remind a user that they are livestreaming.

Unique Threats for Teenagers in AR/VR Technology

Although the threats highlighted above focus on adult users of AR/VR technology, most apply to teen users as well. Threats such as distracted driving, motion sickness, and an obscured field of view affect all AR/VR users regardless of age. But other threats affect only teenagers or could prove especially harmful to them. Understanding these unique threats is vital for developers, parents, and policymakers looking to create a safe environment for AR/VR tech users.

Sexual Predation

Child sexual abuse online, sexual predation and exploitation, and grooming have been among the main concerns of policymakers focused on child and teen safety in online social media services. This concern has prompted various legislative efforts at the federal level, such as the STOP CSAM Act and the EARN IT Act, and at the state level, such as Utah’s S.B. 152.[32] The first two bills seek to impose more liability on online platforms if they do not sufficiently protect children. The latter requires minors to obtain parental consent to use social media.

As previous ITIF research has highlighted, metaverse platforms are likely to inherit the unresolved content moderation issues of 2D social media, and some of these issues are likely to become more complex with the introduction of new methods of online expression.[33] Content moderation in metaverse platforms must analyze and respond to fleeting voice chats, objects, avatar clothing, gestures, arm movements, avatar location, and text simultaneously. Moderating all these different types of verbal and nonverbal expression will be a practical and technical challenge, and platforms will also have to strike the right balance between ensuring user safety and safeguarding users’ privacy.[34]
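To make the scale of that challenge concrete, a moderation pipeline for an immersive platform must consume a much wider union of signal types than the text-and-image queues of 2D platforms. The rough sketch below illustrates what such an event stream might look like; all type and field names are hypothetical.

```typescript
// Illustrative union of the signal types an immersive platform's moderation
// pipeline must handle at once; all names here are hypothetical.
type ModerationEvent =
  | { kind: "voice_chat"; speakerId: string; audioClipId: string; durationMs: number }
  | { kind: "text_message"; authorId: string; content: string }
  | { kind: "gesture"; avatarId: string; gestureName: string }
  | { kind: "avatar_appearance"; avatarId: string; clothingAssetIds: string[] }
  | { kind: "avatar_proximity"; avatarId: string; targetAvatarId: string; distanceMeters: number }
  | { kind: "object_spawn"; creatorId: string; objectAssetId: string };

// Unlike a 2D text queue, many of these events are fleeting: voice and
// gestures must be evaluated in near real time or the evidence is gone.
function isEphemeral(event: ModerationEvent): boolean {
  return event.kind === "voice_chat" || event.kind === "gesture";
}
```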

Additionally, platforms will have to tackle the challenge of how identity is presented in the metaverse: unlike on most existing social media, users of metaverse platforms are not expected to create avatars that look like themselves or reflect their identity. This could make teenagers—who are already at higher risk of sexual predation than adults—even more vulnerable, as ill-meaning adults could adopt a young-passing identity in the metaverse with the help of tools such as voice-changing technology to trick teens into a false sense of security.

Numerous metaverse platforms are adopting new teen safety tools to tackle some of these challenges. For example, in 2023, Meta lowered its age requirements to allow teens to access its Horizon Worlds platform and introduced various safety measures for teen accounts. These measures include defaulting teen accounts to the highest privacy settings, hiding teens’ online status from others, adding content ratings and age-gating mature content, restricting voice chat, and limiting the ability of adult accounts to connect with unknown teen accounts.[35] Other platforms, such as VRChat, have banned users from accessing the platform through modified clients and software development kits (SDKs). The platform justified the ban by claiming that modified clients and SDKs were primarily responsible for harassment, leaks of users’ sensitive information, and unauthorized access to users’ accounts.[36]

Cyberbullying and Virtual Harassment

Teen mental health has been another major concern for policymakers and parents worried about the potential adverse effects social media could have on teen users. Cyberbullying and virtual harassment are among the behaviors many believe contribute to social media’s alleged negative impact on teens’ lives.[37] Concerns over cyberbullying and harassment already loom over metaverse platforms, as some of them have struggled to tackle harassment and their user base tends to come from the gaming community, an online demographic with a high propensity for bullying and harassment.[38]

Platforms continuously try to innovate with new safety tools and policies that aim to disincentivize bullying and harassment online. Discord—a voice and chat messaging platform commonly used by teen and young adult gamers—has introduced a new trust and safety strategy to educate and rehabilitate offending users. This new approach deviates from the industry standard in which offending users usually receive warnings or “strikes” followed by a suspension or indefinite ban after repeated offenses.[39] The platform will now take a more granular approach in which warnings do not immediately translate into account suspensions or bans but instead limit specific actions, such as restricting users’ ability to use private messaging or post media content. When users violate the platform’s rules, Discord will contact offending users directly with a message informing them that they have received a warning, providing them with an option to get more details about the violation, the specific measures that will be taken, and a detailed explanation of how the behavior went against the platform’s guidelines. Another key difference in this system is that warnings or strikes do not hold the same value. Users could receive more than three warnings and still be able to use the platform—albeit with various interaction restrictions—or receive a single warning and be automatically banned, which is usually reserved for cases of extremist violence or sexual exploitation of minors.[40] This approach aims to let users, particularly teens, learn from their mistakes and understand why their behavior is being punished, limiting the punishment to the specific violating behavior instead of preventing the user from using the platform altogether.
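The sketch below illustrates, in simplified form, how such a graduated, severity-based enforcement model might be structured; the violation categories, restrictions, and durations are illustrative assumptions, not Discord’s actual rules.

```typescript
// Simplified sketch of a graduated enforcement model in which most
// violations restrict only the offending feature, while the most severe
// categories skip warnings entirely. Categories, features, and durations
// are illustrative assumptions.
type Violation = "spam" | "harassment" | "violent_extremism" | "csam";

type Action =
  | { kind: "restrict"; feature: "direct_messages" | "media_upload"; days: number }
  | { kind: "permanent_ban" };

function enforce(violation: Violation): Action {
  switch (violation) {
    // The most severe categories result in an immediate, permanent ban.
    case "violent_extremism":
    case "csam":
      return { kind: "permanent_ban" };
    // Lesser violations limit only the behavior involved, so the user can
    // keep using the rest of the platform while the accompanying warning
    // explains which rule was broken and how long the restriction lasts.
    case "harassment":
      return { kind: "restrict", feature: "direct_messages", days: 7 };
    case "spam":
      return { kind: "restrict", feature: "media_upload", days: 3 };
  }
}
```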

Exposure to Inappropriate Content

A teen’s exposure to inappropriate or harmful content such as pornography, violent content, misinformation, or content glorifying harmful behavior could have significant effects on that teen’s health and future development. For example, minors exposed to pornography are more likely to engage in unsafe sexual practices, sexually harass someone, or hold sexist views.[41] Platforms, parents, and policymakers have taken steps to prevent teens from accessing and viewing adult or harmful content, but to this day, it remains a largely unresolved issue.[42]

Currently, most age-gating of content relies heavily on self-declaration, wherein users submit their birthdate to the service provider, which then categorizes the user as an adult, teen, or, in some cases, child (defined as younger than 13 years old). The biggest shortcoming of this approach is its reliance on users’ honesty in the signup process, as platforms have no viable enforcement mechanisms and the cost of lying is minimal. Currently, the most significant punishment for a minor submitting a false birthdate is being banned from the platform they signed up for. If minors are already actively seeking to access restricted content, they have little incentive to use their actual birthdate, as the perceived rewards of lying far outweigh the costs. Platforms have taken further steps, implementing certain age-assurance technologies or limiting teen users’ ability to change their birthdate post-signup, but self-declaration remains easy to circumvent.[43]
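To see why self-declaration is weak, note that the entire check often reduces to a single date comparison, as in the minimal sketch below (names are hypothetical); nothing in it verifies the claim, so the cost of lying is effectively zero.

```typescript
// Minimal self-declaration age gate: the service simply trusts the
// birthdate the user submits. Names are hypothetical.
function isAdultBySelfDeclaration(claimedBirthdate: Date, now: Date = new Date()): boolean {
  const eighteenthBirthday = new Date(claimedBirthdate.getTime());
  eighteenthBirthday.setFullYear(eighteenthBirthday.getFullYear() + 18);
  // Nothing verifies the claim, so a minor who enters an earlier birth
  // year passes this check at zero cost.
  return eighteenthBirthday.getTime() <= now.getTime();
}
```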

Policymakers have also proposed or passed various age-verification mandates in the United States and abroad to prevent children from accessing inappropriate content. However, these regulations could erode users’ privacy and free speech, face various practical challenges, and even stifle the development of other emergent technologies.[44]

Unhealthy Overuse of Technology

Problematic Internet media use has also been a concern for parents and policymakers. For example, a recent lawsuit filed by 33 state attorneys general accused Meta of purposely engineering its platforms to make them addictive for minors, contributing to this problematic use.[45] Experts claim that teens are particularly at risk of problematic Internet use, as the brain regions involved in decision-making, resisting temptation, and rewards have not fully developed at that age.[46]

As mentioned in previous ITIF research, technology companies have taken proactive steps to provide users with more tools to control their screen time and prompt them to take breaks when using their apps for too long.[47] In recent years, technology companies have provided even more tools to teens and parents to regulate their social media use. For example, tech companies have introduced parental control tools at both the device level and the platform level that restrict teens’ abilities to access certain apps during different timeframes, limit their screen time, provide additional “take a break” or “bedtime” prompts, or restrict autoplay and infinite scrolling.[48]
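As a rough illustration of the kinds of controls on offer, a device-level parental-controls profile might expose settings along these lines; the field names and defaults are assumptions for the sketch, not any vendor’s actual schema.

```typescript
// Illustrative shape for a device-level parental-controls profile of the
// kind described above; field names and defaults are assumptions, not any
// vendor's actual settings schema.
interface ParentalControlsProfile {
  dailyScreenTimeMinutes: number;                 // hard daily cap
  downtimeWindow: { start: string; end: string }; // "bedtime" window
  blockedApps: string[];                          // apps unavailable to the teen account
  breakPromptEveryMinutes: number;                // periodic "take a break" reminder
  autoplayAndInfiniteScrollDisabled: boolean;
}

const exampleTeenProfile: ParentalControlsProfile = {
  dailyScreenTimeMinutes: 120,
  downtimeWindow: { start: "21:00", end: "07:00" },
  blockedApps: ["example-gambling-app"],
  breakPromptEveryMinutes: 30,
  autoplayAndInfiniteScrollDisabled: true,
};
```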

Gambling Addiction From Gambling-Like Mechanics

As online gambling, particularly sports betting, has been deregulated in recent years, gambling itself has become a more popular and socially accepted practice.[49] Despite the concerns about the adverse effects of gambling and the crippling financial effects of gambling addiction, adults are allowed to engage in the practice because it is assumed that they have the cognitive development, financial resources, and emotional development to handle the act of gambling responsibly. That is not the case with teenagers, who are prohibited from engaging in the practice. However, parents and policymakers have voiced concerns about mechanics in online experiences and video games that could provoke a gambling addiction even before a teenager is allowed to legally place a bet, as some of them are deemed too similar to actual gambling mechanics.[50]

Video games have faced recent scrutiny over “loot boxes,” a term for a “consumable virtual item which can be redeemed to receive a randomized selection of further virtual items ranging from simple customization options for a player's game character, to game-changing equipment.”[51] In some cases, loot boxes’ mechanics resemble those of slot machines, especially when they are acquired with real-world currencies.[52] This scrutiny has resulted in various legislative efforts at the state and federal levels that would restrict or even prohibit the offering of loot boxes in video games, especially for games catering to minors.[53] Other countries, such as the United Kingdom, Japan, and China, have introduced or passed regulations restricting the use of loot boxes, while others, such as Belgium and the Netherlands, have prohibited them altogether.[54]

Despite these regulations, loot boxes remain prevalent in the industry, particularly in mobile gaming and free-to-play games (games that players can download and play for free and that often offer in-app purchases to enhance or supplement the experience). While these games usually do not charge users a download fee, they rely heavily on in-app purchases—such as loot boxes—for monetization. This usually translates into design choices that prompt users to purchase these loot boxes in order to advance or enhance their in-game progress. One study estimates that in 2021, around 77 percent of mobile games rated suitable for children ages 7 or older offered some form of loot box.[55] As the video game industry has been highly influential in the AR/VR industry, it is reasonable to expect that these trends might also manifest in AR/VR games and applications, especially as the AR/VR market matures and introduces more post-launch monetization, similar to what is seen in video games today.

The video game industry, either voluntarily or due to regulatory mandates, has introduced various measures to minimize the negative effects of gambling-like mechanics such as loot boxes. Some of these measures include setting spending limits for underage accounts to $0 by default, disclosing every item included in the loot box’s pool alongside their drop rate, and imposing hourly and daily limits on purchases of loot boxes by an individual user.[56]
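The sketch below illustrates two of these measures in simplified form: published drop rates and per-account purchase limits. Item names, rates, and caps are illustrative assumptions.

```typescript
// Sketch of two mitigations described above: disclosed drop rates and
// per-account purchase limits. Items, rates, and caps are illustrative.
interface LootEntry { item: string; dropRate: number; } // rates are published and sum to 1

const lootPool: LootEntry[] = [
  { item: "common skin", dropRate: 0.8 },
  { item: "rare skin", dropRate: 0.18 },
  { item: "legendary skin", dropRate: 0.02 },
];

// Weighted random draw matching the disclosed rates.
function openLootBox(pool: LootEntry[]): string {
  let roll = Math.random();
  for (const entry of pool) {
    if (roll < entry.dropRate) return entry.item;
    roll -= entry.dropRate;
  }
  return pool[pool.length - 1].item; // guard against floating-point drift
}

interface Account { isMinor: boolean; boxesBoughtToday: number; }

const DAILY_BOX_LIMIT = 5; // hypothetical per-adult daily cap

// Minor accounts default to a $0 spending limit; adults face a daily cap.
function canPurchaseLootBox(account: Account): boolean {
  if (account.isMinor) return false;
  return account.boxesBoughtToday < DAILY_BOX_LIMIT;
}
```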

Recommendations

Policymakers have expressed various concerns about teenage use of technology, particularly smartphones and social media. Recent efforts have focused on preventing or heavily restricting teen social media use. However, these approaches are likely to introduce various privacy challenges for teens and adults and are not likely to make these experiences any safer. Instead, policymakers should prioritize interventions that provide parents and teens with more tools that make their online experiences safer.

Legislators Should Not Require Age Verification for Online Services

Concerns over teen safety online have spurred policymakers in many states to introduce legislation that would require adult sites and social media platforms to verify their users’ ages, often restricting or prohibiting access to their services for users under a certain age threshold. Most of these bills would require users to submit government-issued IDs, social security numbers, or facial analyses that would be collected and processed by the online service provider.[57]

Despite their good intentions, these bills, as currently constructed, could significantly erode users’ privacy, have chilling effects on free speech, and stifle the development of the metaverse and AR/VR tech at large.[58] As many have pointed out, these age-verification bills would severely undermine anonymous expression on the Internet, a constitutionally protected right in the United States and a vital tool for victims of abuse and LGBTQ+ individuals in unsupportive environments.[59] These bills would also force companies to collect and store more personal data from users, increasing privacy risks and making these companies a bigger target for ill-intentioned individuals or hostile foreign governments.[60] For the few proposals wherein the government would collect, process, and store this data, the concerns are even greater: in the absence of secure electronic IDs, the age-verification process could erode users’ right to anonymity and leave them more vulnerable to cyberattacks from hackers and hostile governments, given certain government agencies’ spotty records on data security and the weak incentives agencies would have to resist potentially overreaching search warrants.[61]

Aside from the privacy and free speech concerns with these proposals, aggressively age-gating minors could significantly stifle the development of the metaverse and AR/VR tech. Teens and young adults have historically been a crucial demographic for the development of different online platforms, as they are usually the biggest creative force behind the popularization of different social media platforms and have the highest interest in virtual socialization.[62] Excluding teenagers from nascent online metaverse and metaverse-adjacent social media would leave these platforms without their most creative and enthusiastic users, severely limiting their appeal and potential for growth.

Moreover, age verification does not solve the problem of dangerous, violent, or fraudulent online content or of harmful behavior such as harassment or bullying; it only limits their impact on minors. Many platforms still want to proactively address this content and behavior for adult users as well, but no bulletproof approach makes them all go away. And if these policies merely keep minors off platforms with harmful material, those users will reach adulthood ill-prepared to deal with these threats due to a lack of exposure to online behavior and social norms.[63] They would be more vulnerable to the harassment, fraud, and bullying some of these bills are trying to curtail, as they would lack the tools to engage in safe online content consumption, defuse an online argument, or detect spammy, misleading, and fraudulent content.

Congress Should Require That Device Makers and Online Platforms Hosting Age-Restricted Content Establish a “Child Flag” System

Instead of opting for a blanket ban on minors using Internet platforms, Congress should prioritize policy interventions that give users more tools to make their online experience safer. Right now, all Internet platforms assume everyone who uses them is an adult, which allows minors to access content designed for adults. Age-verification proposals would flip that assumption: platforms would have to assume everyone is underage unless proven otherwise. As noted, this approach is deeply problematic and could do more harm than good for users. Instead, regulations should establish a middle ground where platforms assume everyone is an adult unless they have been marked as a child. This could be done through a “child flag” in a device’s operating system with which parents can mark a device as one used by a minor. Websites and apps that deliver age-restricted content could then check whether a device carries the flag and, if called for, block a user from seeing the content.[64]


A policy intervention establishing this system could avoid most of the issues with ID-based verification and minimize disruption for Internet users. Potential legislation would have to take two concrete actions: require operating systems to build this “child flag” system into their parental control tool kits, and require apps and websites that serve age-restricted content to check for the signal and block such content as required.[65] Under this opt-in, largely voluntary system, users would not face the disruptions caused by blanket age-gate mandates such as the ones proposed.
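A minimal sketch of how such a signal might flow from the operating system to a website follows; the OS API, header name, and wire format are assumptions for illustration, since a real system would need a standardized signal defined by legislation or a standards body.

```typescript
// Minimal sketch of the "child flag" handshake described above. The OS
// API, header name, and wire format are hypothetical.

// 1) The operating system exposes the parent-set flag (assumed API):
declare function osIsFlaggedAsChildDevice(): boolean;

// 2) The client attaches the signal to outgoing requests:
async function fetchWithChildFlag(url: string): Promise<Response> {
  const headers: Record<string, string> = {};
  if (osIsFlaggedAsChildDevice()) {
    headers["X-Child-Device"] = "1"; // hypothetical header name
  }
  return fetch(url, { headers });
}

// 3) A site hosting age-restricted content checks for the flag and, in its
// absence, safely assumes the user is an adult:
function shouldServeAgeRestrictedContent(requestHeaders: Headers): boolean {
  return requestHeaders.get("X-Child-Device") !== "1";
}
```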

Age-gating should not be the only response policymakers rely on. Other policy tools, such as guidelines and advisories, could help the industry make sure its products are designed with user safety in mind. As a recent paper by the Center for Growth and Opportunity highlights, the tech sector—especially small businesses and start-ups—would benefit from NIST guidance to help identify the potential risks of new products and features.[66] As the paper notes, NIST already has experience developing such risk frameworks in other fields, including privacy and artificial intelligence.[67]

Policymakers Should Direct Educators to Add AR/VR Literacy to Digital Literacy Curricula

Additionally, policymakers could promote better digital literacy by insisting that public schools help teens understand how to engage in safe online behaviors, identify common dangers online, and better understand the dynamics and social norms of the online world. A recently passed Florida bill could help the state take a step in this direction by requiring public schools to provide students with a course on social media literacy starting in sixth grade and by providing parents with instructional material.[68] Other states, such as Texas, California, Delaware, and New Jersey, have passed similar bills, but those primarily focus on combating misinformation through media literacy curricula.[69] This educational approach gives less-tech-savvy households the information and resources to have more agency in their online experiences and helps them take steps to make teens’ online experiences safer. Policymakers considering these types of bills should ensure that digital literacy courses and curricula cover safe AR/VR use and the ways the threats present on the 2D Internet might materialize in online immersive environments.

Conclusion

Creating a safe experience for teen users of AR/VR technology will be a complex challenge. The factors that shape teens’ decision-making, risk aversion, and relationship with tech make each teen’s online experiences unique, meaning that the usual trust and safety tools at a developer’s disposal are unlikely to be a silver bullet. Current policy approaches, mostly focused on establishing ID-based age-verification mandates, are unlikely to solve the issue and would make the online experience worse overall for both teens and adults. Policymakers should instead prioritize interventions that empower users, such as an opt-in system in a device’s operating system that notifies apps and websites that a user is underage. This way, parents can establish a consistent, trustworthy system that gives them peace of mind that their teens are not being exposed to inappropriate content, websites can safely establish the necessary guardrails for their underage users, and teens can still enjoy the benefits of online interaction in a safe manner.

About the Author

Juan Londoño is a policy analyst focusing on augmented and virtual reality at the Information Technology and Innovation Foundation. Prior to joining ITIF, Juan worked as a tech and innovation policy analyst at the American Action Forum, where his research focused on antitrust, content moderation, AR/VR, and the gaming economy. Juan holds an M.A. in Economics from George Mason University and a B.A. in Government & International Relations from the Universidad Externado de Colombia.

About ITIF

The Information Technology and Innovation Foundation (ITIF) is an independent 501(c)(3) nonprofit, nonpartisan research and educational institute that has been recognized repeatedly as the world’s leading think tank for science and technology policy. Its mission is to formulate, evaluate, and promote policy solutions that accelerate innovation and boost productivity to spur growth, opportunity, and progress. For more information, visit itif.org/about.

Endnotes

[1].       Issie Lapowsky, “Why Teens Are the Most Elusive and Valuable Customers in Tech,” Inc., accessed 10/24/2023, https://www.inc.com/issie-lapowsky/inside-massive-tech-land-grab-teenagers.html.

[2].       Ibid.

[3].       Zoe Padgett, “Momentive study: teens flock to immersive worlds,” Momentive.ai, accessed 10/24/2023, https://www.momentive.ai/en/blog/momentive-study-teens-flock-to-immersive-worlds/; “Taking Stock With Teens: Fall 2023 Survey,” Piper Sandler, accessed 10/25/2023, https://www.pipersandler.com/teens; David Meyer, “Meta’s flagship metaverse is floundering, and it wants teens to rescue it,” Fortune, accessed 10/24/2023, https://fortune.com/2023/02/07/metas-flagship-metaverse-is-floundering-and-it-wants-teens-to-rescue-it/.

[4].       Padgett, “Momentive study: teens flock to immersive worlds.”

[5].       Stephanie Reich, Kaveri Subrahmanyam, and Guadalupe Espinoza, “Friending, IMing, and hanging out face-to-face: overlap in adolescents' online and offline social networks,” Developmental psychology, 48, 2, (2012), 356–368, DOI:10.1037/a0026980.

[6].       Jennifer H. Pfeifer and Elliot T. Berkman, “The Development of Self and Identity in Adolescence: Neural Evidence and Implications for a Value-Based Choice Perspective on Motivated Behavior,” Child development perspectives, 12, 3, 2018; 158–164, DOI:10.1111/cdep.12279.

[7].       B.J. Casey, “Beyond Simple Models of Self-Control to Circuit-Based Accounts of Adolescent Behavior,” Annual Review of Psychology Vol. 66, 295–319, January 2015, DOI: https://doi.org/10.1146/annurev-psych-010814-015156; Elizabeth P. Shulman et al., “The dual systems model: Review, reappraisal, and reaffirmation,” Developmental Cognitive Neuroscience, Vol. 17, 2016, 103–117, DOI: https://doi.org/10.1016/j.dcn.2015.12.010; Manuel Gámez-Guadix, Erika Borrajo, and Carmen Almendros, “Risky online behaviors among adolescents: Longitudinal relations among problematic Internet use, cyberbullying perpetration, and meeting strangers online,” Journal of Behavioral Addictions 5, 1, 2016, 100–107, DOI: https://doi.org/10.1556/2006.5.2016.013.

[8].       Juan Londoño, “User Safety in AR/VR: Protecting Adults” (ITIF, January 17, 2023), https://itif.org/publications/2023/01/17/user-safety-in-ar-vr-protecting-adults/.

[9].       “VR Games Market Report: An Overview and Outlook of Virtual Reality in Games and Beyond” (Newzoo, November 14, 2022), https://newzoo.com/resources/trend-reports/vr-games-market-report.

[10].     Londoño, “User Safety in AR/VR: Protecting Adults.”

[11].     Juan Londoño, “Lessons from Social Media for Creating a Safe Metaverse” (ITIF, April 28, 2022), https://itif.org/publications/2022/04/28/lessons-social-media-creating-safe-metaverse/.

[12].     Victoria Rideout et al., “The Common Sense Census: Media Use by Tweens and Teens” (Common Sense Media, 2021), https://www.commonsensemedia.org/sites/default/files/research/report/8-18-census-integrated-report-final-web_0.pdf; Monica Anderson, “How parents monitor their teen’s digital behavior” (Pew Research Center, January 7, 2016), https://www.pewresearch.org/internet/2016/01/07/how-parents-monitor-their-teens-digital-behavior/.

[13].     Rideout et al., “The Common Sense Census: Media Use by Tweens and Teens.”

[14].     Emily A. Vogels, Risa Gelles-Watnick, and Navid Massarat, “Teens, Social Media and Technology 2022” (Pew Research Center, August 10, 2022), https://www.pewresearch.org/internet/2022/08/10/teens-social-media-and-technology-2022/.

[15].     Monica Anderson, “How parents monitor their teen’s digital behavior” (Pew Research Center, January 7, 2016), https://www.pewresearch.org/internet/2016/01/07/how-parents-monitor-their-teens-digital-behavior/.

[16].     Yaacov B. Yablon, “Students’ Reports of Severe Violence in School as a Tool for Early Detection and Prevention.” Child development 88, no. 1, 2017, 55–67.

[17].     Reich, Subrahmanyam, and Espinoza, “Friending, IMing, and hanging out face-to-face: overlap in adolescents' online and offline social networks.”

[18].     Pfeifer and Berkman, “The Development of Self and Identity in Adolescence: Neural Evidence and Implications for a Value-Based Choice Perspective on Motivated Behavior.”

[19].     Michele L. Ybarra et al., “Online social support as a buffer against online and offline peer and sexual victimization among U.S. LGBT and non-LGBT youth,” Child Abuse & Neglect, Vol. 39, 2015, 123–136, DOI: https://doi.org/10.1016/j.chiabu.2014.08.006; Ashley Johnson, “Utah Law to Protect Children’s Privacy Will Violate Everyone’s Privacy” (ITIF, March 15, 2023), https://itif.org/publications/2023/03/15/utah-law-to-protect-childrens-privacy-will-violate-everyones-privacy/.

[20].     Casey, “Beyond Simple Models of Self-Control to Circuit-Based Accounts of Adolescent Behavior”; Shulman et al., “The dual systems model: Review, reappraisal, and reaffirmation”; Gámez-Guadix, Borrajo, and Almendros, “Risky online behaviors among adolescents: Longitudinal relations among problematic Internet use, cyberbullying perpetration, and meeting strangers online.”

[21].     Ibid.

[22].     Grace Icenogle et al., “Adolescents' cognitive capacity reaches adult levels prior to their psychosocial maturity: Evidence for a ‘maturity gap’ in a multinational, cross-sectional sample,” Law and human behavior vol. 43,1, 2019, 69–85, DOI:10.1037/lhb0000315.

[23].     Raquel E Gur and Ruben C Gur, “Sex differences in brain and behavior in adolescence: Findings from the Philadelphia Neurodevelopmental Cohort,” Neuroscience and biobehavioral reviews vol. 70, 2016, 159–170, DOI:10.1016/j.neubiorev.2016.07.035.

[24].     Khushmand Rajendran et al., “Parenting style influences bullying: a longitudinal study comparing children with and without behavioral problems,” Journal of child psychology and psychiatry, and allied disciplines vol. 57, 2, 2016, 188–95, DOI:10.1111/jcpp.12433.

[25].     Suzanne M. Geurts et al., “Predicting Adolescents’ Problematic Social Media Use From Profiles of Internet-Specific Parenting Practices and General Parenting Dimensions,” Journal of Youth Adolescence Vol. 52, 2023, 1829–1843, DOI: https://doi.org/10.1007/s10964-023-01816-4.

[26].     Londoño, “User Safety in AR/VR: Protecting Adults.”

[27].     Ibid.

[28].     “Swatting” refers to falsely reporting that a serious crime is in progress in order to elicit a response from law enforcement (such as the dispatch of a SWAT unit).

[29].     Ben Lang, accessed 11/15/2023: https://twitter.com/benz145/status/1715483467973005586.

[30].     Sean Hollister, “The Meta glassholes have arrived,” The Verge, accessed 11/15/2023, https://www.theverge.com/23920102/meta-quest-3-in-public-privacy-recording-glassholes.

[31].     Londoño, “User Safety in AR/VR: Protecting Adults.”

[32].     Ashley Johnson, “Stopping Child Sexual Abuse Online Should Start With Law Enforcement” (ITIF, May 10, 2023), https://itif.org/publications/2023/05/10/stopping-child-sexual-abuse-online-should-start-with-law-enforcement/; Johnson, “Utah Law to Protect Children’s Privacy Will Violate Everyone’s Privacy.”

[33].     Londoño, “User Safety in AR/VR: Protecting Adults”; Daniel Castro, “Content Moderation in Multi-User Immersive Experiences: AR/VR and the Future of Online Speech” (ITIF, February 28, 2022), https://itif.org/publications/2022/02/28/content-moderation-multi-user-immersive-experiences-arvr-and-future-online.

[34].     Ibid.

[35].     Juan Londoño, “Meta’s Teen Safety Features in Horizon Worlds Exemplify the Rapidly Changing Environment of the Metaverse” (ITIF, May 1, 2023), https://itif.org/publications/2023/05/01/metas-teen-safety-features-exemplify-the-rapidly-changing-environment-of-the-metaverse/.

[36].     Ibid.

[37].     Ashley Johnson, “For Teens on Social Media, the Jury Is Still Out, But the Judgment Is Already In” (ITIF, May 30, 2023), https://itif.org/publications/2023/05/30/for-teens-on-social-media-the-jury-is-still-out-but-the-judgment-is-already-in/.

[38].     Londoño, “User Safety in AR/VR: Protecting Adults”; Kellen Browning, “More Resignations, but No Sign Yet of a Change in Gaming Culture,” The New York Times, accessed 11/16/2023, https://www.nytimes.com/2020/07/19/technology/gaming-harassment.html.

[39].     Sean Hollister, “Inside Discord’s reform movement for banned users,” The Verge, accessed 11/21/2023, https://www.theverge.com/2023/10/20/23925119/discord-moderation-reform-rehabilitation-users-servers.

[40].     Savannah Badalich, “Building A Safer Place For Teens To Hang Out,” Discord, accessed 11/21/2023, https://discord.com/safety/safer-place-for-teens.

[41].     Antonia Quadara and Alissar El-Murr, “The effects of pornography on children and young people” (Melbourne: Australian Institute of Family Studies, 2017), https://aifs.gov.au/research/research-snapshots/effects-pornography-children-and-young-people.

[42].     Ashley Gold, “Tech platforms struggle to verify their users' age,” Axios, accessed 11/21/2023, https://www.axios.com/2023/03/06/age-checks-online-children-social-media-privacy.

[43].     Ibid.

[44].     Daniel Castro, “Protecting Children Online Does Not Require ID Checks for Everyone” (ITIF, November 21, 2023), https://itif.org/publications/2023/11/21/protecting-children-online-does-not-require-id-checks-for-everyone/; Juan Londoño, “Excluding Teenagers From Online Services Stifles the Development of the Metaverse” (ITIF, March 22, 2023), https://itif.org/publications/2023/03/22/excluding-teenagers-from-online-services-stifles-development-of-metaverse/.

[45].     Kari Paul, “Meta designed platforms to get children addicted, court documents allege,” The Guardian, accessed 11/27/2023, https://www.theguardian.com/technology/2023/nov/27/meta-instagram-facebook-kids-addicted-lawsuit.

[46].     Matt Richtel, “Is Social Media Addictive? Here’s What the Science Says,” The New York Times, accessed 11/27/2023, https://www.nytimes.com/2023/10/25/health/social-media-addiction.html.

[47].     Londoño, “User Safety in AR/VR: Protecting Adults.”

[48].     Londoño, “Meta’s Teen Safety Features in Horizon Worlds Exemplify the Rapidly Changing Environment of the Metaverse”; Gold, “Tech platforms struggle to verify their users' age”; “Use parental controls on your child's iPhone, iPad, and iPod touch,” Apple, accessed 11/27/2023, https://support.apple.com/en-us/HT201304.

[49].     Dylan Scott, “How America became a nation of gamblers,” Vox, accessed 11/27/2023, https://www.vox.com/policy/2023/9/7/23835684/nfl-kickoff-2023-opening-day-sports-gambling-betting-odds-fantasy-football.

[50].     Whitney DeCamp and Kevin Daly, “Loot box consumption by adolescents pre- and post-pandemic lockdown,” PeerJ Vol. 11, e15287, May 1, 2023, DOI: 10.7717/peerj.15287.

[51].     Sebastian Schwiddessen and Philipp Karius, “Watch your loot boxes! – Recent developments and legal assessment in selected key jurisdictions from a gambling law perspective,” Interactive Entertainment Law Review Vol. 1, No. 1, 2018, 17–43, accessed 11/27/2023, DOI: https://doi.org/10.4337/ielr.2018.01.02.

[52].     Matthew McCaffrey, “Video Game Loot Boxes: Anatomy Of A Moral Panic” (Reason Foundation, July 2023), https://reason.org/wp-content/uploads/video-game-loot-boxes-moral-panic.pdf.

[53].     Teddy Amenabar, “Video game ‘loot boxes’ are going away and it could crush Rocket League’s black market,” The Washington Post, accessed 11/28/2023, https://www.washingtonpost.com/video-games/2019/08/27/video-game-loot-boxes-are-going-away-it-could-crush-rocket-leagues-black-market/.

[54].     Nicholas Straub, “Every Country With Laws Against Loot Boxes (& What The Rules Are),” ScreenRant, accessed 11/28/2023, https://screenrant.com/lootbox-gambling-microtransactions-illegal-japan-china-belgium-netherlands/.

[55].     Leon Y. Xiao, Laura L. Henderson, and Philip W. S. Newall, “Loot boxes are more prevalent in United Kingdom video games than previously considered: updating Zendle et al. (2020),” letter to the editor, Addiction Vol. 117, 2553–2555, DOI: https://doi.org/10.1111/add.15829.

[56].     Straub, “Every Country With Laws Against Loot Boxes (& What The Rules Are).”

[57].     Castro, “Protecting Children Online Does Not Require ID Checks for Everyone.”

[58].     Ibid.; Londoño, “Excluding Teenagers From Online Services Stifles the Development of the Metaverse”; Ashley Johnson, “AI Could Make Age Verification More Accurate and Less Invasive” (ITIF, April 5, 2023), https://itif.org/publications/2023/04/05/ai-could-make-age-verification-more-accurate-and-less-invasive/.

[59].     Johnson, “AI Could Make Age Verification More Accurate and Less Invasive”; Castro, “Protecting Children Online Does Not Require ID Checks for Everyone”; Will Flanders and Edward Longe, “A better way to protect teenagers online,” Wisconsin State Journal, accessed 11/29/2023, https://madison.com/opinion/column/a-better-way-to-protect-teenagers-online----will-flanders-and-edward-longe/article_683a2b94-42f2-11ee-9a79-a32fcbd2fedb.html; Shoshana Weissmann, “Age-verification methods, in their current forms, threaten our First Amendment right to anonymity” (R Street Institute, June 1, 2023), https://www.rstreet.org/commentary/age-verification-methods-in-their-current-forms-threaten-our-first-amendment-right-to-anonymity/.

[60].     Castro, “Protecting Children Online Does Not Require ID Checks for Everyone”; Shoshana Weissmann, “If platforms are required to have your government IDs and face scans, hackers and enemy governments can access them too” (R Street Institute, May 22, 2023), https://www.rstreet.org/commentary/if-platforms-are-required-to-have-your-government-ids-and-face-scans-hackers-and-enemy-governments-can-access-them-too/.

[61].     Shoshana Weissmann, “Social media platforms and age verification services can also be connected to existing data and served warrants by the government” (R Street Institute, June 29, 2023), https://www.rstreet.org/commentary/social-media-platforms-and-age-verification-services-can-also-be-connected-to-existing-data-and-served-warrants-by-the-government/; Weissmann, “Age-verification methods, in their current forms, threaten our First Amendment right to anonymity.”

[62].     Londoño, “Excluding Teenagers From Online Services Stifles the Development of the Metaverse.”

[63].     Ibid.

[64].     Castro, “Protecting Children Online Does Not Require ID Checks for Everyone.”

[65].     Ibid.

[66].     Scott Brennen and Matt Perault, “Keeping Kids Safe Online: How Should Policymakers Approach Age Verification?” (The Center for Growth and Opportunity at Utah State University, June 2023), https://www.thecgo.org/research/keeping-kids-safe-online-how-should-policymakers-approach-age-verification/#recommendations.

[67].     Ibid.

[68].     Flanders and Longe, “A better way to protect teenagers online.”

[69].     Anna Merod, “California joins small, growing number of states requiring K-12 media literacy,” K-12 Dive, accessed 11/29/2023, https://www.k12dive.com/news/california-media-literacy-k12-law/699911/.
