Meta’s Teen Safety Features in Horizon Worlds Exemplify the Rapidly Changing Environment of the Metaverse

May 1, 2023

On April 18, Meta announced the rollout of new teen safety tools for its Horizon Worlds platform as part of its plan to open the platform to teenage users. The rollout of these tools is particularly timely, as policymakers and child safety advocates have recently expressed concerns about the platform’s move to expand to this new demographic. The recent slate of changes in the platform’s safety practices and standards is a prime example of how metaverse platforms constantly evolve to adapt to users’ needs and demands. Policymakers should wait before intervening in a nascent industry like the metaverse.

Meta’s latest set of teen safety tools includes defaulting teen accounts to the highest privacy settings, hiding teens’ online status from others, adding content ratings and age-gating mature content, restricting voice chat, and limiting the ability of adult accounts to connect with unknown teen accounts. The Horizon Worlds platform is radically different from what it was a couple of years ago, when Meta’s immersive products offered no parental controls and users needed a social media account to access the device (a requirement that incentivized teens to use adult accounts, since parents were skeptical of enrolling their kids in social media).

Horizon Worlds is not the only metaverse platform that has revamped its safety and security practices in the last year. Last summer, the popular metaverse platform VRChat introduced its “Security Update,” which banned users from accessing the platform through modified clients and software development kits (SDKs). VRChat enacted the ban after finding that modified clients and SDKs were largely responsible for instances of harassment, leaks of users’ sensitive information, and unauthorized access to users’ accounts.

Despite its benefits, the update drew sustained backlash from the VRChat community, especially from users who rely on accessibility features, as many of these modified clients provided accessibility solutions that were otherwise unavailable. The platform maintained the ban but announced that it would fast-track new accessibility features.

These examples show that policymakers should be wary of regulating an industry before it has had time to experiment and develop new tools to address user concerns. Metaverse platforms have the advantage of being able to look to past technologies, like “2D” social media, to identify safety priorities and draw lessons from social media’s struggles with issues like content moderation and privacy. They can also weigh the tradeoffs of different user safety features, and when they make mistakes, they can respond quickly to feedback from their users.

As Australian eSafety Commissioner Julie Inman Grant noted in her keynote at a recent ITIF event, policymakers should keep a vigilant eye on the development of the metaverse but should take care not to quash innovation in the space. Despite policymakers’ claims to the contrary, platforms have shown a willingness to take proactive action against user safety threats in the metaverse.

Instead of potentially interfering with platforms’ efforts to create a safe space for children, policymakers should prioritize other harm reduction policies. One example would be introducing social media literacy classes in school curricula, which would help teenagers avoid dangerous situations online and provide them with the skills and resources to respond to harassment or abuse.
