Lessons from Social Media for Creating a Safe Metaverse

April 28, 2022

For better or worse, the metaverse—the term used to describe the 3D virtual environments that are the future of cyberspace—is being built on the foundations of the current “2D” Internet. The metaverse, particularly in its early stages, will therefore likely inherit many of the challenges present on today’s Internet, such as privacy, content moderation, and teen and child safety. Challenges exclusive to the metaverse may not be fully recognized until the technology matures, and not every actor will get it right at the beginning. While scrutiny of metaverse platforms is essential to ensure a safe, equitable, and inclusive environment, policymakers should be cautious about rushing to regulate the metaverse, such as by eroding intermediary liability protections or restricting the use of “algorithms,” because allowing platforms the freedom to experiment and innovate will best position them to address new challenges.

Privacy

Because metaverse platforms rely heavily on augmented and virtual reality (AR/VR) hardware, they will capture substantially more personal data than their 2D counterparts. Addressing the privacy needs of Internet users is already a priority for tech companies, and they will need to address those needs in the metaverse as well. Past experiences with 2D social media, such as the Cambridge Analytica scandal, raised awareness of the potential for platforms to mismanage personal data. This creates a challenge for metaverse platforms: they will have to craft their experiences in a way that makes users confident their privacy is being respected. This will be an uphill battle: a recent survey by Morning Consult found that over half of respondents had “major concerns” about personal data management in the metaverse.

Addressing privacy is an increasingly complex issue for metaverse platforms because multiple aspects of their services, such as content moderation and advertising, rely on data collection, and restricting that collection introduces trade-offs. For example, certain content moderation tools, such as monitoring and recording conversations in social VR, can raise privacy concerns. If platforms forgo these tools to protect privacy, more undesirable content may appear on their services, leading to lower revenue from a reduced user base or reputational damage. A similar trade-off arises in advertising, where a platform’s decisions about using activity tracking to target ads can affect its profitability and, therefore, its financial sustainability.

Addressing privacy is further complicated by the many different privacy laws around the world. And in the United States, platforms must contend with a costly patchwork of state privacy laws because there is no federal privacy law.

Content Moderation

Most metaverse social platforms will revolve around multi-user immersive experiences (MUIEs), in which users interact with each other in “digital worlds” in ways that resemble day-to-day interactions in the “physical” world. In MUIEs, interactions between users are short-lived conversations, usually over voice chat. Moderating content is therefore more difficult in the metaverse than on social media: the content users produce is not persistent text but ephemeral voice chat, which must be recorded in order to be reviewed. There is also the challenge of new types of non-verbal speech, such as digital worlds and items, which platforms will also have to moderate.
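
One way to reconcile recording with privacy, reportedly used in Meta’s Horizon Worlds, is to keep only a short rolling window of recent audio on the user’s device and submit it for human review only when someone files an abuse report. The Python sketch below illustrates the idea; the class, method names, and 60-second default are hypothetical and do not reflect any platform’s actual implementation.

```python
from collections import deque
import time

class RollingVoiceBuffer:
    """Keep only the most recent window of voice chat in memory.

    Nothing is persisted or uploaded unless a user files an abuse
    report, at which point the buffered window is attached to the
    report for human review. Hypothetical sketch; the names and the
    60-second default are illustrative only.
    """

    def __init__(self, window_seconds: float = 60.0):
        self.window_seconds = window_seconds
        self._frames = deque()  # (monotonic timestamp, audio frame) pairs

    def push(self, audio_frame: bytes) -> None:
        """Append a new audio frame and evict frames older than the window."""
        now = time.monotonic()
        self._frames.append((now, audio_frame))
        while self._frames and now - self._frames[0][0] > self.window_seconds:
            self._frames.popleft()

    def snapshot_for_report(self) -> list:
        """Copy the buffered window; called only when a user files a report."""
        return [frame for _, frame in self._frames]
```

The design trades moderation coverage for privacy: abuse that no one reports within the window goes unrecorded, but so does everything else users say.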

Some platforms will likely respond to these challenges by attempting to create a robust top-down content moderation system. This will require a mix of human reviewers and automated systems. Automated systems have become necessary to moderate content at scale, as an entirely human-led moderation process is slower and costlier. Nonetheless, these tools are not always accurate, are often incapable of understanding context, and can be circumvented with slight behavioral changes, such as “algospeak” or symbol substitutions that exploit the algorithms’ limited grasp of context.
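
To see why slight changes defeat automated filters, consider a minimal sketch of a naive blocklist filter in Python. The blocked terms and example messages are hypothetical; production systems pair far larger lists with machine-learned classifiers, but the same evasion pattern applies.

```python
import re

# Hypothetical blocklist for a toy moderation filter.
BLOCKED_TERMS = {"scam", "spam"}

def naive_filter(message: str) -> bool:
    """Return True if the message should be flagged."""
    words = re.findall(r"[a-z]+", message.lower())
    return any(word in BLOCKED_TERMS for word in words)

print(naive_filter("this is a scam"))     # True: exact match is caught
print(naive_filter("this is a sc4m"))     # False: "algospeak" slips through
print(naive_filter("this is a s.c.a.m"))  # False: symbols break tokenization
```

Character substitutions and inserted symbols change the token stream without changing what a human reads, which is exactly the gap algospeak exploits.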

Other platforms will likely adopt a more decentralized approach that lets communities and sub-groups carry most of the burden of moderating their own spaces. This approach has seen some success on platforms like Reddit and VRChat, and greater decentralization is a core premise of most Web 3.0 projects. But community-led moderation can lead to a lack of platform-wide standards and to moderator burnout, as community moderators are users who receive little to no compensation for the demanding role of policing content inside their sub-groups.

Teen and Child Safety

One of the major concerns about social media is its potential harm to teenagers and children, including effects on mental health and body image as well as addiction. For example, the “Facebook Files,” a series of reports in the Wall Street Journal, raised concerns about the potential impact of social media on teenage mental health. There are related concerns that children may be more susceptible to addiction when using AR/VR technology, despite the lack of scientific consensus on this point. Finally, there are concerns that children accessing both social media and the metaverse may be exposed to mature content or sexual predators.

Some metaverse platforms have responded by restricting their services to adults. But there are reports of minors accessing adult-only apps, such as Horizon Worlds, which highlight that current age verification tools are insufficient to keep children out.

Metaverse platforms have started to address these concerns by developing parental controls similar to those available for social media. Meta, for example, introduced parental controls on its Quest platform as part of its new Family Center program. These controls allow parents to more closely monitor the activity of their children’s accounts, lock specific apps, and restrict teenage accounts from downloading and using adult-only applications. Nonetheless, current age verification systems tend to be easy to bypass.
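
To make these account-level controls concrete, the sketch below shows how restrictions like these might reduce to a simple policy check at app launch. All names and fields are hypothetical illustrations, not Meta’s actual Family Center API.

```python
from dataclasses import dataclass, field

@dataclass
class ParentalPolicy:
    # Hypothetical policy object; field names are illustrative only.
    locked_apps: set = field(default_factory=set)
    block_adult_only: bool = True

@dataclass
class App:
    name: str
    adult_only: bool = False

def may_launch(app: App, account_is_teen: bool, policy: ParentalPolicy) -> bool:
    """Check an app launch against the supervising parent's policy."""
    if app.name in policy.locked_apps:
        return False  # Parent has locked this app outright.
    if account_is_teen and app.adult_only and policy.block_adult_only:
        return False  # Adult-only apps are blocked for teen accounts.
    return True

policy = ParentalPolicy(locked_apps={"SocialWorld"})
print(may_launch(App("Chess"), True, policy))                    # True
print(may_launch(App("SocialWorld"), True, policy))              # False: locked
print(may_launch(App("BarSim", adult_only=True), True, policy))  # False: adult-only
```

The enforcement logic itself is simple; the hard part, as the next paragraph explains, is knowing who is actually holding the headset.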

Implementing a more robust age verification system is especially difficult at this stage of AR/VR adoption. The technology is still in the “family computer” or “computer lab” stage, in which a household usually owns a single device shared among family members. Because of this shared use, devices are often linked to a single adult account that all household members use regardless of age. Additionally, when parents are skeptical of enrolling their kids in social media—a requirement for some VR devices, such as the Quest—they often let their kids use their own accounts instead. This practice is typically prohibited in platforms’ terms of service, but platforms cannot easily enforce the rule when parents break it. There have been calls to use face recognition to unlock or operate devices, but that raises the privacy concerns about collecting biometric data mentioned above.

Addressing issues like privacy, content moderation, and child and teen safety will be a complex process. Platforms will have to make difficult decisions and recalibrate as they learn what works. Social media’s development and maturation have provided valuable lessons, allowing platforms to get ahead of some of the potential risks to user safety and privacy in the metaverse. But as these platforms blaze new trails, policymakers should allow them to experiment with different approaches and tools, even though that means they will sometimes get it wrong. Policymakers should be wary of passing regulations that would limit platforms’ ability to experiment, such as eroding intermediary liability protections or restricting the use of automated systems for content moderation, which would prevent platforms from finding innovative methods to tackle these issues. Instead, policymakers should address problems whose solutions would provide more certainty to both users and platforms, such as passing a federal data privacy law that establishes basic user data rights and data stewardship responsibilities for platforms.
