The Erosion of Intermediary Liability Protections Can End the Metaverse Before It Even Starts
The idea of a fully digital world of robust social interaction used to reside exclusively in sci-fi movies and books such as The Matrix or Ready Player One. That idea has started to look more like a possible reality in recent years as companies such as Meta, HTC, Microsoft, Valve, and Epic Games, among many others, have signaled their intention to build platforms where users can have immersive digital social experiences—what many people are calling the metaverse. Unfortunately, the nascent metaverse may die in the cradle if policymakers carelessly change intermediary liability laws.
Content moderation in multi-user immersive experiences is particularly challenging, as detailed in a recent report by the Information Technology and Innovation Foundation. Because most user interaction in these experiences relies heavily on voice chat, it is fast-paced and ephemeral. To moderate that content at all, platforms will need to keep some type of conversation log, which raises privacy concerns and creates a privacy-safety trade-off. Platforms will have to decide which of the two to emphasize, a decision that will inevitably generate pushback from some users. Finding the right approach to creating safe platforms while protecting users’ privacy will require extensive experimentation.
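To make the log-retention side of that trade-off concrete, here is a minimal sketch in Python, with entirely hypothetical names and a placeholder 15-minute retention window, of a voice-chat log that keeps transcripts only long enough to act on abuse reports before discarding them:

```python
import time
from collections import deque

# Assumed retention window; a real platform would tune this through experimentation.
RETENTION_SECONDS = 15 * 60


class EphemeralConversationLog:
    """Hypothetical short-lived transcript buffer for voice-chat moderation."""

    def __init__(self, retention_seconds: float = RETENTION_SECONDS):
        self.retention_seconds = retention_seconds
        self._entries = deque()  # (timestamp, speaker_id, transcript)

    def append(self, speaker_id: str, transcript: str) -> None:
        # Store the transcript with a timestamp, then drop anything too old.
        self._entries.append((time.time(), speaker_id, transcript))
        self._expire()

    def snapshot_for_report(self) -> list[tuple[str, str]]:
        # Return only what is still retained when a user files an abuse report.
        self._expire()
        return [(speaker, text) for _, speaker, text in self._entries]

    def _expire(self) -> None:
        # Discard entries older than the retention window to limit privacy exposure.
        cutoff = time.time() - self.retention_seconds
        while self._entries and self._entries[0][0] < cutoff:
            self._entries.popleft()
```

A longer retention window favors safety (more evidence available to moderators); a shorter one favors privacy (less stored speech), which is exactly the balance each platform would have to strike.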
Additionally, user interaction in the metaverse may take many different forms. Users will be able to interact with a single user in a private room or with multiple users in a virtual public plaza. Users’ expectations of privacy and safety protections will differ in each space, which will require platforms to adopt different standards for each scenario. Platforms will likely adopt automated content moderation systems, but these systems are not always accurate, and their mistakes can lead to the removal of legitimate content, creating a new content moderation problem of its own.
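One plausible way a platform might encode different standards for different spaces, shown here purely as an illustrative sketch with hypothetical space types and policy fields, is a simple per-space policy table:

```python
from dataclasses import dataclass


@dataclass
class ModerationPolicy:
    """Hypothetical per-space moderation settings."""
    record_voice_transcripts: bool  # keep a short-lived transcript log?
    automated_scanning: bool        # run automated classifiers proactively?
    report_only: bool               # act only on user reports?


# Assumed policies: lighter-touch, report-driven moderation in private rooms;
# stricter proactive scanning in public spaces.
SPACE_POLICIES = {
    "private_room": ModerationPolicy(record_voice_transcripts=False,
                                     automated_scanning=False,
                                     report_only=True),
    "public_plaza": ModerationPolicy(record_voice_transcripts=True,
                                     automated_scanning=True,
                                     report_only=False),
}


def policy_for(space_type: str) -> ModerationPolicy:
    # Default to the stricter public policy for unknown space types.
    return SPACE_POLICIES.get(space_type, SPACE_POLICIES["public_plaza"])
```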
In summary, platforms will face two difficult questions when moderating content. First, what is the right balance between user safety and user privacy? Second, what is the right balance between accuracy and speed in content removal? How metaverse platforms answer these questions will translate into different content moderation strategies, which will likely require continuous tinkering and adaptation to consumer demands. As a result, platforms will compete not only on the capabilities of their hardware and software but also on the quality of the content they serve to their users.
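The second trade-off, accuracy versus speed, could look something like the following triage sketch, where the classifier score, thresholds, and action names are all assumptions rather than any platform’s actual system:

```python
# Placeholder thresholds a platform would have to tune through experimentation.
AUTO_REMOVE_THRESHOLD = 0.95
HUMAN_REVIEW_THRESHOLD = 0.60


def triage(classifier_score: float) -> str:
    """Route a piece of flagged content based on a hypothetical classifier score."""
    if classifier_score >= AUTO_REMOVE_THRESHOLD:
        return "remove_automatically"    # fast, but risks removing legitimate content
    if classifier_score >= HUMAN_REVIEW_THRESHOLD:
        return "queue_for_human_review"  # more accurate, but slower and costlier
    return "no_action"
```

Raising the automated threshold improves accuracy at the cost of speed and coverage; lowering it does the reverse, which is why different platforms will likely settle on different strategies.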
Lawmakers have proposed multiple bills targeting social media that would likely have unintended consequences for the metaverse. Current initiatives in Congress aimed at regulating social media, particularly reforms to Section 230 of the Communications Decency Act, could severely limit platforms’ capacity to experiment with content moderation by making it riskier, costlier, or technically impossible, or push them to take measures that would seriously erode users’ privacy. While most of the discourse around these bills focuses on today’s 2D social media, they would also affect the metaverse.
One example of a potentially harmful bill is the EARN IT Act, which would considerably increase platforms’ legal liability for hosting user-generated content. Currently, Section 230 only holds platforms liable for the content they create and publish themselves. Under EARN IT, platforms would be held liable for content created by their users, even if they are unaware of its existence, if their moderation policies fail to prevent the spread of content depicting child sexual abuse. For example, VR platforms might lose liability protection if they offer end-to-end encryption to allow users to communicate privately with one another. This could make experimentation in content moderation extraordinarily costly and risky, as platforms deemed negligent could face expensive litigation. Additionally, the bill could coerce platforms into scanning and monitoring users’ private conversations, raising privacy concerns and increasing platforms’ costs related to data stewardship.
Another potential disruption comes from the set of bills targeting the use of algorithm-driven automated content moderation systems, such as the Filter Bubble Transparency Act, the Justice Against Malicious Algorithms Act, the SAFE TECH Act, the Civil Rights Modernization Act, and the Protecting Americans from Dangerous Algorithms Act. Most of these bills aim to limit or strip Section 230 protections for platforms that employ algorithmic systems for content moderation. This would pose a technical challenge for platforms: given the nature of the content being moderated, content moderation in the metaverse without automated tools would be prohibitively costly or outright impossible.
There is a lot of uncertainty regarding what the metaverse will look like. The construction of a vibrant, thriving virtual community will be highly dependent on the hosting platforms’ ability to provide a safe and enjoyable experience to users. To do so, platforms will need to experiment with different content moderation strategies in order to find the right balance between safety, privacy, timeliness, and accuracy. Current bills in Congress aiming to regulate social media will have a spillover effect on the metaverse, making this experimentation more costly and risky by increasing platforms’ legal liability when they engage in content moderation. Experimentation is vital in the early stages of the adoption of new technologies; by restricting platforms’ ability to experiment, Congress could potentially doom the metaverse before it even gets the chance to take off.