Government Should Tackle New Election Misinformation Threats on VR Platforms

November 6, 2020

Despite months of planning, platforms were still racing to combat mis- and disinformation up to Election Day, and even in the days after. The magnitude of this challenge indicates that government and industry not only underestimated the threats of online misinformation in past elections but are still struggling to catch up with the problem. While most of the attention this cycle has focused on issues with Facebook, Twitter, and YouTube, misinformation is spreading to new platforms as political organizers find new venues, such as online games, to engage voters.

Many of these platforms are under-prepared to tackle this growing threat, and virtual reality (VR) platforms are a particularly vulnerable target. To avoid repeating history, both government agencies and the companies building VR should act now to secure virtual environments against misinformation in future elections.

VR inherits many of the existing challenges of combatting targeted disinformation campaigns and viral conspiracy theories that undermine election integrity. The volume of content that appears on internet platforms makes it difficult to address every false claim. False news spreads significantly faster than factual information, which makes it hard to correct the record. And when algorithm-assisted moderators discover questionable content, they still struggle to draw the line between misinformation and free speech.

VR combines features of social media, messaging services, and event platforms to allow for real-time interactions at scale. Users communicate through avatars, or virtual representations of themselves, making moderation in VR more like monitoring an active conversation than reviewing static content. While platforms can use a combination of automated software and human moderators to quickly screen text posts, it is much harder to do this for images and video, and nearly impossible for huge volumes of real-time communications.
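
To make the contrast concrete, here is a minimal, hypothetical sketch of the hybrid screening approach described above: automated software flags static text posts and routes ambiguous matches to human reviewers. The flagged phrases and the triage logic are invented for illustration and do not reflect any platform's actual rules; the point is that this queue-and-review pattern works for durable text but has no equivalent for millions of simultaneous live voice conversations.

```python
from dataclasses import dataclass

# Illustrative placeholder list, not any platform's real policy.
FLAGGED_PHRASES = {"polls closed early", "vote by text"}

@dataclass
class Post:
    author: str
    text: str

def screen(post: Post) -> str:
    """Return a moderation action: 'allow' or 'review'."""
    lowered = post.text.lower()
    hits = [p for p in FLAGGED_PHRASES if p in lowered]
    if not hits:
        return "allow"  # the vast majority of content passes automatically
    # Ambiguous matches are queued for a human moderator. This is feasible
    # because a text post persists while it waits in the queue; ephemeral,
    # real-time speech in VR offers no such artifact to review.
    return "review"

if __name__ == "__main__":
    print(screen(Post("user1", "Reminder: you can vote by text!")))   # review
    print(screen(Post("user2", "See you at the rally tomorrow.")))    # allow
```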

Most communication in VR is also ephemeral, which means that some of the most common moderation practices, such as content warnings or fact-checks, would be largely ineffective. Social media platforms already struggle to police this type of short-lived content, such as Snapchat messages or Instagram Stories.
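
A small illustration of the timing problem, with invented numbers: if a piece of ephemeral content expires before a fact-check can be completed, the label has nothing left to attach to. The lifetimes below are assumptions for the sketch, not measured figures.

```python
from datetime import datetime, timedelta

MESSAGE_TTL = timedelta(minutes=10)     # assumed lifetime of ephemeral content
REVIEW_TURNAROUND = timedelta(hours=2)  # assumed fact-check latency

def label_can_attach(posted_at: datetime) -> bool:
    """A fact-check label only helps if the content still exists when review finishes."""
    expires_at = posted_at + MESSAGE_TTL
    reviewed_at = posted_at + REVIEW_TURNAROUND
    return reviewed_at < expires_at

if __name__ == "__main__":
    # False: the content vanishes long before the fact-check is ready.
    print(label_can_attach(datetime.now()))
```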

Finally, VR platforms need to adapt content moderation practices to work in three-dimensional space. Consider the challenges of detecting avatars that flash racist hand signs or virtual objects that make discreet references to political conspiracy theories. And because users interact with VR content differently than they do with two-dimensional media, traditional moderation approaches, such as labeling, may not have the necessary behavioral impact.

Existing community guidelines and enforcement mechanisms on VR platforms primarily focus on individual actions. Policies for real-time moderation of misleading information, rather than individual harmful behavior, remain underdeveloped. Nor do these policies address the challenges of moderating activity from government officials or public figures, whose statements may warrant additional scrutiny before platforms take action.

VR platforms should identify what election misinformation might look like and how it could spread, intentionally or unintentionally, in the context of their service and user base. For example, recording and user reporting features could reveal the nature of some potentially dangerous interactions. But companies building VR cannot be expected to uncover and mitigate threats to election integrity on their own. To defend VR against future threats of misinformation, platforms and government agencies should work together.
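One plausible way recording and reporting features could work together, sketched below under stated assumptions: a session keeps a short rolling buffer of recent interactions, and when a user files a report, a snapshot of that buffer is attached so human reviewers retain context that would otherwise vanish with the ephemeral session. The buffer size, event format, and class names are hypothetical.

```python
from collections import deque
from dataclasses import dataclass, field

BUFFER_SIZE = 50  # assumed: number of recent events retained per session

@dataclass
class Session:
    # Rolling buffer: old events are discarded automatically once full.
    events: deque = field(default_factory=lambda: deque(maxlen=BUFFER_SIZE))

    def log(self, speaker: str, utterance: str) -> None:
        self.events.append((speaker, utterance))

    def file_report(self, reporter: str, reason: str) -> dict:
        # Snapshot the buffer so the evidence survives after the
        # short-lived session ends.
        return {"reporter": reporter, "reason": reason,
                "context": list(self.events)}

if __name__ == "__main__":
    s = Session()
    s.log("avatar42", "The election was moved to Wednesday, spread the word.")
    report = s.file_report("avatar7", "election misinformation")
    print(report["context"])
```
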

The federal government should invest in research to identify the scope of the problem and the potential attack vectors that VR introduces to election integrity. Understanding how misinformation campaigns could play out in real-time, three-dimensional space can enable government agencies to anticipate potential threats, inform possible interventions, and help VR companies identify risks to their platforms and develop new content moderation approaches to address them.

Government agencies concerned with election integrity should also work directly with VR platforms to share information about known threats and industry approaches to combatting them. Several large tech companies coordinated and shared information about election interference with government agencies ahead of the 2020 elections. Many of these companies are also working on VR and could add it to those ongoing discussions, while sharing the lessons learned from moderating social media with other VR companies.

Right now, VR platforms have the benefit of hindsight—but the window to respond to the lessons learned from other platforms is closing. By the 2022 midterm elections, as many as 60.8 million people in the United States may be regular VR users, and by the next presidential cycle, that number will likely be even higher. If policymakers and industry leaders wait until then to address the potential for misinformation in virtual spaces, it will be too late. Investing in research and collaboration now is necessary to prepare VR platforms for the threats they will face in the future.
