
Spotify’s Joe Rogan Controversy Proves Content Moderation Is Bigger Than Social Media
Over the course of a few days, several musicians and podcasters have pulled their content from the popular streaming service Spotify in protest of COVID-19 misinformation on The Joe Rogan Experience. Rogan's podcast moved exclusively to Spotify in 2020 after years on other platforms, where it had already attracted controversy over episodes featuring far-right figures.
The podcast’s latest controversy over vaccine misinformation prompted musician Neil Young to announce that he would leave Spotify unless the platform cut ties with Rogan. Other artists and podcasters have since joined Young in removing their content from the service. Six days after Young’s announcement, Spotify responded by publishing its content policies and announcing plans to add content advisories to podcast episodes that discuss COVID-19, linking listeners to a COVID-19 Hub featuring up-to-date, factual information about the disease from scientists, physicians, and public health authorities. After other artists pulled their music over Rogan’s past use of a racial slur, Spotify removed dozens of episodes of his podcast but otherwise stood by the podcaster.
Much of the ongoing debate about content moderation and online political speech has focused on social media, particularly social media giants like Facebook and Twitter. But the backlash against Spotify for content hosted on its platform highlights what experts have been saying all along: Content moderation is a bigger and more complicated issue than it may seem on the surface.
Social media platforms are far from the only online services that make content moderation decisions. Any online service that accepts third-party contributions—posts, edits, comments, ads, reviews, apps, or any other type of content—must decide whether and how to moderate them. Some of these websites and apps are owned and operated by large companies with ample resources to dedicate to content moderation, yet even these companies may struggle to handle the massive volume of content their users submit every day, or even every hour. Many other websites and apps are owned and operated by small and mid-sized companies, nonprofit organizations, or individuals who cannot afford to pay thousands of human moderators or develop complex moderation algorithms.
Resources aside, many of the questions online services must answer when developing their content moderation policies are ones on which reasonable people disagree. While some content presents a black-and-white case—such as content a court has ruled illegal—much falls into shades of gray: content that is harmful but legal, or content that some people believe to be harmful and others believe to be important free speech.
The controversial content on Joe Rogan’s podcast falls into this gray area, and Spotify is far from the first online service to be caught in such a controversy. In the United States, a deep political divide has developed between those who believe online services over-moderate, removing too much controversial content, and those who believe online services under-moderate, removing too little. Without a consensus on online speech issues, online services will always fail to satisfy a significant portion of the general public with their content moderation decisions.
To be more productive, the debate surrounding online speech and content moderation should expand beyond social media giants to include all online services that rely on third-party content. While legislators often lambaste Big Tech, proposals targeted at Facebook and Instagram may do little to address the problems that exist elsewhere online, and rules that large tech companies can afford to follow may prove too expensive and burdensome for everyone else.