
France’s TikTok Case Sets a Dangerous Content Moderation Precedent
French prosecutors have opened a criminal investigation into TikTok for allegedly allowing content that promotes suicide and thereby failing to sufficiently protect children. The case sets a dangerous precedent: social media employees could be held criminally liable for standard content moderation practices that inevitably fail to catch every piece of potentially harmful user-generated content. Pursuing criminal charges against social media companies on these grounds would hinder their ability to hire top professionals, discourage nuanced content moderation, and chill legitimate online speech.
First, imposing criminal penalties on online intermediaries for third-party speech is inappropriate. Most online platforms, such as social media networks, podcast hosting services, and e-commerce marketplaces, would not exist if these companies were legally liable for content they did not create themselves. Instead, users are responsible for the content they produce, and platforms are responsible for removing illegal content when notified by authorities. Only in extreme cases, such as knowingly distributing child sexual abuse material (CSAM), should platforms face criminal prosecution.
Second, imposing criminal penalties would spur platforms to remove lawful content to minimize their potential risk. This overly cautious approach could lead to the suppression of legitimate speech, such as artistic, comedic, or educational content. In addition, while the investigation has not specified which employees could be held criminally liable, if workers fear jail time over their employer's content moderation decisions, social media platforms will fail to attract the top talent needed to create and enforce effective content moderation policies in the first place.
Third, social media companies already have robust content moderation practices that remove the vast majority of content violating their policies. Given the sheer volume of content users upload, some of it will inevitably slip through the cracks, but in 2023 TikTok reported a 96.5 percent proactive removal rate, meaning its existing efforts detected and removed more than 9 out of 10 violations on the platform before any user reported them. France's pursuit of criminal liability implies these practices are insufficient, when in reality they already catch the overwhelming majority of harmful content.
Policymakers should focus criminal prosecutions only on platforms that cross clear red lines, such as distributing known CSAM identified to the platform by lawful authorities, rather than pursuing criminal cases over far less well-defined content moderation concerns. If France continues down the path of seeking criminal liability for a platform's failure to remove another party's online speech, it will likely push social media companies to impose draconian content moderation policies that restrict lawful speech, making these platforms worse for users.
