The U.S. Government Should Take the Lead in Providing Guidance on How to Moderate Social Media Content From the Taliban

Ashley Johnson September 3, 2021


The question of how to handle government figures posting harmful content has loomed large over social media platforms this year, and it is even more pressing now that the Taliban is back in control of Afghanistan. So far, platforms have taken slightly different approaches to moderating content from the Taliban, but the matter is far from settled. As the Taliban consolidates power, the United States should work with other democratic governments and institutions to form a consensus on how social media should handle content from the Taliban and its sympathizers.

The Taliban first took control of Afghanistan in the early days of the Internet, and in those days it opposed using modern technology. But in more recent years, the Taliban has taken full advantage of social media for propaganda purposes. Its initial focus was on projecting an inflated image of its strength and ability to defeat U.S. and coalition forces and eventually retake power. Now that the Taliban actually holds power, some experts expect the group’s focus to shift to projecting a false image of ruling justly and safeguarding human rights.

The U.S. government has designated the Taliban as a terrorist organization and levied sanctions that prohibit Americans from providing the group with funding or services. However, government officials and legal experts disagree on whether these sanctions require social media companies to keep terrorist organizations off their platforms, and this uncertainty has led to platforms taking different approaches.

All of the largest social media companies have policies in place to restrict the Taliban’s online presence. Facebook prohibits the group from operating accounts and removes content praising or supporting the Taliban under its policy against dangerous individuals and organizations. YouTube, which has a similar policy against violent criminal organizations, also prohibits the Taliban from operating accounts. Like Facebook, TikTok classifies the Taliban as a terrorist organization and removes content praising or supporting the group. Meanwhile, Twitter has not explicitly banned the Taliban, although a spokesperson tweeted that the platform would continue to remove posts that violate the platform’s policies against glorifying violence.

Extremist content has always posed a difficult dilemma for social media platforms. Groups that successfully use social media to spread their messages tend to be resilient and often operate across multiple platforms. While major social media platforms have algorithms that can detect certain types of rule-violating content, extremist content isn't always black-and-white. For example, even a well-trained algorithm might not be able to distinguish between a propaganda video uploaded by a terrorist organization and a news video reporting on terrorist activities. Additionally, human rights groups have cautioned against permanently deleting content that could be used as evidence of war crimes or other human rights violations.

In addition to these existing challenges, social media platforms now face an even more complex problem of how to classify the Taliban, which has gone from an insurgency to the de facto government of Afghanistan. The Taliban as an entity does not appear on any United Nations sanctions list, though sanctions targeting individuals associated with the group remain in place from resolutions passed in 1996 and 2011. And currently, no country officially recognizes the Taliban as the legitimate government of Afghanistan. Russia, China, and Turkey have unofficially recognized the new regime, while other countries are holding off until the Taliban makes good on its promises to uphold human rights, promises most experts view with skepticism.

Long before recent events in Afghanistan, a debate was already raging over how platforms should moderate content from government officials. Now the stakes are even higher: social media companies will need guidance from democratic countries on how to handle content from a group most of those countries have designated as terrorists, now that the group exercises political power over an entire society. Tech Against Terrorism, an initiative that works with the United Nations to support tech companies in responding to terrorism, added the Taliban to its Terrorist Content Analytics Platform, recommending that companies remove Taliban content from their platforms regardless of the Taliban's new position as an acting government. But Tech Against Terrorism also acknowledged the difficulty platforms, particularly smaller ones, face in identifying and responding to Taliban content without an international consensus on the group's status, and it called on governments to provide that consensus.

Social media platforms will also need guidance on how to handle content praising the Taliban. It is simple enough for platforms to ban official Taliban accounts, but it is less clear what platforms should do about accounts associated with individual government and military officials in the Taliban’s new regime, as well as ordinary Afghan citizens or others who post content praising or supporting the Taliban’s actions. Not all content from the Taliban, its officials, or its supporters will glorify violence; should platforms treat the sources differently, or is their content equally harmful in spreading a false, benevolent image of the Taliban? Indeed, this issue has already arisen in the United States as members of the alt-right and white supremacists have begun praising the Taliban.

Governments should not leave social media platforms to make these decisions on their own, but platforms will have no choice if governments do not step up to the task. At the same time, governments should not force social media platforms to moderate content in a particular way, as doing so would raise significant free speech concerns. Most social media platforms, however, would welcome guidance that provides a consensus view on content moderation. Though organizations like Tech Against Terrorism can create norms and best practices for social media platforms to follow, official government recommendations would carry more weight and give platforms a blueprint to follow whenever the United States, the European Union, or the United Nations sanctions a group or designates it as a terrorist organization.

As home to the world's leading social media platforms, the United States should take the lead in the global effort to create a set of voluntary guidelines that answer the difficult questions of how platforms should respond to official and unofficial Taliban accounts, accounts of individuals affiliated with the Taliban, and content that glorifies the Taliban's violent and non-violent actions alike. Social media companies should have an opportunity to provide input and insight in developing these guidelines, since they can best advise on what is feasible and can supply data about what they are seeing on their platforms, but the U.S. government should not absolve itself of responsibility. Nor should Congress attempt to coerce companies into following any guidelines it helps produce by threatening to withdraw Section 230 liability protections, as that would create new problems of overenforcement and could threaten free speech online. By creating guidelines for social media, the U.S. government and its democratic allies can contribute expertise and knowledge about foreign affairs that social media companies do not possess while also more effectively limiting the Taliban's reach and curbing the spread of propaganda and extremist ideologies online.