Podcast: How Section 230 Shapes Content Moderation, With Daphne Keller

Daphne Keller, platform regulation expert at Stanford University and former Associate General Counsel for Google, joins Ellysse and Ashley to explain Section 230’s role in shaping how large companies approach content moderation on a massive scale, and how intermediary liability protections allow platforms of all sizes to thrive.

Audio Transcript

Daphne Keller: There’s a perception now that the Internet somehow has reached its mature state and we don’t need that anymore, which is true if you want to accept a future Internet that mostly consists of giant incumbents. But if you want to nurture smaller market entrants who might eventually rise to challenge those incumbents, they absolutely need those protections.

Ellysse Dick: Welcome to Ellysse and Ashley Break the Internet, a series where we’re exploring the ins and outs of Section 230, the intermediary liability law that has drawn attention to some of tech’s biggest players. I’m Ellysse Dick, Research Fellow at the Information Technology and Innovation Foundation. We are a tech policy think tank based in Washington, D.C.

Ashley Johnson: And I’m Ashley Johnson. I’m a Research Analyst covering Internet policy at ITIF. In this episode, we’ll be exploring the role of Section 230 of the Communications Decency Act in the daily operations of the social media platforms that millions of people use every day, and how changes to the law would impact the tech industry. Joining us, we have Daphne Keller, Director of the Program on Platform Regulation at Stanford Cyber Policy Center. Previously, Daphne was Director of Intermediary Liability at Stanford’s Center for Internet and Society and Associate General Counsel for Google. Welcome to the podcast, Daphne.

Daphne Keller: Thanks so much for having me.

Ellysse Dick: So Daphne, your research is focused a lot on private governance of free expression online. Can you talk a bit about the role you see private platforms playing in enabling or restricting user speech and what tools they use in this process?

Daphne Keller: Sure. So the role of private platforms setting rules for speech is more important the more a country’s social norms deviate from its laws. So if you’re in a country like Germany, where there’s relatively high concurrence, where a lot of what’s prohibited by law is also what people think is inappropriate or immoral to say, then maybe it works to use laws to tell platforms what to take down. But in the US, there’s this huge swath of speech that is definitely protected by the First Amendment—you can’t use laws to tell platforms to take it down—but nonetheless, most people find it offensive, maybe think it is dangerous, think it’s highly inappropriate for it to be online. And this includes things like barely legal harassment and threats, pro-suicide content, and horrific violent content; many instances of sharing the Christchurch massacre video, for example, are legal.

And so if we want platforms to take that stuff down, to prevent users from seeing this so-called “lawful but awful” speech all the time, we have to rely on their private discretionary rules to do it. There isn’t a mechanism to get there by lawmakers prescribing what platforms are going to take down. And it’s a little bit of a devil’s bargain or a catch-22. It puts us in a situation where if we want platforms to take that stuff down, as most users do, most advertisers do, I think most politicians really do, we forfeit the mechanisms of democracy, constitutional constraints, public accountability, all these tools that we usually use to set the rules for speech in important public forums.

Ellysse Dick: So can you give our listeners a brief overview of what some of the specific tools are that platforms have historically used or are using now to do that governance?

Daphne Keller: Sure. Almost any big platform that you think of—Twitter, Facebook, YouTube—from their very earliest days, they were encountering content that they definitely didn’t want on the platform: pornography, bullying, spam and deceptive advertising efforts. Just all kinds of garbage. And so there’s an evolution where early on, a lot of them had relatively informal policies about what to take down. And as they got bigger and encountered content like this at an ever-greater scale and encountered new things they hadn’t anticipated like the Tide pod challenge, or whatever new horribleness the Internet thought of this week, they had to be nimble and keep adapting those rules to the point where now it’s like a body of statutes and common law with a lot of nuanced rules and precedent about what speech comes down or what speech might go behind a warning label, or an interstitial notice saying it’s not appropriate for children, or variants besides just taking things down.

The other thing about those rules that I think people often don’t appreciate is that, for a platform like Facebook that has a distributed workforce of something like 30,000 hired contractors around the world in places like the Philippines or India applying these rules, you can’t make a rule that says “take down content that is too sexual,” because different people around the world are going to have wildly varying understandings of what that means.

And so you have to move toward having these incredibly specific prescriptive rules saying exactly what that means. Can you show a nipple? What percentage of a breast can you show? All of these weirdly concrete things that are necessary to get any semblance of consistency enforcing the rules at scale. And of course, it fails. They don’t actually achieve consistency, but that is the mechanism they have to use to even try.

Ashley Johnson: So you and many other Section 230 experts have said that the Internet wouldn’t exist the way that we know it currently without intermediary liability protections. Can you explain how Section 230 has enabled the proliferation of different online business models and why it has been so important for innovation online?

Daphne Keller: One thing is just whether people even bother to go into the content hosting or content intermediating business in the first place. There are surveys of venture capitalists saying, “Sure, I will invest in a new platform if there’s a situation of legal certainty and protection from expensive or uncertain liability. I’m not going to invest in those kinds of platforms if I don’t know that I have that kind of safety or the platform has that kind of certainty about its legal status.” And so it’s very hard to imagine anybody putting up the money to start a company like a YouTube or a Twitter or a Facebook without that kind of protection. And I think there’s a perception now that the Internet somehow has reached its mature state and we don’t need that anymore, which is true if you want to accept a future Internet that mostly consists of giant incumbents. But if you want to nurture smaller market entrants who might eventually rise to challenge those incumbents, they absolutely need those protections. They’re not going to go into this kind of business without them.

Ashley Johnson: So Section 230 impacts companies of all sizes, but let’s talk a bit about large platforms, the incumbents that you mentioned. Companies like Google, Facebook, and Twitter handle a wide spectrum of content. What types of content do you think pose the greatest challenges to effective moderation approaches?

Daphne Keller: Well, I’ll start with what is, in a sense, easiest to moderate: content that is recognizably illegal, easy to know that it’s illegal the second you look at it, is the most straightforward. And the strongest and most horrific example of this is child sexual abuse imagery—CSAM, as people call it, also formerly known as child pornography. There’s a reason that platforms are required by law to take that down when they see it, and expected to know when they see it and take action. And that’s not immunized by 230 and neither are other things that are federal crimes. That is in a category that is both uniquely dangerous and uniquely recognizable. There isn’t some significant category of important speech that’s likely to disappear in a system where we rely on platforms to assess and make legal judgment calls.

At the other end of the spectrum is anything where it’s very hard to tell whether it’s illegal. If we ask platforms to make the judgment calls under threat of liability to them, we put them at risk if they don’t err on the side of caution in taking down user speech. That gets really messy if we apply it to something like defamation. Imagine if the #MeToo movement had arisen in a world without CDA 230 and every man accused of sexual misconduct contacted Twitter or Google (because there were lists of accused men on Google Docs) and said, “Hey, that’s defamatory. What those women are saying is a lie.” If the platforms faced liability for that kind of defamation claim, they would have no way of knowing which side is lying. And it’s very clear what the easiest and safest course to protect themselves would be: just take it all down.

Ashley Johnson: So what impact does the volume of content have on platforms’ ability to effectively moderate forms of harmful and illegal content? What unique challenges do large platforms face versus small platforms and how can companies best moderate and remove this content at scale?

Daphne Keller: So any platform that is seeing a very high volume of content will have a correspondingly greater challenge in moderating that content. But there are a lot of variables buried there. So first of all, we shouldn’t assume that a platform that has a high volume of content is necessarily also the platform with the personnel and capability to do the moderation. If you think of something like Wikipedia, they have a huge volume of content. They have a huge number of users. They don’t have a huge staff to do moderation, although they do have a bunch of volunteers, which makes their situation very different.

There’s really interesting research done by Jennifer Urban at Berkeley, about what different platforms do in response to takedown requests for copyright-infringing content, which is one of the categories not immunized by the CDA. Platforms do notice-and-takedown, under the DMCA, and there are all these instances of smaller platforms—meaning platforms that have fewer employees and resources, but might actually be handling a large volume of content—just honoring requests because they don’t have the time or the personnel or the money to go and research whether a particular instance is fair use or not.

Just one other thing about high volume: The greater the volume of content being moderated, the higher the temptation to use automated tools to detect speech that might violate a platform’s terms of service or violate the law. And a platform like YouTube that has a lot of money to sink into this can build things like Content ID, which YouTube spent $100 million on at last report. I’m sure that number is actually higher now. Smaller platforms don’t have those options. But there’s also the problem that automated tools make a lot more mistakes than you would get in a more hands-on system with humans looking at the content. An automated tool that is supposed to detect violent extremism or terrorism can’t tell the difference between a video posted by ISIS for purposes of recruitment and that exact same video being used for news or counter-speech or academic research.

And so, you enter an area of much greater risk of over-removal once you rely on those automated means. And we saw this during COVID, as platforms sent moderators home, and were pretty public about the fact that they were relying more and more on automation. The other problem with automation—beyond harming speech rights by taking down the wrong thing, and posing a competition problem by being something larger platforms can afford and smaller platforms can’t—is, there’s growing evidence of disparate impact on users based on things like race, language spoken, and so forth. There was a study, for example, showing that speakers of African American English were falsely flagged for engaging in hate speech at a much higher rate than other people by an automated tool that the researchers studied. So there are a whole lot of reasons to be really careful about platforms voluntarily moving toward automation, and certainly about lawmakers requiring them to do so.

Ellysse Dick: So that actually brings me right to the next question I was going to ask you, which is, should users have the right to human review of an automated decision? And if so, what should that process look like?

Daphne Keller: Well, that runs smack into the competition question, right? Because again, Facebook and YouTube can afford to have humans around the globe doing reviews of automated takedown decisions and smaller platforms can’t. That said, an automated takedown decision with human review is probably going to lead to a more accurate outcome than an automated decision without human review. And so, certainly in Europe, where there are more moves toward requiring automated decisions, a layer of required human review is a component that they have considered adding.

Ellysse Dick: So do you think this human review should be a requirement or should it be optional based on the company’s capabilities and capacity to do so?

Daphne Keller: I don’t think lawmakers should require automation if that comes at the cost of also requiring human review from smaller platforms that can’t afford to do it. I don’t think we should start with the predicate that automated takedowns are going to happen and then move straight to talking about “and then how are we going to fix them?”

Ellysse Dick: That’s a great point. And going on the different approaches to moderation and what we’re requiring or asking of companies, do you think there should be more transparency in content moderation decisions, whether they’re made by a human or a computer? And is that even possible at the scale we’re talking about with these larger platforms?

Daphne Keller: I do think that for the larger platforms, there absolutely should be more transparency. There again, I think the rules should be different for smaller platforms. And I don’t think it’s impossible at this scale. In fact, I think that the mechanisms put in place to make automation at scale possible can be devised to spit out data that goes into a transparency report, or that feeds this very significant public need for better understanding of what’s actually going on. Now in terms of what data is actually useful, I think there are really important questions there. Those of us who follow this field are accustomed to looking at platform transparency reports now, which are basically aggregated data with platforms saying, “We took down this many things under the hate speech policy and this many things under the nudity policy.” And if we’re lucky, they add in, “And we got this many appeals contesting our decision, and this many of those appeals were successful, meaning our initial decision was incorrect.”

That’s really useful data, but it only tells us what the platform thinks happened. There’s no way to have independent evaluation of all of that unless the evaluators can actually look at the specific content that was taken down and maybe reinstated, to figure out if there is a pattern of mistakes on the part of accusers or on the part of the platform, what kind of speech it’s affecting, whether there is disparate impact on people based on things like race or language or sexual orientation, whether there is disparate impact based on political beliefs as conservatives in the U.S. have alleged. These are all questions that are really driven by anecdote today, and that we can’t really assess correctly unless we know more about what’s really happening.

Ellysse Dick: So let’s talk a little bit about how Section 230 ties into all of this. Without Section 230, how do you think large platforms specifically would change the services they offer and the way they moderate content, and do you think it would force more moderation or less?

Daphne Keller: I imagine you’ve heard this answer before, but I think it would drive platforms to one of two extremes. Either they become much more cautious, act much more as gatekeepers, have lawyers reviewing what people post, maybe function as walled gardens, just excluding speakers who are too risky. Probably just use their terms of service to prohibit a big swath of speech beyond what the law would to keep themselves safe. Some of them move into this space of being much, much, much more restrictive and much less of a functional public forum, particularly for already marginalized speakers.

And then some of them go in the opposite direction of taking such a hands-off approach that they hope to avoid facing liability because courts would say, “Well, you weren’t acting like an editor or a publisher. You were just being passive. You’re functioning kind of like an ISP.” But platforms that go in that direction are not going to have products that very many users want to see, or that very many advertisers want to provide revenue for, because they’ll be full of this sort of tide of garbage, of bullying and hate speech and porn and spam and pro-suicide content, all of this stuff that is garbage at best and deeply dangerous and scary at worst.

Ashley Johnson: So far, Congress has amended Section 230 once, just a few years ago, with FOSTA-SESTA, which creates an exception to Section 230’s liability shield for sex trafficking. Can you explain how, in your opinion, FOSTA-SESTA is an example of what not to do as Congress looks to potentially change Section 230 even more?

Daphne Keller: Sure. And I should point out that I am one of the outside counsel in the Woodhull case, which is challenging the constitutionality of SESTA-FOSTA. So I think that law made kind of two categories of mistakes. One is this very human mistake of lawmakers thinking they could solve a problem without talking to the people who were going to be affected. And we are seeing a lot of attention to this recently through groups like Hacking//Hustling and academics like Kendra Albert at Harvard Law School. There’s been an effort within Congress, driven mostly by Ro Khanna, to review the impact that SESTA-FOSTA wound up actually having on sex workers, which is that they are being kicked off of platforms, losing the ability to vet potential clients for safety, maybe being driven back into street work. There’s the growing recognition that, “wait, this might’ve been a problem because we did not talk to these people.”

So that’s the human piece. Then there’s the piece I work on, which is much more lawyerly and persnickety, which is like, why on earth would you draft a law like this and not recognize that platforms are going to be over-cautious and take down the wrong things, not put in these obvious mechanics to try to respond to that, like letting users contest takedown decisions and defend themselves and say, “Hey, I wasn’t violating the law.” Or limiting the application of the law to edge platforms like the Facebooks and Twitters and so forth of the world so that it’s clear that underlying entities like CloudFlare, or companies like MailChimp, these more infrastructural components of the Internet, are not themselves facing those same incentives. Because when they shut somebody down, they shut down entire sites and services. They don’t have the ability to respond in a nuanced way.

I actually put out a paper when SESTA-FOSTA was pending, listing—I think it was six—different specific things like this that are just obvious correction points you could put into the law to make it a little better. And they would not necessarily make it constitutional at the end of the day. But the fact that Congress didn’t even try to put in these fixes is kind of indicative of the very bad state of the discussion of CDA 230 and intermediary liability in Washington.

And I just had an op-ed in The Hill recently saying, “Hey Congress, you should study this stuff more and understand that there are a bunch of models and we actually know what works and doesn’t and what leads to what potential unintended consequences in intermediary liability regulation. But if you don’t want to do the work, you can crib some answers from Europe.” Because European civil servants and lawmakers have very seriously been doing the work and studying potential models for platform regulation, going back to at least 2012. And so the Digital Services Act, this piece of legislation they have pending, while far from perfect, has a bunch of really simple pieces that would make sense here also.

Ashley Johnson: So what other considerations do you think Congress and the Biden administration should keep in mind concerning proposals to amend Section 230?

Daphne Keller: Well, I would start with this human part: if you’re going to regulate something that will foreseeably hit a set of people, talk to those people, really understand their concerns, and look at the ways that laws like this have succeeded and failed around the world, understand the models that are out there. And then I think there are a set of constitutional constraints that are really important. Whether or not you like them, whether or not you agree with Supreme Court First Amendment jurisprudence, it’s useful to pass laws that won’t be struck down by courts. Right? So looking at where the caselaw imposes restrictions and ties Congress’ hands will help lawmakers who are serious about this navigate through and figure out what they can do. A thing I’m particularly concerned about right now is the idea that Congress can’t regulate users’ speech, but they can regulate the reach that is created by platforms in amplifying or recommending or ranking particular content.

And this distinction is something that my Stanford colleague, Renee DiResta, has talked about a lot, and it’s very useful if you’re a platform. It’s useful if you have the discretion to set one set of rules for what you will host, and one set of rules for what can be in recommendations, and weighting factors for how highly something appears in recommendations. If you’re Congress, though, the limits on regulating speech apply just as much if you are regulating the amplification of that speech. You don’t get to dodge the First Amendment by invoking amplification as somehow a separate concern. And I’m very worried that we may be seeing a batch of legislation coming down the road that fails to notice that constitutional restriction, and sort of wastes time as a result.

Ashley Johnson: So for our final question that we ask all of our guests, we want to know what your verdict is on Section 230. Should we keep it, amend it, or repeal it?

Daphne Keller: It is not a perfect law. If a set of informed experts could go in and tinker with it, I think there are things that could be improved. But the people who will tinker with it, if Congress amends it, are probably not experts. And the odds of them introducing something that has serious unintended consequences for vulnerable populations as with SESTA-FOSTA, for competition, for all kinds of concerns that they’re not thinking about right now, that risk seems very high. And so in my ideal world, we would make CDA 230 better in some perfect way. In this world, I am very worried about what CDA 230 changes might mean.

Ashley Johnson: Thanks so much for joining us, Daphne. If you want to hear more from Daphne about intermediary liability, platform regulation and other tech policy topics, you can follow her on Twitter @daphnehk. That’s it for this episode. If you liked it, then please be sure to rate us and tell friends and colleagues to subscribe.

Ellysse Dick: You can find the show notes and sign up for our weekly email newsletter on our website, ITIF.org. Be sure to follow us on Twitter, Facebook, and LinkedIn too @ITIFdc.
