Podcast: How Section 230 Safeguards Civil Liberties, With Jennifer Huddleston

Jennifer Huddleston, tech policy expert at the American Action Forum, joins Ellysse and Ashley to highlight the benefits of Section 230 for free speech, competition, and innovation, and to explore the potential implications of new regulations for civil liberties.

Audio Transcript

Jennifer Huddleston: In many cases, these proposed changes likely wouldn’t survive First Amendment scrutiny, because again, those platforms themselves have First Amendment rights.

Ellysse Dick: Welcome to Ellysse and Ashley Break the Internet, where we’re exploring the ins and outs of Section 230, the contentious law at the center of the debate about online speech in the U.S. I’m Ellysse Dick, Research Fellow at the Information Technology and Innovation Foundation. We’re a tech policy think tank based in Washington, D.C.

Ashley Johnson: And I’m Ashley Johnson. I’m a Research Analyst covering Internet policy at ITIF. In this episode, we’ll be discussing the benefits of Section 230’s intermediary liability protections for civil liberties and the potential impact of new regulations. Joining us, we have Jennifer Huddleston, Director of Technology and Innovation Policy at the American Action Forum. Jennifer has written and spoken extensively on a variety of tech policy issues, including Section 230. Welcome to the podcast, Jennifer.

Ellysse Dick: All right Jennifer, to get started, let’s talk a little bit about the work that you’re doing at the American Action Forum. The American Action Forum is an important voice among the center-right. So, can you tell us a little bit about why Section 230 and intermediary liability are important to those in the conservative movement, and how they play into the work you do as Director of Technology and Innovation Policy?

Jennifer Huddleston: So, broadly speaking, my research focuses on the intersection of law and technology. That includes Section 230 as well as other issues, such as antitrust, and even things like the regulatory state and transportation innovation. So, how do the laws and the policies that we choose set up a country for more innovation? Or how can those laws place barriers to entry on innovation? And also, what is the role of the courts and the common law and some of these basic constitutional principles, like free speech, in allowing innovation to evolve?

Section 230 is really interesting because what it really did was make sure that users could have their content hosted online by making the platforms that hosted that content not liable for what a third party chose to do there. As a result, we’ve seen a huge explosion in user-generated content, not just in the way a lot of us think about it with social media platforms like Twitter and Facebook, but across a wide range of websites: review sites like Yelp, information sites like Wikipedia that rely entirely on user-generated content, and even the comment sections on blogs.

For conservative voices especially, this has been a way to overcome some of the barriers they may have experienced in getting their ideas out there. And really, the same is true for any kind of idea that, in the past, might have had to go through a more traditional platform. Because platforms are able to host user-generated content and are able to make the decisions about what user-generated content to host, we as consumers have a huge range of choices when it comes to how we can get our voices out.

Ashley Johnson: So, you’ve sort of touched on this already in your explanation, but to dig a little deeper: We’ve talked on previous episodes about how Section 230 benefits online platforms large and small. But we want to talk with you about how it benefits users.

Jennifer Huddleston: Right. So, a lot of times we think about this as a law that benefits platforms. And some policymakers will even try to make it out to be a special privilege for these platforms. In a paper with my former Mercatus colleague Brent Skorup, I argue that you can actually trace a lot of the roots of Section 230 back to more traditional publisher liability and to what we’ve seen with things like libraries and newspapers when it comes to running third-party content. But for users, what this has really done is enable these platforms to feel comfortable hosting user-generated content, particularly content that at times might’ve been seen as a little more controversial. So, for something like the #MeToo movement that really got going on social media, in the past some platforms—if they didn’t know that they were going to be protected—might have engaged in more editing or really questioned whether this was a conversation they wanted to allow to take place on their platforms.

But because platforms know they’re protected from liability for third-party content, they’re able to host some of those conversations that, in the past, would have raised more questions. The same thing goes for users wanting to engage in, say, political debate. Now, we can all discuss whether or not we really want political debates on our social media feeds, but it provides an opportunity for people to engage with ideas, to have conversations, and to find other people like them, to form their own communities, when in the past they might’ve found themselves very isolated.

Billy Easley at Americans for Prosperity has a wonderful article about this and his experience in several different communities that don’t normally intersect, and how the Internet really gave him a way to find other voices like his, to connect with those voices, to ask questions, and to really explore different parts of his own identity. And I think a lot of people have had similar experiences, where you may have felt like you were the only one interested in something or the only one who had an idea, and because you were able to interact with people online, you were actually able to explore those ideas or find other people out there who shared that passion.

Ellysse Dick: Great. And that actually goes to another question that we had for you, which is: why does the Internet need Section 230 if we already have the First Amendment to guarantee our freedom of speech? Aren’t these smaller groups that are finding communities already protected by the First Amendment? So, what does Section 230 do that the First Amendment might not necessarily achieve?

Jennifer Huddleston: I think it’s interesting to really discuss how Section 230 and the First Amendment interact, because oftentimes we hear all these things thrown together and some confusion occurs. So, one thing to point out is that these platforms, these intermediaries, also have First Amendment rights. And why that matters is, when we’re talking about some of the controversies that have occurred over recent content moderation decisions, those decisions really are probably protected under the platforms’ First Amendment rights.

So, for example, when a platform chooses to attach a fact-check to a post, that’s not really a Section 230 question; that’s a question of the platform’s own speech and what rights that platform has. Same thing when a platform decides not to carry a certain type of content at all. The same way that we would allow a coffee shop owner to decide what they could and couldn’t put up on a bulletin board, we also allow these platforms to make those decisions under the First Amendment. Now, when it comes to individual users and the ability of those communities to form, what Section 230 does is, by providing this legal certainty of liability protection, enable platforms to be more comfortable carrying user-generated content.

So, platforms have those private decision rights, and they still maintain them under Section 230. But Section 230’s liability protection means that if you’re a small platform getting off the ground, you don’t have to worry about being sued into oblivion because someone uses your platform to say something you never intended to be said. If you’re a blogger and somebody goes into your comment section and says something horrible or defamatory about someone else, you don’t have to worry that you’re the one who’s going to be sued over that comment. Instead, the law makes plaintiffs sue the person who actually said it. So, you can imagine how—particularly for startups and for small and mid-sized platforms that don't have a lot of resources—this enables them to carry user-generated content in a way they might be more skeptical of without this liability protection, where lawyers or investors might advise them that they’re better off engaging in really intense content moderation that could silence a lot of voices, or better off not carrying any user-generated content at all.

Ellysse Dick: So, I guess one of the things that people talk about when they’re talking about Section 230 is the content moderation aspect of it. Could you talk a little bit more about the extent of content moderation that’s enabled under Section 230—versus without it—and what function that moderation actually serves?

Jennifer Huddleston: Section 230 was really set out to solve what’s often referred to as the Moderator’s Dilemma: the idea that if you didn’t have a law like Section 230 that provided some sort of protection from lawsuits—given America’s incredibly litigious society—platforms would have one of two choices. Either they would have to engage in very heavy-handed content moderation, where, say, you submitted a post to go up on your social media page and had to wait for someone at the platform to go through and make sure it wasn’t going to be considered defamatory or violate any of the community standards, and then approve it and allow it to be on your page. That’s not something most of us would find very enjoyable, or really beneficial to the way these platforms have enabled communication. You can certainly imagine, on a review site, how that could slow down the process of real-time reviews. Or take Wikipedia again: getting information updated quickly would be much slower under that kind of process.

Without Section 230, the other option platforms would have is the “buyer beware” model of saying, “We are not going to engage in any content moderation whatsoever. We have put this device out into the universe, go forth at your own risk, and we are not going to take down any of the content that a lot of us don’t want to see: the harassing content, the things that end up on the Internet that are not illegal but not very good either.”

There’s a lot of gray area there that different platforms may make different choices about right now, and that different people may have different tolerance levels for. The current system allows individuals and platforms to choose. Whereas in a world without Section 230, most platforms, if they chose to carry user-generated content at all, would find themselves at one extreme or the other: either very, very intense content moderation, really limiting the ability of the Internet to connect people and give people a voice, or no content moderation at all, which is likely to lead to interactions that most of us don’t want to deal with on a day-to-day basis.

Ashley Johnson: You mentioned the highly influential article that you co-wrote on how, prior to Section 230, similar liability protections were already naturally developing in the courts. Can you tell us more about how you arrived at that conclusion and what that would mean for online platforms if Section 230 protections were weakened or the law was repealed?

Jennifer Huddleston: Right. So, my coauthor and I often say, “This is not the conclusion we expected to come to when we set out to write this paper,” which is always a fun thing as a researcher, when your research leads you a different way than you thought it would. But when you look at the erosion of intermediary liability—not just in the online space, but in other, more traditional forms of media as well—it’s likely that eventually the courts would have gotten to something that protected a lot of what Section 230 protects. Look at cases involving libraries and whether or not they could be held liable for the content of the books they carried. Or cases involving radio stations and newspapers that carried wire service information—things like Associated Press news articles that the newspaper itself didn’t write but got from a service, or even op-eds that are placed this way. A general understanding was emerging that carrying third-party content was not the same as the relationship between a newspaper and its staff writer, or between a book’s author and its publisher. There was a growing awareness that you wouldn’t really expect to hold someone liable for what someone else did just because they happened to be the one who carried it.

And if we think about it, this makes a lot of sense, which is one of the interesting things about Section 230. If you take a step back and talk to people about what the law largely actually does, which is make you sue the person who actually said it, there’s a lot of common sense there. We would expect that the person responsible for something defamatory, or for something that violates some other sort of right, would be the one who is legally responsible for it.

But after a couple of bad case rulings that had started to say, “No, we are going to potentially hold these platforms liable,” what Section 230 did was accelerate us toward establishing what the law should be: that the person who wrote the content is the one who’s liable, not the intermediary that happened to carry it. That’s good because it enabled the Internet to flourish a lot sooner, and enabled us to get these user-generated content services—which a lot of us are really relying on right now during the COVID-19 pandemic—a lot more quickly and easily than we probably would have without it.

What it means if Section 230 goes away, though, is a bit concerning, and in my opinion a bit more concerning than even a world where we had never had Section 230 at all. That’s because you would now have platforms that had a huge head start because they had this liability protection in the beginning. The Twitters, the Facebooks, the YouTubes are going to have an army of lawyers constantly responding to these complaints—which is not to say that’s good, because there are probably going to be plenty of complaints, a lot more content moderation as a result, and a lot more silencing of speech—but the large players will probably be able to afford the compliance burden.

On the other hand, if you’re a new platform trying to offer user-generated content and you don’t have Section 230 anymore, you’re going to have an additional barrier to overcome. You’re going to have to figure out what to do about this new liability and risk that, before you gain popularity, someone could use your platform for something you didn’t intend, and that you could then never really gain a foothold and compete with the big guys. So, in a lot of ways, Section 230 has become an incredibly pro-competitive law. Particularly at this time, when we hear a lot of people expressing concerns about the size of platforms and about whether or not Big Tech is too big, this should really be a time when we’re embracing this part of Section 230. What Section 230 does is keep the barriers low for platforms that want to carry user-generated content, enabling new platforms to enter the market without this additional burden.

Ashley Johnson: Well said. And I think, especially over the past few congressional committee hearings that we have had on Section 230, it has become increasingly clear that there are two opposing groups of critiques when it comes to Section 230: the first being that online platforms don’t do enough to remove potentially harmful content, and the second being that platforms remove too much content that isn’t legitimately harmful. So, I want to focus on each of those. First, what do you think the potential consequences for online speech would be of restricting Section 230’s liability protections for platforms that, these critics would say, fail to remove enough content?

Jennifer Huddleston: So, what we often hear is that, because of Section 230, platforms aren’t doing enough to remove, say, hate speech or what people may deem to be misinformation. And the question is, is Section 230 providing an inappropriate incentive? Is it providing an incentive not to moderate these things that a lot of people think are bad? The issue with changing Section 230 to create carveouts for things that are genuinely bad, like hate speech or misinformation, is that those terms are very hard to define. And we’ve seen this play out in some of the hearings as well, where in one hearing a Congressman asked one platform, “Why haven’t you taken down this video about coronavirus and hydroxychloroquine?” (And I apologize if I’ve mispronounced that.) “It clearly should be considered misinformation.” And then, in the next set of questions, a different Congressman asks a different platform, “Why did you take this down? Why didn’t you leave this up? This is part of the same debate.”

So, when we have policymakers who can’t agree on whether a single piece of information should be considered misinformation or not, imagine that disagreement at a huge scale, at a scale where all of us are trying to voice our opinions. It’s not going to be very easy to define these terms. A lot of times, context matters, and so many other elements come into play that content moderation at scale is a very difficult job.

Now, I do think it’s important to point out that there are some things Section 230 already doesn’t cover, and that includes things that are illegal on a federal level. So, when we’re talking about things like child sexual abuse material, or certain other materials that have been brought up as potentially needing additional carveouts, those things are often already carved out, so platforms do have an incentive to moderate them.

A lot of times, the new calls we hear for carveouts to Section 230 to respond to various ills on the Internet are about things that seem straightforward until you start to really think about them a little bit more. When we’re talking about things like hate speech or election misinformation, it’s oftentimes very easy to get into a gray area very quickly, where we don’t all agree on where the line is. So, the advantage of Section 230 is that it lets those platforms make slightly different calls, and it lets the market respond. If we’re uncomfortable with Facebook’s hate speech policy, we don’t have to use Facebook. We can go use a different social media platform. And if enough people are uncomfortable with its policy, there may even be an opportunity for a competitor to come in with a different policy.

Versus if you create a liability carveout there, you’re going to have a lot more questions rather than a smooth, clear-cut answer. And the question is also, what might be lost in the meantime? We saw this with the SESTA-FOSTA bill that was aimed at targeting sex trafficking. Most, if not all, of us would agree that sex trafficking is a horrible thing that we should be doing everything we can to stop. Now, again, there was already a carveout for federal crimes, so this was largely already covered, which we saw in the fact that the government was able to take down Backpage.com before the bill was actually signed into law. But after SESTA-FOSTA, we’ve seen various platforms remove things that are not necessarily what people thought would be removed. For example, Craigslist had to shut down its personals section out of concern that it might be used to facilitate trafficking, and they didn’t want the additional liability.

We’ve also seen platforms sued under SESTA-FOSTA that are not the platforms I think most people expected. For example, Salesforce was subject to a lawsuit under SESTA-FOSTA because someone engaged in this horrible act had also been using their product, probably as some sort of email management service, and therefore they could be sued under this liability. You can imagine how, if you look at some of these broader questions, this could easily become much more than just a small change, where you could see the silencing of, for example, a lot of political conversations, or a lot of conversations around controversial ideas, where the Internet has been a place for people to connect and have those debates and conversations.

Ashley Johnson: And then moving to the other side of the critiques against Section 230, what would you say the potential consequences are of restricting the types of content that platforms can remove without potentially facing liability?

Jennifer Huddleston: So, one of the criticisms of Section 230 that seems somewhat strange to me is the claim that Section 230 was intended to require neutrality. Both of the authors of Section 230, former Representative Cox and now-Senator Wyden, have been very clear that they never intended neutrality to be involved. Additionally, these kinds of calls for neutrality resemble the Fairness Doctrine, a doctrine now largely considered outdated, under which the FCC used to force equal airtime in certain circumstances. And it was doing away with the Fairness Doctrine that really allowed talk radio and, later, certain TV programs to become real thought leaders in various elements of American discourse.

The problem with requiring neutrality, or telling platforms that they can’t moderate certain things, is what that does to broader questions about how it interacts with the First Amendment. In many cases, these proposed changes likely wouldn’t survive First Amendment scrutiny because, again, those platforms themselves have First Amendment rights. Do we really want a world where we’re forcing any kind of platform, whether it’s a newspaper or an online platform or a cake shop, to carry content that it doesn’t want to carry?

So, for people pushing for that, I think that’s the question: it would be a pretty major shift away from the way we’ve typically interpreted various First Amendment doctrines. Requiring neutrality may result in some voices that are popular online now being silenced to make room for other voices, or it may result in a stifling of conversation out of a duty of fairness and equality. And it could potentially change a lot of things beyond the online space when it comes to how we’ve interpreted the First Amendment.

Ellysse Dick: Yeah, I think that’s a great point, that Section 230 and intermediary liability online are really just a continuation of the conversations we’ve been having for decades about civil liberties and free speech. And looking at them as a separate entity entirely is going to be detrimental to a lot of those liberties that we talk about.

Let’s talk a little bit about what’s next, what the future of intermediary liability and speech online will look like. So, first of all, we’ve talked about some of the perspectives in the 230 debate, but do you think there are any perspectives or arguments that are missing from the mainstream debate right now?

Jennifer Huddleston: I think it’s important that we look not just at how this impacts the large players; oftentimes, that’s where the Section 230 debate is focused. In reality, we need to ask: what does this mean for the future of innovation, and what does this mean for small and midsize players? And it’s really hard at times to draw those lines of who belongs in what size category when it comes to tech companies.

For example, many of us use Wikipedia on a regular basis. It may be the first spot we go to, but it actually has only a few hundred employees. So, depending on how we define things around Section 230, is it a small company or a large company? Reddit is another great example of this. Reddit has a huge number of users but a very small actual employee base, because it relies a lot on its volunteer moderators. Is that a large company or a small company?

And then I think the other element, again, is that we talk about this a lot in the social media space—what would Section 230 changes mean for Facebook or Twitter or YouTube—but Section 230 impacts so many more areas online, and we need to talk about what any changes would also mean for things like review sites and even Airbnb.

Ellysse Dick: Those are all great points and several of them we are hoping to address in this podcast, so I’m glad you brought them up. So, what recommendations would you have for the policymakers in any part of government who are looking at this issue? How would you suggest they approach this in a way that still considers civil liberties and speech issues when they’re considering intermediary liability regulations?

Jennifer Huddleston: So, I think the first thing that needs to be asked is: does this pass the First Amendment test? Is this going to get the government more involved in speech in a way that is constitutionally prohibited? A lot of proposed Section 230 changes fail right there.

I think the second question should also be: what does this mean in terms of small players and in terms of the next wave of innovation? If we’re looking at regulation, are we going to lock in our existing technology dynamic because only the big players can afford to comply? Or is this a regulation that enables small players to continue to carry user-generated content in a way that connects us all more than we were in previous technological ages?

And then finally, I think there needs to be a look at the unintended consequences. Going back to the SESTA-FOSTA example: even a well-intentioned law aimed at changing online content, targeting something that would be widely regarded as bad, can have much broader consequences than intended if it is poorly drafted or if its requirements are unclear in terms of what level of knowledge triggers liability or how different terms are defined.

The final thing I will add is, we should be just as cautious about this kind of piecemeal approach as about calls for broad changes to Section 230, because lots of little carveouts, lots of little papercuts at Section 230, could in some ways be even harder to comply with than a broad regime change, again particularly for those smaller players, and particularly given the First Amendment backstop. Now, that’s not to say the latter would be good, but at times a carveout can be seen as, “Well, this is just addressing this one really bad thing.” But if you have enough of those little carveouts, then you’ve undercut the principle of the law in a way that is almost even more damaging.

And finally—I know I said finally earlier—I would add that we also need to ask: what is the problem that you’re actually upset about and trying to address? Because oftentimes these are not actually Section 230 questions. They are broader questions that either are things we’re struggling with as a society or are actually people questioning some of the underpinnings of the First Amendment.

Ashley Johnson: Very well said. And for our final question, to wrap it all up, we want to ask all of our guests, what is your verdict on Section 230? Keep it, change it, or repeal it?

Jennifer Huddleston: I would say keep it. I think it serves an integral purpose in enabling speech online, and I think we should really look at the benefits of what Section 230 has brought and be skeptical of any changes. I will say, though, that the reason we should keep it is because of what it does, not because Section 230 is some magic legislation that created the Internet. It’s because it really affirms these underlying principles of who we consider liable for speech, and because it strongly supports the First Amendment approach to online speech as well.

Ashley Johnson: Excellent. Thank you so much for joining us, Jennifer. If you want to hear more from Jennifer about Section 230 and all the other tech policy issues that she works on, you can follow her on Twitter @jrhuddles. That’s it for this episode. If you liked it, please be sure to rate us and tell your friends and colleagues to subscribe.

Ellysse Dick: You can find the show notes and sign up for our weekly email newsletter on our website at itif.org. Be sure to follow us on Twitter, Facebook, and LinkedIn too @ITIFdc.
