Podcast: What the Future Holds for Section 230, With Neil Chilson

Neil Chilson, tech policy expert at the Charles Koch Institute and former FTC chief technologist, joins Ellysse and Ashley to forecast where the debate surrounding Section 230 is heading and present a vision for the future of content and online speech regulation.

Audio Transcript

Neil Chilson: It’s not that the words of Section 230 are perfect and sacrosanct, but that the principles that it protects are really important.

Ellysse Dick: Welcome to Ellysse and Ashley Break the Internet, a series where we’re exploring the ins and outs of Section 230, the law from the ’90s that may, or may not, shape the future of the Internet. I’m Ellysse Dick, Research Fellow at the Information Technology and Innovation Foundation. We are a tech policy think tank based in Washington, D.C.

Ashley Johnson: And I’m Ashley Johnson. I’m a Research Analyst covering Internet policy at ITIF. In this episode, we’ll be looking into the future of intermediary liability and Section 230 of the Communications Decency Act. Joining us, we have Neil Chilson, senior research fellow for technology and innovation at the Charles Koch Institute and former chief technologist of the Federal Trade Commission. Neil has written and spoken extensively on Section 230, and we’re so glad to have him. Welcome to the podcast, Neil.

Neil Chilson: Thank you so much for having me.

Ashley Johnson: To start off, a very general question that will hopefully guide our conversation: The Internet has changed a lot since 1996. Do you think Section 230 has kept up?

Neil Chilson: So the Internet has changed a lot since 1996, and in part, a lot of those changes were made possible by Section 230. Section 230 allowed for the user-generated content platforms that we all use every day. And while the Internet has changed a lot, the basic problem that Section 230 tried to solve, which is called the Moderator’s Dilemma, has not. If anything, it’s gotten worse.

The challenge of content moderation has increased quite a lot since the days of Prodigy and CompuServe. Twitter deals with a volume of content every second that probably dwarfs what Prodigy saw in a day. And so the volume has changed quite a lot, and the Moderator’s Dilemma has gotten, if anything, worse. Section 230 is still a really key part of keeping the user-generated content platforms that we have and enjoy today.

Ashley Johnson: I want to talk about the statement that you gave at the Department of Justice Workshop on Section 230 in February of 2020. You outlined seven key principles for evaluating proposed changes to Section 230, and I’d like to talk about a few of those.

Neil Chilson: Sure.

Ashley Johnson: First you emphasize that intermediary liability laws must not target constitutionally protected speech. Can you give an example of proposed changes that would target constitutionally protected speech and how that would play out?

Neil Chilson: So some of the proposed changes that we’ve seen would, say, require neutral treatment of political speech in order to get the protections of Section 230. Political speech is at the heart of free speech, and the ability to associate yourself with political commentary that you support and to disassociate yourself from political commentary that you don’t support is a pretty core First Amendment right. And so some of these suggested approaches that would condition Section 230 protections on giving up that ability go pretty much to the heart of the First Amendment’s protections and raise a lot of constitutional issues. So that might be one example.

Ashley Johnson: And if one of those proposed changes were passed into law, do you think the courts would uphold it, or would it fail First Amendment scrutiny?

Neil Chilson: Well, I think they would very likely strike down that type of condition, but there are some other types that are even more directly problematic. That one, at least, tries to say, “Hey, we’re just trying to have you be neutral.” But there have been other proposals that would outright prohibit certain types of speech that are constitutionally protected, and those would be even more likely to be struck down.

Ashley Johnson: You have touched on, and you emphasized in your seven principles, that Section 230 does not and should not require neutrality. Do you know where the misconception that Section 230 requires neutrality came from, and why do you think it’s so important that it doesn’t?

Neil Chilson: I’m not really sure about the origin of the idea that Section 230 requires neutrality. If I had to guess, it’s in part because a lot of the platforms have long claimed to try to be open to as much speech as possible, and sometimes have couched that in the language of neutrality, and so have set an expectation among people that has never really been fully true, right? The platforms have never been wide-open spaces where people can say whatever they want, in part because consumers would not like that very much. Users would hate a platform like that, for the most part. There are some places, 8chan, et cetera, that look a little bit more like that, but even they take down content when it’s clearly illegal. So I’m not 100 percent sure where that came from, except that it’s a convenient way to try to push the platforms in a direction that certain people want.

And what was the second part of your question? I’m sorry.

Ashley Johnson: Why do you think it’s so important that Section 230 does not require neutrality?

Neil Chilson: Well, not only does it not require neutrality, it was created to allow platforms to moderate content without raising a bunch of risk of spurious litigation. And a requirement of neutrality, first of all, is very difficult to define, even in a limited space like political neutrality. With something like satire, it can be very hard to figure out whether a post is criticizing one side or supporting it.

And so enforcing neutrality, if there were a law that required it, would be very hard to pull off practically. And the importance of platforms having the ability to moderate gets back to that Moderator’s Dilemma. They’re trying to create an environment that is supportive for the community of their users, and a neutrality requirement would just get in the way of that. If it worked at all, it would turn every platform into the same sort of space, and we wouldn’t have the ability to have different platforms with different communities, different standards, and different purposes.

So just to take a really extreme example, under a requirement of neutrality, a church website might have to leave up posts that had nothing to do with, or might even be offensive to, many of the users of that website, if it were required to take a First Amendment-style approach to content. Many things that are distasteful not just to people who might go to church websites, but probably to the average American, are protected by the First Amendment. And so requiring a First Amendment approach, or a neutral approach, would create websites that are much less friendly to the communities that want to use them.

Ashley Johnson: And are there any other objectives that you think policymakers and other stakeholders should really prioritize when considering amendments or alternatives to Section 230?

Neil Chilson: Well, I think one of the key things is to consider the impact on innovation. In tech policy, that’s always a key consideration because the tech industry moves forward in leaps and bounds of innovation. And that means asking how Section 230 and changes to it would affect not just the largest players, but some of the smaller players, and not just directly consumer-facing platforms, but other types of Internet services such as caching services or spam-filtering services. We have to think really hard about how changing the way liability works for these types of companies would affect their ability to come up with new services, to come up with things that make consumers better off. So that’s a key one. I think that’s a really important, critical one.

Ellysse Dick: Great. Well, I think now is the time to start taking a look in our crystal ball and talking about the future of 230. So going off of what you just said, looking into the more immediate future, how would you like to see the Biden administration approach intermediary liability and platform regulation, and specifically, who do you think should be involved in that process, and who should not be involved in that process as far as government actors or other stakeholders?

Neil Chilson: So I wrote a piece for Protocol that digs into Section 230 and talks a little bit about the history and the way that Section 230, in some ways, arguably shortcut an evolutionary common law approach to defamation, and that had some pluses and minuses. But I’ll be the first to say that Section 230 is words written by Congress. It’s not perfect. I think it even has a typo in it, right? And so it’s not that the words of Section 230 are perfect and sacrosanct, but that the principles it protects are really important and pretty common sense: the person who does something bad online should be the one responsible, not the tool that they use. I would hope that the Biden administration would continue to see the need for that type of common sense approach to liability online.

But more importantly, changes to Section 230, to the extent that they are necessary—and I think there are some places where we could talk about, for example, third-party liability for sales of dangerous objects online—should come from Congress. Clearly there are harms that can happen that seem unjust or unfair when somebody who’s harmed can’t get at the bad actor who actually did something, and there are some things we can talk about there. But the body that should be looking at that is Congress. I mean, statutes are written by Congress. Statutes should be modified by Congress. The idea that an agency like the Federal Communications Commission, for example, which has never professed to have any authority over Section 230, would go in and essentially correct the courts on what the statute says or how the statute should be interpreted, that’s a bad idea. It’s a dangerous idea. And it also seems unlikely to make any of the people who have been pushing for it in the recent past happy with the results.

It’s just sort of mind-boggling to me that the Trump administration would push this off to an agency that is going to be run by Biden appointees in the near future and expect that the result would be good for conservatives or for Trump himself. So Congress should work this out. There are some difficult problems here, and Congress is the right place to change the law if the law needs to be changed.

Ellysse Dick: So Section 230 is sort of the hot topic right now, but obviously, it’s not the only topic in tech policy. Do you think there are any lessons we can learn from Section 230 and the surrounding case law that we could apply to some of the other areas of tech policy that are starting to enter the political and legislative conversation?

Neil Chilson: Well, for the big one, I’d point the listeners to the Protocol article again: Section 230, even though it is a statute, has in many ways had a very evolutionary approach, right? The language, the general principles of Section 230, have been applied, case by case, hundreds of times to different factual situations. And we’ve gotten to see how the law works and how it doesn’t, and the courts have made it work over time. So it’s been able to adapt to an Internet that has changed quite a lot. It’s been able to still convey those important principles, even though the technology that it’s working on has changed quite a lot.

So incremental, case-by-case approaches, even though they’re not big grand schemes of regulation to solve problems, can often keep up with technology better than we might think, in part because you don’t have to predict the future nearly as much. You just have to set the regulatory goal that you’re trying to achieve (in this case, who’s responsible for content and who isn’t) and then see how that works out over time. And there’s always the possibility for Congress to tweak something as needed.

But the chances of a general, principle-based approach like Section 230 going out of date are much lower than for some very detailed, specific approach to content moderation or any of the other tech issues that we’re looking at, whether it be antitrust, privacy, data security, or other issues. There are times when a detailed, prescriptive rule can be appropriate, when an industry is not as fast-moving and is extremely well understood by the regulator or the legislator. But that’s not really the current condition of the Internet—and that fast innovation cycle is actually to the benefit of consumers—so I think we should look to regulatory and legislative approaches that continue to enable that type of rapid innovation.

Ellysse Dick: So it’s not just other areas of policy, but there’s also a lot of new emerging technologies that we could talk about. The current debate, obviously, is focused on social media platforms. Do you think there are other emerging technologies that might raise similar concerns or that 230 might apply to in the near or more distant future?

Neil Chilson: I’m not sure that the legal issues are that different for a bunch of these new technologies. Like I was saying, Section 230, because it sets out a sort of general, high-level principle, has been able to adapt to new technologies. But it will be interesting to see how these issues play out, for example, in virtual reality. Because of the tangibility of a space like that, in some ways there will be additional concerns raised around how users use those tools in ways that could be offensive or perhaps threatening to other users on those platforms, and there will be questions about who should be responsible for that. And I think Section 230 will be an important tool in allowing those huge benefits to develop.

And this is the thing that I emphasized a lot in my DOJ testimony: the benefits aren’t just to the individual user or to the platforms. What we get with these platforms is the ability to form small communities that we could never form if everything had to be run by Facebook’s lawyers or by the VR platforms’ lawyers. A support group for some sort of rare medical issue, for example: these platforms wouldn’t take on that type of liability. It’d be too risky for them to have people providing advice back and forth, even though those types of communities can be very, very helpful and supportive to people who are going through something unusual and don’t know anybody in real life who’s gone through it.

And so I am excited about the potential of new technologies to allow those types of small communities that address niche problems and niche interests to continue to develop. And I just worry that without Section 230, those types of interactions online would be, if not eliminated completely, much harder to achieve. The legal frictions and the commercial frictions would be much higher.

Ellysse Dick: Definitely. I think that’s a really great point. And Ashley knows that I’m very happy we got a VR plug in there. It’s very important.

So our last question on the future of 230: what would your ideal scenario be for the future of intermediary liability, content moderation, online speech—all of the things that get wrapped up in the 230 debate—and what will it take to get there, if we can at all?

Neil Chilson: I don’t know that there are a ton of changes to Section 230 that urgently need to take place. I think there are some edge issues that perhaps Congress could address. I think there are a lot of opportunities for innovation in how content moderation works, however, in a way that perhaps helps defuse some of the issues. Right now, a lot of the approaches to content moderation are very centralized. And if you know anything about me or my past writings, I’m a big fan of decentralized solutions. I’m actually writing a book on emergent order right now that’s going to talk a little bit about content moderation.

I think there are a lot of opportunities for the platforms to look to the communities that are forming on their platforms for guidance on how to do content moderation well. Some of the platforms already do this. I mean, Reddit has a really strong role for its participants and its users in self-governance. By providing more tools that look like that, I think there’s a big opportunity to increase the responsiveness of content moderation to the needs of users, while showing regulators in D.C. that something is being done about the content they’re worried about and that these platforms are still places where people can speak their minds to an audience. Not without consequence, because that’s not the way speech should be. I mean, speech has consequences. If it didn’t, it wouldn’t be very important. So these platforms have been very powerful in letting people speak to audiences. And I think there’s a lot of innovation that can happen around decentralized content moderation that could make that even more powerful.

Ashley Johnson: For our final question that we’re going to ask all of our guests, we want to ask you your verdict on Section 230 overall. Should we keep it, should we amend it, or should we repeal it?

Neil Chilson: We should keep Section 230. Perhaps in an alternate universe where Section 230 wasn’t passed, the courts would have come to something that looks sort of Section 230-ish in First Amendment law. But I think Professor Goldman has made a good case that, in many ways, Section 230 built on the First Amendment in a way that has really opened up an entirely new way for people to speak to broad audiences without having to go through gatekeepers. And that’s been to the huge benefit of so many people and so many minority or niche ideas. There are pluses and minuses to that, but overall, Section 230 has been a great boon to speech online, and in the world that we’re in right now, it makes a ton of sense to keep it around.

Ashley Johnson: Thank you so much. Very well said. Thanks for joining us, Neil. If you want to hear more from Neil, you can follow him on Twitter @neil_chilson. That’s it for this episode. If you liked it, then please be sure to rate us and tell your friends and colleagues to subscribe.

Ellysse Dick: You can find the show notes and sign up for our weekly email newsletter on our website, itif.org. And be sure to follow us on Twitter, Facebook, and LinkedIn at @ITIFdc.
