Podcast: The Political Debate Over Section 230, With Klon Kitchen

Klon Kitchen, a tech policy expert at the American Enterprise Institute who authored the Heritage Foundation’s Section 230 reform proposal, joins Ellysse and Ashley to unpack the political debate surrounding Section 230 and the treatment of political speech online.

Audio Transcript

Klon Kitchen: Oftentimes people will be frustrated with a content moderation decision that is actually quite consistent with the rules that have been set. They just don’t like the rules.

Ellysse Dick: Welcome to Ellysse and Ashley Break the Internet, a series where we are exploring the ins and outs of Section 230, a law that has raised important questions about the nature of civic discourse and online speech. I’m Ellysse Dick, Research Fellow at the Information Technology and Innovation Foundation. We are a tech policy think tank based in Washington, D.C.

Ashley Johnson: And I am Ashley Johnson. I’m a Research Analyst covering Internet policy at ITIF. In this episode, we’ll be examining proposed legislation to amend Section 230 of the Communications Decency Act that aims to prevent online platforms from censoring certain political viewpoints, as well as other proposed reforms. Joining us, we have Klon Kitchen, senior fellow at the American Enterprise Institute. Klon was previously Director of the Heritage Foundation’s Center for Technology and Policy and authored the Heritage Foundation’s Section 230 reform proposal, “Section 230—Mend It, Don’t End It.” Welcome to the podcast, Klon.

Klon Kitchen: I’m happy to be here.

Ellysse Dick: So Klon, this past year has shown everyone, not just policy wonks like us, just how tough it is to get content moderation right. What do you see as the greatest challenges companies face when it comes to user-generated content, and how well are they addressing these issues?

Klon Kitchen: Yeah, that’s the question of the hour. The short answer is: not well. And there is a pretty big divide in terms of why people think they’re not doing well. I don’t think this has to be a partisan issue, but it tends to go down that road sometimes. Many on the political left feel that the platforms, social media platforms particularly, are failing to address a proliferation of hate speech, incitements to violence, and the like, so they are actually concerned about a lack of moderation. Many on the political right, meanwhile, are concerned about a perception of over-moderation, feeling that their voices are being constrained online, that they’re not able to see the news they want to see, or that they’re not able to share the news or opinions they want to share.

And so essentially everybody is pretty frustrated. On one hand, these companies have a hard job, and that’s understandable; on the other, they’ve frankly failed to explain themselves well and to set appropriate expectations. And I think that’s what has led us to this point.

Ellysse Dick: So you bring up these different perceptions of what content is and is not online, and I’m curious whether, intentionally or unintentionally, current approaches to content moderation favor certain types of speech or disproportionately amplify certain types of content. Or do you think that’s more of a perception on the user side?

Klon Kitchen: Well, the most important thing to understand when having this conversation is the sheer scale and scope of content on these platforms. Every year there’s this interesting infographic called “Every Minute of the Day,” and most recently it noted that on YouTube, for example, 500 hours of new video are uploaded every minute of every day, right? And that’s just one platform; that’s not tweets and everything else. The sheer scale of this problem is sometimes lost in the conversation. We’re asking these platforms to do a lot.

Nevertheless, the concerns that those on the political right raise about feeling marginalized or not allowed to participate in the way they would want, that’s hard because it’s wrapped up in a lot of things, like the underlying algorithms that are used to streamline content moderation. On the one hand, the companies say, “No, no, that’s crazy. We would never do that because we’re not political. We’re providing a service for everyone.” But then when they talk about themselves, they use morally laden language like “right side of history” and “we’re trying to make the world a better place.” And I think it’s inarguable that most of these companies advance a left-of-center or liberal worldview in how they view themselves and the world they’re trying to create. So it’s not entirely crazy for people on the right to say, “Well, okay, we’re just going to take you at your word that, one, you have this worldview, and two, you intend to use your platform to advance that worldview. And when we see a content moderation decision taken that feels like it disadvantages us, we’re going to interpret it through that lens.” That’s not crazy. I don’t think, however, that it’s quite that crystal clear.

And I don’t think the platforms have helped themselves any, because what is often at work is confusion. One, about expectations: people on both sides of the aisle will talk about a First Amendment standard online, or they’ll decry censorship. The problem is that the First Amendment and censorship apply to the government, not to private-sector entities, so that’s a bit of a category mistake, I think. But two, oftentimes people will be frustrated with a content moderation decision that is actually quite consistent with the rules that have been set. They just don’t like the rules, right? They think, “Hey, you shouldn’t take me down; I should be able to say X.” Well, that may or may not be true, but Facebook or whoever has decided that you can’t say X, and they’re actually enforcing that decision quite consistently. You just think they shouldn’t have made that rule to begin with, right?

But that’s a different thing. That’s something worth debating, and I don’t think users should just accept it, but let’s fight the fight for what it actually is. Okay, so that’s the right. On the political left, it’s the same thing: a platform has decided to allow certain types of conversation, and critics think that type of conversation is out of bounds and shouldn’t be allowed. They get frustrated, and then they go after the platform and rail on it for adopting the standard it has adopted.

Ellysse Dick: Building off of that point, do you think platforms should be able to define their own parameters of acceptable speech or should there be limits on their ability to do so?

Klon Kitchen: I think they absolutely should be able to set their own standards for speech. They are private institutions offering a platform, and within the bounds of the law they should absolutely have the right to determine what content they will and will not allow. And it’s important for people to realize that the rules we make for these platforms will bind every platform that’s online. What the American Enterprise Institute can and cannot post online is determined by those same rules. If we change these rules, that affects what AEI can do online, and the same goes for ITIF or anyone else. So there’s no one set of special rules just for the Big Tech guys, right? We’re talking about the rules that govern online speech, period.

Ashley Johnson: So you have advocated in the past for reforming Section 230, but you argue that the law itself should remain; there shouldn’t be a full repeal. So let’s start with the positive. What do you think Section 230 gets right?

Klon Kitchen: Well, I think the original intent of Section 230 was to make the Internet a less awful place. The idea was to incentivize platforms and other online entities to freely take action against some of the worst content out there without being concerned that they would be sued into oblivion. I think that’s still a good policy goal. We want online platforms to have the freedom to aggressively and freely police content like pornography, child sexual exploitation, online harassment, and all kinds of other things. I also think that, as a free speech issue and as an intellectual property issue, they ought to be able to use these platforms the way they see fit. So the original intent of Section 230 is still a good policy objective, and it has the net benefit of enabling greater agility when it comes to innovation. That’s what it gets right, and those benefits are worth trying to preserve if we can.

Ashley Johnson: So moving on to reforms. There are quite a few proposed reforms, and the report you wrote for the Heritage Foundation actually included one that would change or clarify the language in Section 230(c)(2), which shields online services from liability when they act “in good faith” to remove content that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” So why does this provision need further clarification? What problems does it pose for online speech and content moderation in its current form?

Klon Kitchen: Yeah. So there are two parts to that: the first is what’s called the good faith provision, and the second is the “otherwise objectionable” language. I’ll take those in turn. “Good faith” is a fairly common term in law, and I think it’s reasonably understood, but in the case of Section 230 some further refinement would be helpful. I take my cue from the recommendation by DOJ, though I don’t think that’s the only good option; I just use it as a point of reference in the paper I wrote. Specifically, the way to refine the good faith clause is to make clear that good faith includes acting with the specific intention of not deliberately disenfranchising any particular group. So it folds in the idea of “establish whatever rules you want, that’s your freedom, but whatever rules you establish, enforce them fairly.”

And look, there’s always going to be some nuance here and there’s no silver bullet language that’s going to make it all perfect, but I think providing some definitional fortitude to that notion of good faith would be helpful just because—and this goes to the issues on the political right—there are so many concerns about the unfair treatment of speech online or the biased application of these rules. I think that could go a long way.

On the “otherwise objectionable” portion of the language: you have this list of offensive things that can be removed, and then you have this other language tacked on at the end, “otherwise objectionable,” which has subsequently been interpreted by multiple courts to essentially mean whatever. Just whatever.

And in one sense, I like that, because you want to give a certain level of freedom to these platforms. You don’t want them bound only to the language in the text in terms of what actions they can take. But it has been interpreted so broadly that it has allowed things that frankly go in the exact opposite direction of the intent of Section 230. For example, the protections afforded by Section 230, not only in this provision but including it, have enabled a revenge porn website devoted to posting nude images without the consent of those pictured. They have protected a message board that knowingly facilitated illegal activity and then refused to collect information about that activity. They have protected a website hosting sex-trade advertisements that was designed and set up specifically to prevent detection of the trafficking, and a whole host of other things. All of those were in part, not exclusively but in part, enabled by the freedom that comes with “otherwise objectionable.” I think that flies in the face of the intent of the provision, and some refinement there would be helpful.

Ashley Johnson: So other proposed reforms would aim to prevent bad actors from taking advantage of Section 230’s liability shield. How might Congress change the law to carve out bad actors? And do you think that this would be effective and solve some of the problems that Section 230’s critics have with the law?

Klon Kitchen: Yeah, I think that could go a long way. So in my paper, I divide this into two parts. First, I say we should create what I call a Bad Samaritan carveout. Specifically, the provision should remove liability protections from any service that acts purposefully, with the conscious object to promote, solicit, or facilitate material or activity that it knows or should know violates federal law. So if a company or platform sets itself up with the express purpose of facilitating illegal activity, or knowingly facilitates it (or should know that it does), then there’s no way it should have Section 230 protections, right? Just to make something up: if an online provider is trafficking child pornography, under no circumstances whatsoever should it receive any kind of protection. That would be a federal crime already, but think of something like the facilitation of drug sales. A whole host of activities would fall under that.

The second, related part is that I want to clarify in the statute itself that there would be no effect on any antiterrorism, child sexual abuse, or cyberstalking laws. As we reform Section 230, I don’t want online providers to be in any way concerned about being aggressive against those types of activities on their platforms. I want Facebook, YouTube, Twitter, and all the others to continue being very aggressive in rooting out and getting rid of any terrorism-related content, child sexual abuse content, and cyberstalking content, and I think that should be made very clear in the law so as to enable that type of activity.

Ashley Johnson: So jumping off of that, there are a lot of proposed reforms, and particularly bills that have been introduced in Congress, that carve out certain types of activity or content from Section 230 so that the liability shield would no longer apply to that type of content. What do you think of these kinds of proposals? Is this the right approach or not?

Klon Kitchen: Well, oftentimes it depends on the specific carveout. I’ve just given you two carveouts, and I obviously think those are good ideas, in part because they go to the core of the intent of Section 230; they’re bound up in and rooted in the original intent of the legislation. To the degree that other carveouts being offered are similarly tied to that intent, I’m very happy to engage with them and think about them. However, there is a tendency, and this may be getting more at what you were asking, to view Section 230 as the end-all, be-all silver bullet that’s going to fix the Internet.

And I think that is a mistake. I don’t think the provision is that. If we go trouncing through the Section 230 garden without thinking about where we’re walking and what we’re doing, we’re going to create a lot more problems than we fix. So again, the benefits of 230 are worth trying to preserve if we can. But part of that preservation effort is going to involve limiting what we do: keeping it nice and tight, doing what we think is necessary to get the provision back to its original intent, and deliberately narrowing its scope, not broadening it.

Ashley Johnson: And then on to a different type of reform that, again, you have touched on in your work: what about proposed sunset provisions for Section 230? What would be the potential benefits, and would there be any potential risks, such as for new entrants into the market, like startups with business models that rely on third-party content?

Klon Kitchen: Yeah, I do propose a sunset provision. It’s the provision I hold the most loosely of the ones I offered, because it comes with some tradeoffs. I proposed a seven-year sunset because I thought that was the Goldilocks point: long enough that it provides a level of stability companies can plan around, but frequent enough that, frankly, you keep the pressure on the companies not to grow complacent, not to start presuming on these protections and getting a little lax in the way they go about content moderation and other decisions that affect us. I’m not wedded to it. I think it’s a good idea in part because of the way technology evolves.

When this was originally written in 1996, man, the Internet was a fundamentally different thing. One of the challenges we’ve found in trying to have modern policy conversations about 230 is that we’re really shoehorning a lot of things into language they just don’t fit very well, and I imagine that’s going to continue to be the case going forward. A regular sunset every seven years is a pretty good generational evolution timeframe for the Internet. So yes, there are some efficiency tradeoffs, but if the net result is a provision like 230 that is routinely updated to match the Internet we actually have, I think there may be some real benefits to that.

Ashley Johnson: And then finally on the topic of proposed reforms, are there any other reforms that I haven’t touched on that you think Congress should consider or that you don’t think are going in the right direction?

Klon Kitchen: Well, the one big argument I have made that I don’t know many others have is that I do not think Section 230 protections should be made contingent on what’s called exceptional access or, frankly, any other law enforcement cooperation. There have been a number of legislative proposals that would do just that: require online platforms to be more aggressive in their cooperation with law enforcement or the intelligence community in order to receive Section 230 protections. I think that confuses the issue. When we talk about exceptional access, we’re talking specifically about providing law enforcement with exceptional access to encrypted devices or data. It’s not necessarily backdoors, but “law enforcement backdoors to data” is the shorthand that’s often used. That issue is just fundamentally different and separate from what’s going on in 230.

Similarly, people often mention Section 230 and antitrust in the same breath. They’re related but completely different, and I don’t think the two should be dealt with in the same way, or even at the same time, from a legislative or policy standpoint. So keeping those distinctions straight is pretty important, and that doesn’t always happen when this conversation takes place. The final thing I’ll say, and I think it goes to the heart of a lot of the political issues with 230, is that we need to clarify the line between acceptable, normal editing and labeling on the one hand and becoming a publisher on the other, because the whole presupposition of 230 is that these online platforms are not publishers and therefore shouldn’t be held accountable for specific content. But there are a number of common practices, like labeling, delisting, and contextual commentary, that are not technically understood as content editing but clearly affect how content is accessed, understood, or shared. So if we’re going to update Section 230, we should address that issue and draw a clear line on which of those practices do and do not violate the editorial preconditions of 230’s protections.

Ashley Johnson: So for our final question that we ask all of our guests, we want to know what your verdict is on Section 230. Should we keep it, amend it or repeal it?

Klon Kitchen: Well, I’m very clear on this. I think we should mend it. I think that there are some on the political left who want to use 230 as a type of social experimentation. There are some on the right who want to use Section 230 as a way of political score-settling. And I think in the middle are most Americans who just want to get online and not be manipulated or abused. And I don’t think that’s too much to ask. I think 230 is a great way for us to find that middle ground and I think it’s worth protecting if we can. And so my preferred political outcome is to mend it, not end it.

Ashley Johnson: Excellent. Thank you so much for joining us, Klon. If you want to hear more from Klon, you can follow him on Twitter @klonkitchen. That’s it for this episode. If you liked it, then please be sure to rate us and tell friends and colleagues to subscribe.

Ellysse Dick: You can find the show notes and sign up for our weekly email newsletter on our website, itif.org. Be sure to follow us on Twitter, Facebook, and LinkedIn too, @ITIFdc.
