Podcast: A News Media Perspective on Section 230, With David Chavern

David Chavern, CEO of a news industry trade association representing nearly 2,000 publishers, joins Ellysse and Ashley to discuss the impact of Section 230 on traditional media and the spread of misinformation, as well as how the news industry handles the issue of intermediary liability.

Audio Transcript

David Chavern: We are forced to compete in that marketplace that the platforms have set up, where we have expensive, quality content and we’re competing against the worst, cheapest garbage you can have.

Ellysse Dick: Welcome to Ellysse and Ashley Break the Internet, a series where we’re exploring the ins and outs of Section 230, one of the foundational laws that shaped the way information is shared in the digital age. I’m Ellysse Dick, Research Fellow at the Information Technology and Innovation Foundation. We’re a tech policy think tank based in Washington, D.C.

Ashley Johnson: And I’m Ashley Johnson. I’m a Research Analyst covering Internet policy at ITIF. In this episode, we’ll be taking a look at Section 230 of the Communications Decency Act from a traditional media perspective and exploring the impact online platforms and Section 230 have on news publishers. Joining us, we have David Chavern, president and CEO of the News Media Alliance, a trade association that represents the news industry. David has added his voice to the recent calls for Section 230 reform, testifying at a Department of Justice workshop on Section 230 and writing an opinion piece published in Wired on Section 230’s impact on disinformation. Welcome to the podcast, David.

David Chavern: Great. Thank you very much for having me. Very happy to be here.

Ellysse Dick: So David, to get us started, let’s talk a little bit about the role of news media in the digital age. Can you talk about some of the obstacles and opportunities the news industry has faced in adapting to the digital age, and how Section 230 has helped or hindered that process?

David Chavern: Interestingly, in the industry—and I represent 2,000 or so news publishers whose businesses now are primarily digital—there have been a lot of really good pluses. We tend to focus on the negatives in terms of finances, but the audience for our product is bigger than ever by many multiples. The peak print era was roughly 2005/2006, and we have many times as many readers as we ever did then. So there is huge consumer demand for what we create. The challenges are all financial, and in particular, this is an industry that used to control its distribution; we created a physical product, brought it up your driveway, and handed it to you in your bathrobe every morning. In the digital space, we don’t control the distribution. There are companies that sit between us and our readers, and the value chain for getting value back to news publishing is stressed, and in some cases broken.

So it’s a weird dilemma that we have bigger, more passionate audiences than ever, and the finances are more constrained than ever. In particular, my members invest a lot of money in what they make. Journalism is actually an expensive kind of content to create; you have to pay people to go do things, oftentimes for long periods of time. A lot of people have seen the movie Spotlight, about the journalism on abuse in the Catholic Church. If you sit back and look at that movie, the Boston Globe invested in that journalism for years before the story came out. So journalism is an expensive kind of content, but it’s really important. If you don’t have good journalism, society breaks down at the end of the day.

One of our big challenges is that the way our content is distributed out into the world doesn’t, we think, appropriately value the investments in quality, and it really forces a drive to the bottom and disinvestment in journalism. We think that’s not only bad for us, but also bad for society at large.

So I’ll stop talking for a second, and I’m happy to take that in any way you’d like.

Ashley Johnson: I would like to talk a little bit more about when you spoke at the Department of Justice workshop on Section 230 in February 2020. You mentioned how the news industry has managed to survive and thrive for hundreds of years without liability protections like those in Section 230. What are some of the issues or the lessons that you think digital platforms could learn from the news industry in handling third-party content?

David Chavern: Section 230 is a really unusual kind of protection. It’s a short, simple, and vastly expansive protection from liability. And it’s not just us who have never benefited from that: In most industries, you’re subject to liability for your products and for your services. I highlighted that a lot of the time there’s this idea that if you are legally and financially responsible for what you do, business-wise, then it somehow destroys your business. The first legal liabilities that we could find that attached to news publishing started in 1730. So we’ve been at this 300 years or so, and what has developed from that? We continue to be responsible for our content, which means we make editorial decisions, we’re careful about what we publish and don’t publish, we vet things, and there’s a lot of activity related to that. But also note that it didn’t keep news publishing from becoming a vastly successful industry for a really long time.

You could say the same thing about TV or other things: Responsibility is not necessarily equivalent to death in an industry. Actually, most countries in the world do not have Section 230, and they’ve somehow figured out how to get along. So one, I was just trying to re-center the conversation, because we get into this binary thing with 230 where we have to have a hundred percent or nothing, and I think that conversation’s a lot more subtle than that. Then what I try to focus on is, whenever you get into the 230 issue, there’s always this immediate focus on what other people do, notably what people post. People post bad things, and then how do we respond to that bad thing? Do we take it down? Do we keep it up? It gets to these very binary questions about what other people do, which is basically post crazy stuff, or bad stuff, or horrific things.

I think it’s important in this circumstance to focus—if we’re talking about Internet service providers and the big platform companies in particular—on what they do. They’re not the ones creating the bad piece of content. And controlling for that is really hard, and we can talk about that. But when you have this massive pile of content and you start making decisions about it—who gets to see it? Do I see it? Do you see it? How is it amplified? How is it distributed? How is it recommended to me?—those are all decisions that the platforms make. That’s what they do, and yet Section 230 has been interpreted in a way in which they’re kind of opted out of responsibility for what they do. It’s captured in Sacha Baron Cohen’s famous comment that “freedom of speech is not freedom of reach,” and it really gets to this idea that we need to talk about what the companies do, because that’s very different than 1996, and focus less on trying to control individual behavior.

Ellysse Dick: So do you think that Section 230 incentivizes different types of content moderation or do you think that without 230 or without those intermediary liability protections, we might be seeing more nuanced moderation practices? Because platforms aren’t just, like you said, taking down or leaving up; there’s a lot in between. Do you think those protections play a role there?

David Chavern: Yeah. So let’s talk about the news business: How do we benefit from Section 230? The thing that’s always noted is comment sections on news publisher sites. We do benefit from 230 with regard to comment sections, but we actively moderate those. The industry decided a long time ago that letting people put up whatever crazy stuff they wanted was bad and inconsistent with its brands and other things, so we actively moderate them. What we don’t do is actively bad moderation: take somebody’s horrible comments and highlight them, take the worst comments and put them up at the top. It turns out if you do that, that’s protected by Section 230 as well. So if you have marketplaces where you’re exempted from legal liability for these kinds of decisions about amplification, then where does that lead them?

That leads them to, okay, let’s emphasize the content that keeps you most engaged and most locked in. And that could be really bad content, crazy content, vaccine disinformation—the most horrible and aggressive kind of content, that either you love or you hate, but that keeps you there. Well then, we’ve implicitly set up a marketplace that ends up valuing engagement to the detriment of all other vectors, including—and this is where news publishers get hurt—quality. If I invest in quality, nuanced journalism—whose purpose is not to get you infuriated, but to inform you—and I’m in a marketplace that values infuriating content, because that is the value they’re allowed to value and that they make money from, that’s again bad for us, but also bad for everybody else.

So Section 230 was designed, as people say a lot, to promote actively good moderation. Weirdly, it also protects actively bad moderation, which is: Not only am I not going to restrain the worst stuff—for example, Section 230 itself talks about harassment—not only am I not going to restrain harassment, I may promote it, because that keeps people involved. That’s not a sustainable position, and it’s not good for us as news publishers, not good for the country, and ultimately, I don’t think, good for the platforms.

Ashley Johnson: So do you think that social media is unique in a way that it warrants its own set of rules, or should it have the same set of rules that constrain other forms of media?

David Chavern: The best take on this was Mark Zuckerberg’s: that social media is somewhere between a telecom company and a newspaper. If you look at those two models, what I don’t like about his phrasing is that he wants to be constrained by neither. A newspaper: We’re responsible for our content, and we undertake editorial activities to deal with that responsibility. We manage the content very actively in news publishing. If you think of a telecom company—your cell phone—they’re not responsible for any content at all; there’s no moderation. You can get on the phone and say the craziest possible thing you want to your friend, or to 5,000 people on a conference call. It’s not AT&T’s responsibility. But what AT&T also doesn’t do is, when you say something interesting on a phone call, decide to send that to other people.

They don’t amplify; they’re not like, “Wow, Ellysse said that crazy thing, I bet a bunch of her friends would like to hear it.” Social media companies want the ability to do that because that is how they keep people engaged. So they are something between a telecom and a newspaper, and we probably do need some rules and understandings that reflect that. But what we can’t have is that you get to be neither—that you get to do both, amplify people’s content and not be responsible for it. There has to be some space in the middle for that.

Ashley Johnson: So overall, what do you see as the main impacts, both positive and negative, of Section 230 as it relates to sharing information online—and you’ve touched on this question, but I want to hear what you think the main points are.

David Chavern: The plus is that it turbocharged the growth of these industries, and there have been huge benefits from that; it’s allowed just this explosion of growth in these companies. But there are a lot of externalities related to that, that we now have to deal with. Listen, I grew up in Pittsburgh, Pennsylvania, and for about a hundred years, companies in Pittsburgh were able to dump their waste in the rivers. It wasn’t a secret, it wasn’t illegal; it was what you did, because the rivers were infinite and you could dump your shit. And that allowed the growth of amazing industries and a lot of employment, and it paid a lot of bills. Pittsburgh exploded with commercial growth.

But after a while, somebody says, “Man, this is really bad—bad for society, for the environment, for culture, for industry. It makes people sick, and we’ve got to start to constrain and regulate how these things are managed for our long-term success.” We’re kind of in that moment. 230 has flattened the liability space so that these things could explode. That doesn’t mean bad stuff isn’t happening, though, that we then have to manage. We’re in this moment where there are clear externalities, which we have to try to constrain. Otherwise we’re all living with horrible rivers.

Ellysse Dick: So you’ve touched on this a little bit already, but I want to dive a little more into the free speech justification that’s used a lot for 230, which is obviously something that’s important to news media as well. How would you describe the relationship between free speech and the way platforms—versus, or as well as, news media—share information?

David Chavern: Obviously we’re really into free speech; we’re the only business mentioned in the First Amendment, and free speech is key to the whole ethos and business. I do tend to think that the free speech discussion around 230, again, tends to focus on what individuals do as opposed to what the platform companies do. We should be very careful about limiting the capacity of individuals to express something, about there being some government constraint on those expressions. That’s bad, right? Though—the classic “fire in a movie theater”—the First Amendment does have some limits attached to it. Not many, thankfully. But I also think that whole debate tends to avoid the question of what the platforms do, which is decide who to highlight and not to highlight.

That happens for a couple of reasons. One, we don’t see these algorithms, this AI process, working. Nobody really knows how they work; they’re not transparent at all. Interestingly, we get no choice about it. You don’t get any choice in how the algorithm feeds you content. It’s just a magic black box, and it’s very sub rosa. So we tend not to focus on it. And I think that’s the hard problem; that’s what we have to focus on. If you’re a company that decides who gets to see what content—that is what you do, that’s your product—what are the responsibilities and liabilities that attach to those decisions?

I don’t think it’s a free speech question to say, “Oh, you get to do that for free; there are no liabilities attached to it whatsoever.” What we see in the Internet broadly is that it’s been a huge opportunity for free expression; individuals get to not only express themselves freely, but connect with others. It’s been the golden age of free expression, and we don’t want to lose that. But if somebody is making money deciding who gets to be seen or heard and who doesn’t, I don’t know why you need a get-out-of-jail-free card for that activity.

Ellysse Dick: So what I’m hearing you say is basically that intermediary liability isn’t so much of an issue when it comes to content moderation or specific types of content, but more when it comes to information ordering of that content.

David Chavern: Yeah. Listen, there are always going to be moderation decisions, and the question of how to control for billions of people dumping content into things is a really hard one; it’s never going to be perfect, and there are no perfect answers to it. We’ve invited a couple of billion people to a party, and it turns out that a bunch of them are jerks, and we’re trying to figure out how to deal with that. But I think that whole question misses the debate about what happens once you decide to take an active role in ordering and amplifying—understand that when I search for news, I get a different answer than when you do, or that when you go on your Facebook page, you get a very different set of content than your mother does. There’s no need for a free pass regarding those kinds of decisions.

Ashley Johnson: Then to turn back to something that you touched on earlier in our discussion, in terms of the Internet not necessarily incentivizing quality content: How do Section 230’s intermediary liability protections for online platforms impact U.S. news media’s competitiveness in global information markets?

David Chavern: I talked earlier about how there are now companies that stand between us and our readers. In the traditional print era, we had the most direct relationship you could have. I even said, “Yeah, we would bring your newspaper up the driveway”—the guy who sells you tires doesn’t do that. There are three people who show up at your driveway: the mailman, the newspaper guy, and now the Amazon guy. It was extremely direct. Now, most people get to their news online through Google or Facebook (and a few other intermediaries, but those are the biggest ones).

I often say they’re our regulators. The government doesn’t get to regulate the news business—thank goodness, First Amendment—but these guys can. They decide who gets to see our content and how, and whether it’s monetized. And if they have a free pass in making those decisions about what people will see, then quality doesn’t matter. All of their incentives are to deliver what keeps you most engaged. Then we are forced to compete in that marketplace that the platforms have set up, where we have expensive, quality content and we’re competing against the worst, cheapest garbage you can have.

In its own sense, it’s a huge subsidy for disinformation, because our stuff’s expensive and disinformation is free. Anybody can make up whatever they want. So if you’re saying, “Well, that’s fine, that’s the market; your expensive stuff can compete with the free garbage,” again, that’s bad for us—it continually incentivizes disinvesting in journalism—but it’s also bad for society, because you’re getting fed whatever. I was talking to somebody the other day who was actually in agriculture, and I said, “Imagine a grain market where quality wasn’t a value; you just got whatever got dumped.” Well, you wouldn’t eat very well. And from an information perspective, that’s where we’re getting to.

Ashley Johnson: So to start to wrap up this conversation, what policies or practices do you think are necessary to mitigate some of the negative impacts of Section 230 that you’ve talked about?

David Chavern: I think we have to put a huge focus on algorithms and AI—on algorithmic amplification. There was actually a recent bill; Anna Eshoo put out a bill that started to get at this question. It’s not a perfect bill—really, I’ve mostly just read articles about it—but it gets at this idea that when you have algorithms that amplify extremism, there have got to be liabilities attached to that. This is really a bigger question than Section 230. It gets wrapped up in Section 230 because that’s sort of this magic place where we go to debate these issues. But one, if you algorithmically decide what content people get, there have to be responsibilities attached to that; there needs to be transparency around how you’re deciding that. I think it’s very dangerous that there’s not transparency. And you really need to get to consumer choice: How come I don’t have any choice in the algorithms in my media diet?

It’s a hard, complicated problem, but the policy answers are algorithmic transparency and consumer choice. And at the end of the day, where it comes into 230—and Roger McNamee and other people have talked about this—is: If you’re not going to act responsibly, if we don’t have transparency and consumer choice and other things, then you don’t get the benefit of 230 when you amplify. People ask what the implications of that are. When all these services first started out, there wasn’t amplification; it was just a timeline. You got to see in your feed whatever was the most recent thing posted. By the way, that wasn’t a horrible world. It was a less lucrative world for the platforms, because they want to move things around to keep you engaged, but it was transparent, it was even, and the platforms themselves were not interposing themselves between people and content.

What I would come down to, ultimately, is that it’s inconsistent to do algorithmic amplification without responsibility. We have to look at 230, yes, but we also have to look at AI and algorithmic transparency and consumer choice.

Ashley Johnson: And for our final question that we ask all of our guests, we want to know your final verdict on Section 230: Keep it, amend it, or repeal it?

David Chavern: Oh, amend it. We’re not in the “repeal it” category. And as for the “keep it” part, when people say “just keep it as is,” what that sort of avoids is that Section 230 as originally written was short and actually pretty focused, and it ignores all the court decisions since then that have really massively expanded it in ways that nobody could have foreseen. So when people say “just keep it,” they’re not only saying “keep the law as it was originally written”; they’re saying, “keep all of these court decisions since then that nobody foresaw.” And I don’t think that’s a justifiable position. Certainly it’s a foundational component of the Internet as it is, and you just can’t pull the rug out from under the whole Internet, so I think you have to amend it and make people responsible for what they do, the decisions they make, and where they make their money—just like any other product that companies produce. They have liabilities attached to that.

Ashley Johnson: Thanks so much for joining us, David. If you’d like to hear more from David, you can follow him on Twitter @NewsCEO. That’s it for this episode. If you liked it, please be sure to rate us and tell your friends and colleagues to subscribe.

Ellysse Dick: You can find the show notes and sign up for a weekly email newsletter on our website, ITIF.org. Be sure to follow us on Twitter, Facebook, and LinkedIn too, @ITIFdc.
