Podcast: A Brief History of Section 230, With Patrick Carome

Patrick Carome, one of the leading advocates in Section 230 litigation famous for representing AOL in the landmark Section 230 case, Zeran v. AOL, joins Ellysse and Ashley to explore the history behind Section 230, Congress' intentions in passing it, and the foundational case that set the precedent for how courts interpret it.

Audio Transcript

Patrick Carome: We now have a world in which some of the big platforms have tens of thousands of employees who do nothing but attempt to police and control and remove the worst of bad content. We wouldn’t have that, I would submit, without Section 230 and its removal of disincentives from doing exactly that.

Ellysse Dick: Welcome to Ellysse and Ashley Break the Internet, a series where we are exploring the ins and outs of Section 230, a law Congress passed in 1996 to address issues that are still relevant today. I’m Ellysse Dick, Research Fellow at the Information Technology and Innovation Foundation. We are a tech policy think tank based in Washington, D.C.

Ashley Johnson: And I’m Ashley Johnson. I’m a Research Analyst covering Internet policy at ITIF. In this episode, we’ll be exploring the history and context of Section 230 of the Communications Decency Act. Joining us, we have Patrick Carome, partner at WilmerHale. Pat represented AOL in the landmark Section 230 case, Zeran v. AOL, and is well known for his defense of social media and other online services in Section 230 and First Amendment cases. Welcome to the podcast, Pat.

Patrick Carome: Thanks. Great to be here.

Ellysse Dick: So let’s start with just the foundations of Section 230 to give our listeners a bit of historical background. We’ve talked a lot about what Section 230 is on this podcast, but not so much about why it exists. So how did it come to be and why did policymakers decide to tackle intermediary liability at such an early stage of the Internet?

Patrick Carome: Well, in the mid-90s, there was a lot of uncertainty about how traditional laws, defamation laws, and other laws would apply to online platforms. Courts were just trying to sort out those issues. There were, in the lower courts, a couple of early decisions—one federal, one state, both at the very lowest level of courts—in which the courts seemed to be confused as to the responsibility of the platform for other people’s speech.

One of those two cases in particular is often talked about in this context: Stratton Oakmont v. Prodigy. That case essentially held that the Prodigy platform, which was then one of the two or three main platforms out there in the United States, was in a worse position legally with respect to allegedly defamatory content, because it had actually tried to do things to make its platform a more family-friendly place. Particularly, it was employing filters to screen out content for certain four-letter words and the like, and was doing some other very, very limited moderation to try to keep the dialogue at a family-friendly level.

The court found that Prodigy was more exposed to liability for a defamatory third-party posting because it had engaged in that sort of self-policing, very limited self-policing. That was in contrast to another decision involving the CompuServe platform. There was a case called Cubby v. CompuServe. That case seemed to suggest that platforms did not have a liability, or at least didn’t have the liability that newspapers and the like had for the content within their pages.

So this was quickly recognized, including by Congress, I think, as a very problematic situation: the message of the Stratton Oakmont case was that if you engage in self-policing of your site, you actually are more exposed and directly responsible for third-party content. That was perceived as sending exactly the wrong message to platforms and making them liable when they were trying to be good corporate citizens.

Ellysse Dick: So before we get into some of the specific cases and decisions that have shaped our understanding of Section 230 after its passage, can you talk a bit about the role case law has played in its evolution more generally? And why does Section 230 enforcement and adaptation rely so heavily on case law rather than, say, a more structured legislative process?

Patrick Carome: Well, I mean, I think that the role that courts have played with Section 230 over the past two decades, or nearly two decades, more than two decades actually, is not dissimilar to the role that courts play in applying any federal statute. It is typically the case that Congress enacts a law, and it’s a somewhat abstract law and it’s only when it is applied to specific facts and circumstances that the meaning of the law ultimately becomes clear.

So I don’t think there’s anything special about the role of courts in Section 230. In fact, what we’ve seen is that the courts from the beginning, from the Zeran case, have largely been consistent across the board, federal and state, in holding that Section 230 is a broad immunity, not complete, but a broad immunity from liability with respect to third-party content for platforms. Of course, just because the platforms are immune doesn’t mean that the original posters, the original authors of harmful, horrible content, aren’t subject to full liability.

Ellysse Dick: So can you just give a brief breakdown for our listeners who might not be familiar with the distinction of how this intermediary liability works? I know you mentioned the platforms, can you discuss just a little bit the distinction between the platforms and the users in this case law?

Patrick Carome: Sure. So the users, the original authors of content, the original creators of content, as to their own content, they receive no protection at all from Section 230 to the extent that their speech is defamatory, unlawful, or otherwise tortious. They’re on the hook. Now there may be issues about whether they can be found and that’s another piece you may want to explore, but the user is fully on the hook and gets zero protection from Section 230 with respect to their own speech.

Platforms, on the other hand, so long as they didn’t themselves participate in the actual creation of the user speech—and that’s been an issue that courts have had to sort through—as long as it really is third-party content and not the platform’s own content, they receive the protections of Section 230. Section 230(c)(1) is the provision that has gotten by far the most play in the courts. It has been a very robust protection, allowing operators of online platforms to get out of this litigation pretty much at the front end of the case, so long as it’s clear that the case is about either harm arising from another person’s speech or content, or the platform’s moderation of the user’s content, such as taking down that content. Section 230(c)(1) has been read by the courts to protect online platforms whether they are not taking down user content, or they are taking down user content and making the user unhappy because they’ve taken it down. So it’s a protection sort of in two directions.

Ashley Johnson: So you have played an active role in Section 230 litigation from the outset, perhaps most famously in Zeran v. AOL in 1997, which was one of the first cases to interpret Section 230. Can you give us some background on the case and the precedent that it set?

Patrick Carome: Sure. Zeran actually was the very first case in which any court construed what Section 230 meant. The defendant in the case was America Online, which was at that time really at the very top of the heap of the world of Internet platforms. The facts were really horrible for the individual victim of the activity that was the subject matter of the case. A gentleman named Ken Zeran, a relatively unknown person who lived in Seattle, was, for some reason that is to this day unknown, the victim of a really gross and horrible hoax. It was just six days after the Oklahoma City bombing, which happened in, I believe, April 1995, a horrible event that shook the nation.

Six days after that bombing, someone, and to this day, it’s not known who, began posting on America Online bulletin boards, what appeared to be ads for souvenir t-shirts and other memorabilia grossly celebrating the Oklahoma City bombing. For example, there were t-shirts talking about, finally, a childcare center where the kids don’t cry and scream anymore. As you may know, there was a childcare center in that building, and there were a number of children who were killed.

To tie it to Mr. Zeran, these postings stated that to get this memorabilia or souvenir t-shirts, one would call Ken, and it gave Mr. Zeran’s real phone number in the posting. So it appeared that Mr. Zeran was grossly celebrating this horrible event. Mr. Zeran immediately began to receive phone calls, threatening phone calls, angry phone calls, and even at some point death threats, from the fact that this material, these hoax postings, was online. To make matters even worse, maybe it wouldn’t have gotten so much attention, but somehow, an Oklahoma drive-time talk show host got word of these postings, apparently from somebody named Ken Z. at this real phone number of Mr. Zeran. On this radio talk show, they reported the presence of these souvenir ads and incited people to call Ken Z. and tell him what you think. And so it got a huge amount of additional attention in the very community that was most directly impacted by the Oklahoma City bombing.

Mr. Zeran did attempt to reach AOL and get AOL to take down these postings. In fact, this wasn’t really clear in the case as pled in court, but in fact, AOL was taking down the postings, but the poster was creating new accounts with very similar screen names based on Ken Z. And so it was a bit of a cat and mouse game between AOL and the poster to try to get that content down.

So those were the facts. Mr. Zeran brought suit against both AOL and the Oklahoma City radio station, and originally brought the case in Oklahoma City, suing AOL for negligently allowing defamatory false content about Mr. Zeran to be on the platform, and also separately in a separate suit suing the radio station. I can give you more details if you want. That may have been more than you were ready to receive all at once.

Ashley Johnson: No, that’s great. Can you tell us a bit about how the court decided the case and how they decided to apply Section 230?

Patrick Carome: Sure. So it’s interesting, I was not involved in the case at the very front end when the case was pending in federal court in Oklahoma City. Interestingly, the facts of the case, the postings, and when they were up, and all of the death threats, and other calls to Mr. Zeran, those all happened in April and May of 1995, which actually was before Section 230 was enacted. But by the time Mr. Zeran brought his lawsuits, Section 230 had just been enacted.

So it wasn’t even clear for that reason whether Section 230 would be available to AOL in the case based on the timing. There was sort of a retroactivity question: did the statute apply to cases based on events that predated the statute? Funnily enough, that actually was one of the main disputes that the Zeran court had to sort out. It’s sort of a historical, not very important point anymore, because there weren’t many cases that had this weird timing issue.

In any event, the Oklahoma lawyers who were representing AOL did two things at the very front of the case. They moved to have the case transferred from the federal court in Oklahoma City to the federal court in the Eastern District of Virginia, which is where AOL was headquartered. They also moved to dismiss the case on the merits, basically saying that AOL didn’t have any legal duty to take down content, didn’t owe any duty to Mr. Zeran, and some other defenses. But the original defenses that those lawyers asserted on behalf of AOL in the Oklahoma City court didn’t suggest that Section 230 controlled the case or applied to the case directly. They seem to have assumed that because of the timing issue I mentioned, maybe Section 230 didn’t apply.

I got involved in the case after the court granted AOL’s motion to transfer the case to Virginia, which is the Eastern District of Virginia right outside Washington where my office is and nearby to where AOL is based.

So at that point, I had never done work for AOL before that case, but AOL viewed this as a new opportunity to get new counsel and a new legal strategy for the case. They reached out to several law firms, including WilmerHale, which was then known as Wilmer, Cutler & Pickering, and asked for proposals as to how would you defend this case? Our proposal was the one the company selected as a defense strategy, and it was, even though the lawyers in Oklahoma City had suggested that Section 230 probably didn’t apply, we said, no, we should defend this case based on Section 230. At that point it had never been applied in any court, and there was, to be sure, quite a bit of uncertainty about what the statute in fact meant.

We ran the argument that Section 230 was a complete defense to Mr. Zeran’s claims against AOL, because it would treat AOL as the publisher or speaker of the hoaxster’s awful bulletin board postings. We had to argue the retroactivity issue about the timing issue, as well as just what does Section 230 mean and does it protect AOL in these circumstances. We were before a very bright and able district court judge, Judge T.S. Ellis, in the district court in the Eastern District of Virginia. Despite some real skepticism at oral argument about our position, he ultimately agreed that the statute applied to the case, notwithstanding that the facts had occurred before the statute’s enactment, and that the statute immunized AOL from liability for a third person’s defamatory and awful speech.

Ashley Johnson: So you have mentioned earlier that the courts have sort of continued to carry on this broad interpretation of Section 230 since the Zeran case. What is your reaction to arguments that courts should interpret Section 230 more narrowly? And what are the potential risks of a narrow interpretation?

Patrick Carome: Well, there are several questions there. I’ll try to unpack them. One, we should probably note that it wasn’t Judge Ellis who had the final word on what Section 230 meant in the Zeran case. The case ultimately was appealed to the Fourth Circuit, and a unanimous court led by then-Chief Judge Harvie Wilkinson ruled that AOL was immune. Judge Wilkinson wrote a decision that was quite sweeping in its interpretation of the protections provided by the statute.

Courts, I think, have recognized the cogency of Judge Wilkinson’s analysis in that case, and have pretty much uniformly, as I said earlier, given a broad interpretation to the protection provided by the statute. One of the key issues was, if the online platform allegedly knew of the harmful or unlawful content and still didn’t immediately take down the content, did that knowledge take the platform outside the protection of the statute? Those were the facts of Zeran, or at least the facts as alleged in Zeran. Judge Wilkinson said no, that doesn’t make a difference. You’d still be treating AOL as the publisher of the hoaxster’s postings if you held AOL liable on a defamation or a negligence theory for carrying those postings. I think that actually is what the plain language of Section 230 provides, and courts’ job is to interpret the statute.

Interestingly, a few years after Zeran and a few other cases had come down, a congressional committee, in the course of enacting something called the Dot Kids Act, which extended the protections of Section 230 to a new Internet domain that Congress was establishing specifically directed to kids, issued a legislative report that specifically talked about Zeran and a few of the other very early cases and said courts have got it right: Zeran is correctly decided. So there was that additional affirmation, at least from a congressional committee, that this is exactly what Congress intended.

What would happen if the statute were narrowed, either by new court rulings or by legislation? There’s a lot of room for debate about that. I would say that Congress got it right in enacting the statute, and that the statute serves two really important goals. One is that it actually is an incentive, or at least a removal of disincentives, to Good Samaritan self-policing of content by platforms. Why do I say that? If you didn’t have Section 230, it’s quite clear that the main, or at least a main, defense that platforms would have with respect to unlawful or harmful third-party content would be the First Amendment. It would be the First Amendment protections that apply to other clearinghouses for huge quantities of information—newsstands, bookstores, libraries. Because they handle so much information, although nowhere near as much as online platforms do, those more traditional information clearinghouses have special First Amendment protections: they cannot be liable unless they knew, or really had good reason to know, of unlawful content and didn’t take it down or remove it from their shelves or their newsstands.

That First Amendment principle, that legal regime of just First Amendment protection, actually creates a situation where, for a platform that wants to avoid liability for third-party speech, the best strategy is to stick your head in the sand: don’t become aware of the content coursing through your networks, and don’t have systems for users to notify you of harmful content or other programs in place to address harmful content. That’s the safest course. So for platforms carrying so much information, and the quantities of information at issue have only grown as the Internet has grown, without Section 230 there is a strong disincentive to avoid policing, to avoid care and attention to what’s happening on the platform.

Section 230 has done a wonderful job of providing operators of online platforms with the breathing space to do what by far most platforms will choose to do based on their own self-interest and good community-spiritedness, which is to attempt to control and remove harmful or unlawful content. So we now have a world in which some of the big platforms have tens of thousands of employees who do nothing but attempt to police and control and remove the worst of bad content. We wouldn’t have that, I would submit, without Section 230 and its removal of disincentives from doing exactly that. That’s, I think, one of the key reasons. It’s clear from the legislative history and the enacted preamble of Section 230 that this was one of the main goals of the statute, and the history of trying to avoid the perverse results of Stratton Oakmont also makes that very clear. The statute has served that goal very well.

Secondly, the other key policy goal that Section 230 serves is allowing platforms to get off the ground and thrive and be what is the best of the Internet, which is a platform for the speech of many people, and robust discussion, educational events, and the like. Given the huge volume of the literal firehose of content that flows through these platforms, the threat of being legally responsible for each and every bad thing that people do online—there are always going to be bad apples—would prevent companies from getting off the ground and from surviving.

Now, maybe it’s true that some of the huge platforms today, now that they really have become behemoths of industry, could find a way to live without Section 230, although I would suggest that the disincentive concerns that I expressed earlier would be a big problem for them. But Section 230 continues to exist today as a key enabler of brand new platforms that we can’t even imagine yet, allowing them to develop and get off the ground. New platforms, with just their seed money and the like, are simply not going to launch at all if they are potentially liable for all the speech that comes onto their platforms; that huge overhang of potential liability would stop them before they start.

So those two goals, removing disincentives to responsible self-policing and enabling platforms to thrive and to serve the very, very positive functions of online platforms, have been well served. Section 230 has enabled them, and narrowing Section 230 would disserve those goals. I think we’d actually have a worse set of problems on the Internet if Section 230 were narrowed or removed, as some are calling for.

Ashley Johnson: So in the context of this greater debate around Section 230, for our final question that we ask all of our guests, I would like to ask your verdict on Section 230. Should we keep it, amend it, or repeal it?

Patrick Carome: I think we should keep it. Any important law has trade-offs. In a world where there are good people and bad people, unfortunately, just as there are bad people in Central Park from time to time who do bad things, that doesn’t mean we shouldn’t have a Central Park, or that we should be holding New York City liable for the crimes that unfortunately happen because not all human beings are good.

I think that the statute has served a lot of good. It has enabled the positives of the growth of the Internet that society has enjoyed, and those benefits are enormous. There are other ways to address the problems of creeps and people like the hoaxster in the Zeran case. The statute properly recognizes that it’s the individual sources of harmful or unlawful speech who should be responsible, and that’s where our enforcement efforts should be directed, and that society is much better off as a result of not subjecting online platforms to the flood of liability that would otherwise flow.

Ashley Johnson: Thanks so much for joining us, Pat. If you want to hear more from Pat, you can follow him on Twitter @pjcarome. That’s it for this episode. If you liked it, then please be sure to rate us and tell friends and colleagues to subscribe.

Ellysse Dick: You can find the show notes and sign up for our weekly email newsletter on our website, itif.org. Be sure to follow us on Twitter, Facebook, and LinkedIn, too, @ITIFdc.
