An International Perspective on Section 230, With David Kaye

David Kaye, free speech expert at the University of California, Irvine, joins Ellysse and Ashley to explore the challenges of developing effective and culturally relevant content moderation policies in different countries and how intermediary liability laws like Section 230 impact online speech for billions of users around the world.

Audio Transcript

David Kaye: Section 230, and more generally the First Amendment approach to freedom of expression, is certainly a kind of U.S.-centric approach to Internet governance. But it’s had, I think, a real impact globally.

Ellysse Dick: Welcome to Ellysse and Ashley Break the Internet, a series where we’re exploring the ins and outs of Section 230, a U.S. law shaping online platforms with global user bases. I’m Ellysse Dick, Research Fellow at the Information Technology and Innovation Foundation. We are a tech policy think tank based in Washington, D.C.

Ashley Johnson: And I’m Ashley Johnson. I’m a Research Analyst covering Internet policy at ITIF. In this episode, we’ll be looking beyond the United States at how intermediary liability and content moderation policies can impact speech on a global scale. Joining us, we have David Kaye, director of the International Justice Clinic at the University of California, Irvine School of Law, and former UN Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression. He is an expert in online speech around the world, and the author of Speech Police: The Global Struggle to Govern the Internet. Welcome to the podcast, David.

David Kaye: Great, thank you.

Ellysse Dick: So David, to get started, let’s talk a little bit about the global environment for online speech. Can you tell us a little bit about how the free expression landscape has changed or evolved in the digital age specifically?

David Kaye: Sure. I think if you look around the world, essentially what you’re seeing are the same kinds of radical changes in the information environment that you’ve seen in the United States. I mean, in some places they’ve been more pronounced. So in a transitional country, a place that’s moving from maybe authoritarian rule—like, say, Ethiopia, or Myanmar, or any number of other places; Sudan might be an example right now—they’re moving from a place of repressed freedom of expression, repressed journalism, journalists in jail and so forth into kind of an open environment. And that open environment, like it is in Europe or the United States, is often dominated by these big digital platforms. A lot of speech, a lot of information-sharing, a lot of disinformation and harassment and incitement is happening in online space. And so I think that as you look globally, the trends are the same kinds of trends that you see here, the kind of centralization on a few platforms of all sorts of information, giving them a whole lot of power.

I think the one change or the one distinction between the way we might see these issues in the United States versus the way people see them outside the United States is that the major platforms in some respects have even more power outside the United States. And yet for the vast majority of people, these are foreign governors, basically; they’re foreign companies that have this power over domestic speech and communication. And so that puts kind of a different valence over the way the debates unfold in those places.

Ellysse Dick: So do you think when you’re talking about the power that these platforms have, is that mostly the content moderation aspect, or are there other parts of their services or business models that are playing into that?

David Kaye: Yeah, I mean, it could be everything really. So if you think about, maybe just to give an example, I remember being in Myanmar, must’ve been almost six years ago or so. And I remember talking to some journalists there. So in Myanmar—and this is before the effort at ethnic cleansing of the Rohingya, the Muslim community in Myanmar—early on in the move to democratization in Myanmar, there was an opening of journalism. Suddenly the media was able to report in a way that it couldn’t for 30 or 40 years. And that was a pretty big deal, but it also was taking place in an environment of pretty significant poverty. And so what you ended up seeing was journalists who, because they couldn’t get paid, or their outlets just weren’t really able to generate a lot of income, started to move their reporting to posting on Facebook. And that had a really significant impact on how the information space developed there.

And I think you see that in many places around the world, where these companies have such power that they really shape the information environment. But it certainly isn’t limited only to that. It’s not just the content moderation, it’s also messaging services. WhatsApp has massive power around the world, in part because it’s free, in part because it has become such an incredibly useful networked tool for people around the world. But that involves some kind of content moderation, or should, but it’s really about point-to-point messaging or small-group messaging. And you can see that in all sorts of ways: political advertising, retail space, big retailers that have comment pages. That involves content moderation, but it also involves the economy. So it’s pretty massive. It’s not just the narrow “what can stay up and what should come down on Facebook or Twitter or YouTube.”

Ellysse Dick: Yeah. I think that’s a great point. And I mean, like you said, there are so many stakeholders involved. It’s not just the companies, it’s not just governments. So who do you think should have sort of the primary responsibility for governing these online spaces? Is it governments? Is it platforms? Is it civil society? What role do all these stakeholders play?

David Kaye: Yeah. I mean, it’s a space that absolutely requires what we think of as multi-stakeholder governance. I mean, not to get too jargony here, but multi-stakeholderism, the idea that the Internet requires involving so many different actors—whether they’re governments or technologists or civil society, or the companies themselves—that all of those actors need to be a part of the governance space. And that’s been a big part of governance for 20, 30 years, right? I mean, the IETF, for example, the standard-setting organization. The IETF is the Internet Engineering Task Force, an organization that makes sure that networks can speak to each other across borders. That requires a kind of governance that isn’t just governmental. So there are all these different actors that have to be playing a role. And so we can’t think of this only as “government should do this, and companies should do that.”

But that being said, there’s been this massive increase in private power over the last, I’d say 10 to 20 years, and governments are pushing back. And they, I think at least in democratic space, have a legitimate claim to saying “this is space that needs to be governed according to democratic rules. Our citizens and our representative governance need to have a role in determining what is happening in this space.” And I think that’s the story of the last couple of years, government pushback. And it’s almost certainly going to be the story of the next, I mean, starting now and over the next five to 10 years, which is, who should be responsible for governing this space? What is the responsibility of government, both to promote freedom of expression, and privacy, and so forth, and to protect individuals against companies, but also to promote innovation, so that companies are really able to both create new models and to protect their own users? So it’s a complicated dance in a way. And I think this is going to be the major public policy discussion for Internet governance over the next several years.

Ashley Johnson: So we talk a lot about the impact of Section 230 on U.S. politics and society, but the platforms under scrutiny have global user bases. How has the U.S. approach to intermediary liability affected how other countries approach this issue?

David Kaye: Yeah, Section 230, and more generally the First Amendment approach to freedom of expression, is certainly a kind of U.S.-centric approach to Internet governance. But it’s had, I think, a real impact globally. It wasn’t long after the adoption of Section 230 that Europe also adopted something, the e-Commerce Directive, which has some similarities to Section 230: it’s basically a form of protecting the companies against any responsibility to regularly monitor the content on the platform. And whether the e-Commerce Directive is a response to, or is modeled on, Section 230 isn’t really the point as much as to say that, in democratic societies, the United States and Europe basically approach this in the same way, which is to say, “We see Internet space”—and this is pre-social media—“we see the Internet as a place for expansive enjoyment and exercise of freedom of expression. And we want to encourage that, and we don’t want the companies to be in a position where they have to worry about moderating every particular piece of content that comes over the transom, that is posted onto their platforms, but we also want to give them the tools, as rights-holding entities themselves, to determine what their platforms should look like, and what they can and what they should moderate—what they should regulate, essentially—on their platforms.”

And so Section 230 has had that impact as being a model around the world. And maybe to give an example beyond the U.S. and Europe, when India’s Supreme Court faced the question in 2014 of whether Indian law can obligate the companies to take down particular content, and to regulate hate speech and other forms of content and harassment as well, the Court, in aligning itself with the model of Section 230 and the e-Commerce Directive, basically drew from American law in reaching its decision. And I think that might be surprising to some people, but it’s an interesting reflection on the fact that U.S. law has a kind of modeling impact around the world.

But there is one other part of this that I think is useful to think about, which is, as we think about and enter this discussion, and not just we meaning Americans, because Europeans are having this same discussion. In fact, Europeans are more advanced in this discussion than we are right now in the United States. As we have this discussion, we do need to think: are the tools that we adopt to regulate Internet space, or to regulate the companies, subject to abuse by non-democratic societies? And how are we articulating what the rules are, and are we articulating them in a way that says to governments around the world, “Look, we understand that you have a right, and maybe even a responsibility to regulate this space, but you need to do it keeping freedom of expression and privacy and other human rights front and center.” How do you do that? And there’s a big risk that as we rethink regulation, we do it in a way that doesn’t think about the global marketplace.

Ashley Johnson: Very well said. To jump off of that, how does Section 230 impact how U.S. tech companies operate in other countries?

David Kaye: That’s a really good question. Section 230, first of all, at a certain level, it’s hard to answer that question, because it’s not as if there’s any significant case law where those overseas have tried to sue the American companies in American courts. And so generally, if they were to try to do that, Section 230, I think, would shield the companies from liability. There’s no question about that. So the Section 230 question is really more, to my mind, how have other governments adopted something like a Section 230, or an intermediary liability regime in their own country, so that individuals within that country can have tools against the companies? And this part is important for sure: as much as Section 230 provides the companies with a shield from liability in U.S. courts, it provides nothing for the companies overseas.

So for example, if Germany, as it did, wants to adopt its Network Enforcement Act, the NetzDG—or “Netz-D-G,” I guess, if you’re doing the German correctly, which I probably mangled too—Germany has every right to adopt its own rules and to impose liability on the companies with respect to their operations in Germany. And that’s true for every country around the world. Tim Wu and Jack Goldsmith wrote a book in, I think it was like 2007, 2008: Who Controls the Internet? And they were early on highlighting this for people. We think of the Internet as this borderless space with global jurisdiction, and yet free from country jurisdiction. And they pointed out really early on that’s just not true. Countries will want to have some authority to exercise, and Section 230 doesn’t do anything to protect the companies outside of the borders of the United States.

The companies need to be looking at, and they have a responsibility to observe, local law, national law around the world. And that’s in a sense what we’re seeing right now, is the companies—it’s taken a while—they’ve been alive to the regulatory space in Europe, but now I think they’re recognizing more and more that regulatory space isn’t just Europe. It’s not just the United States. It’s how every country around the world is going to seek to impose liability on them for failures, or for things that we think of as non-failures, as what they should and should not have responsibility for. And that’s going to be pretty complicated for companies that operate in well over 100, 150 countries.

Ashley Johnson: Jumping off of something that you’ve touched on already: the overarching trends and patterns in other countries’ approaches to intermediary liability and online speech. What sort of trends and patterns have you seen beyond the ones that you’ve mentioned in terms of the U.S. First Amendment approach?

David Kaye: Yeah, so the one to pay most attention to right now, because it’s going to be changing the most, is Europe, and what it has tabled—I mean, it really just tabled in the middle of December, so just before we all went on our holiday breaks, which might just have meant closing your laptop and going to another room during the pandemic—they just tabled the Digital Services Act. And this is a pretty massive piece of draft legislation that is going to be, I think, setting up the framework for company liability for content moving forward in Europe. And this isn’t probably the space for us to talk about all the ins and outs of that legislation, but my quick read of it, and kind of generic take, is that it makes a pretty significant effort to impose new responsibilities on the companies—responsibilities mainly of transparency—to highlight that companies are doing a lot of the work of content moderation, but most people see it as pretty opaque.

And so Europe is saying, you can’t do this in the shadows anymore. You need to highlight for the public what it is you’re doing. And those rules need to be consistent with human rights standards, and with the standards of public speech in Europe. And so that will move in an interesting direction, but it’s going to be a years-long debate. GDPR, the privacy regulation, also took years to come to a conclusion. And I’m sure this will be the same way.

You do see, unfortunately, around the world, approaches that are not nearly as thoughtful or democratic. You see approaches—and some of them have been driven or maybe expanded during the pandemic—where governments have said companies have a responsibility to take down content within a very short amount of time, sometimes within an hour, sometimes pushing the companies to basically create what are known as upload filters. So that as soon as bad—whatever “bad” is—content is posted, it gets taken down automatically according to basically tools of artificial intelligence. There is a move, including in Europe, to do that. We’ve already seen that actually in Europe, in the context of copyright.

But we see it in terms of content moderation in the context of harassment, hate speech, incitement to violence, terrorist content, or extremist content: we’ve seen countries around the world try to really increase the pressure on the companies to deal with that content. And oftentimes those rules are just not consistent with fundamental human rights standards. Because what they try to do is impose a really significant cost on the companies. Or, alternatively, they impose this requirement on the companies to locate their servers—all of their data for a particular country and the country’s citizens—in that country’s territory, which creates a real risk of privacy infringement.

So there are all sorts of things and trends taking place around the world that are really problematic, not just from the content moderation standpoint, but from the privacy standpoint, from the surveillance perspective, from the ability of individuals to enjoy basic rights to protest, to organize, to associate, and so forth. But we see those in different places, particularly in transitional countries like I mentioned before, those that are moving out of authoritarian space, or that are just reinforcing authoritarian or populist trends in their own countries.

Ellysse Dick: So just bouncing off of that, I mean, like you said, especially between Europe and the United States, conceptions of what is dangerous content are obviously going to be a little bit different. And then you also have authoritarian countries that have anti-defamation laws that are used to restrict content well beyond legal speech. Are you worried that this will somehow either fragment online speech to the extent that you have allowed speech in some countries and not others, or, because these are global platforms, will it play into the lowest common denominator and have chilling effects even here in the States?

David Kaye: Yeah, that is a really great question. I’ve been thinking about this for a while. So you could think about it from a bunch of different perspectives, but let me start with the first point of your question. I actually think that we often overstate the distinctions between speech norms in the United States and Europe. It’s true there are distinctions, and Europe more or less takes a different approach to hate speech, although it’s not as draconian against freedom of expression as some Americans often like to think. I think it’s generally pretty thoughtful and just involves more balancing of rights, and in a way thinks about the impact of speech in a way that we don’t often think about it in the United States. So I wouldn’t overstate the distinctions as a matter of free speech principles, but I do think there’s something interesting about the latter part of your question, which is, because the companies are global, and they operate at scale, it’s in their business interest, in the interest of efficiency, for them to adopt rules that apply across the platforms.

And so I think you’re right to be concerned, or that there is a concern, about a kind of lowest common denominator approach. So that if Europe—I don’t think this is as true in the context of authoritarian governments that don’t have the same kind of reach—but as Europe is thinking about the rules that it adopts, in part because when we talk about Europe, we’re talking about a market of like 400 million people. Although I don't know if it’s 400 million anymore, now that the UK is officially out, but that’s another podcast. Anyway, as the European Union adopts new rules, if those rules are inconsistent with freedom of expression principles, the companies are likely to not only be subject to them, but they’re likely to change even their own content moderation standards to be consistent with those rules.

And that means that those new standards will apply to American users, to users in Australia, New Zealand, Japan, South Korea, other democratic societies. And that could be a problem for people, but it also in a way suggests that the power over content moderation principles isn’t something that only the companies enjoy and only Americans enjoy. It’s become a very globalized kind of regulatory regime, and Americans will face the impact of this, even though they have no input into the European regulatory space. And I think in a way that’s a challenge for the incoming Biden administration as well, that they’re going to want to engage with Europeans on their rules, not just because this is about protecting American companies. I mean, I think the Biden administration will want to certainly protect American companies, but also protect basic American values that are shared values with Europeans. But they’re going to have to engage with Europe on this to ensure that American rights are also maintained.

I do think, I mean, your point about other countries is also important, that other countries outside of democratic space have been really pursuing an agenda to kind of force the companies often to take down defamatory content. But they define defamation as “oh, an individual criticized the government,” like in Thailand, with its lèse-majesté rules against criticizing the Royal Family. Well, those are totally inconsistent with human rights standards and freedom of expression. And yet, the companies are constantly under pressure to take down that kind of content. You don’t see that expanding into democratic space so much. Although you do see analogs to that in the way Americans and Europeans often treat extremist content, but on those core kinds of problems of dealing with being critical of the government and critical of government officials, that’s a real concern, but it’s not a concern that the companies will use that to deal with how Americans or Europeans criticize their government. It’s more a question of, how can we reinforce the companies’ ability to stand up in the face of the kind of pressure they face from authoritarian governments?

Ellysse Dick: So talking a little bit about the kind of content that we’re looking at: you mentioned extremist content, and obviously there are all kinds of harmful content involved in these content moderation questions. So looking at the question of who’s posting that content, you have public officials, you have high-level celebrities, influencers, stakeholders who have something to say in this debate. How do you think platforms should approach public officials and public figures specifically who might post or be involved with harmful content on these platforms, in the U.S. but also, like you said, in countries around the world where officials are engaging in this kind of content?

David Kaye: Yeah. Pretty timely to talk about it, given that we’re recording this after Facebook announced a kind of suspension, I suppose, indefinite suspension of Donald Trump. So my view is, I would say a couple of things. One is when the companies adopt rules, they should be rules that apply across the platform, across all users. And there shouldn’t be special exemptions for particular users, or influencers, or public figures. It’s true that context may vary: So if you're the president of Brazil, or the president of the United States, or whatever your public position is, and you’ve got tens of millions of followers, that changes the context a bit. President Trump’s incitement to violence is a lot different than some random individual who has 20 followers saying the same thing. There’s a difference. But the rules don’t change, it’s that the context changes for the application of those rules. So I think the rules should be the same across the board.

The second thing is that I think one of the unfortunate things about the nature of the enforcement of the rules recently has been that for the highest level, or the highest profile individuals, it’s come down to Jack Dorsey or Mark Zuckerberg making the call, or at least announcing the call. And that’s a problem. That suggests that the rule-makers or rule-enforcers are somehow subject to political influence. You want these decisions to be made in a kind of bureaucratic way, in a consistent way. And that doesn’t really seem to be the case if it’s an 11th-hour decision by a CEO of a company making the call. And so I think that the companies really need to reinforce the positive aspects of the last few years of their development and the thickening of their rules, and the implementation of their rules, and to do it without the influence of the business, and without the influence of the CEOs, without the influence of the board of directors, and so forth. But my perception is that at the high levels of the kinds of people you’re talking about, it gets very politicized, and it shouldn’t be.

Ellysse Dick: So do you think that different proposals in Section 230 debates to maybe specify the type of content to remove or when to remove it, would that impact this issue that you’ve discussed with public leaders and the enforcement of rules? Or do you think that should mostly be the responsibility of the companies to enforce and manage?

David Kaye: Yeah, that’s a great question. My inclination for most of these things is that, while we want to ensure democratic control over questions of public expression and privacy, we don’t want to give governments any more power than they already have to determine what kind of content is kosher, and what kind is not. We don’t want countries, governments, to have the authority to say “you can speak and you can’t.” Which is why I think government should be in the position of increasing the obligations of transparency, and also maybe setting up some baseline rules. So for example, government could say in analyzing questions of content, we want to encourage or even require the companies to adopt standards of human rights law, or standards of our domestic law to the extent they’re consistent with human rights law. And that could be really positive. But I do think that the decision-maker, the enforcers, should be the companies themselves.

But that being said, this goes back to one of the earlier questions about who should be making these decisions. It should be transparent, and there should be some oversight. I just don’t think that that oversight should necessarily be government oversight. So you could see oversight taking place in maybe two different ways. So one would be to involve the courts. Right now, courts are just not really involved in the context of individuals who have claims about the company interfering with their privacy, or in particular, their freedom of expression. And there could be a place for that. And I think particularly as Europe is thinking through this regulation, they could put in a role for the courts. There’ve been interesting ideas in Europe over things like eCourts, Internet courts. And they sound a little bit far-fetched at first, but I think there is a role for courts to play here. And the reason for courts is that then you can at least have a sense of rule of law in this space, as opposed to just the rule-of-the-terms-of-service.

But the other part of it is, Facebook has adopted this Oversight Board, which is really just a mechanism of self-regulation. I think if we could imagine a broader kind of industry-wide oversight mechanism that is similar in structure to the Oversight Board, but involves something that we’re not used to in the United States—but people around the world are—which are press councils: nongovernmental bodies that basically allow people to bring grievances about what the press has done to these independent bodies, and for these independent bodies to kind of make decisions about those grievances. They’re not governmental. They involve the newspapers, or the media outlets. Here too, you could have the companies, you could have civil society, you could have academics and others involved in answering hard questions about whether the companies should be taking a different approach to harassment, or taking a different approach to hate speech, or incitement, and so forth. And helping to develop a kind of soft law around these issues.

It’s just that if that were all in the hands of government actors, my fear would be that you would end up with a system of restriction basically, and of clamping down on expression in a formal way, rather than the social and socializing approach that you get from involving all different kinds of stakeholders.

Ashley Johnson: So this has been a very internationally focused episode, but for our final question, we have sort of a U.S.-based question that we are going to ask all of our guests. We want to know what your verdict is on Section 230. Should we keep it, amend it, or repeal it?

David Kaye: Yeah, I definitely think we should keep it. But I don’t think that’s inconsistent with saying that there are some modifications of the regulatory regime, the regulatory environment that we should be thinking about. And for me, the most important thing to think about is, how do we ensure that the companies are more transparent? Because right now, in the absence of any liability, there’s no carrot and there’s no stick to be transparent about their rules. And I think that we should really put front and center in our regulatory conversation the question of: how can law mandate transparency, and include penalties for failure of transparency? And what does that transparency look like? To my mind, that would be the direction we would head.

I mean, I do think there’s also some value in the antitrust, the competition space as well, but one part that I would connect to our global conversation on that aspect is that we should always be thinking about—particularly because these are global companies—how do domestic regulatory steps have an impact on the hundreds of millions of users outside the United States? And the competition question is a part of that, as is the transparency one. Less so Section 230 directly, but it can be a model for how others are taking on these decisions as well.

Ashley Johnson: Great. Thank you so much for joining us, David. If you want to hear more from David, you can follow him on Twitter @davidakaye. That’s it for this episode. If you liked it, then please be sure to rate us and tell friends and colleagues to subscribe.

Ellysse Dick: You can find the show notes and sign up for a weekly email newsletter on our website, ITIF.org. Be sure to follow us on Twitter, Facebook, and LinkedIn, too, @ITIFdc.
