
Podcast: Section 230 and the Need for Change, With Andrew Bolson

Andrew Bolson, a privacy lawyer advocating for Section 230 reform, joins Ellysse and Ashley to evaluate the need to reform Section 230 in order to protect consumers and limit online abuse, suggest what form that reform should take, and explain the risks of taking a subjective approach to reforming online intermediary liability.


Audio Transcript

Andrew Bolson: All right, are we getting lost here? Are we forgetting what this is all about? Because this all got started because people’s lives are being impacted.

Ellysse Dick: Welcome to Ellysse and Ashley Break the Internet, a series where we’re exploring the ins and outs of Section 230, a law that has shaped how users interact on an open Internet. I’m Ellysse Dick, Research Fellow at the Information Technology and Innovation Foundation. We are a tech policy think tank based in Washington, D.C.

Ashley Johnson: And I’m Ashley Johnson, a Research Analyst covering Internet policy at ITIF. In this episode, we’ll be discussing the ways lawmakers could update Section 230 of the Communications Decency Act to meet the current and future challenges of a digital world. Joining us, we have Andrew Bolson, a partner with Meyerson, Fox, Mancinelli & Conte, where he practices privacy and Internet law. Andrew has advocated for balanced Section 230 reform, including in his article “Flawed but Fixable: Section 230 of the Communications Decency Act at 20,” and he is a Certified Information Privacy Professional through the International Association of Privacy Professionals. Welcome to the podcast, Andrew.

Andrew Bolson: Thank you for having me.

Ellysse Dick: So Andrew, to get started, let’s talk a little bit about Section 230 as a whole. The Internet has changed a lot since 1996. Has Section 230 kept up?

Andrew Bolson: It has not. That is one of my biggest issues with the law: it simply was designed for a different Internet. I like to look at the Internet in a more tangible form. Think of the Internet in 1996 as a little bit of a village. You didn’t have that many people around; everyone knew each other. And then over the last 25 or so years, it’s gone from a village to a town to what it is now: a metropolis. The laws you need to govern a little town are going to be a lot different from the ones you need to govern a very large city. And some of these platforms are obviously almost nation-states unto themselves. So the laws just have not kept up at all. That’s the biggest problem.

Ellysse Dick: So one of the key arguments from proponents of Section 230 is that it upholds free speech online, but this naturally is going to come with some tradeoffs. Do you think that current intermediary liability law in the U.S. sufficiently balances free speech with other potential harms online?

Andrew Bolson: I mean, that’s a big question.

Ellysse Dick: That is fair.

Andrew Bolson: Obviously, there’s no easy answer. And this is the lawyer in me coming out: I think part of the problem with the conversation around Section 230 is that you have two schools of thought and not much in between. It’s become a binary conversation: if you eliminate Section 230, the Internet as we know it is going to collapse into itself; and if you keep Section 230, all these harms—Internet extremism, abuse, all these issues—are just going to overwhelm the Internet. I think we need a more balanced approach. That’s something I’ve been advocating, frankly, for years. So I don’t think the liability situation as it stands is sustainable. It needs to be addressed, and it needs to be reformed. But I think the overall concept is a good one; it just needs to be reexamined.

Ashley Johnson: You’ve argued that Section 230 in its current form presents a danger to Americans’ privacy. Can you explain how Section 230 is a privacy issue and why you believe that amending it should be a priority for privacy professionals?

Andrew Bolson: Yeah, absolutely. What I think gets lost in the conversation is that privacy, at its heart, is all about protection. It’s about protecting people’s privacy rights, and to me that means, most importantly, their safety and emotional wellbeing—think of bullying situations—but primarily I look at it as an online safety issue. The situation now is that websites are just not incentivized—there is basically no floor of standards to protect online safety. That’s where the law is really flawed. So we need to create incentives for more standardized mechanisms to increase online safety, and therefore online privacy. One of the things I focus on as an advocate, and also as an attorney—I bring my attorney hat to every conversation I have regarding Section 230—is what I call process-oriented, or procedural, reforms.

What I mean by that is, a lot of people talk about conditioning the Section 230 immunity of social media platforms and other online publishers on their removing certain objectionable content. I think that’s somewhat of a difficult ask, because anytime you ask someone to remove content based on a subjective judgment, that creates issues. However, I do think you can build a system that incentivizes platforms based upon objective standards to improve safety, and therefore privacy. If that makes sense—or if you want me to clarify that, I’m happy to go on.

Ashley Johnson: Sure. Can you give any specifics for what this reform would look like?

Andrew Bolson: Yeah, absolutely. I’d love to. As a practitioner, I speak to a lot of people who, frankly, have been victimized online, and what I typically find is that they’re being victimized twice. They’re victimized online through the abuse that they suffer, and they’re also victimized by a judicial process that’s very difficult for them to navigate. When dealing with online publishers and platforms, you’re typically dealing with someone who is anonymous, and the website that might be hosting the content might be hosted in a foreign country. You might even get lucky and go through the entire process, get a court order, and then the website might not even honor that court order. So I believe you need more procedural certainty.

And that’s where conditioning Section 230 can really improve things: by creating procedures that say, “All right, you’re going to get your immunity. But in order to get it, you’re going to have to preserve IP address information for a certain period of time, to give a victim time to decide whether to file a lawsuit or go to law enforcement, and to be sure that information will still be there by the time they get around to it.” Or they’re going to have to honor court orders. Or, if you promote your site by saying, “Come onto my website and you can never be found—everything is going to be anonymous,” then, if you’re promoting the anonymity, you should be held liable. You can’t have it both ways is my point. So I think people need more access to the courts, and more access to systems that allow them to get judicial relief—and relief in general—more easily.

And that’s why I talk about this as really a privacy issue: their protections are at stake, their safety is at stake, and we need procedures to protect them. Another issue is that the Section 230 debate overshadows other areas that really need improvement, certainly areas like data broker reform. It’s hard to talk about improving online safety and online privacy without talking about the problems of online data brokers—simply how much public information is out there about all of us, and how easy it is to be found. That’s not necessarily a Section 230 problem. That’s a general question about what should be public in the Internet age. Should criminal records have to be expunged from the Internet? Should there be a limited right to be forgotten, like you have in Europe? There are just a lot of other questions that need to be answered.

Ashley Johnson: So what should policymakers avoid when looking to amend Section 230 and, on the flip side, what considerations should they definitely keep in mind?

Andrew Bolson: I think policymakers should avoid being too subjective. When you get to a situation where you’re saying, “In order to get Section 230 immunity, you need to remove all terroristic content from your website within seven days, or 24 hours,” whatever the case may be, that puts platforms in an untenable position. And typically, when you’re faced with liability, you’re going to be much more likely to just remove wholesale amounts of content. So I would generally try to steer the conversation away from subjectivity and encourage policymakers to look at things that can create objective standards. Again, an objective standard would be a court order. A court issuing an order is objective; yes, it could be subject to abuse, but there are ways of dealing with that.

Non-consensual pornography, or revenge porn—that’s not subjective. If someone tells you, “This is me,” and you can verify it, and, “It wasn’t done with my consent,” that content should be removed. Address information: there should be more procedures for takedown requests for personal information, address information. Those are objective things. Same thing with subpoenas. Another big issue in the process-oriented approach I try to take is subpoenas. Websites are typically going to be in different states, or different countries. As a practitioner, serving a subpoena in a different state is actually not an easy process, and the judicial system was really not designed for these types of situations. In state court, you have to domesticate a subpoena in the state where the website is hosted or based. That’s time-consuming, very expensive, and an impediment to getting judicial relief. If you can create processes to remove that impediment, that could allow people to get more judicial relief.

Ellysse Dick: So let’s talk a little bit about the future of Section 230 and intermediary liability law. What are future developments, in terms of both policy and any emerging technologies beyond just social media platforms or web hosting services, that could challenge the way we currently approach content moderation and intermediary liability? Do you see any transitions or changes coming up now or in the near future?

Andrew Bolson: Well, I think there’s a tremendous amount of interest in, and pressure for, changing what’s going on now. The status quo is just not working, because it creates a situation where victims are, again, victimized twice. So I think there’s going to be a tremendous effort to reform it. The real question is, what does that look like? Technology certainly has a role to play in content moderation. Whether you remove Section 230, reform it, or whatever happens, content moderation is still going to play an important role. I mean, Facebook and Twitter and all these platforms are not going away anytime soon. So AI and other technologies are certainly going to have a role in content moderation. What that role is in terms of the Section 230 reform effort, I’m not terribly clear about.

I think an exciting area of Internet intermediary liability, though, is really in the marketplace. That’s where you’re going to see tremendous interest and opportunity. What do I mean by that? Look at what happened with Parler and the quote-unquote “stack.” What you had there was a private enterprise where other companies basically said, “You don’t have baseline content moderation. Your website is so dangerous that we just don’t want to do business with you.” I think people get confused between private and public enterprises. In this situation, these are private enterprises—AWS, Apple, Google—saying, “You know what, we just don’t want to be associated with you.” And to me, that’s fine. I actually don’t think that’s cancel culture; I think that’s being a good steward of their corporations.

So I think there’s certainly a big role for the marketplace to play. Again, when I say I think it’s hard to reform Section 230 based upon a subjective analysis, I’m true to my word on that. However, I do think that subjective analysis can play more of a role in the marketplace, where companies can say, “You’re not doing enough to moderate terroristic content, you’re not doing enough to remove abusive content, and I’m not going to do business with you,” or, as an advertiser, “I’m not going to put my money with you,” or, as a user, “I’m not going to frequent your site.” So I think there’s a huge benefit in the marketplace.

I’ve done some research on creating a grading system for websites—essentially a scoring system. I look at it like this: I don’t know if you’ve been to New York, but in New York, all the restaurants have grades based upon their sanitary conditions. You go by a restaurant and see either an A, a B, a C, or a “grade pending,” and God forbid you eat at a grade pending. If I have a choice between an A next door and a “grade pending,” I’m going to that A.

I think websites would benefit from a similar system. If advertisers and users know that a website is really a “grade pending”—that on a baseline standard, it falls below its peers in how it moderates content, what it does to remove content, and so forth—they may think twice about doing business with it. What that could do is raise the standard for the entire industry and create a new baseline of safety. That’s really what I promote. But I do think there’s a big difference between that marketplace response and the government response. The government needs to be looking at more objective standards and procedure-based responses, versus a subjective analysis.

Ellysse Dick: That actually brings me right into my next question. Looking specifically at the next couple of years, how would you like to see the Biden administration and this Congress approach intermediary liability and platform regulation over the next two to four years? And are there some problems related to online speech that we shouldn’t expect Congress or the administration to fix?

Andrew Bolson: Yeah, I wouldn’t get my hopes up; there is no panacea. I think that’s a really important thing to keep in mind in this Section 230 debate. The Internet is always going to have problems. Whenever you have a free exchange of ideas, there are certainly going to be issues with it. What we can do is try to improve the rules to give people more protections, and I think that’s where the emphasis should be placed. In terms of the next couple of years, I do think there’s going to be a lot of reform, or at least attempts at reform. Already, there have been some bills proposed, some of which have elements that are not bad. I would encourage more discussion, more hearings, more idea-sharing, more forums like this one, where people can get their ideas out.

One of my bones of contention, as a tech outsider, I like to say—I’ve personally never worked for a tech company, and I’m not an academic; I’m a practitioner—is that the administration would be wise to take feedback from others and not rely solely on the tech industry and academia. I think academics come at this from a more theoretical standpoint, and it’s a very important standpoint, but that being said, you need victims’ rights groups to be involved in the process. You need attorneys who practice in this field to explain what it’s like—what impediments we encounter in dealing with abusive content, trying to get it removed, and seeking rights and compensation for our clients. So I think we should be brought more into the conversation about how to improve the status quo.

So it’s really about bringing more stakeholders to the table. I think law enforcement also needs to be involved in terms of what resources they need. An issue you see a lot with online content and people being abused online is people going to law enforcement, and either law enforcement not understanding the technology, not having the resources to deal with the issue, or basically saying, “Go to this department, we don’t really handle this; you should go here or there.” And people are just left feeling helpless and confused. So I think it would be very beneficial to bring all the stakeholders together to understand all the different perspectives.

I also think a discussion needs to be had about a couple of other items. Notably, the FTC gets a lot of attention in terms of regulating Internet companies. But the FTC was formed in the early 1900s to tackle monopolies and trusts. Clearly there’s going to be a large wave of antitrust action in the Internet space. However, the Internet regulatory environment—in terms of the processes I’m talking about that need improvement, or content moderation—feels a little bit outside the FTC’s area. There’s not enough issue expertise in the FTC, so I’ve started to believe that maybe a new department is really necessary to handle the unique aspects of the Internet, just as the FTC grew out of a need created by the development of trusts and monopolies.

Here, you have an Internet that really is completely new, and you need more issue expertise. Having a department dedicated to that would, I think, be beneficial. Likewise, there’s a conversation to be had about whether we need—either in that new department or within the court system—a separate court system to handle online disputes, because online disputes really turn on speed and process, and the rules are unique to the issues involved. So I think the concept of Internet courts may get more play in the next couple of years.

Ashley Johnson: Going back to your expertise with privacy, are there any lessons from Section 230 and the surrounding debate that we can apply to the ongoing debate on data privacy or vice versa?

Andrew Bolson: Yes, but not in the way you may think. I would say the concept is empathy. There needs to be much more empathy in the debate. Privacy, data privacy, Section 230: it is all about protecting people. At the end of the day, these industries are about protecting the rights of people, the wellbeing of people, the safety of people, and that can never be forgotten as these industries mature. First and foremost, with every law you look at—every data privacy law, I should say—ask: “What is your aim? What is your target? How are you protecting someone?” Not just privacy interests, but going beyond that: “How does this safeguard them? How does this improve their lives?” That’s where the issues are really interconnected. I think we sometimes get lost in the weeds of these laws and statutes, and who’s getting fined, and these regulatory actions.

And I sometimes shake my head and say, “All right, are we getting lost here? Are we forgetting what this is all about? Because this all got started because people’s lives are being impacted.” As a practitioner, that’s what I deal with. I get calls from people crying on the phone, saying, “My life is being destroyed here and I don’t know what to do.” And too often—I’m an empathetic ear, I always listen—I unfortunately have to say, “There’s not much I can do, because under the current regulatory scheme and the current laws, it’s just going to be very expensive for you to take action. And ultimately, I don’t even know who you could hold accountable.” So I think we need to be more empathetic, really listen to the people impacted, and let that guide the conversation.

Ellysse Dick: So you’ve laid out some really great ideas and your vision for what this could look like in the future, both on the government side and on the private sector side. Could you summarize what your ideal scenario for the future of intermediary liability and content moderation would be, and some ideas for what it might take to get there?

Andrew Bolson: Yeah, absolutely. In my ideal world, I would see a reform of Section 230, not a wholesale removal—a reform based upon, again, objective standards, where a new regulatory body or someone comes in and says, “Website, platform, you need to meet these standards for online impersonation. You need to install these safeguards, and if you don’t have these safeguards, we’re going to remove that liability protection if something happens,” or, “Website, you need to honor court orders,” or, “Website, you need to respect this procedure for how subpoenas are issued.” So for me, regulatory reform is all about objective standards. And again, on the private side, it’s the marketplace. So let’s steer that discussion. Let’s create more accountability through transparency. Let’s hold websites, platforms, and advertisers accountable. And let’s raise the entire baseline of what it means to moderate content, because ultimately, the content out there needs more moderation. There are problems.

And I think there needs to be more accountability, and that can be achieved through a scoring system, a grading system, certifying websites—where you create a race to the top, versus the race to the bottom you have now. Because there are websites now that sell the concept of, “Hey, you can say anything you want on this website, free speech is great, and you can never be held accountable.” That’s not acceptable. If you switch that paradigm and say, “Let’s create a race to the top, where websites and platforms advertise their content moderation policies because they want to attract those advertisers,” I think that would change the conversation. I’m hoping to steer the conversation in that direction, because I think that’s really where it’s needed.

And I do think we need to, again, bring more stakeholders to the conversation and think about not just Section 230, but other laws and other areas that would improve both privacy and, most importantly, user safety. So again, reforming the data broker industry is critically important. So is rethinking public record laws—what should be made public, and how—which goes to address confidentiality issues and whether some address information should be allowed to be taken down. I also think this is a conversation about equity. There’s a national conversation going on about income inequality and, as with all such things, racial inequities. That has to be part of the conversation, because wealthy people who can afford to buy a house without a mortgage can put their house in an LLC and have the privacy protection they crave, but for people who need a mortgage, that’s very difficult to do. So it’s about removing those barriers, because ultimately, privacy and protection are all about the user.

Ashley Johnson: And for our final question—you’ve definitely touched on this, but we want to wrap it up neatly—we want to ask what your verdict is on Section 230. Should we keep it, amend it, or repeal it?

Andrew Bolson: I’m an “amend.” I’m a reformer. As an attorney, I always suggest mediation as the best policy. Again, this binary choice between wholesale removal and keeping it as-is being the only options—I just think it’s a fallacy. It’s not innovative. I think there’s something that can be done, because the status quo—and I think everyone agrees; there’s been enough research, enough conversations about it—just isn’t working. And if the status quo is not working, you have to do something. The cost of doing nothing, I think, is greater than the cost of changing it.

But the question is how? How do we make those changes intelligently, thinking through the consequences of those decisions, to make sure we’re not doing things that create unintended consequences and make things worse? I think that’s really important. For example, there has been a lot of debate about FOSTA, which was passed a couple of years ago, with some people saying that because of FOSTA, the situation has become worse for people within the sex industry than it was pre-FOSTA. So you do have these unintended consequences. A lot of the time, you have to think those through, and people are going to make mistakes.

And that’s why I’m also advocating for more of a regulatory answer, where a regulatory body can make those regulations and change them more easily, versus having to make the changes through statute—it’s typically a lot harder to change a statute than to change a regulation. You don’t always want to kick things to an agency, but in this case, it may make sense to create some baseline standards that we can work from.

Ashley Johnson: Thank you so much for joining us, Andrew. If you want to hear more from Andrew, you can follow him on Twitter at @NJPrivacy. That’s it for this episode. If you liked it, then please be sure to rate us and tell friends and colleagues to subscribe.

Ellysse Dick: You can find the show notes and sign up for our weekly email newsletter on our website, itif.org. Be sure to follow us on Twitter, Facebook, and LinkedIn too, at @ITIFdc.
