
Will Killing Section 230 Kill the Internet?

Publish Date: 2023/2/23

On with Kara Swisher


On September 28th, the Global Citizen Festival will gather thousands of people who took action to end extreme poverty. Join Post Malone, Doja Cat, Lisa, Jelly Roll, and Rauw Alejandro as they take the stage with world leaders and activists to defeat poverty, defend the planet, and demand equity. Download the Global Citizen app today and earn your spot at the festival. Learn more at globalcitizen.org.

It's on!

Hi, everyone. From New York Magazine and the Vox Media Podcast Network, this is Project Veritas' disgraced James O'Keefe, but with 100% less private plane abuse and just assholitude in general. Just kidding. This is On with Kara Swisher, and I'm Kara Swisher. And I'm Nayeema Raza.

Trump Jr. is still backing James O'Keefe, by the way. He loves that Project Veritas. Ugh, whatever. He's such a loser. I'm told there are a lot of Project Veritas people crawling around South by Southwest, where we're going to be in a couple weeks. I don't care if they catch me saying they suck. I don't care. I don't care. I don't talk to people I don't know, so fine.

Do you not? Not really. You literally have a career where you talk to everybody you don't know. I understand, but I don't like, if I say it publicly, I say it publicly. Yeah, you wear your opinions on your sleeve. That is correct. You sound very sick still. I've been sick for days. I've been sick for days. Again, that was before the last sickness. I had rotavirus, which was repulsive. And now I have this chest cold that I got from my kids. And it's really, you know, having toddlers is a...

It's a great way to lose weight and feel terrible all the time. It's just been one big Petri dish of crap. I don't know what else to say. It's just, I have not gotten COVID yet, though. That is the one thing I have not gotten. So everything else, though, come and talk to Kara. You can have it anyway. My immune system. No, I'm not saying that.

Yeah. But anyway, Kara, even on your deathbed, you're here taping the podcast. And today we're going to talk about two cases that the Supreme Court has heard this week. But you've been covering Section 230 from the beginning. From before the beginning, even. 1996 is the beginning, right?

I think it's Stratton Oakmont with Prodigy, when Prodigy got into trouble for this. Explain what Prodigy is for the young people, Kara. Prodigy was a thing that Sears and IBM put up, and I called it everything Sears knew about computing and everything IBM knew about retail. So squat. Yeah.

It was terrible. AOL came in behind it and kicked their ass, but there was also CompuServe, which was slightly better. Were these internet providers? Yes, they were very big. They were the beginning of everything. But anyway, they got into trouble for what was on their services, like what people were saying. And it's crazy because you can't control, if you're creating this worldwide information network, people say the craziest stuff. And so you can't be responsible for anything that's published on it. You're not really a publisher, you're a platform. And so this was designed...

by Congress to bring this nascent industry to the fore. It was very important, or else they would have been sued out of existence just for a comment on a board and stuff like that. Yeah, so in 1996, Congress basically said, if you're a platform, not a publisher, you effectively have immunity

And over time, that immunity has been narrowed very, very slightly, right? Slightly. I'm curious because obviously this is a pre-algorithm law for a post-algorithm world. Has your point of view on it changed over time? Like, do you think that these platforms should have more responsibility? I mean, that's a big part of your... There should be a way to make these companies liable for some things, right? How they behave, how they manage. And at the same time...

and not mire them in lawsuits. There's almost no fixing this. You know, when I think about it and think about it, there's got to be some way. My feeling is you do very stringent privacy laws and things like that, and that takes care of it. And then you watch where the money goes, how they advertise and things like that. This law cannot be removed without really removing the main infrastructure of the internet. It would collapse. It would collapse. Because all of a sudden they would have no incentive to allow anyone to say anything. Or they'd have an incentive to say everything, right? It's weird. It's weird. They have to make good faith efforts to clean up their platforms and we've got to let them. Right. Right? And so that's the problem: it's impossible to do it, but without this, it's impossible to operate. That's, I think, the interesting thing about hearing these arguments

in the Supreme Court, because it's such an intellectual exercise. They're like, well, if you look for Uzbeki rice, it's this kind of constant philosophical argument about where do you draw that line. Well, it's just not going to, this is a nonsense case. Both cases are nonsense. I'm sorry. That's one way to pitch it. I'm just saying, I just don't see the Supreme Court as the one that should be weighing in on this. If Congress wants to make changes, they need to work with consumers and the industry to figure out the best way to do this.

This is way too, this is the third rail and you can't touch it. You just can't. It's been enlightening to watch this, I believe, octogenarian lawyer for the plaintiffs, Eric Schnapper, who's having to argue algorithms and more. Yeah, he's not doing very well. He keeps being like, I don't want to answer that question because it takes me down a rabbit hole I don't want to go down, because that doesn't end well for me. This is not going to end well for him, just so you know. Like he was telling me, apparently a lot of the lawyers have been conflicted out by the tech companies. So the plaintiffs were kind of left with Schnapper anyway. Well, I'm sure the rich tech companies conflicted everyone out. That's called the Rupert Murdoch move. Yeah.

It's not the only Rupert Murdoch move, sadly. Kagan had a good point on this, where she said, every other industry has to internalize the cost of its conduct. So why is it that the tech industry gets a pass? It's a little bit unclear, but on the other hand, I mean, we're a court. We really don't know about these things. Yes, that's correct. That's correct. That's a good question. Alito and Kagan have been very funny, but they've all seemed to be in agreement. So the court just heard oral arguments in these two cases, Gonzalez v. Google and Twitter v. Taamneh.

Both involve terrorism, and families claiming that these companies, Google and Twitter respectively, are liable for the loss of family members as a result of effectively aiding and abetting terrorists. And we have a panel of experts here to discuss it with us. Evelyn Douek is a professor at Stanford Law School who's focused on regulation of online speech. Hany Farid from the UC Berkeley School of Information is a computer scientist and an expert in misinformation.

And then Jeffrey Rosen, who teaches at GW Law School and is the president and CEO of the National Constitution Center. Yeah, I'm very excited to talk to them. There are so many smart people on this topic, but this is a great trio of guests and they should illuminate us. I'll enjoy talking about the nonsense cases, Kara. It's nonsense. That's my legal opinion. That's my, I'm a very good lawyer. Nonsense. I was going to be a lawyer. That's more like a judge.

I don't think you're a lawyer. You're a judge, Kara. That's true. All right, let's take a quick break and we'll be back with Evelyn Douek, Hany Farid, and Jeffrey Rosen. This episode is brought to you by Shopify.


Evelyn, Hany, and Jeffrey, thank you so much for being here to talk Section 230 with me. It's been exciting for me for a long time, but nobody else. It's everybody's favorite subject now. Evelyn, why don't you start very quickly with what the law is and why it matters? Sure. So the law everyone's talking about, Section 230, is famous for shielding platforms from liability for content that other people post on their websites.

So Kara, if I go on Twitter and I defame you, you can sue me, but you can't sue Twitter. And that's really well established. That's the central, you know, that's the heartland of the law. But what these cases are about, especially the case yesterday, Gonzalez v. Google, is: what if YouTube starts amplifying that, recommending it through things like the Up Next algorithm? Does it then lose its Section 230 immunity for that content?

So being editorial, essentially, right? Making editorial decisions. I think it's further than that, because I think it's well accepted that some sort of editing is part of the publishing function, right? Like a publisher, the New York Times, decides where to put certain things on its front page, its homepage. That's a fairly core part of editorial functions. I think it's something more than that that the plaintiffs are arguing in that case, that they're really sort of pushing it at you, recommending it to you, saying, hey, this is content that you like, you know, here's a terrorist that you may know and want to talk to, that kind of thing.

Right. And when it was initiated, I'd like you to go back very quickly, it was sort of to make them good Samaritans to clean up stuff, right? To keep the platforms clean without being subject to lawsuits that would sink them. Right. Yeah. I guess there's two parts of the law. So the first is to make sure that they don't over-censor content, over-take things down because of risk of liability, because they may not care that much about a specific, individual post. And so, you know, it's in their interest if there's even a specter of liability to sort of take that down. But there was also this court holding that said, well, if you start taking things down, you can be assumed to be engaging in this kind of editing function, and that you have knowledge and take responsibility for what's on your site. And so this reversed that holding in Stratton Oakmont to say, no, even if you engage in content moderation, if you're a good Samaritan doing these kinds of things, you're not going to become liable.

Now, Jeffrey, let's dive into the cases. Can you describe each case that is before the Supreme Court, and how they're similar and where they differ? This is the first time a Section 230 case has come before the Supreme Court. That's right. And both cases raise the issue of whether, when the platforms host content that might lead to illegal acts but don't know that it's going to lead to particular illegal acts, they're liable or not. So the Google case involves the question of whether- Gonzalez v. Google is one of them. Let's name them so people know. Exactly. Gonzalez v. Google was filed by the family of a 23-year-old American woman, Nohemi Gonzalez. She was killed in Paris in an ISIS attack.

And the lawsuit claims that Google, which owns YouTube, violated the Anti-Terrorism Act's ban on aiding and abetting terrorism by recommending ISIS videos. And the Gonzalez case raises particularly the question of whether Section 230 of the Communications Decency Act protects Google and other internet platforms when their algorithms target users and recommend other people's content. And the central question is the one you were just discussing with Evelyn: is an algorithm a form of recommending? Is it an active act that removes Google's immunization? Or, as many of the justices suggested, is a neutral algorithm that recommends cat videos or cooking recipes in the same way that it recommends ISIS videos not a form of active recommendation?

And the other one? And the Twitter case in particular involves the Anti-Terrorism Act provision that says you're liable for, for example, aiding and abetting a bank robbery when you take certain active steps to keep the books and you know that your actions are going to help the robbery. And the big question here is whether the algorithms are that kind of active knowing or not. And all the questions of the argument focused on:

Do you have to know that the algorithm is going to promote a video that will lead to a particular act or help a particular terrorist? Or if you're simply hosting ISIS videos but don't know that there's going to be any connection to the Paris attacks, for example, does that not create liability under the act? So the two cases are very closely connected, because as Justice Barrett said in the oral argument, if the court finds that there's no liability in the Twitter case and that the promotion of the videos didn't aid and abet terrorism, then the Section 230 case goes away.

Hany, you're a computer scientist, not a lawyer. A lot of the conversation here is, as Justice Kagan puts it, about pre-algorithm law in a post-algorithm world. So should companies be more liable for, say, the creation of a thumbnail or for a recommendation list? Should there be a narrower view of 230?

Yeah, I think the issues here got a little muddled up in the Gonzalez v. Google case. So what Google was trying to argue is that it is absolutely necessary for YouTube to recommend videos. It's fundamental to their operation. But that's not true. What Google was saying is, look, if you do a Google search, we must somehow use algorithms to organize data. 100% correct. That's what a search does.

What's at issue here in the YouTube case is not doing a search. It's a recommendation algorithm which says, you've watched this video, here's another video. You've watched a video and there's a panel along the right-hand side of your browser that says, here are some other things you might like to watch. That's in order to get you to consume more. It's a business decision, correct? That's exactly right. It's a design decision. YouTube could have eliminated those features entirely...

And then these issues don't come up. So the issue to me is not recommendation algorithm or not. It's a design decision, which then has an algorithm that promotes ISIS videos and horrific content. And that, it seems to me, gets us closer to the Lemmon v. Snap case from a few years back, where Snap was found liable for designing a product that the court said did not get 230 protection, because it was a product that was designed, as the court said, to encourage people to drive fast, because it showed your speed over your video. That did not get 230 protection. So I think...

I was frustrated that I thought the issues were getting muddled up. This is not about thumbnails. It's not even really about recommendation algorithms. It's about the design of a service. Designed to make you do this. The algorithm has been demonstrated to lead you to more extreme content. I've seen it with my own kids.
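To make the distinction Hany is drawing concrete, here is a minimal sketch in Python. It is hypothetical, not YouTube's actual system; the function names, fields, and weights are invented for illustration. The structural point: search ranks against an explicit user query, while a recommender ranks by predicted engagement with no query at all.

```python
# Hypothetical illustration only -- not YouTube's actual ranking code.

def search(query: str, videos: list[dict]) -> list[dict]:
    """Rank videos by overlap with the user's explicit query terms."""
    terms = set(query.lower().split())

    def relevance(video: dict) -> int:
        return len(terms & set(video["title"].lower().split()))

    return sorted(videos, key=relevance, reverse=True)

def recommend(watch_history: list[dict], videos: list[dict]) -> list[dict]:
    """Rank videos by a made-up engagement score; no user query involved.
    Boosting topics already watched is how 'watch one, get more of the
    same' drift can arise."""
    watched_topics = {video["topic"] for video in watch_history}

    def predicted_engagement(video: dict) -> float:
        topic_boost = 2.0 if video["topic"] in watched_topics else 0.0
        return topic_boost + video["avg_watch_minutes"]

    return sorted(videos, key=predicted_engagement, reverse=True)
```

The design choice Hany is pointing at is the second function: nothing in `recommend` depends on anything the user asked for.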

My issue here is that YouTube chose to make those design decisions. And I think that doesn't give them 230 protection, because it's not about user-generated content at that point. So, Evelyn, is this a case about 230, really? The Justice Department has taken a diametrically opposed position on each case. In Gonzalez, the government is supporting the plaintiffs, saying that Google should be liable in some way. The algorithm funnels content to users. In Taamneh, the Justice Department sided with the social media company, saying Twitter should not be liable.

Explain the difference. So in Taamneh, they're saying that this couldn't constitute aiding and abetting liability under the Anti-Terrorism Act. But they are saying in Gonzalez that the platform's role in recommending content at some point might pierce Section 230 immunity. Now, the question in Gonzalez could be potentially broader than the Anti-Terrorism Act requires.

So this might be, you know, the question is, if you lose immunity, then you go back down to court and you litigate the merits of an individual case. And so it's not diametrically opposed, in the sense that all that Gonzalez says is, can we get in through the door to get to court to sue these platforms? And then Taamneh is, well, can we sue these platforms in this particular case? Do we have enough to show liability under the Anti-Terrorism Act in this particular case? And the Justice Department's answer in that case is no. No, not in that case. Okay.

So the case today, Twitter v. Taamneh, was specifically about: is there underlying liability? So can you say that Twitter has aided and abetted terrorism by not doing enough to take ISIS content down off its platform? If I could just sort of stop on this for a second, because I think this is really important. One thing that's really remarkable about these cases is how attenuated the causal link is. So in both cases, there's no evidence that Google or Twitter had any particular role in encouraging or committing the two specific attacks in these cases, or that any of the attackers saw a particular piece of content or were radicalized or recruited through the platforms.

It's just some sort of very generalized idea that, oh, well, come on, we all know platforms have ISIS content on them, and they didn't get absolutely all of the needles in the haystack, and so therefore they have aided and abetted terrorism. So that was the question today. Now, even if...

the plaintiffs win on that and say, yes, that is aiding and abetting under the Anti-Terrorism Act, they can still lose if Google wins its 230 case. Because the whole point of Section 230 is to say, look, you can have liability. There can be a defamation cause of action. There can be an anti-terrorism civil liability cause of action. But 230 says it doesn't matter. The platform is immune.

One thing I would quibble with in what Hany said, though, is this distinction between algorithms, that YouTube could just turn off all recommendations or not exist without any kind of recommendation. And I just don't think that that's exactly what this case is like. It's not the same to say that the Up Next or a newsfeed algorithm is the same as what Hany was talking about in the Snap case, where there was a specific speed filter that was added to content, that was saying, you're going really fast, go faster, go faster, that sort of encouraged a specific kind of dangerous behavior. I think there was sort of interest from the court in saying, okay, you're right. There has to be a point at which a platform has done so much that they are really shoving this down your throat.

But the justices were literally asking the lawyers: give us a line. Give us a line between a newsfeed algorithm and some sort of- How different would that be? Every single news organization does that now. Read this, go here next. It's just more intense on a Google or a Twitter. So I think that's the point. I mean, I think they're saying, look, this is what is inherent to publishing. The whole point of Section 230 is that you are immune for doing what every single news organization does, which is saying, here's some content that you might like.

The point of 230 was to immunize platforms from that. Now, the justices are saying, well, surely there's a point where platforms are doing it so intensely, so targetedly, so persistently. Yeah, and they know who you are persistently. But they were asking the lawyers to say, hey, give us a line. Show us where the platforms cross the line. And the lawyers just couldn't give them a line. And I think it really scared the justices. Yeah. Right. There is no line. Hany, can you respond to that?

Yeah, I don't think it's fair to equate what the New York Times does by highlighting articles with what YouTube does. YouTube is scraping up, vacuuming up every morsel of my personal data, looking at my viewing habits, other people's viewing habits, and then making very targeted recommendations regardless of whether those recommendations are good, bad, ugly, or illegal.

And I don't think you can say that about the New York Times. When I go to the New York Times and, Kara, you go to the New York Times, we see the same information. And you can't say that about YouTube. It's not targeted specifically to you. Exactly. In that regard. That's a fair point. Jeffrey, there's an obvious ideological split between conservatives and liberals on the court. But with Section 230, it's not there. They all seem to be saying the same thing. I was sort of shocked. And we'll go into the individual justices. But what is happening on the court right now? You're absolutely right. There is a fascinating broad split, but it was not present in this case. The split arose in a series of opinions by Justice Thomas, where he expressed- Yes, nonsensical, but go ahead. Well, his concern was that

230 might allow platforms to discriminate on the basis of content and ideology. And he's sympathetic to laws passed by Texas and Florida, for example, that require the platforms to obey First Amendment standards and not to discriminate on the basis of content. And essentially, he suggested that the platforms should be treated like common carriers with an obligation, even though they're not governmental bodies, to open their platforms to all comers without speech discrimination.

That split was not present in this case, where there was broad consensus among the justices that converged around the traditional First Amendment standard, which says that to be illegal, content has to be intended to and likely to cause imminent violence or lawless action. Evelyn rightly focused on this question of attenuation as the central issue in both cases. And generally, the First Amendment standard accepted by liberals and conservatives, first introduced by Justice Brandeis in the Whitney case and then embraced by the Supreme Court in the Brandenburg case, says that unless you both intend lawless action and your speech is likely to cause it, then you can't be liable. And although Section 230 and the Anti-Terrorism Act pose that question differently, there was discomfort on the part of liberal and conservative justices here

for holding platforms accountable for speech they neither intended nor could have possibly anticipated would lead to lawless action. And justices on both sides, from Justice Kagan to Justice Kavanaugh, said this could kill the internet. It would just result in business chaos. It would also completely obviate Congress's purposes in passing Section 230, which is why it was really interesting that both Justices Kavanaugh and Kagan said

This is for Congress to decide. Justice Kagan said, you know, we're not nine Internet experts here. We're really not very good at this. Yeah, well, let's listen to that. Justice Kagan made that great point. I mean, we're a court. We really don't know about these things. You know, these are not like the nine greatest experts on the Internet. LAUGHTER

So she should have a career in stand-up. She was very funny yesterday. But she was making, I think, the salient point: why are we deciding this? And so, Hany, should this be decided by the court? They seem to be backing away from this as fast as possible, which makes me question why they took it in the first place. Yeah.

I agree with your interpretation. I think both sides scared them. Not doing anything scared them and doing something scared them. Obviously, I do think this is better handled by Congress. But the problem is when you go to Capitol Hill, you hear very different things from either side of the aisle. The right hates the technology sector because they have bought into this false narrative.

that content moderation is anti-conservative. It is not. Conservative voices dominate social media. The data overwhelmingly suggests that. But that is the narrative they are pushing.

The left is saying that tech is not doing enough to mitigate the harms from technology. And if you disagree on the nature of the problem, you fundamentally disagree on the solution. One says less moderation. One says more moderation. So I'm not particularly hopeful that Congress is going to move in the right direction. Having said that, we are seeing some good movement from the EU and from the UK and from Australia. And this is another thing for us to think about.

we are a country of 350 million people, about 5% of the world's population. We need to think very carefully about how we start to litigate and regulate an internet that is going to affect the 95% of people who don't live in this country. And I don't know- That is a very good point.

That I have confidence in Congress to do that. Yeah, I just want to jump in on this about the politics and, you know, finding agreement. And, you know, one of the areas where there might be a chance of finding agreement is terrorism. And, of course, that's sort of looming large in the background of these cases. And perhaps one of the reasons why these cases were the ones that got picked up, basically,

because there's sort of a question of how much these are specifically terrorism-exceptional kinds of cases. And I have to say, one of the things that I'm really worried about is the court, and potentially Congress, having a big blind spot about sort of the First Amendment and the free speech issues at stake here with respect to terrorism, where they have in the past as well, creating a rule that really incentivizes platforms to sort of over-moderate and be extremely risk-averse in this context,

that has disparate impact on marginalized communities. We've seen this a lot, for example, in Gaza and Palestine, where a lot of Arabic content just goes missing or evidence of war crimes from Syria because platforms are so scared about potential liability. I want to be really clear. There might be a point at which platforms are so negligent and so willfully blind to the problems on their platforms that they should absolutely be liable.

But we're talking about a case where platforms are – they have programs and there might be questions about whether these programs are sufficient. But holding them liable for trying to get, as I said before, every single needle in the haystack might create a draconian regime with really problematic disparate impacts. The opposite being that they're too loose. Right. And then they're too loose. It sort of drags them both ways. Right.

I want to talk about how the justices are reacting in each case. And again, I've been surprised by how cooperative they've been. They're actually acting like justices. It's kind of cool to remember when that was a thing. Let's start with Gonzalez. What are the themes emerging from each justice? It really was striking, the level of agreement. Justice Thomas kicked things off by saying that if an algorithm recommends ISIS videos based on a user's history and also recommends cooking videos, then how can it be held responsible? And several justices echoed that concern. Justice Gorsuch was focused, as always, on the text of the statute, and in both

cases, and Gonzalez in particular, he's skeptical about the Ninth Circuit's test, which is based on neutral tools. And he said that wasn't in the language of the statute. But by going back to the language, he felt that it would be a firmer basis for avoiding liability. Justice Jackson is maybe most sympathetic to the Gonzalez family. She said that Congress was obviously trying to protect internet platforms that are screening offensive materials, and here the argument is that Section 230 protects the promotion of offensive materials. How is that conceptually consistent with what Congress intended? Although in other parts of the argument, she was concerned about the attenuation question. Justice Kagan, as we heard, said the court might not

be well suited to solve the question, and expressed concern about the business model for the internet. Kagan was echoed by Kavanaugh, who pointed out that Congress had this broad language in Section 230, and all courts had interpreted it to provide protection for conduct like YouTube's. And why should the law be totally changed here? That was a really significant fact for all the justices, that all the precedent basically supported this. Justice Barrett zeroed in on the question of how the whole case might go away if the Twitter case ended up not finding liability.

And that was the gist of the nine justices. As you say, it was really heartening to hear the justices act like a court. And it may be because there really is a broad consensus about the core First Amendment tradition, really requiring intent and likelihood to cause imminent lawless action, and also because the precedents of the lower courts had been relatively in the same place. Always the same. Hany? Yeah, I was expecting to see more partisanship around this anti-conservative false narrative, and I was heartened not to see that. I was disappointed that

I think some of the technical issues, in my role as the computer scientist in this conversation, were getting muddled up. And I think Google did a very good job of muddying the waters in terms of equating a Google search with a YouTube recommendation, saying, we have no choice

but to organize the world's information. And I don't think the judges pushed back on that or really understood it in any real way. And I would have liked to have seen more clarity on what exactly the underlying issues are here. And I think Kagan got it right: these are not the nine world's experts on the internet. Right. But you know, Google, that's their whole job, because they're God. So Evelyn, anything that struck you? Yeah. I mean,

Hany's right that the anti-conservative bias narrative didn't come up. I wouldn't get too excited or hopeful about that. Sorry, Hany. That's partly because those cases are coming. They're coming likely in the next term with the NetChoice cases arising out of Texas and Florida. And I wonder if, you know, I was surprised that Justice Thomas and Justice Alito sort of seemed to be more sympathetic to the platforms' point of view in these cases than I expected. And I wonder if that's because they know that they're going to have a second bite at the apple next term. Yeah.

The other surprise, I think, was that Justice Jackson in Gonzalez really seemed to be interested in trying to find ways to narrow Section 230. Obviously, we had no idea what Justice Jackson's views were before this argument. And she was really going back to where you started, Kara, about the good Samaritan nature of this provision. She's saying, isn't this intended to make sure that platforms take content down? And if they are not...

acting as good Samaritans, shouldn't they lose 230 immunity? So she seemed really interested in that reading. And both Jackson and Sotomayor today as well, in the Taamneh case, seemed really concerned about the idea that the platforms might be leaving terrorist content up, and at what point they should get

liability for that as well. So that was interesting. But I mean, the politics around these issues in general right now are so weird. It is one of the few areas in sort of constitutional law right now, the First Amendment and then the statutory issues where, you know, it's super high stakes, but it doesn't

fall neatly in a left-right divide, because it cuts both ways. Absolutely. So let's talk about the lawyers too, because I got to say that was not impressive in the case for the plaintiffs. Jeffrey, you get Eric Schnapper, the ever-sighing Eric Schnapper. How do you assess his performance here? The justices seemed impatient with him. He would repeatedly apologize for not- He's doing both cases. Is that correct? That's right. Yes, he is. The

distinguished legal scholar and advocate, very great background. And he apologized several times for not squarely answering the question, but in particular because he favored multi-part tests and refused to give a categorical answer for what standard, in both cases, the court could embrace for recognizing liability. Both textualist justices like Gorsuch and more pragmatic justices like Kagan expressed impatience. So, you know, it's a complicated situation

that he's arguing for, not a simple one. And at one point, interestingly, I think it was Justice Kagan who said, why isn't this a jury question? I mean, if you can't identify a clear standard, why not send this back to juries case by case? So he was defending his position well in the sense of arguing for a different approach to 230 that wasn't categorical, but the justices just didn't seem to be buying it. Yeah, "I don't know" doesn't seem to be a very good answer for these people.

That's my legal take. But Evelyn, what about the defendants? First, Lisa Blatt, who's representing Google, and Seth Waxman, who was representing Twitter. Yeah, can I just say briefly on the plaintiffs' lawyer, it was actually pretty disheartening, because it was shockingly bad, like stunningly bad. It's not that this is a hard case and he struggled with some tricky questions. These were totally predictable

questions, like, how do you differentiate your case from a search engine? And he literally went silent and sighed. And so, I don't know. Stunningly bad. Yeah. Stunningly bad. Stunningly. And it's kind of depressing, you know, how many academic articles have been written about this, how many thousands of hours of podcasts have been made in the run-up to these arguments, and it sort of all came down to these stunned silences, which is a little depressing, to see the sausage get made in that way. On the platform lawyers' side, I mean, these are experienced, established, well-known, very competent professional advocates. They were very strong in making many of their arguments. Obviously,

nothing surprising there. I was a little surprised at some of the concessions that Lisa Blatt made on Google's behalf yesterday, in terms of saying, like, if you had a featured video, would that make you lose Section 230 immunity? That was a bigger concession, and I'm not sure how her clients would have felt about it. And then today on the platform side, Seth Waxman, you know, I think –

I was a little surprised that he didn't talk more about the speech implications here. You know, there was a lot of discussion about, is this like a bank? If a terrorist has an account at a bank, are they aiding and abetting? And I think it's really important that we talk about the disparate impact, about how speech intermediaries are different from other kinds of intermediaries. You know, speech is special, I think we should recognize that, and it has First Amendment protection for a reason.

And so we should be very cognizant of the potential harms that come from being overly censorious. Hany, anything to add? How do you view the court discussing technology? Often these are spectacles, like Zuckerberg in the House testimony in 2018.

They're trying to be dignified here. They're trying their hardest. Yeah, but they weren't. I did find myself many times screaming at my computer screen at Schnapper's responses. I found the whole thumbnail thing, which he seemed to put a big stake in the ground for, completely baffling: that he was arguing that because Google, or YouTube rather, generates a thumbnail of an uploaded video, they have now created the content. The judges were confused. I was confused. The defense's lawyers were confused. And it was a bizarre argument.

I also was puzzled as to how he couldn't explain the most basic, fundamental differences between a Google search and a YouTube recommendation. And I don't think he did his clients justice. Okay. Now I want a prediction from each of you on the outcome of each case. Evelyn, Twitter v. Taamneh, and then Gonzalez v. Google. So today's argument was really confusing for me.

I went into it thinking there is no way that they could find that this was sufficient to constitute aiding and abetting, because the knowledge was just too generalized. But they were pushing Twitter's lawyer extremely hard, so for the first half of the argument, I was thinking, oh no, the platforms are really going to lose this case. And then it sort of flipped back around on the other side when they were questioning Schnapper. So I mean, I'm not a betting person. I think I'm definitely more worried than I was that they will find that this is sufficient for aiding and abetting liability under the Anti-Terrorism Act. And what about Gonzalez? Yeah, I think, so I can't see them

just finding liability in this case. There's a couple of different ways that it comes out. If they side with Twitter in the Twitter v. Taamneh case, then, you know, my speculation is they will dismiss the Gonzalez case as improvidently granted, because they don't need to decide it if there's no underlying liability. And one of the reasons why I think they might do that is I just got the sense that they were pretty scared about drawing a line that's gonna mess everything up. I think that they were really desperately asking the plaintiffs' lawyer, give us a line, give us a rule that we can use in our judgments to sort of make sure that platforms aren't liable for absolutely everything that they recommend, but also are liable for some of the stuff that they recommend. And no one came up with a really good rule that they could sort of copy-paste into their judgments. And so I think that they are either gonna leave the status quo and say, this is not for us, it's for Congress,

or say, we're going to decide this another day when we have a better set of facts. Okay. Jeffrey? I understood the pushback in Taamneh, but in the end, I think they will not find liability. So many of the justices of different perspectives thought that there had to be some connection to knowledge of a particular person or action before you could find liability. And since neither was here, and because they're concerned about the consequences of liability, I don't think they'll find it. And then I agree there won't be liability in the Google case. It could be dismissed if the Taamneh case is clear, or on other grounds. But I think it'll be a rare example of bipartisan consensus on the U.S. Supreme Court. Hany? I, of course, don't have the legal credentials that my colleagues here do. What do you think should happen? You're smart. Yeah. So I was surprised

to hear the arguments in Twitter v. Taamneh. Now I actually think there's a chance that they rule for the plaintiffs here, and Twitter is going to lose that one. So I'm with everyone on that. I'm with Evelyn and Jeffrey that I don't think Gonzalez is going to win. And I think everybody knew this was a weak case going in. But I do think the justices are hungry to do something, and I think they may open the door

in their ruling for a future case, to welcome maybe a case that is better on the facts, that would have more guardrails and allow them to rule in a more narrow way, to rein in some of the over-interpretation of 230 that some of us think has happened over the last few decades. That's a good point. Jeffrey or Evelyn, which one will it be? Is there one coming? So I mean, I think

that there are real, legitimate questions about the breadth of 230 as the lower courts have interpreted it. I think, you know, Hany talked about the Snapchat case earlier, which is a good example of where 230 immunity was pierced. And I think, you know, there are other really good questions around really bad-actor platforms that know all of this stuff is going on and don't take action. Teen mental health, for example. Yeah, I mean, I think, you know, there are going to be causal chain problems in some of those cases. But I do think that Hany's absolutely right. The court took these cases because there's hunger. Everyone's talking about Section 230. We should be talking about Section 230. But I think that these weren't the fact sets that they thought. It'll be interesting to see if they come back and have another bite at it soon. Jeffrey, is there another case? Well, the ones we've talked about from

Florida and Texas, which as Evelyn said, the court will take next year, involve a different question about the scope of 230, but one that the court is likely to divide over. And it's possible that that could have implications for how liability is applied in other cases too. But that's going to be absolutely fascinating and so squarely poses the conflict about whether or not the platform should be treated as common carriers and obey First Amendment standards. And in some ways, those will even be more

constitutionally significant than these cases. All right. Is there any other industry that gets blanket immunity protections the way social media companies do? Everybody gets sued except them. Is there any sort of parallel here that any of you can think of? No, there isn't. I mean, I'm not the legal scholar here, but we've heard this. And I think even one of the justices said this during the Gonzalez hearing: why does the tech industry get so much protection? Every other industry has to internalize these risks and deal with it.

And I don't know of any other industry that has this type of almost blanket immunity. I mean, you know, the tech industry obviously gets sued all the time. But I do think that there – I mean, this is a somewhat exceptional statute provided for what sort of Congress recognized at the time as an exceptional situation, which is, you know –

these platforms have become the custodians of all of our speech. And I think the important thing to remember about Section 230 is, yes, it provides platforms immunity, but it also provides users immunity. And the point of that platform immunity is to protect the speech of users. I'm sounding much more libertarian on this podcast than I intended to.

I have to say, you know, I really do think... You've lived in Silicon Valley too long. Yes, six months. That's all it took. It's something in the water. You can be libertarian light, which is most of them, honestly. I think content moderation is extremely important. I just get nervous about government rules that incentivize over-moderation, such that platforms that don't care about sort of marginalized communities or disparate impacts end up taking down the speech of people who don't have the same resources. We have seen this before with the FOSTA amendments as well. Can I follow up on that, Kara? So Evelyn raises an absolutely valid point that we do have to be careful about over-moderation. I will point out, however, that when we passed the DMCA, the Digital Millennium Copyright Act,

these same claims were being made by the tech companies, that you are going to force us to over-moderate to avoid liability. And it wasn't true. And look, the DMCA is not perfect, but it has largely been fairly effective. And it created a healthy online ecosystem that has now allowed both creators and producers to monetize music and movies and art. And so when you have rules of the road, they can actually be very, very good at creating a healthier online ecosystem. And since the companies are incentivized to keep content up, that's the financial side, I think that on balance, this might actually work out, even if there is more liability with a reduction of 230 protection. I would just say that industries that are immunized from suits

include lawyers. They're the ones who are most protected. And all the privileges that the courts have protected against ineffective assistance of counsel claims or the lawyer-client privilege are designed to protect deliberative privilege and First Amendment values. The same with executive privilege, when you can't sue the executive to get the deliberation so that you can get advice. So this immunity, as Evelyn says, for the platforms is

to achieve a First Amendment value, which is deliberation and not over-moderating. And it's heartening, despite the really tough questions that are on the horizon involving the scope of the First Amendment,

to see a consensus that 230 did achieve its purpose. And there's a reason that the U.S. has a freer free-speech platform than Europe, for example, which lacks this immunity, and the consequences of abandoning it might be severe. So let's just pause during this brief moment of agreement, not to sing Kumbaya, but to say it's great that, thinking about this hard, the justices may be inclined to think that 230 isn't so bad after all. So my last question, because you led me perfectly to it. There are two ways to go here, you know, in this world where these social media companies are so powerful.

There's one way where Google, Twitter, Meta, et cetera, get their ships in order without legislative or judicial action, because they should be in charge of all this stuff because they were duly elected by nobody. Or, as Kagan specifically called out, Congress acts, which is our elected officials, as damaged as they may be. Two things. One, who should be running the show here? And two, let's imagine a world with rational internet regulations.

What would those be, and what would the internet look like? Hany, you start with the first one. And then Jeffrey and Evelyn, you can answer the second one. There is no evidence that the technology companies can self-regulate. The last 25 years have taught us this. And not only that, the business model that has led to the Googles and the Facebooks and the TikToks of the world continues to be the dominant business model of the internet, which is engagement-driven, ad-driven, outrage-driven. And that business model is the underlying root poison, I would argue. I don't think we can sit around and wait for the companies to do better. I don't think they will. There is no evidence of it.

Despite the fact that I don't want the regulators putting down rules of the road, I think there is no other choice here. Ideally, by the way, the consumers would have made the choice. We would have said, okay, we don't like the way you're doing business, so we're going to go elsewhere. But in addition to phenomenal wealth, they have virtual monopolies. And so we, as the consumers, don't even have choices. And that means capitalism won't work here, and so we need the regulators to step in. All right. Jeffrey, what –

Congress should act. My feeling is Congress should have done privacy and antitrust legislation and taken care of this in a whole different way. But what do you think about that part? I guess the question first is...

will it act and what should it do? And will it? Probably not because there's not consensus as we've been discussing with conservatives more concerned about content discrimination for better or for worse and liberals more concerned about hate speech and harmful conduct. I find it hard to imagine what a national free speech regulation would look like. And in fact, I can't imagine one that's consistent with

First Amendment values short of imposing them, which there's an argument for not doing at the federal level, because companies need some play in the joints to take down somewhat more offensive speech than the First Amendment protects, while broadly allowing a thousand flowers to bloom. One interesting consequence of this argument is to make me think, you know, the companies, although it's messy and there's lots to object to, may be better than the alternatives of either really sweepingly imposing a First Amendment standard at the federal level, or allowing a great deal more moderation than would be consistent with First Amendment values. Evelyn, you get the last word. 230 looks like it's going to live to fight another day. Yeah.

There is no rational world where the best way to make tech policy is by nine older justices weighing in on a case every 20-something years to sort of catch up on what's been going on. That is not how this should happen. Absolutely, Congress, if it could get its act together, or it could pass some legislation enabling a digital agency that could be more nimble and gather facts and understanding with which to make policy that's more finely attuned to the problem. Then we could talk... Absolutely. Kara, you mentioned privacy and antitrust. That would be 100% the place where I would start. I would also really start on transparency legislation and data access. What are these platforms doing? Are they doing what they say they're doing? Let's get researchers in... Exactly.

And that's where I'd start, because you can't solve problems that you don't understand. And I think that that's step one. And the only other thing, you know, before we close, as this has been a very sort of parochial conversation, but there are other legislatures and Europe is taking action. The Digital Services Act is coming. And so these platforms are going to have to change and adjust anyway.

because they're going to be regulated, you know, no matter what the Supreme Court does. You know, that's a thing I say to every U.S. legislator. I'm like, you're letting Margrethe Vestager, who likes to knit, run the internet, just FYI. And if it's not her, it's someone else in, like, I don't know, Australia. Yeah, exactly. Those foreigners. Oh, God, the Australians. That's a terrible outcome. Just dunking on you. Well, you know what? They do a lot better than we do. I really appreciate all three of you. It's been a very thoughtful discussion.

Jeffrey, Hany, Evelyn, thank you so much. And we'll see where it goes. Thank you. Thank you so much. Thanks, Kara. If only we were with Margrethe Vestager. Yes, she's great. By the way, I don't mind her running the internet. I'm perfectly happy. Exactly. She's a badass and she can run anything she wants. That's my feeling on her.

I really appreciated how Evelyn was taking this libertarian stance. But she's right. She's not generally like that if you read her writing and her papers. Well, it's a complex issue. I mean, I think that was the point. And so I think what's really nice, what I've seen here is the justices really coalesce in some way that they understand the importance of this because they should not be meddling here. And they knew it and they said it, thank God. Well,

it's unclear. Like, they shouldn't be meddling here, but they should have a philosophy around it. And it might not be with these cases, but there is something that's happening with this argument around free speech and internet infrastructure. Maybe that is true, but this is so fundamentally a congressional thing. That's crazy. You know, having Biden putting out statements about 230. Trump just did it off the top of his head.

It needs to be considered by our elected congressional officials how we want to do this. I know, but so little happens there, Kara. It's not true. They passed 230. Guess what? Yeah, 1996. Very different Congress. They did a great job. I know they can do it again. I'm pulling for them. Kara Swisher, Congress's main cheerleader. They did a good job then. And I haven't always supported 230. There's got to be some ways of them being liable for some things. That's through privacy, data, antitrust. It'll work. Go right to the business plan. I thought Hany's point at the beginning was very interesting, about Google kind of hiding behind this, we have to do X. That you don't have to recommend, you do have to rank and search, that's what he was saying, kind of. That to me raised two really interesting questions. One is,

How should we think about where that line gets drawn? And it is an interesting philosophical argument to say, okay, if there were no Section 230, how would these companies behave?

They wouldn't exist. They wouldn't exist, or they would exist in a very narrow way, or they would have approvals, right? Like everything would need 15 minutes to appear. Everything would be slower. It would be too expensive. It would be too expensive. The whole thing would fall apart. You would not have your TikTok or whatever. But yeah, when you tell these companies to fix something, when you tell these companies to fix child pornography, for example, they can figure it out. So it is a little bit chicken and egg. Let me just tell you, hit it where it hurts. Business plan.

Privacy, data, surveillance. Those things have nothing to do with free speech. They have nothing to do with 230 here and everybody chit-chatting away on the internet. You can hit them where it hurts. They need to be liable in those ways. So antitrust, privacy, et cetera. Yeah. Transparency. Transparency.

The other thing that I think is remarkable here, and this was the kind of Mark Zuckerberg 2018 Congress thing, is how I actually thought the court did a good job on the technical elements, maybe better than Schnapper did, right? They did a good job because they didn't claim to know what they don't know. Exactly. That's a very powerful thing to say, I don't know. We need more young people to be in government and to be kind of grokking these issues and thinking about this. Have you looked at the Supreme Court lately?

I know. I know. They're meeting at 5:30 for dinner. Anyway, let's go on. They did a good job here. And as Jeffrey Rosen said, it's heartening to see the court acting like a court. I agree. I was very upset by these cases because I think they're so stupid. Thank you.

All right, Judge Swisher, read us out. Today's show was produced by Nayeema Raza, Blakeney Schick, Cristian Castro Rossel, and Rafaela Siewert. Rick Kwan engineered this episode. Our theme music is by Trackademics. If you're already following the show, you may proceed. If not, you're out of order.

Go wherever you listen to podcasts, search for On with Kara Swisher, and hit follow. I'd be such a good judge. Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast Network, and us. We'll be back on Monday with more.