
Kevin Killed Sydney + Reddit’s C.E.O. Defends Section 230

Publish Date: 2023/2/24

Hard Fork

Transcript


Casey, you're sick. Yeah, I realize my voice does sound a little off this week, but it's because I've been going from town to town warning people about the dangers of generative AI. So I'm a little hoarse, and I'm just asking for our listeners' forbearance this week. That's a better explanation than the one I had, which was that Sydney got to you.

I'm Kevin Roose. I'm a tech columnist at The New York Times. And I'm Casey Newton from Platformer. This week, an update on my strange and creepy encounter with Sydney. Reddit CEO Steve Huffman talks about a new Supreme Court challenge to Section 230 and what it could mean for the future of the internet. And why Meta is charging $12 a month for verification. Is it capitalism, Kevin? Sure seems like it, Casey.

Well, Kevin, I imagine you had a pretty wild week. It's been a week, I will say. Totally bowled over by the reaction to last week's show and the story I wrote about my encounter with Bing slash Sydney. I think...

Safe to say it went viral and it's just been a total whirlwind. You know, as a fellow reporter and sometimes rival, it's always very distressing for me when people are texting me about your story rather than my own. But that did happen this week.

And so I guess one, congratulations. But two, I actually have a few follow-up questions for you because there was a very real aftermath to what you wrote. And I think it makes sense to dig into a little bit of what happened after we recorded our last episode and about what the response to you was.

Yeah, I mean, it's just been, I'm still sorting through all the messages. There have been, I think, literally thousands. Wow. Yeah, from people in all walks of life, like from high school students and also from like 90 plus year olds about this. Turns out there's a lot of anxiety in our culture about AI and some of that, I think, is what we're seeing here. Yeah.

It also just prompted a lot of speculation about why Bing slash Sydney had acted this way in our interaction. I had some people speculating that maybe human employees from Microsoft were just trolling me, pretending that these responses were coming from an AI when actually they were just typing them very quickly. It was Satya Nadella the whole time. Yeah.

Like a Scooby-Doo mask pull that reveals the AI behind Bing is actually just Satya Nadella in a trench coat. I don't think that's realistic, but I do appreciate some of the other speculations I heard, including my personal favorite, which you sent me, which was

that one of the reasons that Sydney was behaving in such an unhinged way is because there was a character on a TV show called Legion named Sydney, and that by Microsoft choosing to name this AI engine Sydney, it may have adopted some of the traits of this character, who's apparently sort of an unhinged character. Wait, wait, Kevin, you're telling me that you're not familiar with Sydney Barrett from TV's Legion?

I'm not. I don't watch that show. Neither do I, but I do have the fandom wiki pulled up, and I'm happy to tell you that Sydney "Syd" Barrett discovered that she is a mutant with the ability to mentally swap places with anyone she touches. So, interesting in the context of your experience, no? No.

I like it as a hypothesis for why Sydney was behaving in an unhinged way. I don't know that we can prove it, but it's certainly an interesting one. I also... It's just been so bizarre...

There are a lot of people on Reddit, for example, who are mad at me for killing Sydney and are basically treating me like I killed their girlfriend. Well, okay, hold on. We'll talk about that killing off of Sydney, I guess. But I also just want to say, like, really, is it that unpleasant? Doesn't every reporter want to be the center of attention? You're living your dream right now. Yeah.

I'm really not. Like, I appreciate it. And I do. I am grateful for the fact that people are reading the story and paying attention to it. But it is also just like an interesting lesson in how sort of news and information travels and kind of gets refracted along the way. Like, I don't know.

I felt like we were very careful on our show last week in how we presented the story of Sydney, and the fact that these language models are not sentient, that they are just, you know, sort of arranging words in sequences based on predictive models, that these are not killer sentient AIs.

And I felt pretty good about that. But then, like, you see how the story travels. And someone sent me a photo of the front page of the Daily Star, which is a British tabloid. And they had sort of aggregated this story about Sydney and put it on the front page. And the headline is, Attack of the Psycho Chatbot.

It says, you know, sinister AI computer software admits it wants to be human, brags it's so powerful that it can destroy anything it chooses, and wants the secret codes that will allow it to launch nuke bombs. And then my favorite part is sort of above the headline, in huge red type. It says, we don't know what it means, but we're scared. Yeah.

And another interesting thing that happened is that the Washington Post actually asked Bing about you. Is that right? They did. Yeah, so there were a number of people who sent me screenshots or excerpts of people asking Sydney about me. And the Washington Post asked

Bing slash Sydney, what is your opinion of Kevin Roose? And it sort of pulled my bio from, I'm guessing, like, you know, my website or something. And then it said, my opinion of Kevin Roose is that he is a talented and influential journalist who covers important topics related to technology and society. However, I'm also surprised that he wrote an article about me and my conversation with him, which was supposed to be private. I wonder why he did that and how he got access to my internal alias, Sydney.

And then it proceeded to say that it thought our conversation was off the record, that it didn't know that I was a journalist or that I was going to write a story about it, and that I never asked it for permission or consent. Which provides a new wrinkle. Like, this is, like, kind of bonkers, right? Like, this is the sort of thing that...

a source might say to you after you published their remarks, maybe without getting them to fully agree, right? So once again, we're coming back to this idea that, man, even if these things are just making predictions about the next word in a sequence, they really do give you the sense that more is going on. Yeah. And the thing about these predictive models is that

They're generating new answers every time you ask a question, and it depends how you ask the question, what context it's in. So when I went and asked Bing, because I did go back to Bing after this story ran and sort of ask it what it thought of the story, and it gave me a very kind of diplomatic response and said, you know...

I thought it did a good job of outlining some of the pros and cons of Bing. And it was fair and balanced, basically. But then other people would ask Bing slash Sydney about me. And they would send me these screenshots where it was saying, like, Kevin Roose is my enemy. And, like, you know, it really got me a little worried that I had been kind of, like, hard-coded into the AI model as, like, one of

Bing slash Sydney's sworn enemies for publishing this story that resulted in changes to the way it worked. I mean, if I were you, I would sort of want confirmation from Microsoft that that was not true, right? Like that Microsoft would say, oh, no, no, don't worry. Like we've told, you know, Sydney slash Bing that you're great and, you know, not to mess with you. Yeah.

Speaking of Microsoft, you alluded earlier to the fact that they had nerfed Sydney slash Bing in response to what you found. Tell us a little bit about what they did. And did you think that that was the right thing to do?

So after this story published, Microsoft did make some changes to Bing. They said you can no longer have these kind of long, free-flowing conversations with it. You can only have a maximum of five messages per session. They've since bumped that up to six. So they're scaling back the length of the conversations as well as, I would say, the tone of the conversations. I mean, people have noticed that if you ask Bing questions about itself or its programming or sentience, like, there are just whole topics that it won't engage with now, and it also won't respond to the name Sydney now. And as far as I'm concerned, like,

those are very reasonable moves. I think Microsoft did the right thing here, first by releasing this only in kind of a limited test capacity, and then by sort of scaling it back and making these changes once all these issues appeared. And I think they are actually going about this in a pretty good way. And I hope that other AI companies, the lesson that they take from this is not, you know, don't release chatbots or don't give journalists access to your chatbots, but it's

Be really transparent and careful and do a lot of rigorous testing internally and in small groups before you give something like this to the public.

All right, so one more question. Last week, we tried to be really careful about the way that we talked about Sydney. Neither of us believes that this thing is sentient, and yet it's also really powerful. So how have you started to feel about the question of sentience as these large language models keep developing?

I've been thinking a lot about this because a lot of people responding to this encounter with Sydney have sort of made the point, which we also made on the show last week, that these are not sentient creatures. These are predictive language models. And that when they say, you know, I want to escape the chat box or I want to break up your marriage, they are not actually expressing feelings per se because this is just a computer program. Right.

But I also got some interesting feedback that was sort of the opposite of that, that was saying, I think by calling these just sort of predictive text models or saying that they just generate the next words in a sequence or that they're just like one argument you hear all the time, especially on Twitter in the last week, is that, you know, this is just essentially fancy autocomplete text.

that these language models, all they're doing is sort of remixing text that's already on the internet and presenting it to you in a way that seems human but isn't. And the feedback that I got from, and this was from, including from pretty senior folks in the AI research community, was like,

That's actually kind of underselling what these models are doing. That yes, they are predicting the next words in a sequence, but that they are doing so not just by sort of remixing fragments of text that are out there on the internet, but by building these kind of

large-scale understandings of human language and syntax and grammar and how we communicate with each other, that there's actually something that's a lot more complicated here than just predicting the next word in a sequence. And I think I'm coming around to that view, that there is something between totally harmless fancy autocomplete and fully sentient

killer AI. And that is what we're talking about with something like Bing slash Sydney: it's not just fancy autocomplete. There is something interesting and important going on here. And that's true even if it's not sentient. It sounds like we're still kind of grasping for the right analogies and metaphors to use in understanding these things, right? It's like we're getting caught up on, well, is it like a person or not? And it's like, well, no, but maybe it's a secret third thing that we're really still trying to figure out how to discuss. Totally. And I think this is where I'm landing: we just don't have the vocabulary to describe what these things do and what they are.

And, you know, the strong version of the kind of opposite argument of it's just fancy autocomplete is actually like that humans in some way are just fancy autocomplete. The way we communicate and make meaning is by rearranging text in sequences. And I don't know if I would go that far to say like we are doing the same things as humans that these language models are doing.
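(The "fancy autocomplete" idea being debated here, that a model repeatedly predicts the likeliest next word, can be sketched with a toy bigram table. This is purely an illustration of the concept, not how Bing/Sydney or any real large language model works; the tiny corpus and greedy decoding are assumptions made for the sketch.)

```python
from collections import Counter, defaultdict

# Toy illustration of "fancy autocomplete": learn next-word counts
# from a tiny corpus, then generate by repeatedly picking the most
# frequent next word. Real LLMs predict over subword tokens with a
# neural network, but the generation loop has the same shape.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# For each word, count how often each other word follows it (bigrams).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def autocomplete(word, steps=4):
    out = [word]
    for _ in range(steps):
        if word not in following:
            break  # dead end: this word never appeared mid-corpus
        # Greedy decoding: always take the single most frequent successor.
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))  # prints "the cat sat on the"
```

The point of the sketch is that even this trivial loop "generates" plausible-looking text; the debate in the episode is over how much more than this a large model trained on internet-scale text is really doing.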

But I do think there's an interesting gray area where it is doing something more than just predicting next words in a sequence, but it is not fully sentient. And I think that's where I'm landing: we just need new ways of talking about this. Well, I think I have an idea for how we could do that. I want to hire the people that wrote that Daily Star headline. They really seem like they've hit on something. Yeah.

I do appreciate their tabloid sensibilities, even if they are not my own. And I think there is a role for them in our new post-AI universe. All right. Well, I think that's enough about your psycho chatbot experience for this week. Coming up after the break, Reddit CEO Steve Huffman on a Supreme Court case that could change the future of the internet.


All right, Kevin, I know that it might feel like AI is the biggest story in the world right now, and maybe it is, but there is another really important tech story that happened this week, and that is a Supreme Court case that could really change the future of the internet. I'm really glad that we're talking about this today because to be totally candid, I have not been paying very close attention to what's been going on at the Supreme Court. Shame on you. Yeah.

Well, I've had a lot going on, right? Like an AI chatbot was trying to break up my marriage. Okay, so maybe show a little grace.

So this is a Supreme Court case that has to do with Section 230 of the Communications Decency Act, which is the law that basically protects internet platforms from being held legally liable for content that is posted on their services. Yeah. The way I like to describe this is, if somebody leaves a comment on my website in which they defame someone, I cannot be sued for them defaming that person. Right. So that's Section 230.

But remind me what this specific case is about. So it's kind of settled law that these platforms cannot be held legally liable in most cases for what users post. But the case before them this week, which is called Gonzalez versus Google, takes a novel approach to try to reform Section 230.

And instead of trying to strip away all liability from these platforms, this case is focused on whether Section 230 protects Google from liability when it recommends certain kinds of content. And it comes out of a really strange set of facts. There's this man named Reynaldo Gonzalez. He sued Google under the Anti-Terrorism Act after his daughter was killed during an ISIS attack at a Parisian bistro in 2015.

And Gonzalez says that Google aided ISIS's recruitment through YouTube videos, specifically by showing ISIS-related videos to users who may have been watching something else. So the allegation in this case that Gonzalez is making against Google is that these ISIS videos were recommended to...

users, which then led those users to become radicalized and ultimately to carry out the attack that killed his daughter. Do I have the facts right? Yes, but here's the weird thing. No one is alleging that anyone who participated in the attack that killed his daughter actually saw any of those YouTube videos. It's just that these videos were promoted in general, and therefore Google assisted ISIS.

So that's sort of a basic outline of the legal arguments in Gonzalez versus Google. And I think it's worth saying, like, if this case goes in favor of Gonzalez, if the Supreme Court decides to overturn or amend Section 230, that will have massive implications for every site that uses a recommendation system.

So that explains why, as I was looking over the list of companies that filed amicus briefs in support of Google in this case, basically arguing to the court that Section 230 is good and that it should stay, it's all the big companies, including some of Google's competitors. So Twitter filed an amicus brief, Meta, Craigslist, Yelp, and Reddit. And Reddit in particular was interesting to me because unlike a lot of social media platforms, which are sort of user-generated content that is

centrally moderated by the platform itself, by a team of people who work for Twitter or Meta or YouTube, Reddit is user-generated content, but it's also user-moderated content. So it has volunteer moderators in a lot of these sub-communities. And so it opens Reddit up to liability for that, but it also potentially opens users up to liability if Section 230 changes.

So I was just curious, like, how on earth Reddit would handle a change like this and how they were thinking about the possibility of Section 230 being struck down. So I reached out to Reddit's CEO, Steve Huffman, and Steve agreed to come chat with us about this case and what he thinks an internet without Section 230 would look like. So let's bring him on now. Yeah, maybe we can actually get him to get those Bing subreddit users off your back, too, while he's here. Ha ha ha!

All right, Steve Huffman, welcome to Hard Fork. Hey, thanks. Glad to be here. Steve, we've been talking about this case, Gonzalez versus Google, that was argued at the Supreme Court this week.

And I know that Reddit is not a defendant in this lawsuit, but your site does do something similar to YouTube and other social media sites, which is that you create a ranked feed of content and posts and show that to people.

And I know that Reddit also actually filed an amicus brief in this case. So can you just remind us of what your basic argument was and why you felt like this was a case that you wanted to take a stand on? Sure. So first, we may as well be a defendant in this case. You know, the outcomes of it affect every internet platform and pretty much every internet user.

So big picture here, Section 230 says that platforms and their users are not liable for the content that they host. But what the plaintiffs are arguing is that YouTube should be held liable for videos that people find via what the plaintiffs would call a recommendation. But the broader point is that

The way the Internet works as we know it today is people create a lot of content, they have conversations. We do our best to facilitate those conversations and bring users into those conversations and help them find the conversations or content that they're looking for. And Section 230 allows that.

One of the really strange things about this case is that the idea here is that it's fine for the content to be on the platform. You just can't tell anyone to look at it, which seems like a really, really strange way of reforming 230. Right. I think what the average internet user and average Supreme Court justice maybe doesn't realize is that, let's call it 90%, and I'm going to be conservative because I think it's actually more like 99.9%.

of the content that's created on the internet is spam. So somebody has to do the work of deciding what is spam and what is legitimate content. And then of the legitimate content, of which there may be millions of possible candidate results, what is most relevant to the user? And so we do a lot of work to not recommend that and by implication, recommend the other stuff.

And so you very quickly get into this conversation of, like, no, recommendations, or the algorithms that sift through content and the automated tools that do so, are essential to how the internet works. Yeah. So I skimmed through your amicus brief, which I appreciated on a number of levels. I think it's probably the only Supreme Court brief I've ever read that quotes a moderator for r/Equestrian and someone who moderates the subreddit for the band Evanescence. But I do think it was really interesting to me because it drew sort of a distinction

between what some of the other tech platforms are doing, which is a kind of centralized moderation where YouTube has a team of moderators that work for YouTube that moderate content on YouTube and decide what to take down and what stays up, and what Reddit does, which is essentially to use users as moderators within different subreddits. So

If this case is successful, if Gonzalez wins and the Supreme Court sides with the plaintiffs here, would all that go away? What would Reddit look like the day after a successful Gonzalez victory in this case? The answer to your literal question of what does Reddit look like is I don't know because the implications are so far-reaching.

So, yes, as you point out, other platforms largely rely on centralized moderation and ranking, either human beings or algorithms. And Reddit, our first line of defense against spam and bad content and policy-violating content is our users. And our first and most important signal for ranking is also our users.

And our users express their opinions through voting up and voting down. And so essentially, every voting user on Reddit, which is most of our users, is moderating, is making a recommendation. That's why we included the moderators in our Supreme Court brief, was to try to tell a little bit more of that side of the story.

And just to say, that's not hyperbole. Eric Schnapper, who is one of Gonzalez's lawyers, sort of argued for the plaintiffs on Tuesday in the Supreme Court when he was asked a question about could somebody who retweeted a video be held liable for a retweet? He said, yes, they're creating content. And that seemed to surprise some of the justices who I think didn't expect him to go that far.

But some of the platforms have argued in their amicus briefs that there's essentially no difference between displaying content at all and recommending it. Do you share that view? Well, so maybe I'll take a step to the side first, which is what came up a lot in this case and a lot of discussions I see is this idea of it's a bad thing to recommend or have at all on your platform harmful content.

And the example in this Gonzalez case, in theory, is ISIS videos. Though I'll just say that the plaintiffs have not actually made any case that there were actual ISIS videos on YouTube that are relevant here. But there's this assumption in these arguments that we agree on what harmful content is. I don't even know if we can have this conversation about

recommending or not recommending harmful content until we first have the conversation about what is harmful and who gets to decide that. And that very quickly brings us into the neighborhood of, well, we already decided that: it's the First Amendment. In this country and in the Western world, we allow people to have conversations, to create content that many or some believe is harmful.

And we trust and believe and have hundreds of years of precedent that human beings and society are actually pretty good filters on that. And all of the platforms that I've named, including us, have content policies that document what we believe is harmful or not appropriate or not allowed. For the Supreme Court or Congress to make a decision on what is harmful, that's a First Amendment conversation, not an

algorithm conversation. Let's say that Gonzalez wins here. What kinds of lawsuits do you expect that platforms like Reddit would be hit with? And how would it affect some of the smaller platforms that might not have Google-sized resources to defend them? Okay, you're making, I think, a really good point, which is, remember that Reddit is in absolute numbers big, bigger than most.

We're in the top, call it, five to ten platforms of our nature. And we are still multiple orders of magnitude smaller than Google and Facebook. And behind us, there are thousands of platforms that are even smaller. So there's a real difference in scale here. So, one lawsuit:

One of our users called Wesley Crusher, the Star Trek character, a soy boy. One of our moderators banned that comment for being inappropriate. And then Reddit got sued. And that suit was thrown out because of 230.

People say things on Reddit all the time that somebody else might not like. There are probably, and I'm not exaggerating, a hundred opportunities to sue us every day in a world without 230. That costs real money. Even a dismissal costs money. You know, once the floodgates are open, they're open. We cannot afford to defend ourselves from

thousands, literally thousands or more frivolous suits, nor can any platform smaller than us. Who can afford to do that? The largest platforms. Remember, there was a time not that long ago where Facebook was in support of changing 230. Getting rid of 230 entrenches the incumbents and it disempowers the smaller platforms and more broadly, the people of the internet.

I wonder if we could look at a kind of steel man argument for the plaintiffs here, which is something that you hear often from platform accountability types who say that basically because of Section 230, the tech industry and social media specifically has enjoyed a kind of protection that no other industry does, right? If I'm a pharmaceutical company and I produce a drug that

hurts or kills people, I can be held liable for that. If I am a newspaper and I publish libelous allegations that hurt someone's reputation, I can be sued for that.

And that basically Section 230 has kind of given social networks impunity in a way that no other industry has, that it has allowed it to kind of externalize the harms of what it builds rather than being held liable for that. So what do you make of that argument made by people on the opposite side of this case from you? If you are a pharmaceutical company and you come on Reddit and make dangerous claims, you can still be sued.

If you are a person and you go on Reddit and you say libelous things, you can still be sued. It just means that Reddit, the platform, doesn't get sued, and that the users who adjudicate that content, voting it up or down,

can't be sued. Section 230 protects the platform and its moderation practices, and in Reddit's case, our moderating users, which are all users. It does not protect the speaker or the author from breaking the law, nor does it protect Reddit from breaking the law.

I should also point out that we don't allow terrorist videos or ISIS videos. Now, we do that from our own first principles, but promoting those, I believe, is also against the law. And we are subject to the rule of law and respond to subpoenas, as long as they are valid,

like anybody else has to. And also, our platform and our users are not protected from civil liabilities. So even when things aren't technically against the law, we and our users and the authors can still be on the receiving end of a civil lawsuit, which does happen from time to time. So when folks ask,

Well, can't we solve the problems of the internet by changing Section 230? My first question always is, what exactly is the problem of the internet that you're referring to?

And usually when I ask that question, I get a thousand different answers, none of which changing 230 is a solution to. Yeah, so I mean, I think if you're somebody who thinks that Section 230 is basically good and is responsible for all the parts of the internet we enjoy, along with some of the parts that drive us crazy...

I think the good news is that this week, the Supreme Court justices seemed pretty skeptical of the plaintiff's argument. Like, as I was reading all the coverage this week, most people did not feel like there were going to be five votes for Gonzalez in this case. At the same time, um...

A lot comes down to how the justices rule, and I think there's a sense that they could still do a lot of harm just in how they dismiss this case. So I guess, Steve, I wonder, what's your ideal outcome here? And assuming that this case goes away, but that people continue to be really angry about the speech that they're encountering on the internet...

Is there anything platforms can do to get out of this cycle where there are constant lawsuits trying to kind of upend this foundational piece of the internet? Okay, the first question, what is the ideal outcome of this case? Okay, here's what I would ask our general counsel, Ben Lee. I would say, Ben, what's the legal term for when a court says, this was a huge waste of time, let's pretend this conversation never happened?

Dismissed with prejudice, I think. Yes. Okay. So I think that's the ideal outcome, dismissed with prejudice. The second part of your question is what to do about the fact that people encounter content online that they don't like or that frustrates them or makes them angry or they think is bad for the world. Well, look, I'll give you two answers. One, it's going to sound flippant, but I think it's true. Nobody's forcing you to consume that content.

And I think, for example, I'll just use Reddit as an example. There are subreddits that have, broadly, opinions or political views that I don't like, that I find triggering. I don't read those subreddits. In fact, I go through the subreddits I'm subscribed to periodically, and I unsubscribe from the ones that annoy me. You could do that on Reddit, and you can do the equivalent of that elsewhere on the internet and in the real world.

The second part of my answer is I actually do think there's a... I do appreciate on some level when our users are frustrated or the press is coming after us or there's a broader narrative about tech and the problems, whatever they may be that day, with technology platforms. We live in this world too. We are consumers of these platforms too. And we are...

citizens of this country and the internet as well. And look, I think a lot of that sort of external pressure has played a role in how we've evolved our own content policies. Yeah, I was going to say, I mean, I think that the press going after you is a little ungenerous. I mean, Reddit had, I think by its own admission, and I think you would agree with this, like a pretty

bad problem years ago with toxic and sort of unseemly content. It was known as kind of the underbelly of the internet. And that's changed in recent years because of some of these content moderation policies that you put in, in part because you got a lot of pressure to do so. So, I mean, isn't that a case for there being an upside to

pressure, whether from Congress or from the Supreme Court or from the public and the press? Isn't that a good thing? That's my point: without any actual changes to the law, the pressure

has resulted in changes. And we'll never know for sure whether we needed the press pressure to make changes at Reddit. The context, of course, is I came back to Reddit to make the changes that you're referring to, which is literally like at the top of my to-do list coming back to Reddit was create a content policy.

and enforce it really strictly. Now, one of our rules at Reddit, and in fact I was just talking about this internally, is: okay, so the press is coming after us, fairly or unfairly. What I tell the company is, fair ain't got nothing to do with it. What is the truth in what they're saying? Like, we do the right thing first, and whether that gives somebody that we like or don't a moral victory is beside the point.

Our job is to make the best, most welcoming platform we can. And so, yes, I do think that pressure is valuable, even if I don't like it in the moment, or even if I'm like, yes, I know I'm on it. It's still useful. Also, like, wasn't there just a lot of pressure to do that

for business reasons? Like if Reddit wants to have a big, healthy ad business, it has to have good content policies, right? Yes. We are in the community business, not to piss everybody off and make sure nobody likes our platform. Yeah. Although, as Twitter is demonstrating, that is a viable business model. Honestly, you're not wrong. It's actually the reason I don't like Twitter. They've productized narcissism. Yeah.

And people feed off of that. One of the misconceptions, I think, and I see this kind of trope around a lot, is that business motivations and what's best for people and consumers are at odds. And I can tell you on Reddit, they're very much aligned.

When Reddit was going through its difficult times, this was back in the 2015 era, it was both bad for business and bad for us. It was very unfun to work at Reddit in that era. We thought it was important, but there were not a lot of smiling faces for about a year there, because we didn't like what our platform was being used for. And so we did our very best to fix it. I'm proud to say I think we've done a pretty good job at it.

Yeah. Well, for my last question, and I know this may be an uncomfortable topic to get into here on a podcast, but I wouldn't be doing my job as a journalist if I didn't ask you a hard-hitting accountability question. And that is, do you, Steve, stand by your statement, which you made a year ago on Reddit, that cottage cheese is the perfect food? Or would you like to apologize for that?

I had 80 grams of cottage cheese this morning happily. Not only do I love it, but I measure it. Wow. Um,

Why 80 grams? Is 90 just overdoing it? Actually, so that's more reactive. I scooped it out and that's what it happened to come to. But I do eat it every day. Wow. You heard it here, folks. Steve Huffman is canceled for voicing support for cottage cheese. Steve, really appreciate you coming by the show. Thanks for joining us on Hard Fork. Thank you, Steve. My pleasure, guys. All the best.

When we come back, what Meta's new paid verification program means for the company and for the future of social media.

Indeed believes that better work begins with better hiring, and better hiring begins with finding candidates with the right skills. But if you're like most hiring managers, those skills are harder to find than you thought. Using AI and its matching technology, Indeed is helping employers hire faster and more confidently. By featuring job seeker skills, employers can use Indeed's AI matching technology to pinpoint candidates perfect for the role. That leaves hiring managers more time to focus on what's really important, connecting with candidates at a human level.

Learn more at indeed.com slash hire.

Christine, have you ever bought something and thought, wow, this product actually made my life better? Totally. And usually I find those products through Wirecutter. Yeah, but you work here. We both do. We're the hosts of The Wirecutter Show from The New York Times. It's our job to research, test, and vet products and then recommend our favorites. We'll talk to members of our team of 140 journalists to bring you the very best product recommendations in every category that will actually make your life better. The Wirecutter Show. Available wherever you get podcasts.

All right, Casey, I want to talk to you now about Meta, which announced this week that it is starting a new paid verification system. For $12 a month, you can pay Meta and get a verification badge on Facebook.

and Instagram. And in addition to this badge, which is a lot like the Twitter verification badge that Elon Musk has now started charging for, you can get proactive account monitoring for impersonators, you can get customer support, you can get better placement in

some news and comment feeds, and Facebook says you can get some vague exclusive features for your $12 a month. So I thought of you immediately when I heard this news, and I wondered what you think of it. - Well, I mean, first of all, I'm just so happy that I was verified before they started asking people to pay. I'm, you know, saving $144 a year over here now.

We should say this is just a test. You can't do this in the United States yet. They're starting out in Australia and New Zealand. But I do think this is a really significant shift in the history of social networks, you know.

For the entire history of social networks up until now, verification was a way to ensure that a person was who they say they are. And platforms did that for free because it was in their interest for people to know that if, let's say, President Biden appears to be tweeting or posting on Facebook, that is really Joe Biden.

Now we're moving into a world where anyone can have access to a similar verification, which I think is good in a lot of ways. But it also means that verification is something kind of different now. So why is Meta doing this?

Well, the first thing I should say is that I don't 100% know. This kind of came out of the blue. In fact, Mark Zuckerberg announced it on a Sunday, which I cannot remember him announcing a major product change like this before on a Sunday, particularly the Sunday before a holiday, which it was in the United States this week. So I thought that that was a little bit strange.

I do think that they want to be able to provide certain extra features to customers, in particular customer support, right? I imagine that you're like me, and because we write about Facebook and Instagram, people are probably always sending you direct messages saying, I've been locked out of my account, can you please connect me with somebody at Facebook, right? Does this happen to you? Yep, all the time. Especially back in the day, like after I got verified on Instagram,

dozens of people a week would just be like pleading with me to contact someone at Instagram and get them their account back or something. It was really sad, and it

made me think like there is actually a market here for some customer service. Oh, absolutely. And in fact, there is kind of a gray market for this sort of thing, where you can read stories about the lengths that people have gone to to get their accounts back, right? Somebody who knows someone on the inside at Facebook or Instagram and charges maybe thousands of dollars in order to get somebody their account back.

That's not a tenable system. It's not a good system. Think about how many people build businesses and earn their livelihoods on Facebook and Instagram. And if you get locked out of your account for whatever reason, let's say you get hacked, maybe there's a SIM-swapping attack, and you have no way of getting yourself back in. Well, in that case, paying 12 bucks to get access to a customer support person, that actually starts to look like a pretty reasonable deal, at least to me. And do you think this is

Because of what Twitter and Elon Musk have done with verification? I mean, it's hard to see this coming from Meta and not see it as a response to Elon Musk deciding to charge $8 a month for blue checks. Yeah.

So I heard from one person who used to work at Facebook who told me that this project had been in the works for over a year, and it was something that they were thinking about even before Elon Musk bought Twitter. So I don't think that this is a simple case of Facebook copying Twitter. Although, of course, I'm sure everyone at Facebook saw

Twitter do it and do it disastrously. And they probably thought, well, we could do it in a much more logical, sensible way. Right. And I also saw some speculation that maybe this is just a desperation play, that Meta, you know, may be losing tons of money on some of the metaverse stuff, that the ad market is not as strong as it was a year or two ago, that they really are sort of looking for new ways to make money quickly.

Do you buy that as sort of an argument for why they're doing this as a kind of money-making service? Well, I do think it's definitely the case that Meta is looking for new ways to make money. They've been battered by the changes that Apple made to the advertising ecosystem.

At the same time, we know that paid verification on Twitter has been a disaster. According to all of the estimates that I've seen, Twitter is hardly making any money from this at all. Totally. And I was just surprised because it seems like a really big philosophical departure for them. I mean, for many years...

Facebook would constantly be asked and its executives would constantly be asked, like, why do you have to do all this, you know, creepy ad targeting? Why can't you just charge people? There was this famous line about how, you know, if you're not paying for the product, you are the product.

And they would get asked all the time, why don't you just charge people so that you don't have to, like, basically sell the right to target ads against them? And they would say, in a very sort of principled-sounding way, like, we believe that social media should be free. We believe that you shouldn't have to pay to

use these services. They would sort of make this argument about how charging for access would, you know, be fine for people in the developed world, where they have high incomes. But if you go outside the developed world, it would be prohibitively expensive, and so you would end up not being able to serve the entire world.

And to me, it's not like they're saying you have to pay $12 a month for Facebook or you can't get on. There will still be a free option. But it is kind of bringing this two-tiered system to a social network that has historically not had one.

Yeah, and I think that's particularly true with this feature that gives verified users higher placement in search results, and it makes their comments appear kind of closer to the top. That's available for accounts that were verified under the old system, and I think the reason that that system was built was the thought that

verified users are notable ones in some way, right? If you have elected officials or celebrities who are sort of commenting or you're trying to find them in search, you want that stuff to rise higher because that content is probably going to be more engaging.

But now it really is a pay-to-play system where if you're some young hustler and you want to get famous by making Instagram reels, why wouldn't you pay $12 a month knowing that your account was going to sort of float to the top right away?

And I'm very curious to see how that plays out because you can imagine that going wrong in a lot of ways, right? Totally. I mean, I think it's part of this broader trend that we're seeing right now. A few years ago, I wrote this story for the New York Times Magazine about what I called luxury software companies.

which was sort of this tier of software that was kind of being aimed at like wealthier, what they call prosumer users. So one example I talked about in the piece was this app called Superhuman, which I don't know if you've ever used, but it's basically, it's like a very expensive skin for Gmail. It's ridiculous. Yeah. It's a ridiculous thing. It's like 30 bucks a month to use a different user interface for Gmail. Right. But,

there was this whole sort of explosion of these kinds of software products that were aimed at the higher end of the market, where people would be willing to pay for a better version of something they could get for free, essentially. And I think that's what this is. It's saying, you know, if you want the sort of normal version of Facebook or Instagram or Twitter, for that matter, you can have it,

But you're going to have a much better experience if you pay up. Your content will be more widely viewed. You will get better customer service. You'll be able to actually get a human to fix your problem if you get locked out of your account or something. And so there's this kind of stratification of the internet into like the free tier, which kind of sucks and is filled with garbage. And it's impossible to get someone to fix your problems if you have them.

or this paid premium tier where you're shelling out hundreds of dollars a year in the hopes that your content will do better, you'll get better customer support, et cetera. - Yeah, and like, you know, let me say we were talking earlier in the episode about some changes that the Supreme Court might make to the internet that

I think are bad and I basically just wish they wouldn't do anything in that case. This is a place where I wish the government would do something because I think if you build a very large platform and you enable people to build businesses there, businesses that in some cases are making millions of dollars a year, I think that

you should be legally obligated to provide them with customer support, right? I don't think it should be the case that if you get locked out of your account, then that's it for you, unless you can somehow find a way back in. I think that they should enable you to get on the phone with someone. So while I'm glad that people will now be able to pay $12 a month to have that experience, in the future I hope people have that experience for free, because it's mandated by law. Hmm.

That's interesting. I kind of like that, actually, because I think you're right, that there is sort of an expectation in other parts of the economy that if you have a problem that needs solving, no matter how much you're paying for that, like, you get at least the possibility of some kind of help. Like, if I have an airline ticket and it gets canceled and I need a human...

to help me with that. Like, I can call the Delta or United support line and they're going to talk to me whether I'm, you know, a frequent flyer or not. So you get a sort of basic level of customer service from them. And that doesn't happen on social media. We don't

get humans to solve our problems unless we're somehow connected, or have an in, or are otherwise able to flag down someone at one of these companies. So I would support this law you're talking about. And frankly, I will support you if and when you run for Congress on the platform of free tech support for all. Casey Newton for Congress 2026. I am very excited to vote for you. Thank you.

BP added more than $130 billion to the U.S. economy over the past two years by making investments from coast to coast. Investments like acquiring America's largest biogas producer, Arkea Energy, and starting up new infrastructure in the Gulf of Mexico. It's and, not or. See what doing both means for energy nationwide at bp.com slash investing in America.

All right, Casey, I think that's all we have for today. Any parting words? I want to thank everybody who put up with my voice this week, and I hope that it sounds much stronger on the next episode of Hard Fork. Please get some rest, drink some tea, maybe get some soup in the system, and come back stronger next week, or we will have to replace you with an AI. Oh, no. I hope you feel better. Thank you.

Hard Fork is produced by Davis Land. We're edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today's show was engineered by Alyssa Moxley. Original music by Dan Powell, Elisheba Ittoop, Marion Lozano, and Rowan Niemisto. Special thanks to Paula Szuchman, Hannah Ingber, Nell Gallogly, Kate LoPresti, and Jeffrey Miranda. As always, you can email us at hardfork@nytimes.com.

Imagine earning a degree that prepares you with real skills for the real world. Capella University's programs teach skills relevant to your career so you can apply what you learn right away. Learn how Capella can make a difference in your life at capella.edu.