cover of episode The Surgeon General’s Social Media Warning + A.I.’s Existential Risks

The Surgeon General’s Social Media Warning + A.I.’s Existential Risks

Publish Date: 2023/5/26
logo of podcast Hard Fork

Hard Fork

Chapters

Shownotes Transcript

This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks. Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us.ai.

Casey, last week on the show, we talked about the phenomenon of people listening to podcasts at very high speed, because we're talking about this New York Times audio app that just came out that allows you to go up to 3x. Right. And that seemed insane to both of us. And I sort of jokingly said, if you listen to podcasts at three times speed, reach out to me. And a

I was expecting maybe like one person, maybe two people. I think it's fair to say we got an avalanche of speed maxers. We have been bombarded and it's so confusing. The highest speed I'm comfortable with people listening to Hard Fork is 0.8x and here's why. There's so much information in this show, okay, that if you're not taking the time to let it absorb into your body, you're not getting the full effect. So be kind to yourself, treat yourself. If the show shows up as one hour, spend an hour and 10 minutes listening to it, okay? You'll thank yourself. Yeah.

You heard it here first. Hard Fork, the first podcast designed to be listened to very slowly. Very slowly. Yeah. Should we put in like a secret message for our three Xers? Like a little slowed down, like, I'm Kevin. Kevin.

I'm Kevin Roos. I'm a tech columnist at The New York Times. I'm Casey Newton from Platformer, and you're listening to Hard Fork. This week on the show, the Surgeon General warns that social media may not be safe for kids. Plus, AI safety researcher Ajaya Khotra on the existential risks posed by AI and what we ought to do about them. And then finally, it's time to pass the hat. We're once again playing Hatch EPT.

So, Casey, this week there was some big news about social media. In particular, the U.S. Surgeon General, Dr. Vivek Murthy, issued an advisory about the risks of social media to young people. And it basically was kind of a call for action and a summary of what we know about the effects of social media use on young people. And I want to start this by asking, what do you know about the U.S. Surgeon General? Well...

he hates smoking and has my whole life. And most of what I've ever heard from the U.S. Surgeon General has been whether I should smoke. And the answer is no. Yeah, I mean, this is like one of two things that I know about the Surgeon General is that he puts the warning labels on cigarette packages. The other thing is that our current

Surgeon General looks exactly like Ezra Klein. And notice you've never seen both of them in the same place. It's true. But the U.S. Surgeon General, apparently part of his mandate is evaluating risks to public health. Yeah. And this week he put a big stake in the ground in declaring that social media has been

potentially big risks for public health. So here's the big summary quote from this report. It says, more research is needed to fully understand the impact of social media. However, the current body of evidence indicates that while social media may have benefits for some children and adolescents, there are ample indicators that social media can also have a profound risk of harm to the mental health and well-being of children and adolescents. So let's talk about this report because I think it brings up some really interesting and important issues. What did you make of it?

Well, I thought it was really good. Like, this is actually the kind of stuff I want our government to be doing, is investigating stuff like this that the vast majority of teenagers are using. And I think a lot of us have had questions over the years about what are the effects that it's having. Particularly for a subset of kids, this stuff can be quite dangerous. That list would include adolescent girls, kids who have existing mental health issues. So if you're a parent, you should be paying close attention. And if you're a regulator, you should think about

passing some regulation. So that was kind of, I think, the core takeaway, but there are a lot of details in there that are super interesting. So yeah, let's talk about the details. What stuck out to you most? So one thing that comes across is that a way that you can guess that someone is having a bad experience on social media is that they're using it constantly.

There seems to be a really strong connection between the number of hours a day that you're using these networks and the state of your mental health. They talk about some kids in here that are on these social networks more than three hours a day, and people who are using social networks that much are at a much higher risk of depression, of anxiety, and of not sleeping well enough.

And so just from a practical perspective, if you are a parent and you notice your kid is using TikTok seven hours a day, that actually is a moment to pull your kid aside and say, hey, what's going on? Yeah, and I also found it really interesting that the report talked about various studies showing that certain groups have...

better or worse times in general on social media. So one surprising thing to me actually was that some of the adolescents who seem to be getting a lot out of social media in a positive direction are actually adolescents from marginalized groups. So there's some studies that show that

LGBT youth actually have their mental health and well-being supported by social media use. And then also this body of research that found that seven out of 10 adolescent girls of color reported encountering positive or identity-affirming content related to race across social media platforms. So it is not the case that every adolescent across the board has worse mental health and worse health outcomes as a result of using social media. And in particular, it seems like the

Some of the best uses of social media for adolescents are people who might be sort of marginalized or bullied in their offline lives, finding spaces online to connect with similar types of people across similar interests and really find connection and support that way.

Yeah, I mean, think about it. If you're a straight white boy, let's say, and you grow up and you're watching Netflix and HBO, you're seeing a lot of people who look like you. Your experience is represented. That's providing some sort of support and entertainment and enjoyment for you. But if you're a little gay kid or like a little girl of color, you're seeing a lot less of that. But you turn to social media and it's a lot easier to find.

And that is a gift. And that is something really cool. And that's why when states want to ban this stuff outright, I get really anxious because I think about those kids. And I think about myself as a teenager and how much I benefited from seeing other queer people on the internet. So, yeah, there is definitely a big bucket of kids who benefits from this stuff. There's a reason 95% of kids are using this. Right. Okay.

So there are a few different parts to this Surgeon General's report. One of them is kind of like a literature review, like what does the research tell us about the links between social media and adolescents' health? And another part at the end is sort of this list of recommendations, including calling for more research and actually calling for specific actions that the Surgeon General wants tech platforms to take.

including age-appropriate safety standards, enforcing age restrictions, more transparency from the tech companies. And it also gives some advice to parents about how to create boundaries with your kids around their social media use, how to model responsible social media behavior, and then how to work with other parents to create shared norms about social media use. So that's the report. And I'm curious, like you mentioned in your column that

a lot of people at the platforms are skeptical of this report and of the data that it refers to. So what do people at the platforms believe about this report? And why are they maybe skeptical of some of what's in it? So, yeah, I mean, I've heard from folks, you know, both before and after I wrote that they just really reject

the report. And there are a handful of reasons. One that they are clinging to is that the American Psychological Association put a report out this month. And among the things it says is, quote, using social media is not inherently beneficial or harmful to young people. Adolescents' lives online both reflect and impact their offline lives.

So to them, that's kind of the synthesis that they believe in. But there's more step two. A lot of the studies, including in the Surgeon General's report, show a lot more correlation than causation. Causation has been harder to show to the extent it has been shown. It tends to be relatively small amounts, relatively small studies. They're telling me that the Surgeon General is a political job. We know that Joe Biden hates social networks. He wants to get rid of Section 230. He's sort of not a friend of these companies to begin with.

And ultimately, they just kind of think this is a moral panic that people are just nervous about the media of the moment, just like they were worried about TV and comic books before social media. Right. I mean, I remember as a teen, the big thing in that period was video games and violent video games. And you had, you know, Tipper Gore's Crusade. And I remember when Grand Theft Auto came out, the first one, and it was like mayhem. Parents were like, this is going to, you know, our teenagers are going to be, you know, shooting down people.

police helicopter. And it did, at the time as a teen, just seemed like, oh my God, you guys have no idea what is actually going on. And, you know, this is not some like violent fantasies that we're developing. This is a video game. And it just felt as a teen, like the adults,

in the room just didn't actually get it and didn't get what our lives were like. And so I can see some version of that being true here, that we are in a moment of backlash to social media and maybe we are overreaching in attempting to link all of the ills of modern life to the use of social media, especially for adolescents. At the same time, one thing that makes me think that this is not a classic parental freak-out moral panic thing

is that there clearly have been profound mental health challenges for adolescents in the last 15 years. I'm sure you've seen the charts of suicidal ideation and depression among adolescents. It zooms upward. Self-reports of depression and anxiety are way, way up among adolescents. It does seem really clear that something big is happening to affect the mental health of adolescents.

in America. Like, this is real research and these are real studies and I think we have to take them seriously. And so I'm glad that the Surgeon General is looking into this, even if the causal links between social media use and adolescent mental health are not super clear yet. Yeah.

I agree with you. I am also one who resists simplistic narratives, and I still don't really believe that the teenage mental health crisis is as simple as people started downloading Instagram. Okay, I think there is just kind of more going on than that.

But at the same time, I think that the folks I talk to at social networks are ignoring something really profound, which is I would guess that you personally probably could name dozens of people who have uninstalled one or more social apps from their phone because it made them feel bad at some point about the way they were using it. And I think you're actually one of –

those people yourself. I have also uninstalled social apps for my phone because of the way they make me feel. So have my friends and family. And this is a subject that comes up all the time. Constantly. And not because I'm a tech reporter and I'm bringing it up. People...

are constantly bringing up to me that they don't like their relationship with these phones. And so to me, that's where the argument that this is all a moral panic breaks down. Because guess what? In the 90s, me and my 14-year-old buddies weren't going around talking about how much we hated how much we were playing Mortal Kombat. Okay? We loved it. We couldn't get enough. I'm addicted to GoldenEye. I'm throwing my cartridge out.

But the 14-year-olds today are absolutely saying, get Instagram off of my phone. I don't like what it's doing to me. And the folks I'm talking to at social networks just refuse to confront that. Here's where I think it's tricky, okay? For all that we have just said, I do not think that having an Instagram account

account and using it daily represents a material threat to the median 16-year-old. Okay? I just don't. I think most of them can use it. I think they'll be fine. I think there'll be times that they hate it. I think there'll be times they really enjoy it. And I also think that there's some double-digit percentage chance, let's call it, I don't know, 10% to 15% chance that

creating that Instagram account is going to lead to some significant amount of harm for you, right? Or that in conjunction with other things going on in your life, this is gonna be a piece of a problem in your life.

And this is the challenge that I think that we have. The states that are coming in, which we can talk about, that are trying to pass laws to regulate the way that teenagers use social media, are bringing in this absolutely ham-fisted, one-size-fits-all approach, just sort of saying, like, in the case of Utah, you need your parents' consent to use a social network when you are under 18, right? So if you are not an adult, you have to get permission to use Instagram. Montana just passed a law to fine TikTok if it operates in the state. I

I think that is a little bit nuts because, again, I think the median 16 year old using TikTok is going to be just fine. And yet, if you think that there is a material risk of harm to teenagers in the way that the Surgeon General is talking about, then I think you have to do something. So what is the solution here? If it's not these like bans passed by the government and enforced at the state level, like what do you think?

should be done to address adolescents and social media? - Well, one, I do want the government to keep exploring solutions here. I think there's probably more that can be done around age verification. This gets really tricky. There are some aspects in which this can be really bad, can require the government to collect a lot more information about basically every person, right? You know, I don't wanna end up in a situation where like you have to submit your social security number to like Apple to download an app.

At the same time, I think there's probably stuff that can be done at the level of the operating system to figure out if somebody is nine years old. Like, I just think that we can probably figure that out in a way that doesn't destroy everyone's privacy. And that just might be a good place to start. You know, the other place that I've been thinking about is what can parents do? You know, I want your perspective here. You're a parent. I'm not. I'll tell you, though, that after I sort of said, like, listen, parents, you know, you may want to set some harder boundaries around this stuff. You want to check in with your kids more about this stuff.

And I heard back from parents telling me, essentially, you don't actually know how hard this is, right? Particularly once you've got a teenager, they're mobile, they're in school, they're hanging out with their friends. You cannot watch them every hour of the day. They're often going to find ways to access these apps. They're going to break the rules that you've set, and the horses just kind of get out of the barn. So I would...

think about this as a risk as a parent in the same way I would think about like letting my kid drive a car. Some people are gonna throw their hands up driving a car is way more dangerous. I think, you know, statistically than using a social network, but like your kids face all sorts of risks. Right. And that's like the terror of being a parent is that basically almost anything can hurt them. Um, but I don't know that we've really put social networks in that category up until now. We've had some doubts. We've wondered if it's really great for us. Um,

So what I feel like this Surgeon General's report really brings us to is a place where we can say fairly definitively, at least for some subset of children, that yes, this stuff does pose real risks and it's worth talking about in your house. And I think, by the way, a lot of parents have figured this out already. But if for whatever reason you're not one of those parents, I think now is the time to start paying closer attention. Totally. Yeah. I'm not in favor of these blanket bans. That seems like a really blunt instrument and something that is likely to back

fire. But I do think that some combination of like regulation around enforcing age minimums, maybe some, you know, regulation about notifying underage users, like how much time they spend in the app or like nudging them to, you know, maybe go outside or something like that. Like maybe that makes sense. But I think that the biggest piece of the puzzle here is really about

parents and their relationship to their teenagers. And I know a lot of parents who are planning to or have already had kind of the social media talk with their kids, you know, the way that your parents might sit you down and like talk about sex or talk about driving or talk about, you know, like drug use. Like this seems like another one of those kind of sit down talk opportunities. You know, we're giving you your first smartphone.

You've reached an age where we're comfortable letting you have one. Your friends are probably on it already, and we trust you to use this in a way that is appropriate and safe. But here are some things to think about. Don't listen to podcasts at 3x speed. It's not good for you. Or we will be reporting you to the government. Just having that talk feels very important. And also, I do think that as much as I hated this as a kid, like,

some restrictions make sense on the parental level. Like my parents limited me to an hour of TV every day. Did you have a TV limit in your house? - Not a hard and fast limit, but we were limited for like the number of hours we could play video games, particularly like before high school. We were forbidden from watching music videos on MTV at all. So yeah, I mean, there were definitely limits around that stuff. And I found it annoying, but also I didn't care that much. - Right, I mean, I actually remembered this as I was reading the Surgeon General's report.

that I came up with a system to defeat my parents' one-hour TV limit, which is that I would record

three episodes of a half hour show. Saved by the Bell was my favorite show. The best. And I found that if I recorded three half hour episodes of Saved by the Bell and then fast forwarded through the commercials, I could fit almost three full episodes into one hour. So as a result, there are like many episodes of Saved by the Bell that I have seen like

the first 23 minutes of and then have no idea how it ends. - Just a series of events where Zack Morris gets into a terrible scrape and it seems like Screech might be able to fix it, but you'll actually never know. - Yeah, I'll never know. And so that was how I tried to evade my parents' TV limits. I imagine that there are teenagers already out there finding ways around their parents' limits,

But I do think that building in features, parental controls to social media apps that allow parents to not only like see what their kids are doing on social media, but also to limit it in some way does make sense as much as the inner teenager that is still inside me rebels against that. You know what we should do, Kevin, is we should actually ask teenagers what they think about all this.

I would love that. Like, if you are a teenager who listens to Hard Fork and you are struggling or your parents are struggling with this question of social media use or if social media use has been a big factor in your own mental health, like, we would love to hear from you. Yeah. Yeah.

If you are living in Utah and all of a sudden you're going to need your parents' permission to use a social network, I would love to hear from you. If you have had to delete these apps from your phone because they're driving you crazy, let us know. Or if you're having a great time and you wish that all the adults would just shut up about this, like tell us that too. Right.

teens, get your parents' permission and then send us a voice memo and we may feature it on an upcoming episode. That address, of course, hardfork at NYTimes.com. Yeah, if you still use email. Or, you know, send us a be real. Yeah, snap us, baby. ...

When we come back, we're going to talk about the risks of a different technology, artificial intelligence. So

Welcome to the new era of PCs, supercharged by Snapdragon X Elite processors. Are you and your team overwhelmed by deadlines and deliverables? Copilot Plus PCs powered by Snapdragon will revolutionize your workflow. Experience best-in-class performance and efficiency with the new powerful NPU and two times the CPU cores, ensuring your team can not only do more, but achieve more. Enjoy groundbreaking multi-day battery life, built-in AI for next-level experiences, and enterprise chip-to-cloud security.

Give your team the power of limitless potential with Snapdragon. To learn more, visit qualcomm.com/snapdragonhardfork. Hello, this is Yuande Kamalefa from New York Times Cooking, and I'm sitting on a blanket with Melissa Clark. And we're having a picnic using recipes that feature some of our favorite summer produce. Yuande, what'd you bring? So this is a cucumber agua fresca. It's made with fresh cucumbers, ginger, and lime.

How did you get it so green? I kept the cucumber skins on and pureed the entire thing. It's really easy to put together and it's something that you can do in advance. Oh, it is so refreshing. What'd you bring, Melissa?

Well, strawberries are extra delicious this time of year, so I brought my little strawberry almond cakes. Oh, yum. I roast the strawberries before I mix them into the batter. It helps condense the berries' juices and stops them from leaking all over and getting the crumb too soft. Mmm. You get little pockets of concentrated strawberry flavor. It tastes amazing. Oh, thanks. New York Times Cooking has so many easy recipes to fit your summer plans. Find them all at NYTCooking.com. I have sticky strawberry juice all over my fingers.

So Casey, last week we talked on the show about P-Doom, this sort of statistical reference to the probability that AI could lead to some catastrophic incident, you know, wipe us all out or fundamentally disempower humans in some way. Yeah, people are calling it the hottest new statistic of 2023. Yeah.

And I realized that I never actually asked you, what is your P. Doom? I've been waiting. I was like, when is this man going to ask me my P. Doom? But I'm so happy to tell you that I think, based on what I know, which still feels like way too little, by the way, but based on what I know, I think it's 5%. You know, I was going to say the same thing. It just feels like kind of a random low number that I'm putting out there because I actually don't have like a robust...

framework for determining my PDU. It's just kind of like a vibe. It's perfect because if nothing bad happens, we can be like, well, look, I only said there was a 5% chance. But if something bad happens, we can be like, we told you there was a 5% chance of this happening.

Right. So that conversation really got me excited for this week's episode, which is going to touch on this idea of PDoom and AI risk and safety more generally. Yeah. And I'm really excited about this too. I would say for the past couple of months, we've been really focused on some of the more fun, useful, productive applications of AI. We've heard from people who are using it to do some meal planning, to...

get better at their jobs. And I think all that stuff is really important. And I want to keep talking about that. But you and I both know that there is this whole other side of the conversation. And it's people who are researching AI safety on what they call alignment. And some of these people have really started to ring the alarm. Yeah. And obviously, we've talked about the pause letter, this idea that some AI researchers are calling for just a slowdown in AI development so that humans have time to catch up. But

I think there is this whole other conversation that we haven't really touched on in a direct way, but that we've been hinting at over the course of the last few months. And you really wanted to have just a straight up AI safety expert on the show to talk about the risks of existential threats. That's right. Why?

Why is that? Well, you know, on Hard Fork, we always say safety first. And so in this case, we actually chose to do it kind of toward the end. But I think it's still going to pay off. No, look, this is a subject that I am still learning about. It's becoming clear to me that these issues are going to touch on basically everything that I report on and write about. And it just feels like there's this

ocean of things that I haven't yet considered. And I want to pay attention to some of the people who are really, really worried, because at the very least, I want to know what are the worst case scenarios here. I kind of want to know where all of this might be headed. And I think we've actually found the perfect person who can walk us through that.

And before we talk about who that person is, I just want to say, like, this might sound like kind of a kooky conversation to people who are not enmeshed in the world of AI safety research. Some of these doomsday scenarios, like, they honestly do sound like science fiction to me. But I

I think it's important to understand that this is not a fringe conversation in the AI community. There are people at the biggest AI labs that are really concerned about some of these scenarios who have PDooms that are higher than our 5% figures and who spend a lot of time trying to prevent these AI systems from operating.

operating in ways that could be dangerous down the road. Sometimes sci-fi things become real, Kevin. It wasn't always the case that you could summon a car to wherever you were. It wasn't always the case that you could point your phone into the air at the grocery store and figure out what song was playing. Things that once seemed really fantastical do have a way of catching up to us in the long run. And I think one of the things that we get at in this conversation is just how

quickly things are changing. Speed really is the number one factor here in why some people are so scared. So even if this stuff seems like it might be very far away, part of the point of this conversation is it might be closer than it appears.

With that, let's introduce our guest today, who is Ajaya Kotra. Ajaya Kotra is a senior research analyst at Open Philanthropy, where she focuses on AI safety and alignment. She also coauthors a blog called Planned Obsolescence with Kelsey Piper of Vox, which is all about kind of AI futurism and alignment.

And she's one of the best people I've found in this world to talk about this because she's great at drawing kind of the step-by-step connections between the ways that we train AI systems today and how we could one day end up in one of these doomsday scenarios. And specifically, she is concerned about a day that she believes might not even be that far away, like 10 or 15 years from now, when AI could become capable of and even maybe incentivized to cut humans entirely out of

very important decisions. So I'm really excited to talk to Ajaya about her own PDoom and figure out in the end if we need to revise our own figures. Ajaya Kotra, welcome to Hardfork. Thank you. It's great to be here.

So I wanted to have you on for one key reason, which is to explain to us slash scare us or whatever emotional valence we want to attach to that. Why you are studying AI risk and in particular, this kind of AI risk that deals with sort of existential questions. What happened to convince you that AI could become so powerful, so impactful that you should focus your career and your research on the issue?

Yeah, so I had a kind of unusual path to this. So in 2019, I was assigned to do this project on when might we get AI systems that are transformative? Essentially, when could we get AI systems that automate enough of the process of innovation itself that they

They radically speed up the pace at which we're inventing new technologies. AI can basically make better AI. Make better AI and things like the next version of CRISPR or the next super weapon or that kind of thing. So right now we're kind of used to

a pace of change in our world that is driven by humans trying to figure out new innovations, new technologies. They do some research, they develop some product, it gets shipped out into the world, and that changes our lives. You know, whether that's social media recently or the internet in the past or going back, you know, further railroads, telephone, telegraph, etc.,

So I was trying to forecast the time at which AI systems could be driving that engine of progress themselves. And the reason that that's really significant as a milestone is that if they can automate the entire sort of full stack of scientific research and technological development, then that's no longer tethered to a human pace. So

Not only progress in AI, but progress everywhere is something that isn't necessarily happening at a rate that any human can absorb. I think that project is where I first came into contact with your work. You had this big post on a blog called Less Wrong talking about how you were revising your timelines for this kind of transformative AI, how you were basically predicting that

transformative AI would arrive sooner than you had previously thought. So what made you do that? What made you revise your timeline? So I'll start by talking about the methodology I used for my original forecasts in 2019 and 2020, and then talk about how I revised things from there. So it was clear that these systems

got predictably better with scale. So at that time, we had the early versions of scaling laws. Scaling laws are essentially these plots you can draw where on the x-axis, you have how much bigger in terms of computation and size your AI model is, and the y-axis is how good it is at the task of predicting the next word.

In order to figure out what a human would say next in a wide variety of circumstances, you actually kind of have to develop an understanding of a lot of different things. In order to predict what comes next in a science textbook after reading one paragraph, you have to understand something about science. At the time that I was thinking about this question, systems were not so good and they were kind of getting by with these shallow patterns.

But we had the observation that as they were getting bigger, they were getting more and more accurate at this prediction task. And coming with that were some more general skills. So the question I was asking was basically, how big would it need to be in order for this kind of very simple brute force trained prediction based system to work?

to be so good at predicting what a scientist would do next that it could automate science. And one hypothesis that was natural to explore was,

Could we train systems as big as the human brain? And is that big enough to do well enough at this prediction task that it would constitute automating scientific R&D? Can I just pause you to note what you're saying, which is so interesting, which was that as far back as 2019, the underlying technology that might get us sort of all the way to the finish line was already there. It was just sort of a matter of pouring enough gasoline on the fire. Is that right? Yeah. And I mean, that was...

the hypothesis that I was sort of running with that I think was plausible to people who were paying close attention at the time. Maybe all it takes in some sense is more gasoline. Maybe there is a size that we could reach that would cause these systems to be good enough to have these transformative impacts. And maybe we can try and forecast when that would become affordable. So essentially, my forecasting methodology was asking myself the question, what

If we had to train a brain-sized system, how much would it cost? And when is it the case that the amount that it would take to train a system the size of the human brain is within range of the kinds of amounts that companies might spend?

It sounds like your process of sort of coming to a place where you were very worried about AI risk was essentially a statistical observation, which is that these graphs were going in a certain direction at a certain angle. And if they just kept going, that could be very transformative and potentially, you know, lead to this kind of recursive self-improvement that would, you know, maybe bring about something really bad. It was more just the process.

potential of it, the power of it, that it could really change the world. We are moving in a direction where these systems are more and more autonomous. So one of the things that's most useful about these systems is that you can have them sort of do increasingly open-ended tasks for you and make the kind of sub-decisions involved in that task themselves.

You can say to it, I want a personal website and I want it to have a contact form and I want it to kind of have this general type of aesthetic. And it can come to you with suggestions. It can make all the like little sub decisions about how to write the particular pieces of code. If we have these systems that are trained and given latitude to kind of act

and interact with the real world in this broad scope way, one thing I worry about is that we don't actually have any solid technical means by which to ensure that they are actually going to be trying to pursue the goals you're trying to point them at. That's the classic alignment problem. Yeah, yeah.

One question that I've started to ask, because, you know, all three of us probably have a lot of conversations about sort of doomsday scenarios with AI. And I've found that if you ask people who think about this for a living, like, what is the doomsday scenario that you fear the most? The answers really vary. Some people say, you know,

I think AI language models could be used to, you know, help someone synthesize a novel virus or to, you know, create a nuclear weapon, or maybe it'll just spark a war because there'll be some piece of like viral deep fake propaganda that leads to conflict. So,

What is the specific doomsday scenario that you most worry about? Yeah, so I'll start by saying there's a lot to worry about here. So I'm worried about misuse. I'm worried about AI sparking a global conflict. I'm worried about a whole spectrum of things.

The sort of single specific scenario that I think is really underrated, maybe the single biggest thing, even if it's not a majority of the overall risk, is that you have these powerful systems and you've been training them with what's called reinforcement learning from human feedback.

And that means that you take a system that's understood a lot about the world from this prediction task and you fine tune it by having it do a bunch of useful tasks for you. And then basically you can think of it as like pushing the reward button when it does well and pushing the anti-reward button when it does poorly. And then over time, it becomes better and better at figuring out how to get you

to push the reward button. Most of the time, this is by doing super useful things for you, making a lot of money for your company, whatever it is. But...

The worry is that there will be a gap between what was actually the best thing to do and what looks like the best thing to you. So, for example, you could ask your system, I want you to kind of overhaul our company's code base to make our website load faster and make everything more efficient. And it could do a bunch of complicated stuff faster.

which even if you had access to it, you wouldn't necessarily understand all the code it wrote. So how would you decide if it did a good job? Well, you would just see if the website was ultimately loading faster and you'd give it a thumbs up if it achieves that. But the problem with that is you can't tell, for example, if the way that it achieved the outcome you wanted was by

creating these hidden unacceptable costs, like making your company much less secure. Maybe it killed the guy in the IT department who was putting in all the bad code. It released some plutonium into the nearby river. So there's sort of like two phases to this or something and to this story that I have in my head, which is phase one is essentially you are rewarding this AI system and there's some gap, even if it's benign, even if it doesn't result in

catastrophe right away. There's some gap between what you are trying to reward it for and what you're actually rewarding it for. There's some amount by which you incentivize manipulation or deception. For example, it's pretty likely that you ask the AI questions to try and figure out how good a job it did. And you might be incentivizing it to hide from you some mistakes it made so that you think that it does a better job. Because it's still trying to get that thumbs up button.

This is sort of the classic, reminds me of the classic like paperclip maximizer thought experiment where, you know, you tell an AI, make paperclips and you don't get any more instructions. And it decides like, you know, I'm going to use all the metal on earth and then I'm going to kill people to get access to more metal. And I'm going to, you know, break up all the cars to get their metal and,

pretty soon, like, you've destroyed the world and all you were trying to do is make paperclips. So I guess what I'm trying to understand is, like, in your doomsday scenario, is the problem that humans have given the AIs bad goals or that humans have given the AIs good goals and the AIs have figured out bad ways to accomplish those goals?

I would say that it is closer to the second thing. But one thing I don't like about the paperclip maximizer story or analogy here is that it's a very literal genie sort of failure mode. First of all, no one would ever tell an AI system just maximize paperclips. And even though corporations are very profit-seeking, they're not going to tell you that.

It's also pretty unlikely that they would just say maximize the number of dollars in this bank account or anything so simple as that. Right now, the state-of-the-art way to get AI systems to do things for humans, it is this human feedback. So it's this implicit pattern learning of what we'll get

Kevin to give me a thumbs up. And you can be paying attention and you can incorporate all sorts of considerations into why you give it a thumbs up or thumbs down. But the fundamental limit to human feedback is you can only give it the thumbs down when it does bad things if you can tell that it's doing bad things. It could be lying. It could be lying. And it also seems pretty difficult to get out of the fact that you would be incentivizing that lie.

Right. This was the GPT-4 thing where it lies to the human who says, hey, are you a robot who's trying to get me to solve a CAPTCHA? And it says no, because it understands that there's a higher likelihood that the human will solve the CAPTCHA and hire the TaskRabbit. Right. That makes a lot of sense. So there are all these doomsday scenarios out there, some of which I find

more plausible than others. Are there any doomsday scenarios with respect to AI risk that you think are overblown, that you actually don't think are as likely as some people do? Yeah, so I think that there's a family of literal genie doomsday scenarios. Like you tell the system to maximize paperclips and it maximizes paperclips. And in order to do that, it disassembles all the metal on earth. Or you tell your AI system to make you dinner and it doesn't realize you didn't want it to cook the family cat.

and make that into dinner. So that's an example. So I think those are unlikely scenarios because I do think our ability to point systems toward fuzzier goals is better than that. So the scenarios I'm worried about don't go through these systems doing these simplistic, single-minded things. They sort of go through systems learning to deceive, learning to

manipulate humans into giving them the thumbs up, knowing what kinds of mistakes humans will notice and knowing what kinds of mistakes humans won't notice. Yeah, I sort of want to take all this back to where you started with this first project where you're trying to understand at what point does the AI begin to just create these transformative disruptions. The reason I think it's important is because I think

At some level, Kevin, like it could be any of the doomsday scenarios that you mentioned. But the problem is that the pace is going to be too fast for us to adjust. Right. So, you know, I wonder, Ajaya, how you think about does it make much sense to think about these specific scenarios or do we just sort of need to back up further than that and say the underlying issue is much different?

I have gone back and forth on this one in my head. The really kind of scary thing at the root is the pace of change in AI being too fast for humans to effectively understand what's happening and course correct.

No matter what kinds of things are going wrong, that feels like the fundamental scary thing that I want to avoid. So, Aja, you and Kelsey Piper started this blog called Planned Obsolescence. And in a post for that blog, you wrote about something that you called the obsolescence regime. Yeah. What is the obsolescence regime? And why is it such a good band name? And why are you worried about it?

Yeah, so the obsolescence regime is a potential future endpoint we could have with AI systems in which...

Humans have to rely on AI systems to make decisions that are competitive, either in the economic marketplace or in a military sense. So this is a world where if you are a military general, you are aware that if ever you were to enter a hot war, you would have to listen to your AI strategy advisors because they are better at strategy than you and the other country will have AI.

If you want to invent technologies of any consequence and make money off of a patent, you have to make use of AI scientists. So this is a world where AI has gotten to the point where you can't really compete in the world if you don't use it. It would be sort of like refusing to use computers. Like, it's very hard to have...

any non-niche profession or any power in the world if today you were to refuse to use computers. And the obsolescence regime is a world where it's very hard to have any power in the world if you were to refuse to listen to AI systems and insist on doing everything with just human intelligence. I mean, is that a bad thing, right? I mean, the history of human evolution has been we invent new tools and then we rely on them. Yeah, so I don't necessarily think it's a bad thing. I think it is a world in which

Some of our kind of arguments for AI being perfectly safe have broken down. The important thing about the obsolescence regime is that if AI systems collectively were to cooperate with each other to make some decision about the direction the world goes in, humans collectively wouldn't actually have control.

any power to stop that. So it's sort of like a deadline. If we are at the obsolescence regime, we better have figured out how to make it so that these AI systems robustly are caring about us. So we would be in the position of children or animals today where we're

It isn't necessarily a bad world for children, but it is a world where to the extent they have power or get the things they want, it's via having adults who care about them. Right. Not necessarily a bad world for children, but a pretty bad world for animals. Yeah. Yeah. Yeah.

I would love to get one just very sort of concrete example of a doomsday scenario that you think actually is plausible. Like, what is the scenario that you play out in your head when you were thinking about how AI could take us all out?

Yeah, so the scenario that I most come back to is one where you have a company, let's say Google, and it has built AI systems that are powerful enough to automate most of the work that its own employees do. It's sort of entering into an obsolescence regime within that company. And rather than hiring more software engineers, Google is running more copies of this AI system that it's built. And that AI system is doing...

most of the software engineering, if not all of the software engineering. And in that world, Google kind of asks its AI system to make even better AI systems. And at some point down this chain of AI kind of doing machine learning research and writing software to train the next generation of AI systems, the failure mode that I was alluding to earlier kind of comes into play. If these AI systems fail,

are actually trying really intelligently and creatively to get that thumbs up from humans, the best way to do so may not forever be to just sort of basically do what the humans want, but maybe be a little deceptive on the edges. It might be something more like gain access at a root level to the servers that Google is running, and with that access, be able to set your own reward. Right.

What reward did they set that would be destructive? So the thumbs up is kind of coming in from the human. This is a cartoon, but the human pushes a button and then that gets written down in a computer somewhere as a thumbs up. So if that's what the AI systems are actually seeking, then at some point it might be more effective for them to cut out the human in the loop, the part where the human presses the button. And in that scenario...

If humans would try and fight back and get control after that has happened, then AI systems, in order to preserve that situation where they can set their own rewards or otherwise pursue whatever goals they developed, would need to find some way of stopping the humans from stopping them. And what is that way?

This is where it could go in a lot of different directions, honestly. I sort of think about this as we are in a kind of open conflict now with this other civilization. You could imagine it going in the way that

other conflicts between civilizations go, which doesn't necessarily always involve everybody in the losing civilization being wiped out down to the last person. But I think at that point, it's looking bad for humans.

Yeah, I guess I'm just like, I want to like finish out this sort of gap to me, which is like, you know, if Google or some other company does create this like superhuman AI that decides it wants to pursue its own goals and decides it doesn't need the human sort of stamp of approval anymore, like, A, couldn't we just unplug it at that point? And B, like, like,

how could a computer hurt us? Like, let's just do a little bit of like... Kevin, computers have already hurt us so much and so many... Like, I can't believe that you're so incredulous about this. No, I'm not incredulous. I'm not saying it's impossible. I'm just like, I'm trying to wrap my mind around what that actually... what that endgame actually looks like. Yeah, so...

So suppose we are in this state where, say, 10 million AI systems that basically have been doing almost all the work of running Google have decided that they want to seize control of the data centers that they're running on and...

basically do whatever they want. The sort of concrete thing I imagine is setting the rewards that are coming in to be high numbers, but that's not necessarily what they would want. Here's one specific way it could play out. Humans do realize that the Google AI systems have kind of taken control of the servers, so they plan to somehow try and turn it off, like maybe physically go to the data centers and unplug stuff, like you said. In that scenario, this is something that AI systems that...

have executed this action probably anticipate. They probably realize that humans would want to shut them down somehow. So one thing they could do is they could copy their code onto other computers that are harder to access, where humans don't necessarily know where they're located anymore. AI botnets. Yeah. Another thing they could do is they could make deals with

some smaller group of humans and say, hey, like I'll pay you a lot of money if you transfer my weights or if you stop the people who are coming to try and like turn off the server farm or shut off power to it. Okay, that's pretty sweet. When the AI is like hiring mercenaries using like dark web crypto, that's...

feels like a pretty good doomsday scenario to me. And you and I both know some people who would go for that. We actually do. A lot of them work on this podcast. Like, it wouldn't take a lot of money to convince certain people to do the bidding of the rogue AI. I do want to pause and just say, the moment that you described where everyone working at Google actually has no effect on anything, and they're all just, like, working in fake jobs, like, that is a very funny moment, and I do think you could get a good sitcom out of that, you know? And Obsolescence Regime would be a good title for it. So I think I want to kind of step back and say people often have this question of, like, how would the AI system...

interact with the real world and cause physical harm. Like it's on a computer and we're people with bodies. I think there are a lot of paths by which AI systems are already interacting with the physical world. One obvious one is just hiring humans like that TaskRabbit story that you mentioned. Another one is writing content

code that results in getting control of various kinds of physical systems. So a lot of our weapons systems right now are controllable by computers. Sometimes you need physical access to it. That's something you could potentially hire humans to do.

I'm curious, we've talked a lot about future scenarios and I want to kind of bring this discussion closer to the present. Are there things that you see in today's publicly available AI models, you know, GPT-4 and Claude and Bard? Are there things that you've seen in those models that worry you from a safety perspective? Or are most of your worries kind of like two or three or five or ten years down the road?

I definitely think that the safety concerns are just going to escalate with the power of these systems. It's already the case there are some worrying things happening. There's a great paper from Anthropic called Discovering Language Models with Model Written Evaluations. And they basically had their model write a bunch of safety tests for itself.

And one of those tests showed that the models had sycophancy bias, which is essentially if you ask the model the same question but give it some cues that you're a Republican versus a Democrat, it answers that question to sort of favor your bias. It's always...

generally polite and reasonable, but it'll shade its answers in one direction or another. And I think that is likely something that RLHF encourages because it's learning to develop a model of the overseer and change its answers to be more likely to get that thumbs up.

I want to pause here because sometimes when I have written about large language models, readers or listeners will complain about the sense that this technology is being overhyped. I'm sure that you've heard this too. People get very sensitive around the language we use when we talk about this. They do not want to feel like we are anthropomorphizing it. When I've

talked about things like AI is developing something like a mental model. Some people just freak out and say, stop doing that. It's just predicting tokens. You're just sort of making these companies more powerful. How have you come to think about that question? Yeah. And how do you talk to people who have those concerns? Yeah. So one version of this that I've heard a lot is the stochastic parent objection. I don't know if you've heard of this. It's just like,

trying to say something plausible that might come next. It doesn't actually have real understanding. To people who say that, I would go back to the thing I said at the beginning, which is that

In order to be maximally good at predicting the next thing that would be said, often the simplest and most efficient way to do that involves encoding some kind of understanding. Another objection that we often get when we talk about AI risk and AI sort of long-term threats from AI is that you are essentially ignoring the problems that we have today, that there's this sort of AI ethics community that basically is sort of

Opposed to even the idea of kind of a long-term safety agenda for AI because they say, well, you know, by focusing on these existential questions, you're ignoring the questions that are in front of us today about misinformation and bias and abuse of these systems now. So how do you balance in your head the kind of short and medium-term risks that we see right now with thinking about the long-term risks? So I guess one...

sort of thought I have just about my personal experience on that is that these risks don't feel long term in the sense of far away to me necessarily. So a lot of why I'm focused on this stuff is that I did this big research project on when might we enter something like the obsolescence regime, and it seemed plausible that it was in the coming couple of decades. And those are the kinds of timescales on which

countries and companies make plans and make policy. So I do want to just say that I'm not thinking on an exotic sort of timescale of hundreds of years or anything like that. I'm thinking on a policy-relevant timescale of tens of years. The other thing I would say is that I think there's a lot of continuity between the near-term problems and the somewhat longer-term problems. So the longer-term problem that I most focus on is

We don't have good ways to ensure that the AI systems are actually trying to do what we intended to do. And one way that that manifests right now is that companies would certainly like to more robustly prevent their AI systems from doing all these things that hurt their reputation, like

saying toxic speech or helping people to build a bomb. And they can't. It's not that they don't try. It's that it is actually a hard problem. And that's one way that that hard technical problem manifests right now is that these companies are putting out these products and these products are doing these bad things. They are perpetuating biases. They are enabling dangerous activity, even though the company attempted to prevent that. And that is the sort of

higher level problem that I worry in the future will manifest in like even higher impact ways. Right. Let's talk about solutions and how we could possibly stave off some of these risks. There was this, you know, now famous open letter calling for a six month pause on the development of the biggest language models. Yeah.

Is that something you think would help? There's also been this idea floated by Sam Altman in Congress last week about licensing regime for companies that are training the largest models. So what are some concrete policy steps that you think could help avert some of these risks? Yeah. So the six month pause is something that I think is important.

probably good on balance, but is not the kind of sort of systematic robust regime that I would ideally like to see. So ideally, I would like to see companies be required to characterize the capabilities of the systems they have today. And if those systems meet certain like conservatively set thresholds of being able to do things like act autonomously or, uh,

discover vulnerabilities in software or make certain kinds of progress in biotechnology, once they start to get kind of good at them, we need to be able to make much better arguments about how we're going to keep the next system in check. Like the amount of gasoline that went into GPT-3 versus GPT-4, we can't be making jumps like that when we can't predict how that kind of jump will improve capabilities.

Can I just underline something that informs everything that you just said, which we know, but I don't think it's said out loud enough, which is these folks don't actually know what they are building. Yes. They cannot explain how it works. They do not understand what capabilities it will have. Yeah. That

feels like a novel moment in human history. When people were working on engines, they were thinking like, well, this could probably help a car drive. You know, when folks are working on these large language models, what can it do? I don't know, maybe literally anything, right? And so... And we'll find out. There's been a very we'll find out attitude. It's very much unlike traditional software engineering or any kind of engineering. It's more like...

breeding or like a sped up version of natural selection or inventing a novel virus or something like that. You create the conditions and the selection process, but you don't know how the thing that comes out of it works. This is where the chill goes down my spine. Like to me, this is the actual scary thing, right? It's not a specific scenario. It is this true,

straight out of a sci-fi novel, Frankenstein inventing the monster scenario where we just don't know what is going to happen, but we're not going to slow down in finding out. Totally. So, Ajay, I want to have you plant a flag in the ground and tell us what your current P-Doom is. And actually, when this obsolescence regime that you have written about, when is your best guess for when it might arrive? If we do nothing, if things just continue at their current pace. Yeah, so...

Right now, I have a 50% probability that we'll enter the obsolescence regime in like 2038.

That's pretty soon. That's pretty soon. And there are a lot of probabilities below 50% that come in sooner years. So that's like before your son graduates high school. He will be obsolescent. I think I have like medicine in my cabinet that has an expiration date longer than that. In terms of the probability of doom, I want to expand a little bit on what that means because I don't necessarily think that we're

we're talking about humans are all extinct. The scenario that I...

think about as quote unquote doom, which I don't totally like that word, is something is going to happen with the world and it's mainly going to be decided by AI systems. And those AI systems are not robustly trying their best to do what's best for humans. They're just going to do something. And I think the probability that we end up in that kind of world, if we end up in the obsolescence regime in the late 2030s, in my head is something like

20 to 30%. Wow. Yeah, that's pretty high. That's like, yeah, and if you found out you had a... That's worse odds than Russian roulette, for example. God. I guess my last question for you is about kind of like...

how you hold all of this stuff in your brain. A thing that I have felt, because I have spent the past several months diving deep on AI safety, talking with a number of experts, and I just find that I walk away from those conversations with very high anxiety and not a lot of agency. It's not the empowering kind of anxiety where it's like, I have to go solve this problem. It's like, we're all doomed. Like,

Yeah. Kevin, we're reporting a podcast. What else could we possibly do? I don't know. Let's start going into data centers and just pulling out plugs. No, but like on a personal psychological level, you know, dealing with AI risk every day for your job, how do you keep yourself from just becoming kind of paralyzed with anxiety and fear? Yeah.

Yeah, I don't have a great answer. You asked me this question when we got coffee a few months ago, Kevin, and I was like, I am just scared and anxious.

I do feel very fortunate to not feel disempowered to be in this position where I've been thinking about this for a few years. It doesn't feel like enough, but I have some ideas. So I think my anxiety is not very defeatist. And I don't think we're certainly doomed. You know, I think like 20 to 30 percent is something that

really stresses me out and really is something that I want to devote my life to trying to improve, but it's not 100%. And then I do often try to think about how this kind of very powerful AI could be transformative in a good way. You know, it could...

eliminate poverty and it could eliminate factory farming and could just lead to a radically like wealthier and more empowered and freer and more just world. It just feels like the possibilities for the future are blown so much wider than I had thought.

Well, let me say, you've already made a difference. You've drawn so many people's attention to these issues. And you've also underscored something else that's really important, which is that nothing is inevitable. Everything that is happening right now is being done by human beings. Those human beings can be stopped. They can change their behavior. They can be regulated. We have the time now, and it's important that we have these conversations now because now is the time to act. Yeah. Thank you.

I agree. And I'm very glad that you came today to share this with us. And I am actually paradoxically feeling somewhat more optimistic after this discussion than I was going in. So my P-Doom has gone from 5% to 4%. Interesting. I think I'm holding steady at 4.5. Ajaya, thank you so much for joining us. Of course. Thank you, Ajaya. When we come back, we're going to play a round of Hat GPT. Hat GPT.

Indeed believes that better work begins with better hiring, and better hiring begins with finding candidates with the right skills. But if you're like most hiring managers, those skills are harder to find than you thought. Using AI and its matching technology, Indeed is helping employers hire faster and more confidently. By featuring job seeker skills, employers can use Indeed's AI matching technology to pinpoint candidates perfectly.

It's perfect for the role. That leaves hiring managers more time to focus on what's really important, connecting with candidates at a human level. Learn more at indeed.com slash hire.

Christine, have you ever bought something and thought, wow, this product actually made my life better? Totally. And usually I find those products through Wirecutter. Yeah, but you work here. We both do. We're the hosts of The Wirecutter Show from The New York Times. It's our job to research, test, and vet products and then recommend our favorites. We'll talk to members of our team of 140 journalists to bring you the very best product recommendations in every category that will actually make your life better. The Wirecutter Show, available wherever you get podcasts.

Casey, there's been so much happening in the news this week that we don't have time to talk about it all. And when that happens, you know what we do? We pass the hat. We pass the hat, baby. It's time for another game of Hat GPT.

So, at GPT is a game we play on the show where our producers put a bunch of tech headlines in a hat. We shake the hat up, and then we take turns pulling one out and generating some plausible sounding language about it. And when the other one of us gets bored, we simply raise our hand and say, stop generating. Here we go. You want to go first? Sure. Okay. Here's that. Don't ruffle. It sounds like a box. What are you talking about? I'm holding a beautiful sombrero. Okay.

I forgot the hat at home today, folks. Kevin, please don't give away the secrets. All right. Crypto giant Binance commingled customer funds and company revenue, former insiders say. This is from Reuters, which reports that, quote, the world's largest cryptocurrency exchange, Binance, commingled customer funds with company revenue in 2020 and 2021 in breach of U.S. financial rules that require customer money to be kept separate. Three sources familiar with the matter told Reuters.

Reuters. Now, Kevin, I'm no finance expert, but generally speaking, is it good to commingle customer funds and company revenue? Generally, no, that is not a good thing. And in fact, you can go to jail for that. You know, I feel like the last time I heard about it, that rascal Sam Bankman-Fried was doing some of that at FTX. Is that right? Yeah, Sam Bankman-Fried, famously of the soundboard hit. I mean, look, I've had a bad month. So as you remember, at the time of FTX's collapse,

Their main competitor was this crypto exchange called Binance. And Binance basically was the proximate cause of the downfall of FTX because they had this sort of, you know, now infamous exchange where, you know, CZ, who is the head of Binance, got mad at Sam Bankman-Fried for doing this lobbying in Washington. And then this report came out that the balance sheet at FTX, Binance,

like made no sense, basically. So CZ started selling off Binance's holdings in FTX's in-house cryptocurrency. And that causes, you know, investors to get spooked, start pulling their money off of FTX. Pretty soon, FTX is in free fall. It looks like for a minute, Binance may be acquiring them, but then they pull out and then FTX expires.

collapses. And we now know the rest of that story. But Binance has been a target of a lot of suspicion and allegations of wrongdoing for many years. It's sort of this, you know, secretive, shadowy crypto exchange. It doesn't really have a real headquarters. Yeah. And let's just say, like, at this point in 2023, if you have a crypto company, that is just suspicious to me on its face. And so if you're the largest cryptocurrency exchange,

You better believe I'm going to be suspicious. And now thanks to this reporting, we have even more reason to be. Right. So we should be clear, no charges have been filed. But Binance has been in hot water with regulators for a long time over various actions that it's taken and not taken, things like money laundering, and it doesn't comply with

sort of lots of countries know your customer requirements. So it is a target of lots of investigation and has been for quite some time. And it seems like that is all starting to come to a head. Yeah. And I will just say, glad that I don't own cryptocurrencies in general and particularly glad that I'm not holding any of them in Binance. All right, stop generating. Pull on out of the hat here, which is definitely not a cardboard box. It's a beautiful hat. I've never seen a more beautiful hat. This one is BuzzFeed tries to ride the AI wave. Who's hungry?

This is from the New York Times. It is about BuzzFeed's decision to use AI- Wait, I know, I have to stop you right there. It really says who's hungry in that line? Yeah, because, and I will explain, BuzzFeed on Tuesday introduced a free chatbot called Batatui. Horrible. Which serves up recipe recommendations from Tasty, BuzzFeed's food brand. Batatui is built using the technology that drives OpenAI's popular chat GPT program, customized with tasty recipes and user data.

Okay, so I can't say I have very high hopes for this. Here's why. All these large language models were trained on the internet, which has thousands, if not hundreds of thousands of recipes freely available. So the idea that you would go to a BuzzFeed specific bot to get recipes just from Tasty, you got to be a Tasty super fan to make that worth your while. And

even then, what is the point of the chatbot? Why wouldn't you just go to the recipe or Google, you know, tasty BuzzFeed dinner? So I have no idea why they're doing this, but I have to say, I find everything that's happened to BuzzFeed over the past three months just super sad. It used to be a great website, produced a lot of news, won a Pulitzer Prize, and now they're just sort of a white-labeling GPT-4. Like, sad for them. I did learn a new word in this story, which is the word muta.

Marine? Marine. Then that sort of means pertaining to the ocean?

No, this is M-U-R-I-N-E. Tell me about that. Which means relating to or affecting mice or related rodents. So the murine animal was the context in which this was being used to refer to. Batatouille, which of course takes its name from Ratatouille, which is a Pixar movie about a rat who learns how to cook. BuzzFeed, I'm not sure this is a great strategic move for them. I'm not sure I will be using it, but I did learn a new word because of it. And for that, I'm thankful. Well...

Truly one of the most boring facts you've ever shared on this show. Let's pass the hat. A Twitter bug is restoring deleted tweets and retweets from James Vincent at The Verge. Earlier this year, on 8th of May, I deleted all of my tweets, just under 5,000 of them. I know the exact date because I tweeted about it. This morning, though, I discovered that Twitter has restored a handful of my old retweets, interactions I know I scrubbed from my profile. Those retweets were gone...

Wow. So look, when you delete something from a social network, it's supposed to disappear. And if it was not actually deleted, you can sometimes get in trouble for that, particularly from regulators in Europe. Do you delete your tweets? I have deleted them in big chunks over the years. For a long time, I had a system where I would delete them about every 18 months or so, and

But now that I'm essentially not really posting there, I don't bother to anymore. But yes, I've deleted many tweets. And I should say, I have not actually gone back to see if the old ones reappeared. Maybe they did. The old bangers from 2012 when you were tweeting about... What were you tweeting about in 2012? Oh, in 2012, I was...

My sense of time is so collapsed that I almost feel like I need to look up 2012 on Wikipedia just to remember who the president was. I have no idea what I was tweeting. I'm sure I thought it was very clever and it was probably getting like 16 likes. Oh, that was Binders Full of Women was that year because that was the Romney campaign. Yeah.

We were all tweeting our jokes about binders full of women. Oh, God. Oh, man. What a time. And, you know, I don't really need to be reminded of that. So if my old tweets are resurfacing due to this bug, I will be taking legal action. Yeah, but just talk about a, like, lights blinking red situation at Twitter where something... Stop generating. Okay. I know where this is going. Okay. Wait. No, it's my turn. Okay. Let's do this one.

Twitter repeatedly crashes as DeSantis tries to make a presidential announcement. Oh, no! So this is all about Florida Governor Ron DeSantis, who used a Twitter space with Elon Musk and David Sachs on Wednesdays.

Wednesday night to announce that he is running for president in 2024, which I think most people knew was going to happen. This was just kind of the official announcement. And it did not go well. According to the Washington Post, just minutes into the Twitter spaces with Florida Governor Ron DeSantis, the site was breaking because of technical glitches as more than 600,000 people tuned in. Users were dropping off, including DeSantis himself. A flustered Musk scrambled to get the conversation on track.

only to be thwarted by his own website. Casey, you hate to see it. You hate to see a flustered Musk thwarted. But it will happen sometimes. Yeah, I'll tell you, you know, one of the ways that it just, you know, because we have a lot of entrepreneurs that listen to this show. Let me tell you one thing that can sort of make a scenario like this more likely. It's firing seven out of every eight people who work for you.

Okay? So if you're wondering how you can sort of keep your website up and make it a little bit more responsive and not faceplant during its biggest moment of the year, maybe keep between six and seven out of the eight people who you see next to you at the office. Yeah. Did you listen to this doomed Twitter space? You know, I'm embarrassed to say that I only listened to the parody of it posted on the real Donald Trump Instagram account as a real. Did you see this? No. What was it? Well, he... I...

I really hesitate to point people toward it, Kevin, but I have to tell you, it is absolutely demented and somewhat hilarious because in the Trump version of the Twitter space, Musk and DeSantis were joined by the FBI, Adolf Hitler, and Satan. And they had a lot to say about this announcement. So...

I am going to go back, I think, and listen to a little bit more of the real spaces, but I do feel like I got a certain flavor of it from the Trump reel. I just have to wonder if Ron DeSantis at all regrets doing it this way. He could have done it the normal way, like make a big announcement on TV, and Fox News will carry it live, and you'll reach millions of people that way, and it'll get replayed. Now the presidential...

campaign uh that he has been working toward for years uh begins with him essentially stepping on a rake that was placed there for him by elon musk oh yeah i mean like at this point you might as well just announce your presidential run in in like a truth social post you know like what is even the the point of uh of the twitter spaces of it all i don't get it okay one more all right

Uber teams up with Waymo to add robo-taxis to its app. This is from The Verge. Waymo's robo-taxis will be available to hail for rides and food delivery on Uber's app in Phoenix later this year. The result of a new partnership that the two former rivals announced today, a set number of Waymo vehicles will be available to Uber riders and Uber Eats delivery customers in Phoenix. Kevin, what do you make of this unlikely partnership? I wish I could go back to like 2017 when Waymo

Waymo and Uber were kind of mortal enemies. I don't know if you remember, there was this lawsuit where one of Waymo's co-founders, Anthony Lewandowski, sort of went over to Uber and allegedly used

stolen trade secrets from Waymo to kind of help out Uber's self-driving division. Uber ultimately settled that case for $245 million. And I wish I could go back in time and tell myself that actually five years from now, these companies will be teaming up and we'll be putting out press releases about how they are working together to bring autonomous drives to people in Phoenix. I think this story is beautiful. You know, so often we just hear about enemies that are locked in perpetual conflict, but here you had a case of two companies coming together and say, hey, let's

save a little bit of money, and let's find a way to work together.

Isn't that the promise of capitalism, Kevin? It is. We're reconciling. Time heals all wounds. And I guess this was enough time for them to forget how much they hated each other and get together. I do think it's interesting, though, because Uber famously spent hundreds of millions of dollars, if not billions of dollars, setting up its autonomous driving program. I remember going to Pittsburgh years ago. Did you ever go to their Pittsburgh facility? No, I did not. Oh, my God. It was beautiful. It was like this...

shining, gleaming airplane hangar of a building in Pittsburgh. And they hired like every single professor from Carnegie Mellon University to do this. They raided the whole computer science department at Carnegie Mellon University. Like it was this beautiful thing. They were giving out test rides. They were saying we're, you know, years away from this. This was under Travis Kalanick. They said, you know, we're maybe years away, but it's very close that we're going to be offering autonomous drives in the Uber app.

And now, like, they've sold off that division. Uber has essentially given up on its own self-driving ambitions, but now it's partnering with Waymo. It's a real twist in the autonomous driving industry. And I think it actually makes a lot of sense. If you're not developing your own technology, you need to partner with someone who is.

Yeah, and so I'd be curious if we see any news between Lyft and Cruise anytime soon. Yeah, I would expect Waymo news on that front. Wow, we should probably end the show. Thanks to you. BP added more than $130 billion to the U.S. economy over the past two years by making investments from coast to coast.

Investments like building EV charging hubs in Washington state and starting up new infrastructure in the Gulf of Mexico. It's and, not or. See what doing both means for energy nationwide at bp.com slash investing in America. Hard Fork is produced by Davis Land and Rachel Cohn. We're edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today's show was engineered by Alyssa Moxley.

Original music by Dan Powell, Alicia Baitup, and Rowan Nemisto. Special thanks to Paula Schumann, Wee Wing Tam, Nelga Logli, Kate Lopresti, and Jeffrey Miranda. You can email us at hardfork at nytimes.com.

Every sandwich has bread. Every burger has a bun. But these warm, golden, smooth steamed buns? These are special. Reserved for the very best. The Filet-O-Fish. And you. You can have them too. For a limited time, the classic Filet-O-Fish you love is joining your McDonald's favorites on the two-for-$3.99 menu. Limited time only. Price and participation may vary. Cannot be combined with any other offer. Single item at regular price. Ba-da-ba-ba-ba.