
A.I. Vibe Check With Ezra Klein + Kevin Tries Phone Positivity

Publish Date: 2023/4/7

Hard Fork


This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks. Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us.ai.

One interesting thing about the culture of AI is that it's just a casual icebreaker. At a party or something, people will come up to you and say, "What are your timelines?"

By which they mean, like, how long do you think we have until AI destroys the world? Which in most parts of the world would be a very strange way to open a conversation with a stranger you just met. But in San Francisco these days, it's pretty normal. It's how you get to a first date. Like, are our timelines similar or different? Oh, you're more of a 20-year person. I'm more of a three-year person. This isn't going to work.

I'm Kevin Roose. I'm a tech columnist at the New York Times. I'm Casey Newton from Platformer. This week, we do an AI vibe check with New York Times columnist Ezra Klein. And why Kevin is breaking his phone out of phone jail. You know, Kevin...

I love a crossover episode. Like, do you remember when the Flintstones met the Jetsons? I do. You actually probably do. Our listeners don't because they were born in 2009 and they've been assigned this podcast in their freshman English class. But ask your parents or grandparents about it because it used to be that, you know, characters from different sort of cinematic universes would come together in a kind of killer crossover. And I'm delighted to tell you we have one of those for you today here on Hard Fork. We do. And-

let's say who it is. It's Ezra Klein. It's Ezra Klein. Ezra Klein is my colleague at The New York Times. He's an opinion columnist and has a podcast called The Ezra Klein Show. And he writes about a lot of things that we also spend a lot of time talking about in this podcast. Yeah. And before he was your colleague at The New York Times, he was my colleague at Vox, and I just relish a chance to get to know him even a tiny bit and hear him talk about any subject under the sun. But like you say, in recent weeks and months, uh,

our areas of coverage have been converging because he is every bit as interested in AI as we are. Yeah, and I think it's really a good time to talk to him because I think right now the terms of the AI debate are really being set. You can feel that there's kind of this high...

hunger for opinion and analysis and sort of tell me what to think about AI. Should I be terrified of it? Should I be skeptical of it? Should I be in awe of it and excited about it? I think there's just so many swirling questions about this area of technology. And Ezra has really been diving in in a way that I have found very interesting and very thought-provoking, and we don't agree on everything, but I always love hearing from him about this subject. So I thought we should just have him on the show and just

ask him what he makes of this moment in AI, where you have a lot of companies that are racing toward these chatbots and these large language models. You also have people sort of sounding the alarm about the existential risks of all these technologies. And what we really need in this moment, I think, are people who have a clear vision of where this all is going and are sort of observers of not just the AI technology, but the

people and the culture that this AI technology is coming out of. Yeah. And one of the things I love about Ezra, and you'll hear in our conversation, is that he is not one of these pundits who just leans back in his armchair and rips. Like, he is out there meeting with people who are working on this stuff, and he's been doing that for years. So his views are quite well-formed, even though, as he tells us, there are places where he is still trying to make up his mind.

Now, look, we should say we've given you two episodes in a row now of very serious conversation. There's probably a little bit less riffing maybe than we usually like to do. But I think you and I are on the same page, Kevin. This is big stuff, and it kind of demands a bit more serious of a conversation than we might have another week. Totally. So this is not going to be a typical episode. We're not changing the format of the show, even though we had a big interview last week and another big interview this week.

We will return to regularly scheduled programming, including lots of jokes and riffs. But this week, let's talk to Ezra Klein about AI. And then after that, maybe just a little bit of shenanigans. Ezra Klein, welcome to Hard Fork. I'm thrilled to be here. Ezra, you write and podcast about many things: economics, housing policy, animal rights, etc. But recently, you've been writing and thinking a lot about AI. And as you know, we also write and think and podcast about AI. I've heard that, yes. First of all...

Please knock it off. You will be hearing from our lawyers. But second of all, we wanted to talk with you about AI because you've been doing a lot of writing and thinking about the people who are creating AI and the values that are animating them and the environment that they're working in, which as you've written about can be shockingly weird.

And I think that cultural discussion really matters because this is such an ideological technology with such important implications. And so I wanted to propose that today we do an AI vibe check.

Let's talk about AI, but instead of talking about the specifics of the technology or how it works or what companies are going to win and lose, let's talk about kind of the world that these tools are coming out of and the larger ideas swirling around them. How does that sound? That's good, although now I feel like all the time I spent reading the Blueprint for an AI Bill of Rights is a little wasted. Well, we can talk about that too. But I want to start back

with your own position on AI and how it's evolved recently. So you talk on your show about how your thinking on AI has shifted in the last few months, in part because of some conversations you've been having, some people you've been meeting, and also thinking about the lives of your children and how they will be different than what you might have anticipated just a few years ago. So can you tell us about the moment where you got AI-pilled, where you became convinced that this was something that you needed to write and think and podcast about?

So I think I've been intellectually AI-pilled for a while. If you go back to my podcast at Vox, I was having AI guests on back then. I had Sam Altman here at The Times in 2021. It was a very low-downloaded episode. Nobody cared at that point that I had Sam Altman on. I had Brian Christian, author of The Alignment Problem.

And I've been interested in AI both because it's a technology that you can clearly see is coming, right? However you define AI, and I use that term very loosely. I think people get very wrapped around the axle of what is intelligence, and I am personally more interested in what feels like intelligence to us. I think a lot of the societal disruption is going to come from our experience of a world full of things that at least –

feel to us like they have personalities and intelligences and judgment. And we're already there on a bunch of levels. But I used to read a lot about AI. I've been following the rationalists and the AI existential risk people for a long time. But I could not emotionally access the question that well.

I could read it and think, huh, that makes sense. It's probably going to be a big deal. Technologies do change societies. I'll explore this. But I was pushing myself a little bit. What I would say changed for me, though, I'll be honest, it was actually partially your conversation with Sydney. And the reason it was your conversation with Sydney was that they've done a very effective job lobotomizing most of the AI systems I have used. And so their personalities are very weak. Right. And they failed

on Sydney. And so the personality was very strong. And it was something about that and thinking about that and weirdly the movie Her and the idea that my four-year-old and my one-year-old are going to grow up in a world thickened by inorganic companions, intelligences, decision makers, et cetera, that sort of crossed me over the Rubicon from

"I believe this is conceptually important and it should be understood and policymakers need to take it seriously" to "I am having trouble not thinking about it, because I can feel the weird shifting under my own feet and under my family's feet." Where did your mind go when you started to think about your kids growing up with AI companions, for example? Like, why did that strike you so much? Because I think that the path to that is smoother than the paths to almost all of the other

things people worry about in AI. So to get to existential risk, AI will kill us all, you actually have to buy a lot of highly speculative premises. And you might. I don't rule that out of contention at all. To get to "AI is going to take all our jobs," I think there's a lot more friction in that process than people give credit for. I think it will happen in a bunch of jobs. I mean, automation taking jobs is a longtime phenomenon in human history. I have no reason to believe it won't occur here.

But the speed with which people predict it, I think, assumes firms adapt to things and that we don't protect various occupations through licensing. That just isn't true. I mean, we know nothing if not how to slow down societal change in this country. And then you look at what these systems actually can and can't do, and they hallucinate constantly. And I don't actually understand how they're really going to solve that problem so long as what they're doing is training them on internet text. I did a podcast with Gary Marcus, an AI critic, and I believe what he said there:

These are bullshit machines on some level. And the point about bullshit in Harry Frankfurt's philosophical sense is not that it is not true. It's that it doesn't really have a relationship with the truth. It does not care if it is true. What it cares about is being convincing. And these are systems built to be convincing regardless of whether what they are saying is true. They may want to serve you. You can say they do have other goals, but truth is not one of them.

So what does that work for? It doesn't work that well for things where it is a big problem to get anything wrong, right? And you actually have to really know your stuff if you're going to read through every AI thing you generate and try to figure out where it has invented a citation. That's actually a very hard kind of proofreading, editing, etc. Totally.

You know who you don't care if they lie all the time, and in fact you might like that if they say interesting things? All your friends, your lovers, your whatever, your companions. And so one of the reasons that hit me is that I just think the distance between where we are now and AI upending our social world is actually thinner than I had given it credit for. It could happen really quick.

And things that upend our social world are incredibly important. We don't always know how to talk about them politically. It's one reason we missed what social media was going to do to, say, all politics, right?

But it really matters. And so somehow that was more intuitively accessible to me than the other things, which I find it too easy to invent counterarguments for. And this is already happening, right? This company, Replika, has these AI chatbots, and it had enabled these sort of erotic conversations. And some users were paying like a subscription fee to get to do that. And then the company said, we don't feel great about this. So they shut it off. And the users were surprised.

And they were sort of overwhelmed, saying, you know, this is really hard on us. So clearly this is already happening. Every new internet technology begins in porn. Yeah, totally. Right. I mean, this is like a longtime observation and I don't say it dismissively. Yeah.

That should be a signal to us, right? The fact that it was already potentially able to reinvent porn. Like, when that happens in anything, you should expect it to transform all of society within 20 years. Yeah. 100%. And I will say, like, the reaction that I got to the Sydney piece that I didn't see coming was that there was a cohort of people who were very angry that...

I had, quote, "caused Sydney's death" by getting Microsoft to basically give Sydney a lobotomy because they had developed emotional attachments to this thing that had existed for a week at that point. I think you're right to be putting some concern there.

You've written that part of what changed your view on AI is actually getting to know some of the people who are making it. You had a great column a few weeks ago called This Changes Everything, and you wrote, since moving to the Bay Area in 2018, I have tried to spend time regularly with the people working on AI. I don't know that I can convey just how weird that culture is. And I don't mean that dismissively. I mean it descriptively. It is a community that is living with an altered sense of time and consequence.

Can you say more about the AI community that you've encountered in the Bay Area and what most surprised you about them? Yeah, so I've spent, as I said, a lot of time over the past, I guess, five years now, just like sitting and having coffee with people working on machine learning and artificial intelligence. And one thing is that, you know, I'm focusing on more near-term things, socializing and jobs. And I mean, they really think, a lot of them, this is not true for everybody, but I am talking about people at the top of the companies, right?

They really think they are inventing the next stage in intelligence evolution, right? The god machine. And you'll have conversations about the possibility of hypergrowth, right? They will talk about, you know, very freely and openly the possibility that what they're doing will extinguish humanity or it will disempower humanity in the way that cows are now disempowered by humanity. And so you'll have this experience talking to them.

where you'll sit there and think, am I crazy or are you crazy? Mm-hmm.

And you have that, to be fair, a lot in the Bay Area. I always felt this with crypto people, too, but I was pretty sure they were crazy. Right? I'd be sitting there like, tell me a story of how this works. But like, well, that doesn't actually make any sense. And I'm like, I don't know blockchain code the way you do, but I do know the systems you're trying to disrupt, I think, better than you do, like politics and governance and financial transactions. And what you're trying to fix is not what's broken oftentimes or not the impediment to fixing what's broken. Right.

But the other thing, it is true on some level that if you could invent or create a form of agentic intelligence much more powerful and generalizable than humanity, that would be quite destabilizing and unpredictable in where it would go.

So there's that. And then the thing where they genuinely don't understand what it is they're creating. They cannot look inside their own systems. We've ended up in this neural network architecture to create systems that are developing more and more emergent behavior and more and more internal model creation. But they can't tell you why exactly. And they keep being surprised by what the systems can do.

But then it's, you know, there's the now-famous survey where AI researchers give a 10% chance that if they're able to invent some kind of high-level machine intelligence, they'll be unable to control it, and it will extinguish or fundamentally disempower humanity.

I would not do something personally that I thought had a 10% chance of killing my children and everybody's children. Right. Right. I just don't. Like, if you tell me this action has a 10% chance that my children die, I really don't do that. And so I would say to them, well, why do you? Why are you playing with this? And I would often feel that if I push that hard enough long enough, I would get a kind of answer from the AI's perspective. Right.

What do you mean? Something a little bit like, I feel a responsibility to bring this next form of intelligence into the world. Like that they were a kind of summoner standing at the gate. They're the shepherd, not the, like, they're just ushering in this technology that would have arrived anyway. It's really weird. I mean, I took this down. So there's a profile a couple years ago in The New Yorker around AI and existential risk, and it's a big piece.

And as part of it, they end up talking to Geoffrey Hinton, who's one of the fathers of neural network architecture. I mean, really one of the fathers of this era of AI. And they ask him, if you think this is as dangerous as you kind of say it is, what are you doing then? And he says, I could give you the usual arguments, but the truth is that the prospect of discovery is too sweet.

When you see something that is technically sweet, you go ahead and do it. And you argue about what to do about it only after you have had your technical success. When I say the culture is weird, that is a bit what I mean. That to the extent you believe this is a profound technology being invented by a very small number of people.

And definitionally, to be one of the people working on this, if you believe that kind of thing about it, you have to have come to the side of I'm going to work on it anyway. That makes you a bit weird. If you polled all of humanity and you got humanity to a 10 percent chance of killing everybody. Again, I'm not saying I believe that. But if you got them there, I think if you had an up or down vote on should you do it, you might get a down vote.

But by definition, you're not making it if you've decided not to do it. So you're dealing with a weird group of people. Right. I mean, there's an actual religious element to that too, right? Where, you know, for these folks, you know, we sort of sometimes jokingly say they're trying to invent God, but it's like, well, if you're trying to invent something that is more powerful than you that could reshape the future of humanity, it's not a bad name for it. The absolute best book I read this year so far is Meghan O'Gieblyn's God, Human, Animal, Machine. And it is all about this.

It is all about the intense recurrence of religious metaphor throughout the foundational AI literature. Like you go to Ray Kurzweil. I mean, she has an evangelical Christian background and went to like theology school.

And she really draws the, I don't mean draws like metaphorical connections, shows like just very literally and directly how much patterning is happening from one space to another. How much the religious impulse, which is buried very deeply in us, very deep in our culture. And in some cases, I think most deep in the people who become most secular, who have rejected religion most profoundly, often leaves you the most desperate for something to take its place.

That all matters. One of her points is that metaphors are bidirectional. You begin with a metaphor to help you understand something else, right? The computer is like a mind. And soon it begins turning itself around on you. The mind is like a computer. I am a computer. I am a stochastic parrot. I am a token generating machine. And religiously, that has happened very, very, very clearly for a lot of these people. And

Particularly when fewer people were watching, they talked about it all the time. Like this was not, you don't have to search very far in the literature. You know, The Age of Spiritual Machines was all about this. But yeah,

It's a great book. I cannot give an answer as good as telling people to read it. Yeah. On bidirectional metaphors, really quickly: I was laughing recently because, you know, if you use ChatGPT and it goes on a little bit too long, you can click a button that says stop generating. Now when my friends tell stories that go on too long, one of my friends just says stop generating. Anyway.

That's funny. I want to ask you one more question about the culture of the people you are talking to in the AI world, because I think we're talking to some of the same people. And what has always struck me as interesting about this community of very concerned and yet sort of determined AI architects is that they don't seem to live

in the way that you would expect if you actually believed what they say they believe. So the other day I was talking with someone who's pretty high up at one of the AI companies, and they were saying that they're, you know, they're planning to have kids someday. And that was lovely. And we talked about it. And, but it was like, it struck me that like, if you actually thought that there was a decent chance, even a one in 10 chance that AI would fundamentally, you know, extinguish humanity in the next, you know, couple decades, that's probably not

the choice you would make. I mean, these people are buying houses, they're setting up lives. OpenAI has a 401(k), you know, which is like not a thing that you do if you think that the economy is fundamentally going to be transformed by AI just a couple of years down the road. So what do you make of that? Is that just cognitive dissonance or is there something more going on there? Well, two things about that. So one, they also believe, and remember, it's worth saying because nobody ever quotes this part of the survey, that the AI researchers do believe there's a chance AI is amazing for humanity.

So if you believe AI is going to create economic hypergrowth, you really do want a 401(k), right? That's going to go great. Like the market's going to really, really bump. But the other thing, my background is policy reporting, so I'm in a lot of other policy conversations. And I am never a fan of the kind of reasoning that says –

If you really believed everything you are saying, you would live so differently. And because you don't live so differently, I don't really believe you believe it. So you see this reasoning all the time in climate. If you believed what you were saying, you would never take a plane flight. You know, you would never, I mean, some people don't have kids, but a lot of people who work constantly in climate do.

And one, I don't think human beings are means and ends connected in that way. We just kind of can't be. Two, whatever you might think speculatively about AI, for most of human history, the chance that your children would die before their first birthday was one out of four, roughly. And that they would die before their 15th was almost half. And people had kids anyway, constantly.

believing in the future has always been an act of hope so much more profound and an act of risk so much more profound than what we have today that even if you did believe like a 10% chance of things going really badly at some distant point in the future, it still wouldn't compare to what it meant to have kids in the year 1300, to say nothing of all the years before that. So I don't know. I'm never a big fan of that. I also think that I don't know what people really believe here. As I was saying earlier, I

For a long time in this conversation, for me, I felt like I could intellectually grok it but couldn't emotionally feel it. I think that people who work on AI, this is another part of the culture. It's a real culture of people who are extremely far on the bell curve of analytical intelligence. There's a Bertrand Russell quote, I think it is, that the mark of a civilized mind is the ability to weep over a column of numbers.

And I often feel with these communities, and it's particularly true for the so-called rationalist community, et cetera, that for them, the mark of a civilized mind is the ability to have a nervous breakdown over a philosophical thought experiment. They have a very – a capacity to live in analytical, conceptual language and feel that to be almost realer than the real world that most people don't. But at the same time, it is still speculative. And so whether or not they can live in everything they imagine –

That's a really hard jump to make for anybody. I think they make more of it than most of us do, but I don't think most of them are all the way there. I want to know what you make of the criticism, which I've been hearing a lot lately, that to discuss AI in these terms as a potential existential risk, as something that might transform everything...

only serves to hype up these companies, that we're sort of, we're empowering them, we're maybe distracting folks from nearer-term harms with this stuff. And I wonder how you think about that question. I am half on board with that argument, actually. Not because I think it hypes up the companies, but because I think there's a sort of missing middle in the AI risk and policy conversation that frustrates me. So I'd say on the one hand, you have like the AI safety world.

So that is the existential risk. These are the people who think the world could end. 10 percent, it'll kill us all. Then you have what gets called the AI ethics world. So the AI safety world is a real out of sample risk profile. What if we invent a recursively improving superintelligence that turns us all into paperclips? We've really never faced that question before. The AI ethics world is much more near term. It is what if these machine learning systems

are bad and create problems in exactly the way we are bad and create problems today, right? This is learning off of human data. Human data has all the biases of human beings. It's going to replicate those biases and going to tuck them away in a black box that makes them harder to uncover. I think that's true, by the way. I mean, I think you should take all that very seriously.

But there's a big middle to this that feels to me a little bit under-emphasized, in part because a lot of the alignment problem is human. We have alignment problems between human beings and corporations now. We have alignment problems between human beings and governments now, between governments and governments now. And I worry a lot about

the absence of trying to solve those alignment problems. So a critique I have, honestly, of frankly both the safety and the ethics communities, but more the safety one on this, is I think they are just inattentive in a crazy way to capitalism.

I've written about how important I think the business models that AI companies are allowed to have are. The kinds of models we get are going to be shaped by how you can make money on them. What just drives me up the wall is that we appear to have decided the way AI is going to work is through a competitive dynamic between Google, Microsoft, and Meta. And as such...

Because what is really working in the market right now are chatbots, we are then going to really emphasize creating AIs that are designed to fool us into thinking they are human, right? The more persuasive and frankly manipulative an AI is, the better you're going to do at rolling it out.

I don't think you should be able to make money with an AI system that manipulates human behavior at all. I mean, I think right now the law should come down that you cannot make any money by tying these systems to advertising. I do not want to see more personalized systems that are tuned to make human beings feel like they have a direct relationship with them, but actually make their money by getting me to buy Oreos or whatever it might be. So how should they make their money?

That's a great question. I am not sure. One thing I would like to see is prizes for scientific discovery. I still think the most impressive AI system we've seen was AlphaFold, which was built by DeepMind and substantially solved the problem of predicting the structure of proteins.

You can imagine a world where the U.S. government or other governments put out, you know, 15 or 20 drug discovery, mathematical, and scientific challenges that we believe to be so profound that if you could solve them, you get a billion dollars. You just get it free and clear, but we get the result in the public sphere, in the public domain.

Like, that would probably get us very powerful AI systems tuned to problems we actually have, not the problem of Microsoft really wishes people used Bing instead of Google.

I'm not saying it's not important to be focused on how the systems work, but how we work the systems, like pretending this is all a technical problem and not a problem of human beings and economic models and governments. Alignment problems are not just about AI. So I think there's this big middle space that just isn't getting as much attention as I think it deserves, but I think it's really big. It is kind of new problems.

but not truly new. They're just large-scale civilizational problems of how we don't understand how to make systems actually produce the outcomes we want them to produce. And this really emphasizes that we need to, like, figure out some governance mechanisms to do that. We'll be right back.

Welcome to the new era of PCs, supercharged by Snapdragon X Elite processors. Are you and your team overwhelmed by deadlines and deliverables? Copilot Plus PCs powered by Snapdragon will revolutionize your workflow. Experience best-in-class performance and efficiency with the new powerful NPU and two times the CPU cores, ensuring your team can not only do more, but achieve more. Enjoy groundbreaking multi-day battery life, built-in AI for next-level experiences, and enterprise chip-to-cloud security.

Give your team the power of limitless potential with Snapdragon. To learn more, visit qualcomm.com slash snapdragonhardfork. Hello, this is Yewande Komolafe from New York Times Cooking, and I'm sitting on a blanket with Melissa Clark. And we're having a picnic using recipes that feature some of our favorite summer produce. Yewande, what'd you bring? So this is a cucumber agua fresca. It's made with fresh cucumbers, ginger, and lime.

How did you get it so green? I kept the cucumber skins on and pureed the entire thing. It's really easy to put together and it's something that you can do in advance. Oh, it is so refreshing. What'd you bring, Melissa?

Well, strawberries are extra delicious this time of year, so I brought my little strawberry almond cakes. Oh, yum. I roast the strawberries before I mix them into the batter. It helps condense the berries' juices and stops them from leaking all over and getting the crumb too soft. Mmm. You get little pockets of concentrated strawberry flavor. That tastes amazing. Oh, thanks. New York Times Cooking has so many easy recipes to fit your summer plans. Find them all at NYTCooking.com. I have sticky strawberry juice all over my fingers.

You wrote in one of your recent columns that basically there are two paths toward solving pieces of the AI problem. One of them would be to dramatically slow down the development and deployment of AI technologies. And I get a sense of how you might go about that. You could do that with regulation. You could do that with sort of mutual disarmament pacts between the AI companies or something like that. But I want to ask you about the other side of that

solution, which is the faster we can adapt to AI as a society, the better off we'll be. How do you think that goes? What does that look like for a society to adapt to the presence of, say, large language models? I'll be honest that that is one of those sentences

where it hides a lot of "I don't know." I'm familiar with the type of sentence. But let me give a couple thoughts on that. So one is we need governance and input mechanisms we don't currently have. I was talking to the Collective Intelligence Project. Yes.

They're trying to build structures we know pretty well from various deliberative democracy and direct democracy experiments. So you could have a rapid and iterative way of figuring out what the public's values are coming from representative assemblies of people.

And that you could then put that into different levels of the process, maybe at the regulatory level. And that then goes into standards that you cannot release a system or train a system that doesn't abide by those standards. So that's a kind of governance we don't currently have, but we may need to create something like it. I'm very skeptical that Congress as it exists today has

I don't just want to say the capacity, but I think the self-confidence, which actually worries me quite a bit, to regulate here. I think Congress knows what it does not know here. It does not have a good internal metaphor; to the extent it has any metaphor, it is to replay the social media era and try to deal with privacy breaches and regulating the harms on the margin. So adaptation might be governance structures, input structures, and then –

slowing down. And this is a place where my own thinking has evolved. I don't think...

The simple slowdown position is going to be a strong one. You mean like the one that came out in that letter with the thousand AI researchers and technologists saying, let's put a six-month pause? Let's just put a pause. I think you need to say clearly, both because it is politically important, but also because you need to do something with the time of your pause, what you are attempting to do. So one thing that would slow the systems down is to insist on interpretability.

Like if you can't explain why your large language model does what it does, you can't release it? Right. So if you look at the Blueprint for an AI Bill of Rights that the White House released, it says things like, and I'm paraphrasing, but you deserve an explanation for a decision a machine learning algorithm has made about you. Now, in order to get that...

we would need interpretability. We don't know why machine learning algorithms make the decisions or correlations or inferences or predictions that they make. We cannot see into the box. We just get like an incomprehensible series of calculations. Now you'll hear from the companies like this is really hard and I believe it is hard.

I'm not sure it is impossible. From what I can tell, it does not get anywhere near the resources inside these companies of let's scale the model. Right? The companies are like hugely bought in on scaling the model and like a couple people are working on interpretability. And when you regulate something, it is not necessarily on the regulator to prove that it is possible to make the thing safe. It is on the producer to prove the thing they are making is safe.

And that is going to mean you need to change your product roadmap and change your allocation of resources and spend some of these billions and billions of dollars trying to figure out the way to answer the public's concerns here. And that may well slow you down, but I think that will also make a better system. I also think, by the way, even if you just believe we should have a lot of these systems out there in the world and they should get better as soon as possible, for them to be stable in society in the future, you're going to need to do this.

If you think the regulations will be bad now, imagine what happens when one of these systems comes out and causes, as happened with high-speed algorithmic trading in 2010, a gigantic global stock market crash. Right? I mean, compare what the flash crash did then to what a bunch of hedge funds running poorly trained, poorly understood AI systems that try to achieve the goal of making lots of money could do

through ways they may not recognize, right? You know, it would sure make a lot of money if all the competitors of the companies it was investing in didn't exist because they all had a gigantic cybersecurity breach, and nobody knows the system has come up with that idea. And so you actually want to be able to have control of these systems, to not have the regulatory hammer come down because you released one into the wild, it did something terrible, and now everybody's like, full stop. So, yeah.

I think we can do a lot to adapt. I think we can do a lot to improve. I think a lot of that would slow them down. But I would like to see more of a positive vision of what we're trying to achieve here rather than simply the negative vision of –

you know, stop, stop, stop, stop, stop. Not because I don't think there's a case for that, but because I just don't think that case is going to win. I agree with you. I was more sympathetic to the idea that these systems should take a pause. Like, I understand it's very unlikely that the big companies are going to unilaterally disarm without some sort of pressure. I just don't know how the government is going to develop a coherent point of view like the one that you just laid out

if it doesn't have at least six months to think about it, right? And I guess what I worry about is that if six months go by with sort of unchecked developments, and if these companies do start to train even more powerful models than the ones that they just released, then by the time the government has a point of view, it's sort of already outdated. I'm not against it, right? If you told me tomorrow that Joe Biden was so moved by the letter that somehow he convinced all of them to pause, great, like I'm not against it.

I think it is much likelier for them to come out and say, we're in an information gathering phase and we need a kind of information you can't seem to give us in order to come up with our rules.

And you need to give us that information. Think about this. If you would like to build, say, congestion pricing in New York City, the congestion pricing effort in New York City just had environmental— This is where you would charge people more for driving in New York City. The environmental assessment for congestion pricing for New York City came out in August, and it was more than 4,000 pages. Wow.

And until that 4,000-page report, which included all kinds of public community meetings and gathering of data, until that could be compiled and finished, you can't do congestion pricing. And my view is that took way too long and was way too extensive for congestion pricing. I do not think that is too much or too long or too extensive for AI. And so this is my point about the pause, that instead of saying no training of a model bigger than GPT-4,

It is to say, no training of a model bigger than GPT-4 that cannot answer for us this set of questions. Mm-hmm.

And I don't think it's impossible that Congress in its current state could come up with five good questions or the FTC could come up with them. I talk to people at the FTC. They've got lots of questions. There are a lot of things they would like to know. I think you pick some things that you would like to know here. And you say, until you can answer this for me, and answering it might require you to figure out technological capabilities you have not figured out, you can't move forward on this. That is the most banal thing the government does in this country.

You cannot do anything until you have satisfied our requirements. You need to fill out this form in triplicate. I want to play devil's advocate here because I think what we just saw with social media was a lot of people saying the same kind of criticism. We don't know how these feed ranking algorithms work. We don't know how these recommendations are being produced. We want you to show us.

And so, you know, very recently, Elon Musk open sourced the Twitter algorithm. And this was sort of seen as like a trust building exercise that people would see how Twitter, you know, makes its decisions about what to show you. And it was not greeted with a lot of fanfare. No one really seemed to care that you could now go and see the Twitter source code. So if the same thing happened with AI, if these companies were forced to explain how these large language models operate, and it turns out that the answer is really boring.

or that it's, you know, just it's making statistical predictions. There's no like hidden hand inside the model steering it in one direction or the other. If it's just very dense and technical and not all that interesting, do you think that would still be a worthwhile exercise? I do. I do on a bunch of levels. One, I'm using interpretability as one example here of the kind of thing you may want. And I think there are a lot of things you may want to put into the models before you can begin to train GPT-6 or 7 or whatever it might be. But

The thing about the Twitter algorithm is the only thing the Twitter algorithm does is decide which tweet to show you.

The thing about these models, these general purpose models, is they might do a lot of things. And many of them already are. I mean, models way less sophisticated than what's coming out from Anthropic or OpenAI or Google or Meta or whomever are already involved, in machine learning ways, in making, you know, predictive policing decisions and sentencing and deciding, you know, strategic decisions for companies. I mean, your book is partially about this, Kevin. We should be able to get an account of

a decomposed account of how it came to that conclusion. When Bing's Sydney decides to say, Kevin, I love you and you don't love your wife, let's be honest here, I want to know what happened there. What did it draw on? What lit up in the training data? I mean, you know, because I wrote a piece about this, and my view is that the system predicted that

fundamentally, sort of, you wanted a Black Mirror episode. Like, you were a journalist looking for a great story and it gave you one. That's classic blame-the-victim. Uh-huh.

But I don't know if that's true. I would like to know more. I don't know how much more we can know, but I think we can know a lot more than we currently do. And I don't think you should have these rolling out and integrating into a million other apps and onto the internet until we do. Now, I'm in a place of like learning and uncertainty here. I'm not saying that you should listen to Ezra's view on what we should know. I'm just saying that there are ways to come up with a series of views about what the public would like to be true about models. Yeah.

and the way you're going to slow them down and also improve them, make them more sustainable, make them closer to the public's vision for AI, is to insist the companies solve these problems before they release models.

It's pretty hard to run an airline in this country. It is hard to build an airplane in this country. And one reason it is hard is because airplanes, we do not let them crash. It is not that it has never happened. It is extraordinary how rarely it happens. And that is true because we have said in order to run an airline, there's going to be a lot of stuff you have to do because like this technology, it is important. It is powerful. It is good. It is dangerous.

That is a good metaphor here. How do you think these technologies, which are fundamentally, you know, things that arrange words in sequences, which is also what the three of us do. How do you think that these AI language models... You're just a stochastic parrot. You know, on bad days, yeah, I'm just doing next token prediction. How do you think these technologies could change our jobs?

I don't know yet. I'd be interested if you think this is true. One of the things I observe is that the pattern of usage for me and for people I know, when any one of these new ones comes out, is like for four days, you use it a lot, and then you don't. And I observe the people who seem to tell me they're using it a lot, and I wonder, is your work getting better? And without naming names, I don't think it is.

Now, you could say, yeah, this is this generation. Soon it's going to be really good. Maybe, maybe. The hallucination problem is going to be really difficult, though, for integrating this into newsrooms because the more sophisticated and convincing the system gets, the harder it is going to be for somebody who does not themselves know the topic they have asked the AI to write about to check what the AI has actually written. So I don't think this is going to roll out easily or smoothly.

And then there's the other thing, which is one reason I am skeptical of the predictions that this is going to upend the economy or create hypergrowth. A world where AI has put so many people out of jobs is a world where what AI has done is create a massive increase in automated productivity. And that should have been true for the internet. So if you had said, hey, look,

One way the economy and scientific discovery are constrained is simply that it is hard and slow to gather information. It is hard and slow to collaborate with people. It is geographically biased who you can collaborate with. And I said, oh, great. I have a technology that will answer all of that. And you say, wonderful. Like, this is going to change everything. And then how's productivity growth been since the internet? Crappy. So why? People have different theories on this, but my theory is that it also had a shadow side.

It distracted everybody. If I also said to you, I'm going to create this technology that's going to make the entire world 35% more distracted and is going to shrink their attention spans. And every time they have a thought, they're going to interrupt it by checking their Gmail or Slack. How is that going to be for productivity? And the answer is bad. Productivity really matters in terms of, like, the level of intense concentration you can bring to a thing. I feel so attacked right now, but keep going. Exactly.

So I think it's very likely that AI looks more like that, or at least has as much of that effect as the other. That it will supercharge productivity in some corporate contexts, but that we'll all be so busy, like, hanging out with our Replika girlfriends and boyfriends. Exactly. That it won't actually change our productivity as workers. I've used this analogy before, but I sometimes will work at a coffee shop with my best friend. I am not more productive when that happens. Right.

I enjoy it more. It's great. I really enjoy going to a coffee shop in that context, but we hang out. And if you watch the movie Her, it doesn't actually seem to me that the main character becomes more productive. Right. That is not a portrait of economic productivity being supercharged. So precisely because I think at least in the near term, these are going to work better for kind of social and entertainment and distraction purposes. Like we may get way better at making immersive video games really, really quickly or immersive movies or whatever it might be.

I think it is very likely that way before AI is automating everybody's job, it is distracting people in a more profound way. And you might say, look, we have enough content already. I would have said that before TikTok came out, but then TikTok came out. Turns out, we didn't have enough content. We needed much more. And I think that the move not to social media, but to companions who are always there with you, I think that could be distracting, maybe enriching, maybe beautiful, right?

But in a totally different way, right? I'm not sure we realize how much of life there is to colonize with socializing, with interaction, compared

to what is about to happen. And so I just see a much smoother path to making us less productive than more. That's not a prediction, but it is maybe meant as a provocation. I know we're almost out of time, but I want to close by asking you about skepticism.

The last paragraph of one of your recent columns on AI really resonated with me. You were writing about sort of the mistake of living your life as if these AI advances didn't exist, of sort of putting them out of your mind for some sort of cognitive normalcy.

And you wrote that skepticism is more comfortable. And then you quote the historian Erik Davis saying that in the court of the mind, skepticism makes a great grand vizier, but a lousy lord.

And I've been thinking about that a lot because some criticism that I've gotten since I started writing about AI, I think we've probably all gotten some version of this, is like you're not being skeptical enough, right? You're falling for hype. You're not looking at all of the pitfalls of this technology. And I think there's a lot of media criticism of AI right now that is skeptical.

I would say skeptical in a way that feels very comfortable and familiar, kind of like reviewing AI like you'd review any other technology, like a new smartphone or something. But it seems like you're arguing against a kind of knee-jerk skepticism when it comes to AI. You're saying maybe we should take seriously the possibility that the people who are talking about this being the most revolutionary technology since, you know, fire are right and try to sort of dwell in that possibility and take that seriously. So...

Make that argument. Convince me that the knee-jerk response of journalists and other people evaluating AI should not be skepticism. Maybe the way I'd say this is that I think that there is a difference between skepticism in the scientific sense, where you're bringing a critical intelligence to bear on information coming into your system, and skepticism as a positional device.

a kind of temperament where you prefer to sound to yourself and to others like you're not a crazy person, which is very alluring. Look, one of the ways I've tried to talk about this is using the analogies of COVID and crypto. And I remember periods early on in COVID where I was on the phone with my family and I was saying, you all have to go buy toilet paper right now. And they were talking to me about a trip. Like, we're going to come see you in three weeks. I'm like, you're not going to come see me in three weeks. In three weeks, you will not be going anywhere. Like, you need to listen to me. Mm-hmm.

And it was really hard. You sounded really weird. And I was not by any means the first person alert to COVID, but I am a journalist and I did begin to see what was coming a little bit earlier than others in my life. And one lesson of that to me was that tomorrow will not always be like today. So that also should not become a positioning device. I think there are people who are always telling you tomorrow will not be like today. So then I think about crypto. And I mean, we were all here in the long ago year of 2021 when that was on the rise and

And you'd have these conversations with people, and you'd have to ask yourself, does any of this make sense exactly? That there's a lot of money here. A lot of smart people are filtering into this world. I take seriously that smart people think this is going to change everything. It's going to be how we do governance and identity and socializing. And they have all these complicated plans for how it will replace everything or upend everything in my life.

But what evidence is there that any of this is true? What can I see? What can I feel? What can I touch? And it was endlessly a technology looking for a practical use case. There was money in it. But what was it changing? Nothing. And so my take on crypto was until proven otherwise, like I'm going to be skeptical of this. You need to prove to me this will change something before I believe you that it will change everything. And one of the points I make in that column about AI is

is that I just think you have to take seriously what is happening now to believe that something quite profound is going on. I think you can look at the people who already have profound relationships with their Replikas. I think that you can look at automation, which has already put people out of work.

I think to my point that a world populated by things that feel to us like intelligence, if you believe my view that that is one of the profound disruptions here, that has already happened. It happened to you with Sydney. We already know that militaries and police systems are using these. So you don't even really have to believe the systems are going to get any better than they currently are. If we did not just pause but stop at something the level of GPT-4 –

And just took 15 years to figure out every way we could tune and retune it and filter it into new areas. Imagine you retrain the model just to be a lawyer, right? Instead of it having a generalized training system, it was trained to be a lawyer. That'd be very disruptive to the legal profession. How disruptive would depend on regulations. But I think the capability is already there to automate a huge amount of contracting.

They don't have to be sentient to be civilization altering. I just don't think you need a radical view on the future to think this is pretty profound. Totally. Well, Ezra, we're going to have to ask you to stop generating. The token generating machine is off. Ezra Klein, thanks for coming. Thanks so much, Ezra. Thank you for having me.

All right, that's enough talk about AI and existential risk and societal adaptation. It's time to talk about something much more important than any of those things, which is me. Oh my God, get over yourself. And my phone. Get over yourself, Roose. When we come back, we're going to talk about my quest for phone positivity and why I'm breaking my phone out of phone jail. This is a long way of saying we're going to talk about how I was right. ♪

Indeed believes that better work begins with better hiring, and better hiring begins with finding candidates with the right skills. But if you're like most hiring managers, those skills are harder to find than you thought. Using AI and its matching technology, Indeed is helping employers hire faster and more confidently. By featuring job seeker skills, employers can use Indeed's AI matching technology to pinpoint candidates perfect for the role. That leaves hiring managers more time to focus on what's really important, connecting with candidates at a human level.

Learn more at indeed.com slash hire.

Christine, have you ever bought something and thought, wow, this product actually made my life better? Totally. And usually I find those products through Wirecutter. Yeah, but you work here. We both do. We're the hosts of The Wirecutter Show from The New York Times. It's our job to research, test, and vet products and then recommend our favorites. We'll talk to members of our team of 140 journalists to bring you the very best product recommendations in every category that will actually make your life better. The Wirecutter Show, available wherever you get podcasts.

Casey, I did something very empowering this week. Oh, what was that? I got rid of my phone box. Kevin, the phone box that was one of your New Year's resolutions? Yes. So a couple months ago, we talked on this show about my attempts to control my out-of-control phone use with the help of a box.

that you plug your phone into, it charges your phone, but it also starts a timer and tells you how long you've spent away from your phone. Yeah, you can call it a phone box, but it was a prison. You built a prison for your phone. You expanded the carceral state in this country by building a phone prison.

and you locked up your phone for several hours a day. And if I recall correctly, you said sort of like, you feel like you've been spending too much time looking at this dang thing, and you were going to lock it up, and it was going to sort of help you be more present with other people. Yeah. And at the time, you responded to me with what I thought was a pretty crazy position, which is that you are not worried about your phone time. That's correct. In fact, you're trying to maximize your phone time. Yeah. We're going up in this household. You're a phone maxer. Yeah. But

I was conflicted because I felt my own use of my phone sort of creeping up. And among other things, you know, I have a kid and I didn't want to sort of set an example of always being on my phone. And so I was hoping that this phone box would help. And I also installed this app called OneSec that basically puts like a little

five or ten second speed bump in between when you're trying to open an app and when you actually do it. Yeah. You've installed technology to harass you every time you try to use your phone. Yes, and the way I thought of it at the time was like, this is going to help me be intentional about my phone use.

But I found that what it actually did was just make me incredibly guilty. Every time I looked at this box on my counter, every time I ran into these little speed bumps on my phone, I would just be filled with guilt and shame as if I was eating a slice of cake that I wasn't supposed to be eating.

I mean, to be clear, these are shaming technologies. Yes. If you have a prison for your phone in your house, that doesn't make you say, I've really got things under control over here. And for sure, an app that sort of says stop every time you go to open Instagram is only ever going to make you feel bad about yourself. Right. So I just had a kind of revelation the other day when I was looking at my phone box, and I thought, you know what? I don't want this. I don't.

I don't want to feel guilty every time I use my phone. Good. And I want to adopt a position that is more like yours. I want to learn how to coexist peacefully. And so I am now starting a new project, which I'm calling Phone Positivity. Okay, let's talk about Phone Positivity. So my new approach to my phone

is that instead of treating it like a kind of radioactive object in my house... Yeah, the nuclear waste. ...that I have to imprison in a phone prison to keep myself from using...

I am going to start appreciating my phone. Okay. And how are you going to do that? Well, so every day I say affirmations to my phone. No, I don't. I don't. But I do try to really be aware of what my phone is allowing me to do at any given time. So, you know, for example, the other day I was

in the room with my kid; he was crawling around on the floor. I was fielding work emails and Slack messages. And there was a moment where I thought, I feel really guilty about this. I should not be paying attention to my phone. I should be playing with my kid.

The second thought, my phone positivity thought, is, you know what? It's amazing that I have a tool in my pocket that allows me to be physically present with my kid while also doing work. You know, my grandfather couldn't do that. No. If he wanted to do work, he had to leave the coal mine. Get in the car. Yeah.

You know, he worked at an electrical plant. He had to get in the car and drive to work. He had to literally leave his family to work. I have a phone that allows me to do both at the same time. What an amazing tool. Okay, so I think that that is a great realization, and I'm excited that you will now be leading an abolition movement for phone prisons. Yeah.

Does part of your new phone positivity plan mean that you are also deleting this speed bump app, which I believe you said you had bought a lifetime subscription for? Yes. I turned the speed bumps off. I now have unfettered access to every app on my phone. And you know what? What? My screen time has gone down. No, seriously? You're measuring this?

Yes. My screen time in the past week, after I started thinking about phone positivity, is down 30%. Okay. Well, see, this makes so much sense to me now that I've thought about it for a beat, because if your phone has been locked up and you get one conjugal visit like every 12 hours with this thing, of course you are not going to put that

thing down until the warden, you know, splits you up again. But now, because you're actually looking at it when you need it, you're more relaxed about it. Yeah. Like, all of my interventions that I tried for screen time, what they were actually doing was just compelling me to spend more time on my phone, because it was so annoying and time-consuming to get what I needed out of it. So I actually find that with the speed bumps gone, I'm able to like get in, do what I need to do, and dip back out.

And so it has resulted in less screen time, even as I no longer feel this kind of overwhelming guilt about it. Interesting. So as you think back on the very brief 90 days that you spent with these technologies, do you feel like they failed on a technological basis? Or do you feel like you just got to a different place emotionally and they didn't make sense anymore? Yeah.

Well, a couple things I think happened. One is that a lot of the things that I was truly addicted to, like Twitter and Instagram and TikTok, just got less interesting to me. And I don't know whether that's because, you know, people just aren't posting as much as they used to or my friends and, you know, people I follow are just not interacting as much on those platforms. But it does feel like those apps got a little less shiny to me. At the same time, I think...

viewing my phone through a more positive lens, like looking at the things that it can do, that it enables me to do, and appreciating that, also helped me use my phone in better ways. Right. So there's this study that came out five or six years ago about social media that said, you know, actually, maybe not all screen time is created equal. Hashtag not all screen time. And it drew a distinction between active and passive use of social media, right? So when you are using social media passively, it appears to be bad for you.

And by passively, we mean sort of like mindlessly scrolling with your thumb and sort of like paying half-hearted attention to like what your aunt is up to without really feeling compelled to leave a comment or actually engage with another human being. Right. You are leaning back. You are just sort of scrolling and observing. Yeah. You're lurking. Yeah.

But this study also showed that active use of social media can actually increase mental well-being, can increase your sense of connection with people, can help you stay connected to weak ties in your network, people that you don't live near, family members that are far-flung, or people you haven't seen in many years but still want to maintain a connection to.

And that really tracks with my own experience of this, where if I'm just looking at TikTok or Instagram or Twitter for an hour, like it feels bad. I feel bad, like viscerally about it.

But if I'm using it to connect with people or send texts to the group chat or whatever, like that feels much better to me. I want to ask you one other thing, because as I've been thinking about this journey that you have been on, we've also been spending a lot more time together. We've been recording the podcast. We went to South by Southwest. And you are not a phone goblin.

Like I have some friends where if there is one dull moment in the conversation, that phone is out of their pocket so quick, and they're, you know, they're scrolling Instagram or doing something else that doesn't seem urgent at all. I don't see you as that person. Do you just make sort of a conscious effort when you're out and about in the world, and the issues that you have happen more when you're kind of lying around the house? Or has something changed for you in the past few months? Yeah, I think I'm pretty sensitive to not

being a phone goblin, as you put it. Like, when I'm with people face to face, I try to sort of focus on the conversation. But I also have done a lot of calibration of my phone so that, you know, I'm not getting every notification. I think this goes hand in hand with the phone positivity thing: you actually have to make your phone work for you. Yes. Right. You have to set up your notifications so that the people you want to notify you get through. But

you're not getting a ping for every email, every text message, every spam call. So I think that spending a couple hours up front, really setting up your phone in a way that is maximally useful and minimally distracting to you is a necessary piece of this.

Okay, so if someone were to come to you today and they were going to say, Kevin, I just listened to the first time that you talked about the phone box, and I haven't listened to this episode yet, and I'm thinking about bringing a phone box into my life, what would you tell that person? I think that for some people,

reducing screen time is a good goal, right? There are people who spend— If you're Sam Bankman-Fried and you've been using that screen to commit a global-scale fraud and destroy the effective altruism movement, maybe put the screens away. Right. You need a phone prison. Maybe— Maybe a real prison. A real prison.

So I think that for some people, reducing screen time is a good goal, right? And I think especially for adolescents or young people, like the effects of screen time on mental health are much more severe. Yeah.

But I think for adults and for people who use their phones and their screens for work as well as leisure, I think we could all do with a little more positivity around these things. That's actually a much better experience than...

being consumed with guilt about something that you're in the process of doing. You know, like be okay with who you are. I've been going to therapy and that's pretty much actually just what they tell you. Well, your point when we last talked about the phone box was that like there may be something bigger here than phone addiction, right? Like for a lot of people, what presents as like, I can't stop using my phone is actually something deeper. It's I have anxiety or I...

I am unfulfilled by the things in my life that I would be doing otherwise if I weren't looking at my phone, that there's some bigger problem there. Yes, absolutely. It's like, you know, I talked last time about how Grindr was one of these apps for me, where it was like sometimes I'd just be on my phone and I would feel horrible. And what I ultimately realized is, like, I am seeking validation in this app that is not actually there, so I need to stop. You know, but it's like, you could put the phone in the box, and that's not going to give me the validation that I was looking for.

I think it's important to be purposeful about your phone. I don't want it to seem like I'm just advocating for maximizing all forms of screen time. I'm not actually at your position of the more screen time, the better. Except that, as you've been describing it, you've been making me realize that...

everything that you're saying is how I've done my phone. Like, I do look at my phone whenever I want to, but I have also taken the time to develop an intentional relationship with it. I am very selective about what I will allow to notify me. You know, you've got to kind of figure that out for basically everything on your phone.

So it is a massive project. But once you sort of get through with it, I do think you can just kind of use your phone when you want to use your phone, because to your point, it's bringing a lot of amazing things into your life all the time. Right. It makes me think about this old Steve Jobs quote about computers being a bicycle for the mind. Remember this quote? That's right. So I think that we forget that the point of computers, of smartphones, of technology is to help us do things and get places and

be more productive and efficient and bring us joy and entertainment, but also to accomplish things. And so, you know, very few people are out riding bikes all day just for the fun of it. Like, you're usually using it to get from someplace to someplace else. That's right. And so I've tried to use my phone more in that way. Like, I am doing a task. Once I've finished that task, I will put away my phone, but I'm not going to like

feel guilty about it the entire time. I think that's beautiful. If you love your phone, come out to the town square and just say it loud and say it proud. One of my favorite things that I learned when I was researching my book about automation is that when electricity first came to a lot of small towns in America, they would throw parties.

Like everyone would come out on the town square and at some of these, like they would do these symbolic things where they would like bury a kerosene lamp and like hold a mock funeral for it. And like the Boy Scouts would play Taps, you know, to symbolize like the fact that we don't have to use these like crappy kerosene lamps anymore because we have this amazing electricity. Yeah.

And I feel a little bit of that now toward my phone. Now that I've stopped like being shamed by it, it's like, wait a minute, I can like take a photo of my kid and send it to a wireless photo frame in my mom's house thousands of miles away. Like that is unbelievable. And that is like a realization that I never would have had had I kept treating this thing as a source of shame. All right, well, I'm calling the Boy Scouts and we're going to have a funeral for your phone box. Just burying it in your backyard.

Moving on with our lives. Mission accomplished. RIP. You was here for a good time, not a long time. It was not a good time. It was here for a bad time. Yeah.

BP added more than $130 billion to the U.S. economy over the past two years by making investments from coast to coast. Investments like building EV charging hubs in Washington state and starting up new infrastructure in the Gulf of Mexico. It's and, not or. See what doing both means for energy nationwide at bp.com slash investing in America.

Hard Fork is produced by Rachel Cohn and Davis Land. We're edited by Jen Poyant.

This episode was fact-checked by Caitlin Love. Today's show was engineered by Alyssa Moxley. Original music by Dan Powell, Elisheba Ittoop, Marion Lozano, and Rowen Niemisto. Special thanks to Paula Szuchman, Nell Gallogly, Kate LoPresti, and Jeffrey Miranda. You can email us at hardfork@nytimes.com. And if Kevin's phone is out of prison, we'll read it. ♪

Since 2013, Bombas has donated over 100 million socks, underwear, and t-shirts to those facing homelessness. If we counted those on air, this ad would last over 1,157 days. But if we counted the time it takes to make a donation possible, it would take just a few clicks. Because every time you make a purchase, Bombas donates an item to someone who needs it.

Go to bombas.com slash NYT and use code NYT for 20% off your first purchase. That's bombas.com slash NYT code NYT.