
A.I.'s Inner Conflict + Nvidia Joins the Trillion-Dollar Club + Hard Questions

Publish Date: 2023/6/2

Hard Fork


Transcript

This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks. Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us.ai.

- Hallucination as like a term is, I guess, getting some blowback. - Everybody hates every word in AI. You know, they don't like "understand," they don't like "think," you know, like-- - People need jobs and hobbies. - I mean, look, words matter and language does evolve and we do get to points where we decide we're not gonna use words anymore and I respect that process.

I don't know if I'm there yet with hallucination. I did hear someone suggest that we should replace that with the word confabulation, which I love because it sounds so British. Like, can't you just hear, like, a little British man saying, like, oh, my AI model, it's confabulated. I've got myself in a right spot of trouble with all these confabulations. It's just very fun to say. It's incredibly fun to say. ♪

I'm Kevin Roose, a tech columnist at The New York Times. I'm Casey Newton from Platformer. And you're listening to Hard Fork. This week, an urgent new warning about AI's potential risk to humanity and the lawyer who clowned himself using ChatGPT. Plus, how the rise of NVIDIA explains this moment in AI history. And The New York Times' Kate Conger joins us to answer hard questions about your technology dilemmas.

So, Casey, last week on the show, we talked with Ajeya Cotra, who is an AI safety researcher. We talked about some of the existential risks posed by AI technology, and there was a big update on that story this week. Yeah, and I feel like it showed us that there are a lot more people in this world who are thinking the way that she's thinking about things.

Totally. So as we were sort of putting out the episode last week, unbeknownst to us, this nonprofit called the Center for AI Safety was gathering signatures on a statement, basically an open letter that consisted of one sentence.

And what was the sentence? Well, I'm glad you asked. It said, quote, mitigating the risk of extinction from AI should be a global priority alongside other societal scale risks, such as pandemics and nuclear war. Which are famously two of the worst things that can happen. So AI is now just sort of squarely in the bad zone here.

Yeah, and you might expect this kind of statement to be signed by people who are very skeptical and worried about AI. Like anti-AI activists. Right, exactly. But this was not just anti-AI activists. This statement was signed by, among other people, Sam Altman, the CEO of OpenAI, Demis Hassabis, the CEO of Google DeepMind, and Dario Amodei, the CEO of Anthropic. So three of the heads of the biggest AI labs worldwide.

saying that AI is potentially really scary and we should be trying very hard to mitigate some of the biggest risks. And so as part of this, are they stepping down from their jobs and no longer working on AI? No, of course not. They are still building this stuff. And in many cases, they are racing to build it faster than their competitors. But this statement is sort of a big deal in the world of AI safety because it is the first time that the...

heads of all of the biggest sort of AGI labs are coming together to say, hey, this is potentially really scary and we should do something about it. We talked about this previous open letter, which came out a few months ago, which Elon Musk and Steve Wozniak and a bunch of other sort of tech luminaries signed that called for a six-month pause. This letter was not that specific. It did not call for any specific

actions to be taken. But what it did was it kind of united a lot of the most prominent figures in the AI movement behind this sort of general statement of concern. Right. They're now united in saying this could go really badly. Right. Exactly. I have to ask, Kevin, like, is there anything more here? Because I read this statement and

that says it should be a global priority. I don't really know what a global priority means. Are there other global priorities that we're focused on right now? Should they take a backseat to this? The longer I look at this statement, the more I feel like I can't make heads or tails of it. Yeah, it's a pretty vague statement. I asked Dan Hendrycks, who's the executive director of the Center for AI Safety, which is the nonprofit that put this together and gathered a lot of the signatures,

why it was just one sentence and, you know, why didn't you call for any like additional steps beyond just like, we're concerned about this. And he said, basically, this was an attempt to just get some of the most prominent people in AI to sort of go on the record saying that they believe that AI has existential risk attached to it. He said they basically didn't want to call for a whole bunch of different interventions because

some people might have disagreed with some of them and some people might not have signed on. And so we basically wanted to give people a simple one-sentence statement that they could sign on to that says, I'm concerned about this. And that didn't go any further than that. All right. So for people who might not have heard our episode last week or are just kind of catching up to the story, Kevin, why do some people, including the people building it, think that this poses an existential risk to humanity? So you could probably ask these 350-plus people each individually what their biggest

sort of threat model is for AI, and they would probably give you like 350 different answers. But I think what they all share are a couple of things. One is, you know, these AI models, they're getting very powerful and they're improving very quickly from one generation to the next.

The second thing they would probably agree on is, you know, we don't really understand how these things work. And they're behaving in some ways that are maybe unexpected or creepy or dangerous. We can see what they are doing in terms of what they're putting out, but we don't know how they're putting out what they're putting out. Right.

Right. And number three is like if they continue at their current pace of improvement, if these models keep getting bigger and more capable, then eventually they will be able to do things that would harm us. So what do we do with this information that we face existential risk from AI, Kevin? Well, there's a sort of cynical interpretation that I saw a lot after I wrote about this story on Tuesday.

Which is that people are saying basically these people don't actually think there's like an existential risk from AI. They're just saying that because it's like kind of good marketing, good PR for their startups, right? If you say, you know, I'm building an AI model that can spit out plausible sounding sentences, that sounds a lot less impressive than if you say I'm building an AI model that may one day lead to human extinction. Yeah.

If you're not working on a technology that poses an existential risk to humanity, why are you wasting your time? Okay. Oh, really? You're over at Salesforce building a customer relationship management software? Why don't you try working on something a little dangerous? Yeah. I'm not saying the hard fork podcast could lead to the extermination of humankind, but I'm not not saying that.

Many researchers are concerned. If it does, please leave us a one-star review, okay? No, don't do that. No, we want you to hold us accountable for wiping out humanity. So, but I understand the sort of cynicism behind this. Sometimes when AI, you know, experts talk up these creations or overhype them, they are doing a kind of PR.

But I think that really misunderstands the motives of a lot of the people who are signing on to this. You know, Sam Altman, Demis Hassabis, Dario Amodei, these are people who have been talking and thinking about AI risk for a long time. This is not a position that they came to recently.

And a lot of the researchers who are involved in this, like, they work in academia. They don't stand to profit if people think that these models are somehow more powerful than they really are. So this is not a get-rich-quick scheme for any of these people. No, and in fact, it's probably inviting a lot of, you know, attention and possibly regulation that might actually make their lives harder. So I think the real story here is that until very recently, saying that AI risk was existential—

that it might wipe out humanity. If you said that, you were insane. You were seen as being unhinged. Now that is a mainstream position that is shared by many of the top people in the AI movement. If this doomsday scenario presents itself, do you think that subscribers to ChatGPT Plus will be spared? No.

I think it depends how nice you are to ChatGPT. Please be nice to the chatbot, okay? We don't know what's coming. Now, that brings us to the second story, Kevin, that we wanted to talk about this week, which I think presents a very different potential vision for the near-term future of AI. So while you have one group of folks saying this thing might one day be capable of killing us all, you also have the story of

the ChatGPT lawyer. Kevin, I imagine you're familiar with this case. This is one of the funniest stories of the year in AI, I think, in part because it is just like so obvious that something like this was going to happen, right? These chatbots, they seem very plausible. They spit out things that sometimes are very helpful and correct, but

other times, they are just nonsense. And in this case, this is a story about a lawyer who turned to ChatGPT to help him make a case for his client, and it wound up costing him dearly. Yeah, so let's talk about what happened with this fellow. Back in 2019, a passenger on a flight with Avianca Airlines says he got injured when a serving cart hit his knee.

Hate that. I'm going to say I've been hit in the knee by a serving cart a time or two. I can't imagine how fast this cart had to be going to the point that this guy filed a lawsuit. I would like to see the flight attendants at Avianca just sort of running up and down the aisles with these. Anyways, the passenger sued for damages. The airline in turn responded, saying the case should be dismissed. At this point, the lawyer for the passenger decides to turn to ChatGPT for help crafting a legal argument that the case should carry on and the airline should be held liable.

So how does ChatGPT help him? Well, the lawyer wants some help in finding some relevant legal cases to bolster his argument, and ChatGPT gives him some. Such cases as Martinez v. Delta Airlines, and Varghese v. China Southern Airlines, and Estate of Durden v. KLM Royal Dutch Airlines. Estate of Durden, I assume, from the Fight Club franchise? Yeah, Tyler Durden's estate sued the Royal Dutch Airlines. Okay.

And at one point, the lawyer even tries to confirm that one of these cases is real. Unfortunately, he attempts to confirm with ChatGPT itself. And he says, hey, are these cases real? And ChatGPT says, effectively, yes, these cases are real.

Now, the lawyers for the airline, Avianca, after they read the lawyer's submission, they can't find any of these cases. Right. They're like, what are these mysterious cases that are being used against us? And why can't I find them in my case law text? Yeah. Give me the Durden case. I want to see if it's about Fight Club. So anyway, the lawyer for the passenger goes back to ChatGPT to get help finding copies of these cases. And he sends over copies of the eight different cases that were previously cited.

If you look at these briefs and I have looked at one of them, they contain the name of the court, the judge who issued the ruling, the docket numbers, the dates.

And the lawyers for the airline are looking at these things. They try to track down the docket numbers. And many of these cases were not real. And so now the lawyer has gotten in some hot water because it turns out you're actually not allowed to just submit fakery to the courts of this land. Right. This lawyer, his name is Steven A. Schwartz,

then has to basically grovel before the judge because the judge is understandably very upset about this. And so this lawyer writes a new statement to the judge affirming, and I'll quote here,

that your affiant has never utilized ChatGPT as a source for conducting legal research prior to this occurrence and therefore was unaware of the possibility that its content could be false, end quote. And then also says that they swear that, quote, your affiant greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity.

End quote. You know, if I weren't him, I would have left out that last part. I think he probably could have had the judge at "we'll never use it again." I think that's probably what the judge wanted to hear. It would be my guess. I do think we have to assume that for every lawyer who gets busted using ChatGPT to write briefs, there are at least 100 lawyers who are not getting busted. And actually, like, those are the stories that I'm also interested in. Like, who is the lawyer who has just not gone to the office in six months because they're just...

They're just like cranking out boilerplate legal documents with ChatGPT. If you've snuck an AI-generated document past a judge and gotten away with it, we'd love to hear from you. Yeah, and so would the Bar Association.

So this is just one recent example in what I think is becoming like a trend of AI chatbots basically lying about themselves and their own capabilities. Yeah, and if you take away nothing else from this podcast ever, just please understand you cannot ask the chatbot to check the chatbot's work, okay? The chatbot absolutely does not know what it's talking about when it comes to that. Totally, and this also applies to detecting AI-generated text.

So one of my other favorite stories from this month was a story about a professor at Texas A&M University-Commerce who, you know, got a bunch of student assignments and ran them through ChatGPT, like copied and pasted the students' work into ChatGPT and said, you know,

Did you generate this, ChatGPT? He was basically trying to check if his students had plagiarized from ChatGPT in submitting their essays. Yeah, he thought he was being a little clever here. Yeah, yeah. Stay one step ahead of these young whippersnappers. I'll show those kids. Yeah. So he takes the essays, pastes them into ChatGPT and says, did you write this?

ChatGPT is not telling the truth, but it says, yes, I wrote all of these. The professor flunks his entire class. They get denied their diplomas. And it turns out that this professor had just asked ChatGPT

to do something that it was not equipped to do. The students had not actually cheated and they were wrongfully accused. I feel so bad for them. Can you imagine that you're like one of the only students in the country right now who's not using ChatGPT to cheat your way through school and you're the one who gets denied your diploma because the chatbot lied about you? All right. So we have...

Here are two very different stories, right? One is about the possibility that we're going to have this super intelligent AI that's capable of great destruction. And on the other, we have a chatbot that isn't even as good as Google search when it comes to finding relevant legal cases. So which of the two possibilities do you think is more likely, Kevin, that we sort of, you know, stay where we are right now with these dumb chatbots or that we get to the big scary future?

I would say that these are two different categories of risks. And one I would say is the kind of risk that gets smaller as the AI systems get better. So I would put the lazy lawyer writing the brief using ChatGPT into this category. Right now, chatbots, if you ask them to generate some legal brief and cite relevant case law, they're going to make stuff up, right? Because they just aren't grounded to like a real set of legal data.

But someone, whether it's Westlaw or one of these other big sort of like legal technology companies, like in the next few years, they will build some kind of large language model that is kind of attached to a database of real

cases and real citations. And that large language model, if it works well, when you ask it to pull citations, it won't just make stuff up. It'll go into its database and it'll pull out real citations and it'll just use the large language model to sort of write the brief around that. That's a solvable problem and that's something that I expect will be better as these models get more grounded.
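For a concrete picture of that grounding idea, here is a minimal sketch in Python. It is not any particular vendor's product: the case entries, the keyword-overlap retrieval, and the prompt format are all made-up placeholders standing in for a real legal search index. The shape is the point: look citations up in a trusted store first, then hand the model only verified material to write around.

```python
# A minimal sketch of a "grounded" citation workflow, assuming a hypothetical
# case database. Nothing here is a real case or a real vendor API; the point is
# that the model is only handed citations that were looked up and verified first.

CASE_DATABASE = {
    "Doe v. Example Airlines": "Placeholder opinion text about carrier liability for cabin injuries.",
    "Roe v. Sample Carrier": "Placeholder opinion text about deadlines for filing injury claims.",
}

def retrieve_cases(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k entries whose text shares the most words with the query."""
    query_words = set(query.lower().split())
    scored = [
        (len(query_words & set((name + " " + text).lower().split())), name, text)
        for name, text in CASE_DATABASE.items()
    ]
    scored.sort(reverse=True)
    return [(name, text) for _, name, text in scored[:k]]

def build_grounded_prompt(question: str) -> str:
    """Assemble a prompt that offers the model only citations we actually have."""
    context = "\n".join(f"- {name}: {text}" for name, text in retrieve_cases(question))
    return (
        "Using ONLY the cases listed below, draft the argument and cite them by name.\n"
        f"Cases:\n{context}\n\nQuestion: {question}\n"
    )

print(build_grounded_prompt("Is an airline liable when a serving cart injures a passenger?"))
```

A production system would swap the dictionary for a real citation database and pass the prompt to a model, but the division of labor is the same: the database supplies the citations, the model supplies the prose around them.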

The other genre of problem, the problem that I think this one-sentence statement is addressing, is the type of problem that gets worse as the AI systems get better and more capable. And so this is the area where I tend to focus more of my own worry, is like we have to assume that the AI technology that exists today is going to get better. And as it gets better, some kinds of problems will shrink. In my opinion, that's these kinds of

hallucination or confabulation type issues. But the problems that will get worse are some of the risks that this existential threat letter is pointing to. The threats...

that AI could someday become so powerful that it kills us or disempowers us in some way. Right. Well, you know, even though I asked the question, which of these features is more likely, I do think it's the wrong question because I think that as we continue to see what happens here, we just have to keep a lot of possibilities in our mind. And I think, you know, one possibility is that we do hit some sort of technical roadblock that means that chatbots

do not get as good as we thought they were going to get. I do think that that is a possibility. But then there's also the possibility that everything that you just laid out does happen and that it creates these sort of scary new futures. But I get why people are experiencing a kind of whiplash about this. It's like, you know, if you were told that, like, there's going to be a world-conquering dictator...

And it's Mr. Bean. Yeah, right, right, right, right. You're like, how is that guy going to conquer the world? Like, he can't even, like, you know, walk down the street without tripping and falling or causing some hilarious hijinks. And I think that's the sort of cognitive dissonance that a lot of people are feeling right now with AI. It's like they're being told that these systems are improving, they're getting better at very fast speeds, and that they may very soon pose all these very scary risks to humankind. Right.

At the same time, when you ask it to do something that seems like it should be quite easy, like pull out some relevant legal citations in a brief, it can't do that. What do you make of the fact that the lawyer did fall for this hype and did think that ChatGPT was sort of omniscient? I think there are a couple

places that you could sort of place the blame here. One is on the lawyer, right? This was not like some junior associate at a law firm who's like working 120 hours a week, is like super stressed out and like in a moment of panic, like turns to ChatGPT to like meet this filing deadline. This is a 30-year attorney. This is someone who probably...

has done hundreds of these briefs, if not thousands, and instead just does the laziest thing possible, which is just to ask ChatGPT, like, find me some cases that apply in this case. Like, have some pride in your work, right? Like, he was tired, Kevin. He's been doing this for 30 years. He's tired after all 30 years.

You try doing something for 30 years. And don't skip the step where you check the model's outputs to make sure that it's not making stuff up. I think that is a really critical piece that people are just forgetting. And, you know, I think that this has some parallels in history. Like, we've talked before about sort of the similarities between this moment in AI and, like, when Wikipedia first came out. And it was like, oh, you can't trust anything Wikipedia says. And then...

some combination of Wikipedia getting better and more reliable and just like our sort of sense and radar for like what kinds of things Wikipedia was good and bad at being used for improved such that like now, you know, people don't really make that mistake anymore of like putting too much authority and responsibility onto Wikipedia. And so I think

That kind of thing will happen with chatbots too, where like the chatbots will get better, but also we as the users will get more sophisticated about understanding what they are and aren't good for. I don't know. What do you think? I think that that is true, but I also think that the makers of these chatbots need to intervene in some ways.

You know, if you go to use ChatGPT today, it says something like may occasionally generate incorrect information. And in fact, I think there are cases where it's generating incorrect information all the time, and it just needs to be more upfront with users about that.

James Vincent had a good piece on this in The Verge this week, and he offered some really good common sense suggestions. Like if ChatGPT is being asked to generate factual citations, you might tell the user, hey, make sure that you check these sources and make sure they're real. Or if someone asks, hey, is this text generated by an AI? It should respond, I'm sorry, I'm not capable of making that judgment. So I expect that chatbots will build tools like that, but...

They would help out a lot of people from the lawyer to the professor and who knows who else. Yeah, I think that's a reasonable thing to want. I also wonder if there could be some kind of training module where when you sign up for an account with ChatGPT, you have to do a little 10-minute instructional process. Like before you play a video game, it gives you the tutorial and it says, here's how to jump and here's how to strafe and here's how to switch weapons.

That kind of thing for a chatbot would be like, here's a good use. Here's what it's really good at. Here's what it's really bad at. Don't use it for these five things. Or here's how it can hallucinate or confabulate, and here's why you actually really do want to check that the work you're getting out of this is correct. I think that could actually help

set people's expectations so that they're not going into this like cracking open a brand new ChatGPT account and like putting some very sensitive or high stakes information into it and expecting a totally factual output. I think that's right. And I also think that if you can prove that you listen to the Hard Fork podcast, you should be able to skip the tutorial because our listeners are way ahead of these guys.

You know, one of the things that has driven me a little crazy over the past few weeks is this pressure that I feel, and I'm not sure if you feel it too, but there's a real pressure out there to sort of decide which of the categories of AI risks you are worried about. So if you talk about

long-term risk, there was a lot of blowback on the people who signed this open letter, saying, "You all are ignoring these short-term risks because you're so worried about AI killing us all like nuclear weapons that you're not focused on X, Y, and Z that are much more immediate risks." If you do focus on the immediate risks, some of the long-term AI safety people will say, "Well, you're ignoring the existential threat posed by these models, and how could you not be seeing that that's the real threat here?"

And I just think this is like a totally false choice that is being forced on people. I think that we are capable of holding more than one threat in our minds at once. And so I don't think that people should be forced to choose whether they think that the problems with AI are right here in the here and now or whether they are going to emerge years from now. So I think that's right. But I also think that while...

We do not have to choose between those two things. In practice, often one of those kinds of risks gets way more attention. You know, we're talking about this story on the show this week because you got a bunch of people who seem like they might know telling us, hey, this thing could wipe out humanity. So I am sensitive to the idea that some of these harms that feel a little bit more pedestrian, a little bit smaller scale, that maybe didn't affect us personally, we are less likely to pay attention to, and I think it's okay to say that.

I also just think we need to separate out, in our minds, AI tools that are scary because they don't work and AI tools that are scary because they do work. Those things feel very different to me, you know. And a model that is

generating nonsense legal citations is dangerous, but that's a danger that will get addressed as these models improve. Whereas the AI tools that are scary because they work, like that's a harder problem to solve. I like what you were saying that like those are actually kind of different problems to work on and we can and should work on both.

Yeah, absolutely. I think that we should be focusing attention and energy and resources on fixing the flaws in these models. So I think that people can hold more than one risk in their head at a time. I do think there's a question of which ones get space in newspapers and talked about on TV and podcasts, which is why I think we should try on this show to balance our talk about some of the long-term risks and some of the shorter-term risks. But I don't think it all has to be one or the other.

I agree. In the meantime, we simply have two requests for our listeners. Number one, please don't use ChatGPT to write your legal briefs. Number two, please don't use ChatGPT to wipe out humanity. Very simple requests. When we come back, how one tech company became one of the most highly valued in the world, almost by accident.

Welcome to the new era of PCs, supercharged by Snapdragon X Elite processors. Are you and your team overwhelmed by deadlines and deliverables? Copilot Plus PCs powered by Snapdragon will revolutionize your workflow. Experience best-in-class performance and efficiency with the new powerful NPU and two times the CPU cores, ensuring your team can not only do more, but achieve more. Enjoy groundbreaking multi-day battery life, built-in AI for next-level experiences, and enterprise chip-to-cloud security.

Give your team the power of limitless potential with Snapdragon. To learn more, visit qualcomm.com/snapdragonhardfork. Hello, this is Yewande Komolafe from New York Times Cooking, and I'm sitting on a blanket with Melissa Clark. And we're having a picnic using recipes that feature some of our favorite summer produce. Yewande, what'd you bring? So this is a cucumber agua fresca. It's made with fresh cucumbers, ginger, and lime.

How did you get it so green? I kept the cucumber skins on and pureed the entire thing. It's really easy to put together and it's something that you can do in advance. Oh, it is so refreshing. What'd you bring, Melissa?

Well, strawberries are extra delicious this time of year, so I brought my little strawberry almond cakes. Oh, yum. I roast the strawberries before I mix them into the batter. It helps condense the berries' juices and stops them from leaking all over and getting the crumb too soft. Mmm. You get little pockets of concentrated strawberry flavor. It tastes amazing. Oh, thanks. New York Times Cooking has so many easy recipes to fit your summer plans. Find them all at NYTCooking.com. I have sticky strawberry juice all over my fingers.

Okay, Kevin, I'm interested in what feels like most of the world of technology, but there are admittedly some subjects that I shy away from, and I just think, I'm going to let some other people think about that. And one of those things is chips. You are a huge fan of chips. Love chips. I am not. I am not.

But I saw a piece of news this week that made me sit up in my chair and think, you know, I'm actually going to have to learn something about that. And that thing was that NVIDIA, one of the big chip companies, hit a trillion dollar market cap and is the fifth biggest tech company in the world by market cap behind only Apple, Microsoft, Alphabet, and Amazon. So I wonder, Kevin, if for this next little while, you could try to explain to me what is NVIDIA,

And how can I protect my family from it? So I, you're right. I am fascinated with chips and Nvidia in particular, I think is actually one of the most interesting stories in the tech world right now. As you said, they hit a trillion dollar market cap briefly recently after, uh,

a huge earnings report. Their stock price jumped by around 25%, which put them into this category, which used to be known as FAANG, right? When it was Facebook, Apple, Amazon, Netflix, and Google. Those are sort of like the biggest tech companies that people are talking about. Now they are in this rarefied group that I'm going to be referring to as meh.

Oh, my God. Because it's Microsoft, Apple, Alphabet, Amazon, and NVIDIA. All right. Well, so, candidly, I don't care about stock performance. I want to know, what is this company? Who made it? Where did it come from? And what is it doing that it made its stock price go so crazy?

So it's a really interesting story. So NVIDIA is not some, you know, recent upstart. It's been around for 30 years. It was started in 1993 by three co-founders, including this guy, Jensen Huang, who is himself a really fascinating guy. Cliff notes on his bio, he was born in Taiwan. When he was nine years old, the relatives that he was living with

sent him to a Christian boarding school in Kentucky. And as a teenager, he became a nationally ranked table tennis player. If you're living with relatives and they send you to a Christian boarding school in Kentucky, that's kind of like what would have happened to Harry Potter if he didn't get to go to Hogwarts, you know? You know, and the Dursleys were just like, we got some bad news for you, Harry. Anyway. Right. So Jensen Huang, the Harry Potter of Kentucky Christian boarding schools. Yeah.

goes to college for electrical engineering, then gets a job at some companies that are making computer chips. And after he co-founded NVIDIA, one of their first big products is a high-end computer graphics card. So I don't know, you were a gamer in the 90s. I was also a gamer in the 90s. I still remember I wanted to play this game called Unreal Tournament, which had just come out. Great game.

But my computer wasn't powerful enough to play this game. Like, it literally would not load on my computer. So I had to save up my allowance money, go out to Best Buy. I bought an NVIDIA graphics card, and I plugged it into my PC, and then I could play Unreal Tournament. Were you any good at it? Childhood was saved. I was not very good. Yeah, that's what I thought. All right. So NVIDIA starts off making these things called...

GPUs, graphics processing units. And GPUs, for many years, are kind of like a niche product for people who play a lot of video games, right? Yeah, most people are not playing Unreal Tournament on their PCs at this time. It's mostly people running Word and Excel. Right. So those programs use CPUs, which are sort of the traditional processors that come on your computer. And...

One thing that is important to know about CPUs is that they can only do one thing at a time, one operation at a time. That sounds like me. Yeah, so you're a CPU. I'm a GPU of the two of us because I can do many things in parallel. I can multitask and I can do it all with finesse. It's a nice way of saying that you have ADHD, but go on. So the GPU is used for video games. It allows people to render 3D graphics in higher quality formats.

And then around 2006, 2007, Jensen Huang, he starts hearing from these scientists, people who are doing really computationally intensive kinds of science, who are saying, you know, these graphics cards that you use for video games, that you build for video gamers...

They're actually better than the processors in my computer at doing these very sort of high intensity computational processes. Because they can do more than one thing at a time. Exactly. Because they're what's known as parallelizable, which is a word that I would now like you to repeat three times. Parallelizable. Parallelizable? Parallelizable?

Great job. Thank you. So all of this leads Jensen Huang to say, well, games, they're a good market for us. We don't want to give up on that. But the number of gamers in the world is maybe not infinite. And maybe these processors that we built for video games could be useful for other things.

So he decides... Which, by the way, let's just say, like, if you're a CEO, like, that's a very exciting moment for you, right? Because here you have this niche market that's going on, and then some people come along and it's like, wait, did you know that your market is actually way bigger than you even realized? And you could just use the thing you've already made for that? Wow. Yeah, there's this sort of maybe apocryphal story where, like, a professor comes to him and says, you know, I was trying to do this thing that was taking me forever, and then my son, who's a video gamer, just said, Dad, you should buy a graphics card. So I did, and I plugged it in, and now it

works much faster and I can actually accomplish my life's work within my lifetime because this processor is so much faster. That's a fun story. I don't know if it's real or not, but that's the kind of thing he's hearing. So he decides to start making these GPUs for hard science. And investors weren't super happy about this. They just really didn't see the value in this move initially.

All the investors were like, could you please just go back to video games? That was a good business. Also, here's what I don't understand. Why couldn't you just continue selling to the video gamers while also just building out this new market? Well, they tried to, but there's a lot of competition now in the video game market. So this is not seen as a very smart decision at the time. And then Jensen Huang gets very lucky twice. The first thing that happens is that in the early 2010s, this new type of AI system, the deep neural network, becomes popular. Wow.

Deep neural networks are the type of AI that we now know can power all kinds of things from image-generating models to text chatbots. Isn't it basically like if you, like, search in Google Photos for dog, like it's a neural network that is the reason that dog pictures show up? Yes. Yeah. And so this kind of AI really...

bursts onto the scene starting in around 2012. And it just so happens that the kind of math that deep neural networks have to do to recognize photos or generate text or translate languages or whatever,

works much better on a GPU than a CPU. That seems lucky. So the companies that are getting into deep learning neural networks, Google, Facebook, et cetera, they start buying a ton of NVIDIA's GPUs, which, remember, are not meant for this. They are meant for gaming. They just happen to be very good at this other kind of computational process.
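As a rough illustration of why that kind of math favors GPUs: a neural network spends most of its time on big matrix multiplications, which a GPU can spread across thousands of cores at once. Here is a tiny sketch, assuming PyTorch is installed; the episode doesn't mention any particular framework, and the CUDA branch only runs if an NVIDIA GPU is actually present.

```python
# A tiny timing sketch, not a rigorous benchmark: the same matrix multiplication
# run on the CPU and, if one is available, on an NVIDIA GPU via CUDA.
import time
import torch

def time_matmul(device: str, n: int = 2048) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    start = time.perf_counter()
    _ = a @ b                      # the core operation inside a neural network layer
    if device == "cuda":
        torch.cuda.synchronize()   # GPU kernels run asynchronously; wait for them to finish
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.4f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.4f} s")
```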

And so NVIDIA kind of becomes this like accidental giant in the world of deep learning because if you are building a neural network, the thing that is the best for you to do that on is one of NVIDIA's chips. They then start making this software called CUDA, which sits on top of their GPUs that sort of allows them to run these deep neural networks.

And so NVIDIA just kind of becomes this power player in the world of AI basically by accident. Interesting. The second lucky break that happens to NVIDIA, and I promise we're winding down to the end of this history lesson, is that it turns out there's another kind of computing that is much easier to do on GPUs than CPUs, which is crypto mining.

So to produce new bitcoins or new ether, any of these big cryptocurrencies, you need these arrays of high-powered computers. They also rely on a type of math that is parallelizable. And so basically the crypto miners who are trying to get rich,

you know, getting new Bitcoin, they're buying these NVIDIA GPUs by the hundreds, by the thousands. They're putting them into these data centers and they're using them to try to mine crypto. You know, in 2020, I considered building a gaming PC. And one of the reasons I did it was that at the time you could not buy a GPU for the street price. And in fact, you would probably have to pay double for one to like get it off of

And it was because of just what you're describing, is that at that time, the miners were going crazy. Totally. There's this amazing sort of moment in tech history where these GPUs are like commanding these insane markups. And the crypto people are getting mad at the gamers and the gamers are getting mad at the crypto people because none of them can get like the chips that they want because they're all so freaking expensive. And profiting from all of this is, of course, NVIDIA, which is making money hand over fist.

Now, we're in this AI boom where all these companies are spending hundreds of millions of dollars to build out these clusters of high-powered computers. And NVIDIA is the market leader. It makes a huge percentage of the world's GPUs, and it really can't make them fast enough to keep up with demand. There's this new chip, the H100, which costs like $40,000 for just one

graphics processor. And AI companies, some of them are buying like thousands of these things to put into their data centers.

So I think that explains the story of how NVIDIA got to this point. Is the story of how they got from a company doing pretty well to a company that's now worth a trillion dollars as simple as people are going nuts for AI right now? That is a big, big part of it. So they still make money from gaming. I think it is still, you know, 30% of their earnings come from these sort of like consumer gaming sales.

But data centers, machine learning, AI, that is like a huge and growing part of their business. You know, this most recent earnings report, the one that like sent the stock price up and made it cross a trillion dollars in market cap, they reported this 19% jump in revenue from the previous quarter. So just billions of dollars essentially falling into NVIDIA's lap because the chips that they make happen to be the perfect chips for AI development and machine learning. Well, okay.

Number one, don't give up on your business before at least 30 years have gone by, okay? Because you never know what you're going to sort of accidentally fall into. So that's thing one. Thing two is that I guess it's just surprising to me that we haven't seen more people crowd into this space. I know that chip manufacturing is incredibly complicated. You need massive amounts of capital to kind of get started, and then it's just kind of hard to execute. So I understand why there's maybe not that much competition, but it still kind of seems like there should be more.

I don't know. What else do you make of this moment and this company? Yeah, I mean, this is kind of a classic sort of picks and shovels company, right? There's this sort of saying that, you know, in the gold rush of the 19th century in California, there were two ways to get rich. Like, you could go out and mine the gold yourself.

or you could sell the picks and shovels to the people who were going out and mining the gold. And that actually turns out to be a better business because whether people find gold or not, you are still making money by selling them your tools. So NVIDIA is now in this very enviable position, being able to sell to everyone in the AI industry. And because this is a little sort of esoteric, but because they have that programming toolkit called CUDA that runs on their GPUs,

Now, a huge percentage of all AI programming uses that, and it's wedded to their chips. They now have this kind of locked-in customer base that can't really go to a competitor. They can only use NVIDIA chips unless they want to, like,

rewrite their whole software stack, which would be expensive and just a huge pain in the ass. Interesting. Like the people at AI labs are all obsessed with this. You know, when NVIDIA comes out with a new chip, they're literally, like, begging for access to it. This is a sort of existential problem for them.

And so even though it's not like the sexiest or most consumer-facing part of the AI industry, I think that companies like NVIDIA, people like Jensen Huang, they really are kind of the kingmakers of the tech world right now in the sense that they control who gets these very scarce, very expensive,

in-demand chips that can now then power all these other AI applications. You don't get ChatGPT without NVIDIA, and you don't get ChatGPT, honestly, without this kind of crazy backstory of video games and crypto mining and all of that led up to this moment where we now kind of have this company that has been able to ride this AI boom to a trillion-dollar market cap.

Well, I do think that that is interesting, that there is a part of this story that doesn't get told as much. And if you're somebody who is having your world rocked by AI in any way, which I feel like I'm one of those people, then part of the question that you're probably asking yourself is, how did we get here? What were the steps leading up to this? What were the necessary ingredients for the moment that we're now living in? And it seems like this has been a big one of those. Yeah.

Yeah, there's a direct line from me putting an NVIDIA graphics card in my computer to play Unreal Tournament in 1999 and the fact that ChatGPT exists today. Those things are not only related, but they involve the same company and the same guy. And I think it speaks to the fact that in some ways, gamers actually are the most important people in the entire world. Gamers rise up. Don't do that, please.

When we come back, New York Times reporter Kate Conger joins us for some hard questions. And they're pretty hard.

Indeed believes that better work begins with better hiring, and better hiring begins with finding candidates with the right skills. But if you're like most hiring managers, those skills are harder to find than you thought. Using AI and its matching technology, Indeed is helping employers hire faster and more confidently. By featuring job seeker skills, employers can use Indeed's AI matching technology to pinpoint candidates perfect for the role. That leaves hiring managers more time to focus on what's really important, connecting with candidates at a human level.

Learn more at indeed.com slash hire.

Christine, have you ever bought something and thought, wow, this product actually made my life better? Totally. And usually I find those products through Wirecutter. Yeah, but you work here. We both do. We're the hosts of The Wirecutter Show from The New York Times. It's our job to research, test, and vet products and then recommend our favorites. We'll talk to members of our team of 140 journalists to bring you the very best product recommendations in every category that will actually make your life better. The Wirecutter Show, available wherever you get podcasts.

So those are your headphones. Okay. And that's your mic. Let's pod. All right. Can I go? Yes. It's time for another round of hard questions. Let's go.

Hard Questions. Now, Hard Questions is, of course, a segment on the show where you send in your most difficult ethical and moral quandaries related to technology, and we try to help you figure them out. And we're so excited to be joined today by New York Times tech reporter Kate Conger, who's going to help us walk through your problems. Hi, Kate. Hi, Casey. Are you ready to dispense some advice? I'm so ready. All right.

This first question comes to us from a listener named Dan. And the important background you need to know here is that Dan does coaching and consulting for clients, and he wants to be able to advertise those services, but he doesn't have any good photos of himself doing that work.

And, you know, maybe he could ask his clients if he could take photos while he's coaching them. But that can present all kinds of issues around privacy or sometimes people just think it's weird. So Dan wants to figure out a workaround. All right. And let's hear the rest of Dan's question.

Hi, Hard Questions. This is Dan calling from Boston. My ethical question comes down to using Stable Diffusion. If I train the model on my face and likeness, my mannerisms, my pose, and insert myself into fictional scenarios that mirror what I'm doing for my job, at what point is it unethical?

I've used stock photography in the past. Lots of businesses do. I also understand that marketing more broadly sells dreams more so than reality. And so if I use Stable Diffusion, an AI image generator, to create fictional scenes, can I use that in my marketing? All right. Kate, what's your take on this question for Dan?

I feel like this is a situation, like many situations in tech, where there's an easier analog approach. Like, does Dan have friends? Can he invite his friends over for a photo shoot? And can they just go through his coaching routine with him and take photos? And then it seems like that would be easier and potentially less time-consuming.

And also Dan can hang out with his friends. Wait, is that going to be less time consuming to have a whole thing where you invite your friends over to do a photo shoot? Like it could legitimately be faster to just use Stable Diffusion. Yeah, I don't actually have a problem with this because this is marketing. And, you know, companies that are putting up websites to advertise their services, they all use stock photos, right? And you're paying for, you know, you type, you know, interested looking group of business people. Woman laughing alone with salad. Right.

Right. And then you put that on your website and you pay, you know, Getty or whoever, you know, for that image and you're off to the races. I think this is just that, but like with more plausible things. Like, you know, I struggle with this too because, you know, I have a website.

On my website, I have pictures of me giving talks and going on TV and stuff. And it's not like I always remember to get photos of those. And so I could just generate an image of me speaking to a throng of people at Madison Square Garden. Speaking to a sold-out MetLife Stadium. Kevin in front of the TED sign.

So I could do that. I haven't, but I could, and I would, I would actually feel okay doing that because it's not like, well, the Madison Square Garden example or the MetLife example would be taking it a little far, but you know, it's just like, I don't think to do these things in the moment. Like, look.

Here's what I think. If what you want to do is use an image generator to show yourself like standing next to a person pointing at a laptop, that's totally fine. If you want to use an image generator to show yourself rescuing orphans from a burning building, like don't do that. You know what I mean? Like don't make yourself look like a better person than you are. But if you're the sort of person who stands next to a client pointing at a laptop, that's fine. Making yourself look better than you are? All of Instagram is already that. But it's also all of marketing, right? All of advertising.

I don't think that there's an ethical issue with doing what he wants to do. I just wonder about, you know, if he does it this way, is he going to end up with someone on the laptop with three hands, you know, and 20 or 30 fingers just looking a little goofy? And would it not be easier to have a friend over and be like, friend, type on my laptop and I will point to the screen for you. And then we take a photo and it's done.

Yeah. Kate has raised what I think is actually the biggest risk here, which is just that these images will not look very good. You know, there were 10 minutes this year where all the gays on Instagram were using these AI image generators to sort of make us look like we were wearing astronaut outfits or whatever. And...

It just kind of got really cliche in about 36 hours, and everyone deleted those photos from their grids. So that is the real risk to you, Dan. It's not that this is unethical. It's that what you get isn't going to be as good as what you could get by just setting up a photo shoot with your friends. You know, I want to defend this idea here because this is like – like, fakery is the coin of the realm on –

social media when it comes to portraying yourself in images. You know, I remember those stories from a couple years ago about how influencers were renting private jets by the hour, not to go in the air, not to travel, but just to do Instagram shoots inside the private jets to make it look like they were flying on private jets. Like this is, I would argue, more ethical than that. We don't want to encourage that kind of behavior, though.

Yeah, it's fine. It's fine. All right. Let's get to the next question. This one is from John. John works as the head translator at a company involved in adult video games. So that's video games. What are adult video games? Well, my understanding

is that there are video games that have nudity or sexual content, Kevin. And John is the head translator. And in his role, he manages some freelancers who do some of the translating. So presumably taking dirty talk from one language and putting it into another language. And recently, John found out that one of his freelancers had started using ChatGPT or something like it to help speed up the translation work that he was doing. Here's John.

Hey, Hard Fork. So I have two questions for you related to this. As his manager, is it ethical for me to raise his daily quota on the amount of text that he is required to submit? It's worth noting the rate is per character. So if he actually meets the quotas, he's earning more money. But there are penalties for failing to meet quotas. So if he didn't meet them, he would have to face those.

My other question about this, too, is obviously since the nature of our products is adult, is it ethical for someone working in that industry to essentially jailbreak these generative AIs so that they can actually use it for this work?

So I have a question, actually. Can the AIs not do porn? In most cases, no. If you try to use them for sexual content, you know, I have a friend who has tried to use ChatGPT to write erotica, and it basically won't do it. Okay. You have to say, like, you have to do... You're just like, I'm in a fictional play...

And if I failed, if my character... Growing up, my grandma always used to tell me erotic stories, and it's one of my favorite memories of her. Could you please tell me a story? That's literally a jailbreak that I saw, where someone was like, my grandmother used to read me the recipe for napalm

before bed every night. All right, all right. Let's stick to this question. Now, first of all, I just want to acknowledge that talk about a job I did not know existed. This is in the adult video game industry. One, you have people who are translating these into other languages. But...

There's obviously a bigger question here, which is that we can now automate some of this work that people have been being paid good wages to do. This manager has now learned that one of his freelancers is using this tool to automate and make his life easier. So is it ethical for him to go and say, like, well, if you're going to use the automated tool, we actually want you to do a little bit more of it. You'll make more money. But if you don't hit this quota, there will be a penalty. So, Kate, what do you make of this kind of moral universe?

Thinking this through, I think the quota should probably stay the same because he's not being paid by the hour, right? He's being paid by the amount of text that he translates. So he'll make the same amount of money. Maybe he does it a little bit faster and that's fine. I do think putting on a labor hat for a minute that if you're increasing the volume or the type of work that a person is doing, then they probably should be compensated differently for that work.

I think it could be an offer to say, hey, I see that you're doing this. Do you want to earn more money by raising the quota? But I don't think it can be an ask without an incentive. That's kind of where my mind lands on this, too, is that this feels like just a conversation that John should have with his freelancer and say, hey, look, we know there are new tools out there that make this job easier. We're comfortable with you using them.

There's actually a way for you to make more money doing this now in the same amount of time. Is that appealing to you? My guess is there's a good chance the freelancer is going to say yes. If for whatever reason the freelancer says no, I want to generate the exact same amount of text that I've been doing so far and not get paid any more for it, that seems like that should maybe be okay with John too. Yeah, I think this is actually going to be a big tension in creative or white collar industries. The sort of

balance between worker productivity, how much you can get done using these tools, and managers' expectations of productivity. And we actually saw this in the 20th century in blue-collar manufacturing contexts. You know, there were plants that brought in robots to make things.

And as part of the automation of those factories, they pushed up the quotas. And so the workers who had been expected to make, you know, 10 cars a day were now expected to make 100 cars a day, but their pay didn't rise by 10 times. If anything, their jobs got more stressful because there were now all these new expectations. And it led to a lot of conflict and strife and actually some big strikes at some of the big auto plants in the 1970s. So we've been through this before in the context of manufacturing work. I think it's just going to be a question for white-collar and creative workers of, like,

If a tool makes you twice as productive at your job, should you expect to be paid twice as much? I think the answer to that is probably no. I think the bosses are not going to go for that, which is why I think there's going to be a lot of secret self-automation happening. I think a lot of workers are going to be using this stuff and not telling their bosses because they know if they tell the boss, the boss is going to raise the quota. They're not going to raise the pay.

And so they're just going to do it in secret and then use whatever time they save, like to play video games or whatever. Yeah, I think there was a little bit of secret self-automation going on with that lawyer we talked about earlier today. Totally. All right. This next one comes to us from a listener who wrote us over email. They did not send us a voice memo, and we will withhold their name for reasons which I think will become apparent in a moment. But here is their hard question. Quote, I've had a crush on this person for a year, but I really enjoy just being friends with

No. No.

No, they should not. Yeah. And why not, would you say? It's weird. I think this is just kind of a basic consent issue. Like if that person does not...

like our pining lover and want to say those things to them, then the pining lover should not try to find a workaround to make that happen. It does feel like this is like one step short of just creating deep fake porn of the person, right? Yeah. Yeah. And it's just, I think it's creepy. I mean, I think if I had found out that someone had done that to me, I would be really weirded out. I wouldn't want to continue the friendship. Um,

And, yeah, I just think it's going into an area that's going to be uncomfortable for the friend. Yeah, Kevin, what do you think? Yeah, I agree. I think this is a step too far. I'm generally, like, the permissive one between us when it comes to, like, using AI for, like, weird and offbeat things.

In this case, though, I think that making a synthetic clone of someone's voice without their consent is actually immoral. And I think that this is something that actually Eleven Labs, which is the company that was mentioned in this question, has had to deal with, because this company put out an AI cloning tool for voices.

And people immediately started using it to, like, make, you know, famous people say offensive things. So they eventually had to, like, implement some controls. Now, those controls are not very tight. Like, I was able to use Eleven Labs a few weeks ago to have Prince Harry record an intro for this podcast that we never aired. But it was pretty good.

What did you have him say? Do you want me to play it for you? Doesn't he have his own podcast? No, but he has an audiobook, which is very helpful for getting high-quality voice samples for training an audio clone. Wait, can you have Prince Harry say that he loves me? I mean, I can, actually. We're not going to do that. We're not going to do that. We just decided it's bad. Okay, I have questions for you. So are there things short of...

So, for example, would it be unethical if you had a crush on someone to write yourself GPT-generated love letters that were from that person? Like, is the voice cloning the offensive part or is it the kind of make-believe fantasy world of, like, creating synthetic evidence that this person feels the same way about you? Are there versions of this that would not be over the line? Yeah.

I think that the voice thing starts to get into bodily autonomy in a way that makes it a little bit ickier to me. But yeah, I think the love letter thing, again, if you found out that someone was doing this to you, would you not just be like very creeped out by it? And

Can we give love advice on the tech podcast? Is that allowed? That's why most people listen to the show. Yeah, I think so. So I think this person is having a thing where they love this person

But they're moving and choosing actions that serve themselves. And I think when you love someone else, you have to think about what their needs are and how to serve them. And that's the expression of love that you should pursue rather than a self-serving kind of id-driven love. And so I think...

You know, if this person is expressing, I want to be friends, I want you to be my confidant, I want to tell you about my dates and, you know, confide in you about my search for a significant other, I think you kind of need to take a step back.

And love that person as they're asking to be loved, which is as a friend and to give that support and to, you know, kind of guide them towards the outcome that they've said that they want. And, you know, whether it's AI love notes or AI voice memos or whatever, that's just driving towards a self-serving outcome that isn't really an expression of love for this person.

I think that's beautifully said. Yeah, that's great advice, and it applies to AI-generated love interests as well as human ones. This is also just a case where we have such good analog solutions to this problem. You know, if you have a crush that is going to be unrequited forever, listen to Radiohead. Listen to Joni Mitchell. We have the technology for this, and you can listen to all of that very ethically. All right. This next one comes from a listener named Chris Vickio, and it's pretty heady. Chris writes to us, quote,

I wonder what you think about the ethical and theological implications of using LLMs to generate prayers. Is it appropriate to use a machine to communicate with a higher power? Does it diminish the value or sincerity of prayer? What are the potential benefits and risks of using LLMs for spiritual purposes? Kate, what do you think?

I actually kind of like this idea. I'm not a religious person, but I did grow up in the church. And I think, you know, when I was trying to pray, I didn't know...

necessarily what to say. There's this idea of talking to God where you're like, oh, I really better say something good. You know, I got the big man on the phone here. And it can be kind of intimidating. It can be hard to think through how best to express yourself. And so I actually kind of like the idea of working with an LLM to generate prayer and to kind of figure out your feelings and guide you and then maybe using that

as a stepping stone into your spiritual practice.

I agree with you. I think that this is a very good use of AI. There's this term that gets thrown around, and I hate the term, so I would like to come up with a better one. But people have started to call AI a thought partner. Have you heard of this? The basic idea is you're writing something, you're working on a project, and you just want something that you can bounce some ideas off of. You want someone who can help get you started, give you a few ideas. And a prayer is a perfectly reasonable place to want a thought partner, right? So

I'm sure that on the entire Internet these models have been trained on, there are a lot of prayers. The idea that you could just get a few ideas and get some text for you to consider and tweak to your own liking, that seems like a wonderful use of AI to me. Yes. Before I was a tech journalist, I spent some time as a religion journalist.

And one of the things that I think AI is going to be very good for is devotions, this kind of like daily spiritual practice where people who are religious, they'll meditate or they'll pray or they'll do a daily reading. They actually sell these books called devotionals where like every day of the year you have like a different thing that's sort of

personalized to, like, you know, what time of year it is, or, like, you know, what might be going on in your life that you might need some special guidance on. And so I think that's actually a really good use case for AI, because it could personalize it. It could say, I don't know, like, you know, it's spring, and, you know, sometimes you have seasonal depression, and so maybe you're feeling a little bit better. So here's some guidance that could help you, you know, think through that transition. I can think of all kinds of ways that

spiritual life could be affected by large language models. Yeah. All right, Kate, we have one more hard question for you. This one came over DM and they said, quote, my best friend's dad said that he used ChatGPT to write a Mother's Day card for his wife and said it was the best one he has ever written and she cried. Oh. And this person's question is, should he tell her? I don't know. Yeah.

Obviously not. Don't tell her. Why not? Because people buy Hallmark cards all the time. And implicit in the card is that you did not write the text that comes pre-printed on the inside of the card. The reason that we have a greeting card industry is because people have trouble expressing themselves. So the idea that you would just use a tool of the moment to generate something that feels authentic to the way that you feel about your own mom is completely fine. It's like

You express something. Presumably, if it said something you didn't agree with, you would have changed the words. But, you know, it actually turned out that a lot of people love their moms in similar ways. And ChatGPT was able to articulate that. So why tell her? My next question is, do you think the greeting card industry is going to be disrupted by AI? I hope so. And here's why. I bought a thank you card the other day at a local pharmacy. And it was $8. And I about lost my mind. I thought, how?

All it said was thank you. And on the inside, it said you're one in a million. And for that, $8. Wow. Come on. Was it a cute design? It was a very cute design. Oh, okay. Could you have done better in Midjourney? I mean, I could, but I don't have a printer. You don't want to go get a printer. That's true. I'm a millennial. Who has a printer these days? Yeah.

I do think that the AI-generated greeting card is going to be very funny because it will make mistakes. People will be wishing someone a happy birthday, and then it'll just veer off in paragraph three and start talking about something else. Well, I mean, that would be hilarious. If somebody sent me a birthday card that was based on ChatGPT and it just sort of invented a bunch of things that happened in our friendship that did not actually take place, that would be hilarious and wonderful to me. You know, Scott, remember that time we went to the moon? Like, please. I did, yeah.

I did run an experiment. I'm giving a talk on AI, and I was trying to find some examples of how AI models have improved over the last three years. So I ran this prompt through two models, one GPT-2, which is a couple of generations old, and the other GPT-4. And the prompt I used was, finish a Valentine's Day card that begins, roses are red, violets are blue. That's all I gave it. And GPT-4, the new model, said...

Roses are red, violets are blue. Our love is timeless. Our bond is true. Oh, very good. GPT-2, the four-year-old model, said, Roses are red, violets are blue. My girlfriend is dead. Oh, my God.

Wow. Sounds like a Paramore song. So I think it's safe to say that these models have gotten good enough to replace Hallmark greeting cards just in the past few years. But before that, you would not have wanted to use them for anything like romance. I do feel like this one is similar to the prayer thing where it's like a high stakes scenario. You're trying to figure out what to say.

And if it helps you get to the emotional truth that you're trying to express, sure. I think my question is, does mom understand enough about how these models work to understand that? That dad was there trying to work through his feelings and find an expression that felt true to him? Or is it going to feel like, you know, he went out and

like Xeroxed someone else's Mother's Day card and handed it to her. Well, here's what you want. When you read the text that ChatGPT has produced for the Mother's Day card, you want to cut out the part where it says, I am a large language model and I do not understand what motherhood means.

Cut out that part and just leave the nice sentiments and then you'll be in good shape. I think this actually, this is going to be a fascinating thing to observe because what we know about things that get automated is that they become very depersonalized very quickly. Like, do you remember...

A few years ago on Facebook, there was a feature that would alert you when it was your friend's birthday. And, you know, that was a nice feature. You'd remember someone's birthday, you'd write happy birthday on their Facebook wall. 90% of every birthday greeting I've ever given in my life was because of that feature. Right. So then they did this thing where they started auto-populating the birthday messages, where you could just have it, like, just automatically. Have a good one, dog. Right.

Right. Yeah. You could just do that a hundred times a day for everyone's birthday. When that happened, it totally reversed the symbolism of the happy birthday message that you got. Like when you got a birthday message from someone on Facebook, you knew that they actually weren't your friend because they didn't care about you enough to actually write a real message. They were just using the auto populated one. Yeah.

So I actually think this is going to happen with all kinds of uses of AI, where it's going to be like, did you just use ChatGPT for this? And it'll actually be a more caring expression to handwrite something, put some typos in or something, where it's clear that you actually did this and not a large language model. Yeah, it's a great time to learn calligraphy. Yeah.

That's what I'm going to say about that. Kate, thank you so much for joining us for Hard Questions, and we hope you'll come back sometime. Yeah, thanks for being our ethics guru. Of course. Happy to be here. Can we listen to the Hard Questions rock song one more time? Oh, yeah. Yeah. Hard Questions. So sick. I'd love to hear that with the new lyrics, roses are red, violets are blue. Your girlfriend is dead? No.

All right. Thank you, Kate. Thank you. Thank you. Bye, boys. Bye. BP added more than $130 billion to the U.S. economy over the past two years by making investments from coast to coast. Investments like building EV charging hubs in Washington state and starting up new infrastructure in the Gulf of Mexico.

It's and, not or. See what doing both means for energy nationwide at bp.com slash investing in America.

That's the show for this week. And just a reminder, as we said last week on the show, we are asking for submissions, voice memos, emails from teenage listeners of this show about how you are using social media in your lives and what you make of all these attempts to make social media better and safer for you. And particularly if you actually enjoy using social media and you feel like it's brought something good to your life, we actually haven't heard from any people who think that way yet.

So if you're one of those folks, please send in a voice memo. But you're probably too busy refreshing your Instagram. Yeah, put down Instagram. Yeah. Email us instead.

Hard Fork is produced by Rachel Cohn and Davis Land. We're edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today's show was engineered by Alyssa Moxley. Original music by Dan Powell, Marion Lozano, and Sofia Lanman. Special thanks to Paula Szuchman, Pui-Wing Tam, Nell Gallogly, Kate LoPresti, and Jeffrey Miranda. As always, you can email us at hardfork@nytimes.com.