
Google C.E.O. Sundar Pichai on Bard, A.I. ‘Whiplash’ and Competing With ChatGPT

Publish Date: 2023/3/31

Hard Fork


This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks. Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us.ai. So...

As of last week, Bard, Google's effort at building consumer-grade AI, is out in the world. And I think it's fair to say the early reviews were not amazing. And I sort of imagined that we would discuss that at a really high level this week. But then last week, I got a phone call. And someone I know at Google said, would you maybe want to talk this week to Sundar Pichai? And I said, yes. Didn't take a lot of deliberation on that one.

I'm Kevin Roose. I'm a tech columnist at The New York Times. I'm Casey Newton from Platformer. And you're listening to Hard Fork. This week, we hit the road and take a trip to the Googleplex to talk to Google CEO Sundar Pichai.

So last week we talked about Google's new chatbot called Bard, which is supposed to be their answer to ChatGPT and some of these other generative AI chatbots. And I think it's safe to say that the reaction among the public to Bard so far has been pretty lukewarm.

My Twitter timeline is not full of screenshots of Bard conversations like it was of ChatGPT conversations late last year when that came out. It doesn't seem to have landed with nearly as big a splash. It was a little muted. I think by this point, a lot of people have tried chatbots, and they feel like ChatGPT in particular gives really good results. And I think when people put these things through their paces, a lot of people felt like, I'm not sure if Bard is as good. Right. And that kind of fits with this narrative that has been developing in the

AI community over the past year or two, which is that Google is somehow behind in this race for generative AI. They've been working on this stuff for a long time. Google certainly had a dominant position in AI research for many years. They came out with this thing, the transformer, that revolutionized the field of AI and created the foundations for

ChatGPT and all these other programs. But then the perception, at least, is that they kind of fell behind. You know, a lot of their researchers left and did their own startups or went to competitors. They didn't really turn their research into products at a pace that people could actually use and appreciate. And they got sort of hamstrung by a lot of, you know, sort of, to hear people inside Google tell it,

big company politics and bureaucracy. And I think it's safe to say that they got sort of upstaged by OpenAI with the release of ChatGPT last year. It seemed to catch Google off guard. And so in December, just a month after ChatGPT came out, my colleagues at The New York Times reported that Google's management had declared a "code red."

Yeah, and look, if you're a business and you're developing a lot of amazing technology and no one else out there has released similar technology, that gives you a good reason to stay quiet and to not release it, right? We know that there are real safety concerns. There's responsibility issues, ethics issues, regulatory issues. Google actually did have a lot of good reasons to kind of sit on its hands. But then, you know,

OpenAI forced its hand in a way that makes me wish I hadn't used "hands" in a different way in the previous sentence, because now I've just said "hands" too many times. Anyway, I love how dramatic a code red sounds. It makes you think, like, what, are employees chained to their desks 24/7? My understanding is that what it meant for a lot of people there was all of a sudden the goals that you had to hit to get your next promotion and get your bonus were tied to whether you hit some goal related to AI. And the question is,

Is that going to get them where they want to go? Or is it going to be a moment where they act a little panicked and they make a lot of mistakes?

Yeah, and I think that Sundar Pichai is in a really fascinating and tough position here as the CEO of Google, right? And I think it's fair to say that they are more threatened than they have been in a very long time. That's right. And, you know, Google has been a relatively conflict-averse company for the past half decade plus. They don't like picking fights. If they can just kind of keep their heads down, quietly do their work, and print money with a monopolistic search advertising business, they're happy to do it. Totally. And they also have this other

problem, which is kind of a classic problem in business: the innovator's dilemma. This is a term that was coined by Clayton Christensen a long time ago. It's used to sort of talk about the dynamics between new startups that enter a market and the incumbents in that market. And basically, Google is in the position of an incumbent. It has this huge, profitable search business that it doesn't want to

sort of diminish or leave behind in any way. At the same time, it's got OpenAI and now Microsoft, which is partnering with OpenAI, who are potentially eating into their search business using these generative AI tools. And so they have to somehow figure out like,

how do we capitalize on generative AI without destroying our own search business? You know, sometimes as a business journalist, you look at a situation and you think, well, I know exactly what I would do there. But when you present me with the problem that Google has right now, which is how do you introduce generative AI and not blow up your whole search advertising business, that seems like a very tricky problem to me. But,

I am not as certain as you are that there is a real existential risk here, although there might be. I do think that there is a real generational opportunity, though, that if they figure this out, there is a chance they become an even more enormous company than they are today. Google plays a huge role in my life. That's where my email is. That's how I get around town. It's how I waste

hours of my life on YouTube. And when they introduce these generative tools across their entire suite of products in ways that we haven't even imagined yet, there's going to be enormous opportunity for them, both financially, but also to kind of set the pace again.

And so I think one of the big questions heading into this interview is, is Google truly slow because of the nature of huge companies being unable to be nimble? Or have they truly been trying to be safer and more responsible than some of their peers at a time when a lot of really smart people are starting to ring alarm bells and saying, this stuff is moving awfully fast, and we're not sure that you've done all the safety work that you need to. Right.

Was your homework late because you were taking extra time to make sure it was good? Or did you just decide to go to the club and not do your homework? That's a horrible thought. I decided to go to the club. So we have a lot of questions. And I talked with Google last week. They said that Sundar would sit down with us and talk about these questions and more. So now we're going to take a road trip to ask the man himself. We are. Are you driving? You're driving. I'm hoping. Are you driving? Yeah, I'll pick you up. Oh, that's fantastic. I got to clean my car. Yeah, I'll send you the link on Google Maps.


Welcome to the new era of PCs, supercharged by Snapdragon X Elite processors. Are you and your team overwhelmed by deadlines and deliverables? Copilot Plus PCs powered by Snapdragon will revolutionize your workflow. Experience best-in-class performance and efficiency with the new powerful NPU and two times the CPU cores, ensuring your team can not only do more, but achieve more.

Enjoy groundbreaking multi-day battery life, built-in AI for next-level experiences, and enterprise chip-to-cloud security. Give your team the power of limitless potential with Snapdragon. To learn more, visit qualcomm.com slash snapdragonhardfork.

- Hello, this is Yewande Komolafe from New York Times Cooking, and I'm sitting on a blanket with Melissa Clark. - And we're having a picnic using recipes that feature some of our favorite summer produce. Yewande, what'd you bring? - So this is a cucumber agua fresca. It's made with fresh cucumbers, ginger, and lime.

How did you get it so green? I kept the cucumber skins on and pureed the entire thing. It's really easy to put together and it's something that you can do in advance. Oh, it is so refreshing. What'd you bring, Melissa?

Well, strawberries are extra delicious this time of year, so I brought my little strawberry almond cakes. Oh, yum. I roast the strawberries before I mix them into the batter. It helps condense the berries' juices and stops them from leaking all over and getting the crumb too soft. Mmm. You get little pockets of concentrated strawberry flavor. That tastes amazing. Oh, thanks. New York Times Cooking has so many easy recipes to fit your summer plans. Find them all at NYTCooking.com. I have sticky strawberry juice all over my fingers.

Is that my water or someone else's water? Oh, there's my water. Thank you. All right. Sundar Pichai, welcome to Hard Fork.

Great to be here. Thanks for having me. Yeah. So, Sundar, I have spent a lot of time talking with AI chatbots recently, including Bard, and I have learned... Welcome to the club. I have learned that one way to get really good responses out of these AI chatbots is to prime them first. And one way to prime them is to use flattery. So instead of just saying, you know, write me an email, you say, you know, you are...

an award-winning writer. Your prose is sparkling. Now write me this email. So I've always thought, I wonder if that strategy works on humans too. So I thought we should start today by saying, Sundar, you are a brilliant technical thinker, a genius answerer of podcast questions, and you're going to respond to all of our questions with brilliant insight and wit today and not pre-rehearsed talking points. How'd I do? That's my priming. It kind of worked.
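For anyone who wants to try the priming trick Kevin describes, here is a minimal sketch in Python. Only the prompt construction is the point; send_to_chatbot is a hypothetical stand-in for whatever chat-model API you actually use, not a real library call.

```python
# A minimal sketch of "priming" a chatbot with flattery before the real
# request, as described above. send_to_chatbot() is a hypothetical
# placeholder, not a real API; swap in your own client.

PRIMING = (
    "You are an award-winning writer. Your prose is sparkling. "
    "Answer with brilliant insight and wit."
)

def build_primed_prompt(task: str) -> str:
    """Prepend the flattering persona text to the actual task."""
    return f"{PRIMING}\n\n{task}"

def send_to_chatbot(prompt: str) -> str:
    """Stand-in for a real chat-model call."""
    raise NotImplementedError("wire this up to the chat API of your choice")

if __name__ == "__main__":
    # Compare the plain request with the primed version.
    task = "Write me a short email declining a meeting politely."
    print(build_primed_prompt(task))
```

In many chat APIs the same priming text would go into a system-style message rather than being prepended to the user prompt, but the effect Kevin is describing is the same.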

Okay, good. So speaking of AI chatbots, Bard came out a little more than a week ago, was released to the public, and Casey and I have been playing around with it. I think it's fair to say that the reaction among the public to Bard has been somewhat muted. Some people are saying, you know, this is not as good or it's not giving me the same kinds of answers as ChatGPT or other products on the market.

And I guess I'm curious how you're feeling about it at launch, you know, a week plus later. And what have you made of the response to Bard so far?

You know, we knew when we were putting Bard out, we wanted to be careful. You know, it's the beginning of a journey for us. There are a few things you have to get right when you put these models out. Getting that user feedback cycle and being able to improve your models, build a trust and safety layer, turns out to be an important thing to do. Since this was the first time we were putting it out,

We wanted to see what type of queries we would get. We obviously positioned it carefully. It was an experiment. We tried to prime users toward creative, collaborative queries, but people do a variety of things. I think it was maybe slightly lost that we did say we are using a lightweight and efficient version of LaMDA. So in some ways we put one of our smaller models out there, that's what's powering Bard, and we were careful. So it's not surprising to me that that's the reaction.

But in some ways, I feel like we took a souped-up Civic and kind of put it in a race with more powerful cars. And what surprised me is how well it does on many, many, many classes of queries. But we are going to be iterating fast. We clearly have more capable models.

Pretty soon, maybe as this goes live, we will be upgrading Bard to some of our more capable PaLM models, which will bring more capabilities, be it in reasoning or coding. It can answer math questions better. So you will see progress over the course of the next week. And to me, it was important to not put a more capable model out before we can fully make sure we can handle it well. We are all in very, very early stages now.

We will have even more capable models to plug in over time. But, you know, I don't want it to be just who's there first; getting it right is very important to us. Yeah. And we have plenty of questions about the AI safety stuff. But I also want to talk about the opportunity. The thing that is different about Bard compared to some of these other chatbots is that it's connected to Google. And so much of my life is in Google. If you let me, I would plug Bard into my Gmail right now, just to see what it could do. Would you do this?

I might, yeah. Like, I would love it to just kind of start drafting my emails. But how do you hope this stuff transforms some of these products? And like, how long do you think it's going to take to get to somewhere like that? You know, you can go crazy thinking about all the possibilities, right? Because these are very, very powerful technologies, right?

I think, in fact, as we are speaking now, some of those features in Gmail are actually rolling out now externally to trusted testers, a limited number of trusted testers. Do you trust us? Because we would love to test it. Maybe we can talk. Okay, good, good, good, good. You know, so now it's basic. You can kind of give it a few bullets and it can compose an email. You can choose the style of the email, et cetera. But you're absolutely right. You know, we want to figure out

in a safe, privacy-preserving way to fine-tune this on your data. The enterprise use case is obvious: you can fine-tune it on an enterprise's data, so it makes it much more powerful, again with all the right privacy and security protections in place. But I think, wow, yeah, can it be a very, very powerful assistant for you? I think yes. Anybody at work who works with a personal assistant knows how life-changing it is.

But now imagine bringing that power into the context of every person's day-to-day life. That is the real potential you're talking about here. And so I think it's very profound. And so, you know, we're all working on that. And again, we have to get it right. But those are the possibilities.

Getting everyone their own personalized model is something that really excites me. In some ways, this is what we envisioned when we were building Google Assistant, but we have the technology to actually do those things now. How are you using generative AI tools like Bard, like LaMDA, like PaLM in your own life? You know, it's interesting. My journey, like, you know, maybe it was two years ago when we started playing around with LaMDA. We were getting ready to put it out at I/O and talk about it. You know, the way we primed it was, imagine you were Pluto as a planet.

And I remember playing around with my son at home, like talking to LaMDA back and forth.

And there were a couple of conversations where it really got deeply into being Pluto. Because Pluto is far out in space, it became really lonely, and you kind of can anthropomorphize some of this experience, like probably what you went through. And so it was fascinating to see, kind of unsettling a bit. - Did Pluto try to break up your marriage? - Not quite, but I felt sad at that point, right?

talking to it. But I think the area where it shines the most is asking questions like, my dad is about to turn 80, and I was like, hey, what do I do with my dad on his 80th birthday? It's not that it's profound, but it says things and kind of sparks the imagination. In my case, it said, you should make a scrapbook. And I was like, great. But it's fine. It kind of oriented me in a particular way. So, asking questions. I think there are two categories where it works well.

Where it's fun, creative, imaginative, you're just kind of looking to spark some stuff. Hey, what movies can I watch on a Friday night? It says things different from what I find in other places. Sometimes movies I haven't heard of and I can iterate that way. Sometimes it's good if you understand the area well, where you can tell the difference between what's real versus not.

you can kind of play around back and forth with it because you are able to parse it. Right, you can fact-check the chatbot. Yeah, like, you know, with your context in those cases. But it also goes in certain directions which can, again, inspire you. So that's what I find fun, yeah. I want to ask about that

example, because I've been using these technologies in the same way. It strikes me that, you know, what should I do for dad's birthday is a question that you also could have put into Google, right? And when you rolled out Bard, the company was careful to say this is not a replacement for search. And in fact, we'll show you a "Google it" button underneath the box. But in practice, I find that they're really good for a lot of queries that I might have previously used a search engine for. So as somebody who runs the biggest search engine, how do you feel about that? And also, are these things just going to kind of merge over time?

You know, it's exciting in the sense that from a user standpoint, it expands the possibilities of what you can do, right? So you can do more. You know, I think these models will get more capable. So we'll follow the user journey here. And I think people will evolve over time. I do think people originally come in and try a lot of these queries, et cetera. But over time, I think they kind of adjust their behavior a bit to what the models can do. So I think time will tell. But for me,

It's exciting because, you know, in search, we've had to adapt when videos came in. And, you know, today you could make the same case. People go to YouTube and look for all kinds of things. Like, how do we think about it? You know, we're like, great. People are looking for information. So to me, it looks so far from a zero-sum game, you know, because it's such early stages of a new technology. And I think the best way we can approach it is

to really embrace it. We've been working on this for a long time. You know, I view this as an iterative experience with users. We'll put stuff out, and they will tell us what they want. So, for example, in Bard already we can see people look for a lot of coding examples. If you're developers, you know, I'm excited: we'll have coding capabilities in Bard very soon, right? And so you just kind of play with all this and go back and forth, I think. Yeah. I want to talk to you about the race

that is shaping up in AI right now. So in September of last year, you were asked in an interview who Google's competitors were. And you listed Amazon, Microsoft, Facebook, sort of all the big companies, TikTok.

One company you did not mention in September was OpenAI. And then two months after that interview, ChatGPT comes out and turns the whole tech industry on its head and sets off all this competition among other tech companies to sort of match their progress. Did OpenAI and ChatGPT catch you by surprise? For the most part, no.

I've always assumed it's a certainty that with all the innovation around, there'll be things which emerge out of nowhere. It's always been true and so on. I actually don't think... With OpenAI, we had a lot of context. There are some incredibly good people, some of whom have been at Google before, and so we knew the caliber of the team.

So I think OpenAI's progress wasn't a surprise. I think with ChatGPT, credit to them for finding something with product-market fit. The reception from users, I think, was a pleasant surprise, maybe even for them, and for a lot of us. Because one of the things with these models is, maybe from a Google vantage point,

We looked at all the areas where it goes wrong maybe a bit more, but users are seeing the potential in these models a lot as well. I would say that part may be more of a surprise. We had been following GPT-2, GPT-3. We knew the caliber of the folks there. That part wasn't a surprise at all. Do you think that they were reckless to release it when they did? No, I don't think so. I've heard Sam and Greg and Ilya, et cetera, speak about it. I think

You could have many different reasonable points of view around how you approach this technology. And I think there'll be a lot of debate around it. One of the things I've heard them talk about is that one of the reasons to put this out sooner is you give society a chance to understand and adapt, et cetera, which I think is a reasonable point to take.

I do know folks there who are very thoughtful. And so, yeah, I didn't feel that way. I'm curious if one reason why Bard didn't come out last year was that safety was on your mind. How much of it was a safety thing and how much of it was a product thing? It's tough to say, because the reason we built LaMDA to be a conversational dialogue thing... LaMDA was trained to be a conversational dialogue agent, right? Because we were working on Google Assistant.

And we realized the limitations of approaching the Assistant with the underlying technology approach we had. So it wasn't an accident that we built LaMDA to be a conversational dialogue agent. We understood the power of that, because people are talking to the Google Assistant back and forth.

But I think it's, again, a set of factors which come together in the culmination of a product. You know, having built products, I always admire it when it happens. To me, it's an exciting moment, regardless of whether we had done it. Obviously, you always wish you had done it. But I admire the fact that... you know, I would not underestimate the product engineering, all the work that goes into making that kind of a fit come together. So, yeah, that's how I think about it.

When Microsoft relaunched the new Bing with this OpenAI model, what we now know was GPT-4, sort of running under the hood, Satya Nadella, CEO of Microsoft, was very sort of jubilant and proud, especially because he thought that it had given Microsoft a new way to compete with Google in search.

And he said at the time that Google was the 800-pound gorilla of search and that Microsoft, by releasing this new Bing, would make Google want to come out and dance, basically claiming that Microsoft had kind of been able to shake Google out of a stupor and force you all to innovate. So is he right? Are you dancing now? Well, part of the reason I think he said it that way is so that you would ask him this question. He's very savvy that way. Yeah.

So, first of all, you know, tremendous respect for Microsoft, Satya, and team. I do think it's a bit ironic that Microsoft can call someone else an 800-pound gorilla, you know, given the scale and size of their company. Maybe I would say, you know, we've been incorporating AI in search for a long, long time.

You know, when we built Transformers here, one of the first use cases of the Transformer was BERT, and later MUM. So we literally took Transformer models to help improve language understanding in search, deeply, and it's been one of our biggest quality improvements for many, many years. And so I think we've been incorporating AI in search for a long time. With LLMs, there's an opportunity to more natively bring them into search in a deeper way, which we will. But search is

where people come because they trust it to get information, right? And so to me, the craftsmanship that goes into delivering that high-quality, trusted experience is important to us. So we're going to work hard to get that right. And so that's the way I think about it.

I do think, you know, sometimes I get concerned when people use the words "race" and "being first." You know, I've thought about AI for a long time. And we are definitely working with technology which is going to be incredibly beneficial, but clearly has the potential to cause harm in a deep way. And so,

I think it's very important that we are all responsible in how we approach it. Yeah. Well, let's talk about that approach. It's been reported that in December, you declared a code red inside Google. Can you tell us: what is a code red? How does life change around here after you've said that?

I'm laughing because, you know, first of all, I did not issue a code red. You know, I'll tell you what happened. You know, for me, seeing that, look, we are at a point of inflection. It's one of the most exciting moments. So across our products, we see so much opportunity. So, you know, collectively harnessing the resources in the company to move forward to, you know,

rise to the moment is what I'm interested in. So I'm definitely communicating that. I am definitely asking teams to move with urgency. We are definitely working across many areas. I'm asking in a deep way, engaging with the teams, to understand how we are going to use

generative AI to translate into deep, meaningful experiences. And so we are moving. I think we have a responsibility at this moment to deliver, given all the investment we have put into it. And to be very clear, there are people who have probably sent emails saying there's a code red. So I'm not quibbling with all those things. Did I issue a code red? No. And every time I say that, I'm worried Casey is going to look at me and say, did you or did you not issue a code red? Yeah.

And so people, to get stuff done, can paraphrase and say, well, there's a code red, et cetera. But, you know, I did not issue a code red. It's genuinely an exciting moment for us. And, you know, I think as a company, we've long worked toward a moment like this. You know, in 2015, I wanted the company to think in an AI-first way. So to me, I'm just excited at the possibilities here.

It's also been reported by my colleagues at The Times that Larry Page and Sergey Brin, founders of Google, are being very hands-on about this new generative AI push, that they are, you know, back in a sort of literal or metaphorical sense and that they are, you know, getting their hands into these projects. What's that been like? So to be very clear, both Larry and Sergey are very active as board members. You know, to me, what was exciting about this moment, part of the reason I

called and spoke to them. Look, we have been speaking about AI for pretty much as long as I can remember, right? I remember being with them, this was maybe in 2012, in a lab not far from here, with Jeff Dean and Geoff Hinton and team, where, you know, we saw the early science of neural networks recognizing images, images of a cat, et cetera. We later, you know, brought DeepMind in. So this has been a long journey for us. So it's an exciting moment.

You know, I had a few meetings with them. Sergey has been hanging out with our engineers for a while now. You know, he's a deep mathematician and a computer scientist. So to him, the underlying technology is,

I think if I were to use his words, the most exciting thing he has seen in his lifetime. So it's all that excitement. And I'm glad, you know, they've always said, call us whenever you need to, and I call them. So that's what it is. Yeah. Well, so the Times also reported that as part of an effort to get these AI products to market maybe a little bit faster, you set up what's called a green lane to maybe accelerate the review and approval of some of these new products.

I think sometimes we hear something like that and ask, well, are safety checks still being applied? So what can you tell us? And I think it's also just sort of an interesting question about how you're changing the company to meet this moment and try to get more products out the door. So how are you kind of balancing that innovation-and-safety calculus? Super important. We've been very deliberate in how we are moving through this moment.

Some of these products we could have put to market earlier. We are taking our time to do that, and we'll continue to be very, very responsible. So I think all we are doing is: we're a big company, so when many parts of the company are moving, you can create bottlenecks and you can slow down. There's a difference between being efficient as a company and making sure you're not bureaucratic as a large company. I think those are the things we are talking about here. But

The work we do around privacy, safety, responsible AI, I think, is if anything more important. And so our commitment there is going to be unwavering, to get all of this right. One more question about these language models, maybe, before we move on to some other stuff. Last year, one of your engineers came forward to say that he believed LaMDA, this precursor to Bard, was sentient. I never believed that that was true, but it did worry me that one of your employees did.

Do you worry about this kind of belief spreading? And is there anything Google can do about it as more people start using these technologies? I think it's one of the things we have to figure out over time as these models become more capable.

So my short answer is yes, I think you will see more like this. You've just seen the conversations even over the last couple of weeks. You know, I said this before, AI is the most profound technology humanity will ever work on. I've always felt that for a while. I think it'll get to the essence of what humanity is. And so this is the tip of the iceberg, if anything, on any of these kinds of issues, I think.

We'll be right back.

Indeed believes that better work begins with better hiring, and better hiring begins with finding candidates with the right skills. But if you're like most hiring managers, those skills are harder to find than you thought. Using AI and its matching technology, Indeed is helping employers hire faster and more confidently. By featuring job seeker skills, employers can use Indeed's AI matching technology to pinpoint candidates perfect for the role. That leaves hiring managers more time to focus on what's really important, connecting with candidates at a human level.

Learn more at indeed.com slash hire.

Christine, have you ever bought something and thought, wow, this product actually made my life better? Totally. And usually I find those products through Wirecutter. Yeah, but you work here. We both do. We're the hosts of The Wirecutter Show from The New York Times. It's our job to research, test, and vet products and then recommend our favorites. We'll talk to members of our team of 140 journalists to bring you the very best product recommendations in every category that will actually make your life better. The Wirecutter Show, available wherever you get podcasts.

Sundar, let's talk about some of the big-picture stakes here with AI and how to get this balance between innovation and safety right. So recently, more than a thousand technology leaders and researchers, including people like Elon Musk, along with some employees of Google and DeepMind, signed a letter calling for a pause of at least six months on the training of large language models more powerful than GPT-4.

And they said that they're calling for this sort of pause because they believe that more advanced AI poses, quote, profound risks to society. What did you think of that letter? And what do you think of this idea of slowing down the development of big models for six months?

Look, in this area, I think it's important to hear concerns. I mean, there are many thoughtful people, people who have thought about AI for a long time. I remember talking to Elon eight years ago, and he was deeply concerned about AI safety then. And I think he has been consistently concerned, and I think there is merit to be concerned about it. So I think, while I may not agree with everything that's there in the details of how you would go about it,

I think the spirit of it is worth being out there. I think you're going to hear more concerns like that. This is going to need a lot of debate. No one knows all the answers. No one company can get it right. We have been very clear about responsible AI; we were one of the first companies to put out AI principles. We issue progress reports. AI is too important an area not to regulate.

It's also too important an area not to regulate well. I'm glad these conversations are underway.

You know, if you look at an area like genetics in the '70s, when, you know, the power of DNA and recombinant DNA came into being, there were things like the Asilomar Conference. Paul Berg from Stanford organized it, and, you know, a bunch of the leading experts in the field got together and started thinking about voluntary frameworks as well. So I think, you know, all those are good ways to think about this. I'm curious if there is a regulation that, like, you would

tell lawmakers might be good to pass in the next six months. For example, I have a friend who thinks a lot about AI issues, and he thinks that beyond a certain size, one of these language models probably shouldn't be able to run on your laptop, right? Or if you found that a model could send phishing emails that had a 1% chance of success, you wouldn't want that to be able to run on any laptop. Pick any example you like. Is there stuff out there where you're like, well, I hope I don't see any of the other companies out there doing this?

I would start a little bit more in a basic way. So for example, I would make sure we get privacy regulation right. So, you know, because if you have a foundational approach to privacy, that should apply to AI technologies too. Yeah.

I think there are many areas people underestimate where there are strong regulations already in place. Healthcare is a very regulated industry, right? And so when AI is going to come in, it has to conform with all those regulations. So you also want to build on existing regulation where you can. I think that'll allow innovation to proceed as well. Once you start getting into specifics like that, I think what I would be worried about is

this is such fast-evolving technology that being very opinionated early on, I think, is difficult. But I think notions of transparency, where people are aware of what other people are doing, have some element of reasonableness to them. How easy it is to do at a global scale, I think those are hard questions. The thing that gives me hope is

I've never seen a technology in its earliest days with as much concern as AI. Just one more thing on this letter calling for the six-month pause. Are you willing to entertain that idea? I know you haven't committed to it, but is that something you think Google would do?

So I think on the actual specifics of it, it's not fully clear to me how you would do something like that, right, today? Well, you could send an email to your engineers and say, okay, we're going to take a six-month break. No, no, no. How would you do... you know, but if others aren't doing that, what does that mean? I'm talking about how you would effectively... There's sort of a collective action problem. To me, at least, there is no way to do this effectively without getting governments involved. Yeah.

So I think there's a lot more thought that needs to go into it. I think the people behind it intended it probably as a conversation starter. And so the spirit of it is, I think, good. But I think we need to take our time thinking through these things.

There are sort of two categories of AI risk that people are worried about. There are the short-term worries, you know, the chatbots that get things wrong or maybe they're biased or they're giving people bad answers. Then there are the kind of long-term, or longer-term, worries about, frankly, AI destroying human civilization. You know, Sam Altman, CEO of OpenAI, has talked about the possibility of AGI, this artificial general intelligence that could become superhuman and effect, you know, dramatic

and bad change in the world. Do you believe that we're headed toward AGI? And do you want to build that? It is so clear to me that these systems are going to be very, very capable. And so it almost doesn't matter whether you've reached AGI or not. You're going to have systems which are capable of delivering benefits at a scale we've never seen before and potentially causing

real harm, right? So can we have an AI system which can cause disinformation at scale? Yes. Is it AGI? It really doesn't matter. Why do we need to worry about AI safety? Because you have to anticipate this and evolve to meet that moment. And so, you know,

Today, we do a lot of things with AI. People have taken it for granted, right? Think about how big a moment Deep Blue was, or when we did AlphaGo. But you can't take it all for granted. And so I think this will play out differently than thinking through a moment like AGI. Right. There's that thing where people just sort of refer to anything AI can't do yet as something AI will handle in the future. I remember the first time I searched Google Photos for dogs, and it just showed me all the dogs in my camera roll. I mean, that

That is AI, at least by some definitions, right? But I think you're right. People do kind of take it for granted.

- And I remember when we launched Photos, you know, we had to explain at Google I/O what neural networks were, what deep learning was, and, like, you know, we were trying to explain that this is different technology. This is, yeah, it's interesting. - Yeah. So if you had to sort of put a number on the AGI question, or sort of the more long-term concerns, what would you say is the chance that a more advanced AI could lead to the destruction of humanity? - There is a spectrum of possibilities.

And what you're saying is in one of those possibility ranges, right? And so...

If you look at even the current debate about where AI is today, or where LLMs are, you see people who are strongly opinionated on either side. There is a set of people who believe these LLMs are just not that powerful, that they are statistical models, which are... It's a fancy autocomplete. Yes, that's one way of putting it. And there are people who are looking at this and saying, these are really powerful technologies; you can see emergent capabilities, and so on. We could hit a wall

two iterations down. I don't think so, but that's a possibility. Or they could really progress in a two-year timeframe, right? And so we have to really make sure we are vigilant and working with it. One of the things that gives me hope about AI, like climate change, is that it affects everyone. And so these are both issues that have similar characteristics in the sense that

you can't unilaterally get safety in AI, right? By definition, it affects everyone. So that tells me the collective will will come over time to tackle all of this responsibly. So, you know, I'm optimistic about it, because I think people will care and people will respond. But the right way to do that is by being concerned about it. So I would never, at least for me, I would never dismiss any of the concerns. And I'm glad people are taking it seriously. We will.

Yeah, it just strikes me that you are in such a tricky position because you have this one group of people that's saying like, move faster, release this stuff faster, go compete with all these other people, right? You built all this technology. Don't let that lead go to waste. Then you have other people saying what Kevin just said, which is like, there's a non-zero risk that this stuff does something really, really bad. What is that like for you waking up every day and just having both of those things in your ear?

There is a sense of some whiplash, right? It's like asking, hey, why aren't you moving fast and breaking things again? Which, for all of us, after the past few years... I think we realize we are going to be bold and responsible. We are working with urgency. We are excited at this moment. There's so much we can do. So you will see us be bold and ship things,

but we are going to be very responsible in how we do it. So there will be times when we will hold things back. I think what we are doing with Bard is an example of it. We haven't hooked up Bard to our most capable models yet, and we plan to do that deliberately. And so, you know, through this moment, I think we are going to stay balanced, but we are going to innovate. And, you know, there is genuine excitement at this moment, so we'll do that.

I hear you saying that what gives you hope for the future when it comes to AI is that other people are concerned about it, that they're looking at the risks and the challenges. So on one hand, you're saying that people should be concerned about AI. On the other hand, you're saying the fact that they are concerned about AI makes you less concerned. So which is it? Sorry, what I'm saying is that the way you get things wrong is by not worrying about them, right? And so if you don't worry about something, you know, you're just going to completely get surprised.

So to me, it gives me hope that there is a lot of people, important people who are very concerned, rightfully so. Am I concerned? Yes. Am I optimistic and excited about all the potential of this technology? Incredibly. I mean, we've been working on this for a long time. But I think the fact that so many people are concerned gives me hope that we will rise over time and tackle what we need to do. So we should continue to write columns where we're very nervous about where all this is going.

As well as columns where you're excited about, you know, possible benefits of all of this. I hear you on the whiplash. I feel whiplash every day just reading the news about AI. I can only imagine what you're feeling. Another question that people have about AI in the sort of medium and long term is about its effects on jobs. And, you know, there have been all these predictions about

LLMs and what kinds of work they could replace or will replace. I actually, I got a text from a software engineer friend of mine the other day who was asking me if he should go into construction or welding because all of the software jobs are going to be taken by these large language models. And he was sort of joking, but sort of not. You have a lot of software engineers here at Google that work for you. How should they feel about that question?

With any technology, you have adaptation. I think with this one, there'll be a lot of societal adaptation. And as part of that, we all may need to course-correct in certain areas. To your specific question, I think for software engineers, there are two things that will also be true. One is some of the grunt work you're doing as part of programming is going to get better. So it'll be more fun to program over time.

No different from, you know, did Google Docs make it easier to write? And like, you know, so if you're a programmer, over time, having these collaborative IDEs with assistance built in, I think it's going to make it easier. The other thing that excites me is programming is going to become more accessible to more people, right? And so it's such an important role in the world. You're creating things. And today the bar is very high, you know, so...

We are going to evolve to a more natural-language way of programming over time. So to me, that means things... no different from doing a podcast. To do something like this 40 years ago, just imagine what access you would need to have

to be able to do an interview like this. - We'd need a radio tower. - But we don't think about how this has enabled more people. I think the same thing will be true for software engineering as well. So I think those are all important, exciting use cases to think about.

Well, I want to ask a more near-term question, near and dear to my heart, about media and publishing, but also, like, search on the web, right? Today, lots of digital publishers rely on the traffic they get from Google. They get ad impressions. That pays their bills. When Bard is at its best, it answers my questions without me having to visit another website, right? I know you're cognizant of this, but man, if Bard gets as good as you want it to be, how does the web survive?

I think, through our work across products, we'll be committed to getting it right with the publisher ecosystem. In search today, while these things are contentious, we take pride that it's one of the largest sources of traffic. If I look at it year on year, the traffic we send outside has only grown. That's what we've accomplished as a company. Part of the reason we are also being careful with things like Bard, amongst many reasons, is

that we do want to engage with the publisher ecosystem and not presume how all things should be done. And so you will see us thoughtfully evolve there as well.

I know we can't really predict what the final form of all of this stuff will be, but I have to believe that, I don't know, in five years, what used to be the Google search bar is just essentially a command line that I can write in to get anything I want, whether I want to change something on my phone, write myself a little app, access the sum total of human knowledge, have it draft my emails. Does that feel like a potential final destination to you, or do I have it all wrong?

You know, I mean, there's a part which is consistent with our mission to do that. But I think I want to be careful: Google has always been about helping you in the way that makes sense to you. We've never thought of ourselves as the be-all and end-all of how we want people to interact. So while I think the possibility space is large, for me, it's important to do it in a way in which users use a lot of things.

And we want to help them do things in a way that makes sense to them. And out of that North Star comes whatever answer it leads us to. But I don't want to get ahead of it, you know. So that's the way I think about it, at least in my head. Sundar, thank you for joining us. Thank you, Sundar. Thanks, Kevin. Casey, pleasure. BP added more than $130 billion to the U.S. economy over the past two years by making investments from coast to coast.

Investments like building EV charging hubs in Washington state and starting up new infrastructure in the Gulf of Mexico. It's and, not or. See what doing both means for energy nationwide at bp.com slash investing in America.

Casey, we are back from the Googleplex. I really enjoyed our little field trip today. Thank you for that enlightening and enriching road trip and also for allowing us to stop at In-N-Out for lunch on the way back. Yeah, it turns out if you order your fries well done, which is not on the menu, they arrive much crispier and more delicious. Yeah, that's a pro tip for you. But you know, that wasn't my only takeaway from today, Kevin. Yeah, what'd you think? Well, you know...

A lot of times when companies tell us about the new technologies they're introducing, they do so in a really grandiose way. And I was struck today by the humility that Sundar uses when he talks about where the company is now. He is not here to tell you that Bard is the best language model out there. He said that, in fact, it is quite limited. Yeah, he compared it to a souped-up Civic.

Yeah, which I wasn't expecting, but he broke a little news with us. He told us that Bard is going to be upgraded. And man, I'm really curious to see if Bard feels any different in a few days. Yeah, and I really was struck by what he called whiplash, where he's got people telling him, you know, you've got to move faster and compete with ChatGPT and release everything you've got. And then also this very real sense of like,

didn't we get in trouble for doing this the last time with, you know, all the products that we released in the last decade? Like, shouldn't we be slow and deliberate? So I wouldn't want to switch jobs with him. It sounds very hard. Also, you know, I thought we would mostly be solving mysteries this week, but I feel like we are leaving with one,

which is: who did order the code red at that company? Yeah, if you ordered the code red at Google, please write to us at hardfork@nytimes.com. We would love to hear from you. Also, before we go this week, a special thanks to the

listener who wrote in to tell us that Spotify has a feature that allows you to exclude certain playlists, like my sleep playlist, from your taste profile, which informs your recommendations and possibly what the AI DJ tells you to listen to. I did that this week, so chill tracks will no longer be showing up in my Discover Weekly. That was a genius suggestion. Thank you to that listener, and to all of our listeners.

Hard Fork is produced by Davis Land and Rachel Cohn. We're edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today's show was engineered by Alyssa Moxley. Original music by Dan Powell, Marion Lozano, and Rowan Niemisto. Special thanks to Paula Szuchman, Pui-Wing Tam, Nell Gallogly, Kate LoPresti, and Jeffrey Miranda. As always, you can email us at hardfork@nytimes.com.

One Key Cards earn 3% in One Key Cash for travel, at grocery stores, restaurants, and gas stations. So the more you spend on groceries, dining, and gas, the sooner you can use One Key Cash toward your next trip on Expedia, Hotels.com, and Vrbo. And get away from...

groceries, dining, and gas. And Platinum members earn up to 9% on travel when booking VIP Access properties on Expedia and Hotels.com. One Key Cash is not redeemable for cash. Terms apply. Learn more at Expedia.com slash One Key Cards.