
Inflection AI: Personalized and More "Woke" Than The “TruthGPT” Elon Might Make

Publish Date: 2023/6/15

On with Kara Swisher


On September 28th, the Global Citizen Festival will gather thousands of people who took action to end extreme poverty. Join Post Malone, Doja Cat, Lisa, Jelly Roll, and Raul Alejandro as they take the stage with world leaders and activists to defeat poverty, defend the planet, and demand equity. Download the Global Citizen app today and earn your spot at the festival. Learn more at globalcitizen.org.

From New York Magazine and the Vox Media Podcast Network, this is AI Kara Swisher. Or is it the real Kara Swisher? Or is it AI Kara Swisher? Soon, you'll never know. That is actually Kara Swisher, and I'm Nayeema Raza. And there is so much AI around us. When's the last time you saw a tech revolution like this? Was it Web3, social media? Mobile. The shift to mobile. You understood that it was going to unleash an enormous amount of companies. Uber didn't exist. There's all kinds of companies that only exist because of the iPhone's introduction.

shift, the whole app revolution, certainly the mobile era. And this is the same thing. You don't really understand what's going to be made because nobody's creative enough to think about the various applications. But there'll be a Yahoo of this, a Google of this. There'll be an Uber of this. There'll be a, you know, whatever. Name your app. Yes. Our guest today is one of these AI company founders. So we have a number of companies operating in the space, OpenAI, Anthropic, DeepMind, Curve.

Google. Today's guest, Mustafa Suleyman, actually founded DeepMind, which sold to Google. A story I believe you broke, right? I did. That was a long time ago, 2014 or something like that. And since then, he's left Google. He worked on LaMDA, Google's large language model, and has moved on to start the company Inflection AI, along with Reid Hoffman, who we've had on the podcast as well. Mm-hmm.

So do you know Mustafa Suleyman? Have you met him? I have met him many times when he was at Google and working on this stuff. I was very taken by what they were doing in AI, more so than anything else. And I remember when they bought it, I thought, this is probably a bigger deal than I realized. I wasn't as up to speed, and most people thought AI was going to come very slowly. But when Google bought it, I started to pay a lot of attention to the sector.

And he's a really interesting person in Silicon Valley because he doesn't have the same background as many of the CEOs that we've seen. He grew up in the UK. He was raised by a Syrian-born father who drove a cab and a British mother and then dropped out of Oxford, I believe. DeepMind was in England, which was another unusual thing I remember at the time. It was way ahead of everybody else. It was a really big deal. When they bought YouTube, it was a similar feeling. This is a big deal.

They had to have it. So have you been surprised that DeepMind has been kind of kept under wraps a little bit? They have. Google's got to be very careful. They've been doing a lot of AI stuff, but they've been very slow to roll things out, largely because they're worried about the safety and everything else. And so I think they now realize they've really got to lean into this area.

But, you know, I never count Google out on things like this. You think they're holding back because they're a publicly traded company as well, right? The safety would have real... Yeah. And then they did all their silly stuff, their silliness, silly walks everywhere. What do you mean silly walks? You know, all those companies they started, all those... Oh, yeah. All the Alphabet kind of portfolio companies that they are now closing down. Yeah, the barge in San Francisco Bay. They should have focused on AI. They never innovated search. That was always interesting to me. And if you think about it,

AI, the way it's being delivered right now, is innovative search. And ads. Yeah. Like really actually serving up ads based on your search results, right? Yeah. But these days, Mustafa Suleyman is onto a new company, this Inflection AI with Reid Hoffman. Their goal is to create a personal AI. Yeah. And they released a chatbot called Pi, which is, they want it to be warm and fuzzy, and it is like that if you hang out with it. And it reminded me of, do you remember?

Clara AI? No. So I remember when I was meeting with people from Stripe, like Claire Hughes-Johnson, et cetera, they had an assistant named Clara. And so Clara was always helping me schedule things. And this is probably 2014. And I show up one day and I had brought macaroons and a card or something because she had helped

you know, so much. And I'm like, can I meet Clara? Because I just want to give her something. And they're like, oh, no, no. Clara's not real. No. Clara's an AI. Yeah. I mean, people in Silicon Valley have been working on this for a very long time. And it's a big sci-fi thing is that there are robots who respond to us or anthropomorphic

creatures that are not real, that they're digital. You know, whether you watch Iron Man where he has Jarvis or any number of, even going back to Lost in Space or Star Trek's computer, tell me this. Well, you know, that's been a dream of technology and it hasn't been met, but I do think you will have a personal AI in the next 10 years and it will talk to other people's personal AI. Yes.

Have you ever had an assistant? You don't like having assistants. I did many years ago, but no, I'm not good with the assistants. I like doing my own stuff. I might have an AI assistant. I just think they'll be more efficient. So if I'm wandering around my house, I'm like, I need laundry detergent. Instead of stopping myself ordering it, the AI would know that.

Kara, do you want me to order that? Yeah, okay, please do that. I can see doing that. I don't want a person following me around. I love it. That's what you want. Or something. No, whatever I want. I want to get a, hey, get me some flights to LA. Hey, we're thinking about going to here. What would you, give me some information about it. You know, and not print it out or searched on. Just that it starts to

Well, based on what you've gone to, you might like this. A lot of people, and I did this when I first got to LA and was trying to make movies and stuff. A lot of people come up with their own fake assistants and then you create your own email address. So I had OliviaGrayLA at gmail.com. She was so good. I mean, it was me. I was just writing to people. You sound like Donald Trump. This is the PR guy, John whatever. Oh my God. So many writer friends of mine have done this. They go to LA and you have one. And the strength of the AI is knowing you and

That also creates a fear with this kind of personalized AI because the deeper, the closer they are to your jugular vein, the more data, the more security, the more access and proximity I'm giving you. Which, one, I'm probably paying you to use your product, but I myself am giving you data, which is valuable. And two-

in a world where we're already kind of disintermediated from other humans by devices, and the Surgeon General, Murthy, is saying that we have a loneliness epidemic, this is pushing us further into technology or ourselves. We're already there. You're already in the matrix. I'm sorry to give you that information. You're already being tracked. And it's very depersonalized. So I think this is kind of interesting. I think it's very interesting. I mean, it's got a competitive edge that is interesting. I wouldn't mind a little more personalization and empathy from a

Do you think they're a bit like Neeva, like you're paying for extra privacy in some way? Yeah, you will. They won't sell your information. And some people, if they don't want to pay, you'll agree to give some information. I think Congress has got to be here regulating all this stuff. Yeah. What do you think is happening? Nothing. I think they're going to do nothing. Even these calls from the U.N. Secretary-General? I mean, I know.

UN Secretary General calls me a lot. Yeah, they like to make calls, but they don't like to actually do anything. They want some kind of international watchdog agency. But there are a lot of people showing up in Washington to have conversations with regulators.

Yes. Sam Altman, apparently. They are. They're trying to front-load this thing, and they should. They're being much more responsible than... Everyone's like, they're virtue signaling. I never saw any of the tech companies do this. And so I'm happy that they're doing it. You know who showed up a lot in D.C.? Who? In the crypto world? Well, Sam Bankman-Fried. Yeah. I get it. But-

These people are not Sam Bankman-Fried. I'm not saying Sam Altman is Sam Bankman-Fried at all. I'm not saying that, but I'm just saying. I never had any trust in crypto, but this I do believe. But sometimes showing up can be correlated with having an agenda. Maybe. And sometimes showing up can be correlated with altruism. It's hard to know. Yeah. I don't mind that they have self-interest. I just want to know that our legislators are participating in the development of it. 100%. I think that's a good thing to know, especially because what's scary is this technology either being not good enough or...

Or too good. That's the interesting thing about AI. It's going to be too good. Because, yeah, when it's not good enough, it's giving you misinformation. We saw that in New York. It's going to be great. I'm sorry to tell you it's going to be great. Why are you sorry to tell me? I'm excited. Okay. Because a lot of people are scared of it, but it doesn't matter. The other day, someone was worried about AI to me, and they're like, oh, I'm really worried. And I literally said to them—

Are you still churning your butter? How's that working out for you? You said that to me about autonomous vehicles. You're telling me a story about it? Oh, it was about you. I was like, I can't remember who I told. But this is the way it's going. It doesn't really matter what you think. This is how it's going to be. Well, I think on AI, I agree with you. On autonomous vehicles, I think it's

Probably going to get there. Not liking something and it being inevitable are two different things. I do think AI is coming, and we will be back with our guest. If you don't like it, don't do it. That's my feeling. But you've got to do it. Yeah, you're going to do it. You're going to do it, and you'll forget you ever didn't want to do it. You'll forget. Anyways, let's take a quick break, and we'll be back with our guest, Mustafa Suleyman. This episode is brought to you by Shopify.

Mustafa, thanks for joining me.

I think we should start off with some recent AI news, and we're going to dive into Inflection AI and your new chatbot, Pi, and we'll end up with some broader questions about content moderation, regulation, and ethics. Sure, yeah. A couple weeks ago, you and a long list of AI luminaries signed just a 22-word statement that says, mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war. But if it presents an extinction risk...

Why are you commercializing it? I want you to explain it to people, why you released the statement and why you think this. Well, I think the first thing to say is that we've been trying to put these issues on the public agenda for the best part of a decade. I mean, when we co-founded DeepMind, our mission was to build safe and ethical AGI.

So right from the outset and throughout the course of the last decade, we've been trying to figure out governance structures, business models, technical safety mechanisms, industry collaborations, encouraging public criticism and open discussion about this. So to me, that statement is in a kind of long line of various efforts that have been going on by me and many other people to try to say,

In the theoretical limit, there is a potential that these systems could become recursively self-improving. That is that they make themselves better. They improve over time in ways that might potentially be out of our control. And that theoretical possibility is something that I think people have

fixated on a little bit, in the grand scheme of things. Sure. I mean, sci-fi has that happening pretty much in every movie. Right. And so it's easy to kind of grasp onto that image of the Terminator getting out of control. And as far as I'm concerned, that's no bad thing. I've always been a believer in the precautionary principle.

And I think that we're approaching a moment sometime in the next few years where it's right to slow down and think about the potential negative consequences, setting aside the potential benefits. That's the trade-off we're going to have to make. We're going to have to say it's pretty clear that advancing the deployment of these models could potentially save, you know, real lives. I mean, just look at the trade-off that was made

with self-driving cars or autopilot in Tesla. I think there's almost 17 casualties now that have been associated with autopilot. And, you know, I think it's possible maybe to quantify the benefit of making progress towards self-driving, but it's not crystal clear, has to be said, right? Right, right. But in this case, I don't think we're going to feel extinction level from Tesla, right? That's a word. Totally. Right? I feel like maybe we'll get...

bopped by one of Elon's cars. But in this case, you used the word extinction. So explain why you all decided to put this out. Well, because I think that, like I say, in the most extreme cases, if we really are successful in creating an intelligence system,

that is capable of performing all the tasks that humans perform better than humans, then it's quite likely that it would quickly get better than us at every conceivable future task. And so the challenge then becomes, how do you contain

the power and ability to act of a system like this, such that it is always under meaningful human control and always operates within the kind of boundaries of how we collectively as a species would like it to, that's a very tall order, right? And then doing that in a provably safe way is even harder. By the way, I do think that this is quite a long way out. Some people have

speculated that this is something that we have to worry about within the next few years and I think that's alarmist, exaggerated and plain wrong. I think that we're talking about more like a multi-decade time scale here. I see no evidence that we're on the cusp of losing control of recursively self-improving autonomous agents that will wander around the internet and secure their own resources and act against us.

And I don't think that we're on a trajectory to do that within the next few years. Although many people didn't think generative AI would move so quickly, right? Most people were surprised. Well, I think that's a cheap soundbite that a lot of people have said because they're super excited. I actually think that this has been a trajectory that we've been battling on for 15 years. Neural networks are 40 years old in theory. When you actually look back at the progress on the underlying algorithms...

It's actually been incremental. It's really the compute and the data that has grown exponentially. So that's good. Meaning what we could put into it and the power of the computing. Right. The amount of compute that has been used to train these models has grown exponentially.

by nine orders of magnitude in the last decade. So generative AI didn't just come out of nowhere. We've seen this progressive and actually quite predictable improvement in capabilities as we add more compute. So that's why I think we and others have been sounding the alarm at this point, because we're like, okay, the difference between GPT-2...

which is two orders of magnitude of compute less than GPT-4, is staggering. The last two orders of magnitude is absolutely eye-wateringly impressive, the difference between the two models. So what's the next two orders going to look like? I don't think it's going to be some intelligence explosion where suddenly the AI is trying to get out of the box and manipulate us and commandeer physical infrastructure and all these other kinds of things. No, that's just real people. That's just real people who do that. Exactly. We've got people to do that. Oh, I've...

I would trust computers more than people, I'll be honest with you. But not long after you signed the statement, Marc Andreessen, who loves to do this, published a long blog post titled Why AI Will Save the World. He calls people who say AI poses an existential risk Baptists.

Then he references Oppenheimer, who invented the nuclear bomb, and an argument over his feelings of guilt. He says, some people confess guilt to claim credit for sin. What is the most dramatic way one can claim credit for the importance of one's work without sounding overtly boastful? This explains the mismatch between the words and the actions of the Baptists, who are actually building and funding AI. Watch their actions, not their words. What's your response? He's essentially...

saying you're a Baptist. I'm not sure I quite follow it, given that he's investing in all of the companies that are making this happen. So I'm not quite sure what he is in this little scenario. Oh, nor am I sure what I am. He's the best man ever, just so you know. So anyway, he might be the ultimate VC troll. Yes, he really is. He's not saying it's an existential risk. Um, and I agree. He's not an objective source, let's say that in a nice way.

But I like VC Troll quite a bit. But when he says this, what do you say to it? Look, the confusing thing is that there are going to be many seemingly contradictory statements which are all broadly correct. And instead of acknowledging the weaknesses of our own statements and the strengths of our enemy's statements, you know, he and others like to frame it as some black and white adversarial, they're wrong, I'm right kind of exchange.

If you actually think about what he's saying, it's very reasonable. AI has every chance of saving the world. That's why we're building it, because we're naive and utopian and passionate in believing that it can do a huge amount of good, right? That's not necessarily contradictory with us claiming that there is a long-term potential and theoretical risk that we have to attend to, just like with the arrival of any new technology. And what's good about, I think, this new wave of AI is that

Everybody is wised up to, you know, the potential threats compared to where we were with social media. I think a slightly younger and more aware crop of leaders of tech, AI leaders, are proactively calling for slowing down, calling for the use of the precautionary principle, you know, raising the alarm about potential harms.

in a way that I think the last generation of tech leaders didn't do. And I don't think they're contradictory statements at all. I think that's actually... So one of the things, though, is that you benefit from raising the alarm because you say, look, I warned you. It does create a perception of responsibility, not with accountability yet. That doesn't mean it's not going to happen. So I'd love to know what is the broad outlines of the sort of regulation you'd like to see?

So to begin with, I think we have to draw some guardrails about who can speak publicly. I think that it shouldn't be okay for an AI system to imitate a human being publicly without that being explicit. Right. So that's just the easy thing for us to take off the table. Second thing is that we have to have some watermark for content that

that allows the producer of that content to tie it back to them, right? So we don't have this imitation issue, right? And I think that can be cryptographically signed. And so that deals with some... Provenance. Provenance, yeah, exactly. And that deals with some of the issues. It doesn't deal with all of them. And I think that's table stakes.
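[For readers who want a concrete picture of the provenance idea Suleyman gestures at here, the following is a minimal sketch of how cryptographically signing generated content could let it be tied back to its producer. It assumes the third-party Python `cryptography` package; the function names `sign_content` and `verify_content` are invented for illustration and are not Inflection's or anyone else's actual scheme.]

```python
# Illustrative sketch only: one way "cryptographically signed" content provenance
# could work, using Ed25519 signatures from the `cryptography` package.
# sign_content/verify_content are hypothetical names, not a real product's API.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature


def sign_content(private_key: Ed25519PrivateKey, content: str) -> bytes:
    """The producer signs the content; the signature travels alongside it."""
    return private_key.sign(content.encode("utf-8"))


def verify_content(public_key: Ed25519PublicKey, content: str, signature: bytes) -> bool:
    """Anyone with the producer's public key can check the content wasn't altered or forged."""
    try:
        public_key.verify(signature, content.encode("utf-8"))
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    producer_key = Ed25519PrivateKey.generate()  # held privately by the AI's operator
    text = "This reply was generated by an AI assistant."
    sig = sign_content(producer_key, text)
    print(verify_content(producer_key.public_key(), text, sig))        # True
    print(verify_content(producer_key.public_key(), text + "!", sig))  # False: tampered
```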

The next thing that needs to be possible is that there has to be independent third-party adversarial red teamers who can attack a model and constantly try to break it. We do that internally, but we shouldn't be marking our own homework. We welcome other people, and ideally they should be qualified and well-paid and funded by independent third-party groups that aren't attached to us, who can try and do their very best to induce the model to say,

toxic, racist, harmful content, right? So availability of third parties to do this, which social media sites have been very sketchy about. Sometimes they do, sometimes they don't.

Yeah, exactly. And I think that's one of the failings of the social media age is that they haven't allowed transparency of how their platforms are being used. It should be possible to share with responsible, trusted third party academics, researchers and government regulators, you know, why a particular piece of content has been shown to a certain person. What is it about the profile that has been created about them, which has led

the social media company to personalize content in this way. And we should be able to do research on that. It doesn't mean that it's going to expose sensitive company IP. It doesn't mean that, you know, suddenly we're going to leak personally identifiable information from the user. There are certainly ways of doing all of that in a privacy-preserving, safe and responsible way. None of that has happened, though, at all. There's no privacy legislation. You would welcome that kind of thing, regulated transparency. Absolutely.

Absolutely. Transparency is the path to trust. But the same companies that did this before are doing it now. Do you see them changing that attitude? Well, that's sort of true. It is and it isn't. I mean, you know, I think that, you know, Inflection, Anthropic, DeepMind, OpenAI, we're a group of, you know, friends and colleagues that have been working together for the best part of 10, 15 years. And I think we have slightly different perspectives.

you know, a different flavor. I mean, we're still fundamentally working in big tech. I'm not, you know, going to sort of, you know, that's important to say. But, you know, I think there's a different flavor to the kind of approach that we're trying to take now. And it represents the next step forward in the evolution of companies and how we run big tech.

Speaking of Big Tech, Geoffrey Hinton, who is one of the early researchers and creators of AI, recently quit his job at Google. He said he's very concerned about disinformation, deep fakes, job losses, AI warfare. I think killer robots was his concern, etc. And you worked with him at Google. Did that surprise you? Look, I think Geoff has been fairly distant from all of these debates throughout all my time at Google. And in fact, most of the time in the industry, I can't think of the last time when he spoke up on anything about this.

So he certainly has had some kind of road to Damascus experience because this is not what he's been campaigning on for any of this time, as far as I can tell. Obviously, he's been an eminent contributor to the field and everyone acknowledges that he's the founding father in many ways of neural networks. But it is a little bit surprising that he's getting involved in it now. And I think

You know, that sort of shows how challenging it is to predict the consequences of scale, even though scale had delivered very objective and measurable performance improvements over the last three or four years. Why do you think he did this? The road to Damascus is a great metaphor, actually. Well, I guess he thought the timelines were longer than they were, but I think, you know...

He cares very deeply about the consequences of his work and the field's work for everybody. And so, I mean, it makes sense that he would...

just like the rest of us, want to engage more people in the debate. And this was a, he has a platform and he hasn't used it at all before. And it's great that he's now got everybody paying attention. Certainly. Now you also quit Google after what was described as a rocky tenure by the New York Times. The Journal had reported there were complaints about your bullying staff. What was the reason you left there? Because after taking a leave of absence, you were elevated to a VP position at Google, and then you've since left. Can you talk about your own journey?

No, I mean, you know, I spent 10 years at DeepMind, you know, delivering on our applied AI division. And it was, you know, an incredible time. We did many, many launches. But towards the end of it, I got pretty knackered. I was pretty hard charging. I was tough to work for, you know, and I took that very seriously. And I got some coaching. I took almost four or five months out.

and had some time to reflect on how I was acting and how I was operating, what my management style was like, and made a bunch of improvements. And so then switched over to working at Google where I ended up actually working on LaMDA for the best part of 2020 and 2021. Explain what LaMDA is for those who don't know. Yeah, so LaMDA was the first application of the transformer model at Google.

It stood for Language Model for Dialogue Applications. And it was really ChatGPT before ChatGPT. I mean, we had a conversational, interactive, very high quality, large language model working. And we were in the process of, when I was there, connecting it up to some parts of Google Search to improve its factuality and grounding.

And we were really just plugging away at trying to improve the quality of the experience.

and make it more and more safe. And that was what I was working on in 2021. And was your experience of being knackered, whatever that means, you can explain it to me from a British point of view, but did that make you want to leave Google and start your own thing? Or was it working at a big environment? No, so I moved to Google in 2019. Yes, you did, yeah. And then I worked at Google for two years and had an amazing time.

and worked with lots of people in Google research, in the products team, policy team. That was when we were working on LaMDA. And Google is a big organization and isn't always the fastest to move. And so for me, the potential of actually

getting these models into production and building a small startup around this was very attractive. That's what I went off to do in the beginning of last year. You and Reid Hoffman, who we've interviewed on this topic, have started Inflection AI. You raised $225 million and you're reportedly in conversations to raise $675 million more. That's a lot of money. What's your take on that?

What do you hope to do with all that money? Do you think this is a better way to, because, you know, you're within Google, there's enormous power. If you're in Microsoft, any of these companies, you can certainly do a lot from these big companies, even if they're slower moving. What's the difference here? Look, I think there are going to be many, many different AIs, right? This is the beginning of a complete transformation in how we interact with computers.

So, of course, Microsoft and Google will launch lots of different AIs and they're leading the pack at the moment. But in the future, I expect every business, every brand, every person, every digital influencer, you know, every government, NGO, each of them will have their own

AIs that are conversational, interactive, that dynamically generate new UI, new images, and that interact with you on whatever it is they're motivated by. So they might be trying to sell you something or teach you something or support you in your healthcare journey.

AIs are really going to be the new interface. And so that was really my hypothesis for starting a new company. What does a billion dollars give you to do this? What do you hope to do with the money?

Well, we have a pretty small team at the moment. We are only 35 people or so. But we train some of the largest language models in the world. We currently have the largest H100 GPU cluster in operation. So H100s are the new version of chips from NVIDIA that are super performant and give you huge amounts of processing power.

And so what we build is large language models that interact with real people. So we've tried to design an empathetic, a personal, a very conversational AI that I think in the future is going to become one of the main ways that you access other digital services in the world. You'll rely on your personal intelligence, your Pi,

You call it personal AI. Yeah, I call it personal AI, just in the same way that you would have a business AI, or there would be a government AI, or a brand AI, or a music and entertainment AI. I think you as an individual are going to have a personal AI, and that AI is going to act on your behalf, right? It's going to

find useful information for you by talking to other AIs and to other people, of course. Like an assistant, essentially. It's exactly like an assistant. Yeah, that's right. And when you want to go buy something, it will negotiate with other AIs. If, say, you and I wanted to meet, maybe our two AIs would have a tête-à-tête beforehand, just like two chiefs of staff. Like, this is what's on Kara's mind. This is what Mustafa's thinking about at the moment. You might be interested to talk to him about X, Y, Z.

That kind of thing. Now, according to what your company says, it has good EQ, it's kind, supportive, curious, humble, creative, and fun. How do you then differentiate from, because this is what Bing, Claude, HuggingChat, Character.AI are trying to do, be your pal.

Yeah, so ours is a much more informal, relaxed conversational experience. It's very kind and polite. It helps you think through tricky decisions. It's there for you when you want to vent at the end of the day. It gives you feedback periodically. It might challenge you and help you think through something that you're working through. But it's also super smart, very knowledgeable. It's personalized to you. So it remembers what you've talked about previously across many sessions.

across all the different platforms. So you can talk to it on WhatsApp, Instagram, Facebook Messenger. There's an iOS app. No, you would have to text it on one of those applications, right? And once you do, it will text you back on that platform, assuming that's your chosen platform. But it will be able to keep its memory of all the different things you've been talking about across those different sessions. So you said, in the future, I think there will be an ever-present relationship you have with AI that helps you make sense of the world around you. Yeah.

Obviously, whoever controls the AI that's in a deep, ever-present relationship with millions of people will wield tremendous power. Is that worrisome to you? Or why did you go this direction? Because you think this is the way it's going to be, that people will each have one of these. I just think that if you think where things end up in five to ten years, at the moment...

Every big tech company has a trillion dollar AI that is trying to sell you something, right? All the AIs today are trying to sell your product on Amazon, trying to find information and make you the product on Google or YouTube so that it can sell ads. All of these things are AIs that are acting at you or on you or towards you.

And I think that what people will value is having an AI that can be adversarially engaged on your side, in your corner, on your team. And just as a good chief of staff or a good advisor, like you're going to have a lawyer, you're going to have a great doctor, you're going to have a good person to schedule and organize and plan and prioritize your day.

I think in five years' time, that's going to be available to everybody. You are selling this assistant, right? Let's talk about the business, your revenue model. Well, that's why the revenue model is so important because in the past business model, you've basically been the product. Right. By giving you free things, you get a trade. Correct. And-

People don't like paying for stuff, right? And that's going to be a problem in this new era. My opinion, you're going to want to pay for your own AI because that's the best way to align your interests, right? You know what you're getting. The AI is more accountable to you because you're the only person who's paying for it.

And that's the business model that we're going to be pursuing. And I think it's the one that will end up being the most valuable because it will enable the AI to do really, really useful things for you because it is so aligned with your interests and you'll come to trust it more. But how much will that cost for people?

Look, I think the problem is not everybody wants to pay for stuff. And some people aren't so worried about the privacy thing. Some people are like, I'm smart enough to be able to decide whether or not I want to buy this thing. It doesn't matter that there's a sales AI that's trying to sell me this thing. And so...

We'll be back in a minute.

Let's move on to the competition. Let's do a lightning round where we run down some of your competitors and their products. Give one compliment and then tell me why Pi is better, if it is. OpenAI and ChatGPT.

I mean, I love that they got there first and they have a huge amount of scale and they focused on factuality. I think that one strength of Pi over that is that it remembers your sessions and your history. So it's a little bit more personalized. It's a lot more informal and friendly. And

and kind of more chatty. So rather than just regurgitating Wikipedia and being a question answering engine, it really gets to know you over time. Okay, Google and Bard. I think Google's strength is that it's fast and it has access to fresh information. So because of obviously access to Google, the knowledge is very up to date and that's pretty useful. But on the flip side,

Pi is going to have access to the freshest information in real time in two weeks when we release our new web search tool, which will basically allow you to say, find me the nearest restaurant, check what time it's open, you know, what time is the cinema going to be open and what is it showing, et cetera, et cetera. So all kinds of fresh information will be available in Pi soon. And so Google presumably is not as friendly anymore.

I always used to say Google never was good at social networks because they aren't social. Anthropic and Claude. I mean, Anthropic's great. You know, they're a great team. They're very much focused on safety. And so they're kind of more on the research side, publishing research and stuff. So my understanding is that their product is really a way of enabling them to advance the research agenda, which is quite different to us. We're not in the research business and we're not publishing, even though

We do do cutting-edge model development, but we do it in production. DeepMind and Sparrow, which they may release this year? Strengths of DeepMind are that it really is a world-class research team. They've focused largely on reinforcement learning in simulated environments.

So they haven't been able to deploy and get into production and learn the challenges of doing inference in production at scale. And in many ways, the core thesis of DeepMind that you could learn everything in simulation without having to deploy meant that DeepMind was a little bit late to catch up with the LLM revolution. The 280 billion parameter Gopher model was quite a bit behind compared to other LLMs. Mm-hmm.

But, you know, with the new Google DeepMind, the team's obviously going to catch up and they're a pretty outstanding research team. Meta says it plans to incorporate generative AI into a lot of its product, including WhatsApp and Messenger. What is their advantage?

Well, I mean, they are going after the open source approach. The logic of that is give everybody access to the underlying weights and hope that the tidal wave of innovation that comes from every developer being able to kind of adapt and fork and experiment with their underlying source code is going to mean that all of the new advances are

get kind of created on the core Meta platform, in this case, Llama. And that will help Meta over the next few years because the rate of innovation will be faster than the rate of innovation at like Google DeepMind or at OpenAI and Microsoft and so on.

Remains to be seen whether that is the case. I think it's a huge bet. And it may be that the open source movement ends up just helping all of the other big companies just as much. So who knows? Right. Which you could use. Yeah. I wonder if they'll change the name of their company now to something else. AI maybe. LLM. LLM. LLMeta. So when we asked Sam Altman which competitor he gets scared of the most, he said it was probably

kids in a garage. Presumably that means he's not worried about well-funded startups like Inflection AI. Do you think about that? And who are you worried about? I mean, the last three months has been pretty incredible. The rate of experimentation and innovation and so on. My sense is that people are going to be able to get to 85, 90% quality experiences in open source.

But to really get the kind of 99th percentile in terms of quality, safety, and factuality, it takes, in my opinion, the best researchers and world-class AI scientists who have access to very, very large amounts of compute. So whilst I think people can do great experiments and build great demos and move things forward,

And there will be like many successful companies that arise out of this little explosion of innovation. Obviously, my personal bet is that putting together a team of world class AI scientists is going to be an advantage for us. And that's what we're trying to do.

You obviously, in this case, though, needed a ton of computing power, as you said, in order to train the AI model. Inflection AI and OpenAI are both using Microsoft's Azure cloud to run Pi and ChatGPT. Would it be possible to create Pi without partnering with a giant like this in that case because of the computing power? It's challenging. So, you know, we have a great relationship with Microsoft. They're a big partner of ours, and they've enabled us to get access to the cutting-edge infrastructure on the Azure platform. So, yeah.

I think, again, if you really want to run high quality production workloads, then you're going to need to use one of the big cloud providers. No matter what. At the same time, I do think the trajectory, if you look out over five years, the trajectory is that these models are going to get smaller and more efficient. So if you look at GPT-3, for example, that came out in June 2020, so almost three years ago now, that was a 175 billion parameter model.

Today, there are nano versions of LLMs, which are 3 billion parameters, which roughly achieve the same performance on all of the academic benchmarks that GPT-3 was successful on. That's a remarkable, you know, 60x reduction. By the way, I'd be remiss if I didn't say Reid Hoffman is on the board of Microsoft, which probably creates...

a more smooth partnership, presumably. Yeah. Although what I'm saying is that at this moment, you need them, where we are, in the next, like, year or two years. But over three to five years...

I do think that GPT-4 level performance, which is where we're at at Inflection, will get 60 times, maybe 100 times smaller, just as the nano-LLMs are now 60 times smaller than GPT-3 was three years ago. That has profound consequences because once that thing has been trained and it's available in open source,

Anyone who can run a 3 billion parameter model is going to be able to integrate that into their application and use basically something which is as good as GPT-4 in terms of quality for whatever they want to do. And they won't be dependent on the cloud service providers as much. So it becomes a commodity in some fashion. It becomes a commodity and it naturally proliferates. Actually, and this is one of my core arguments, we have to accept

that in the history of invention, everything that is useful gets cheaper and easier to use, and so it spreads far and wide. It sounds simplistic to say that because it's almost so obvious that that's the case, but it's easy to lose sight of what that means. If that applies in the case of LLMs, it means that in five years or ten years, it would track the same exponential trajectory that we've seen over the last 60 years with the transistor.

which is that it's got radically cheaper, radically smaller, radically faster. More powerful. And therefore, it's going to proliferate far and wide. And that's really the other side of the existential extinction threat that everybody is talking about. So let's talk about safety and ethics. It's a broad field. There's lots to talk about. I'll try to hit a lot of topics very quickly. What's your approach to content moderation? Yeah.

I think that we are going to have to take a more interventionist approach than we have done previously in the social media age. The idea that the platforms are just neutral purveyors of any content wasn't true and still isn't true, right? Ranking

is a decision that the company makes, which prioritizes some content over others with respect to their policy, right? And the more that we can be transparent about what that policy is and how it actually affects ranking, the better. I mean, I think in the most recent years, five years or so, companies have been more transparent about that policy. But now we need to tie that to the actual ranking of the algorithm itself. In the case of the training of LLMs,

We have a behavior policy which governs what the AI can and can't say, right? You can see that Pi has a very particular style. It tries to be even-handed. It tries to be really respectful. You know, it tries not to be too biased. But there are some topics that are off-limits, right? Let me give an example. When we asked Pi if it was woke, it conceded that some people think it might be so because it gave responses that are, quote, supportive of LGBTQ plus rights.

immigration reform, and racial justice. It restricted access when we asked about groomers in schools. I think that's great, but does that mean you can't be a personal AI to, say, people who don't like gay and lesbian people or think they're groomers, et cetera? Because you do get immediately enmeshed in the real world. They're not going to be my customers. That's the bottom line. I mean, I have to accept that. I'm not trying to build a platform and an experience that, you know,

every single one of the 300-plus million Americans likes, let alone everybody in the world. There's going to be many, many different AIs, whether we like it or not. There'll be a TruthGPT that Elon builds that maybe has a different set of values, and we'll have to contend with that. And so the meta moderation challenge is actually going to be

How do these interacting AIs communicate with one another in a respectful way? That's the challenge. Probably just as badly as people, you know. Truth AI will be a lot of boob jokes and memes. Stupid memes will be Elon's truth AI. Anyway, but should AI-generated content be covered by Section 230? It isn't now, from what I understand from regulators. Obviously, that covered a lot of sins

for the current internet industry, as you and I both know. I think if an AI has to disclose that it's an AI and take responsibility for what it says, then that's going to be a very different regime, right? It's a different regulatory regime altogether. Yeah, you can be sued. It's not the same as just making available access to everybody on the internet or connecting everybody, right?

It needs a different theoretical framework. AI might become a pathway for misinformation, as you know. This has been talked about a lot. One of the issues is called hallucinations. Sometimes large language models just fabricate false information out of thin air. When we asked Pi about hallucinations, first it said it didn't know what AI hallucinations were. When we explained, it assured us it didn't hallucinate. When pressed, it

clarified it wasn't immune to hallucinations, but said it didn't do it intentionally and called them confabulations, which is a tremendous word, by the way. That's true. Finally, when we asked how long it would take to reach 95% accuracy, Pi said, I would estimate it will take at least 10 to 20 years to achieve 95% accuracy or better at generating truthful and accurate responses. This is due to the complexity of the AI confabulations problem and the limitations of current algorithms.

How do you like that answer? Oh, that's a pretty good answer. I'm proud of Pi. All right. That's not bad. I mean, I don't agree with 10 to 20 years, Pi, but in general, referring to them as confabulations, which is named after the kind of patients that have been investigated in neuroscience where people make things up based on context. And, you know, that's a completely reasonable analogy to draw. And that's a pretty good

Pretty decent explanation. So you think it's quicker than the consensus in the field? I think it would be much quicker. Right. I've actually said, I think I've said publicly that I think that... Yeah, you did. We will largely eliminate hallucinations in the next few years. You said by June 2025, you tweeted this. Yeah, exactly. Some people think they're unsolvable. You...

you're aware of that. Well, my take is the trajectory of progress between GPT-2, 3, and 4 has been staggering. What this shows is that these models are eminently controllable.

Actually, the larger they get, the more prone they are to being directed in controlled and constrained ways. That doesn't eliminate the risks and it doesn't make the problems go away because some people will use those things to precisely design bad AIs that do bad things.

But we are now dealing with a much more malleable tool which can be crafted into very particular behaviors. And that's good news for those who want to use it to do useful things that help us, you know, be smarter and learn more and be more productive and so on. One of the, just a few more questions. One of the biggest issues people worry about is job losses, obviously. You've said that governments have to help people who lose their jobs, maybe with universal basic income. It's a massive social change. So do you...

support, for example, higher taxes on the wealthy in order to pay for programs that address higher levels of unemployment? Or, as many people like to say, there'll be more and different jobs. That was something Marc Andreessen said. Many people say that. Yes, I absolutely do support more taxes on the wealthy, on corporations, and wherever there are large

tranches of capital which are not being used, whether that's in land or property or stocks. We should turn over these assets and make their value available to large swathes of the population so that they can be supported through this transition. The problem with the narrative here is that, again, everybody is right depending on the timeline, right? So Marc Andreessen is right

If you look out over 10 or 20 years, right, it's probably true that we're going to create new jobs, maybe even net new jobs in aggregate, new jobs that we can't yet predict. We didn't know that there were going to be prompt engineers or AI teachers. That's probably true. We're going to create all these new disciplines. And that's great.

But when you look out over 20 years or 30 years, it's pretty clear that AIs are climbing the ladder of human cognitive abilities, right? They're already superhuman at language translation. They're already superhuman at face recognition. So that's only going to continue. And personally, I think that's a great thing because if they can be constrained and contained...

And if we can harness the benefits, then this will be the greatest explosion of productivity in the history of our species. And we will use it to fight our climate crisis. You mean we'll use it to fight the climate crisis instead of being busy doing stupid things. I think that's essentially what you're saying. Exactly.

Yeah, I'm saying that we will use it to address our big social challenges that we face from transport and healthcare to sustainable food to water desalination to renewable energies to carbon capture storage. AI is going to help us make progress on all these problems over the next 20 and 30 years.

And that's going to be amazing. Everyone is going to benefit massively from that explosion of productivity. At the same time, it's going to mean that many people are not going to be able to compete in the labor market and their skills just won't be sufficiently valuable in 20 to 30 years. And we have to face that. I mean, I've been saying that since 2012, right? We have to collectively embrace that reality and

and not call that doomerism or pessimism. We have to confront that reality in a responsible way and ask, what is it going to take to carry people through that transition and be respectful of their livelihoods? And we should start with progressive taxation. We don't have to jump to UBI because that's easily dismissed and it's hard to see how that gets funded in this level of productivity output.

But we can certainly start with massive taxes on the wealthy and massive subsidies for those who aren't able to contribute their skills. And those should be focused around education and retraining and community welfare and supporting people who are already adding value to society but don't necessarily get paid for it, like in the way that we care for our elderly and so on, or the way that we care for people who have disabilities. That we find valuable but is not paid for, in other words. Yeah.

Yeah. The European Parliament recently introduced the AI Act, a set of broad regulations that include transparency and safety requirements. What's your stance on that bill? I'm quite supportive. I mean, I think that, you know, it's rough around the edges and overreaches in a bunch of places, but in general, it's headed in the right direction. We clearly want a requirement for there to be more transparency over the data that's been used, over the way that the algorithm has been trained, over the way that the ranking function has been developed.

you know, as an AI engineer with that hat on, I feel irritated by it because it's really hard. Yeah. It's a hard thing to do. But as somebody who cares deeply about the future of humanity, I realize it's totally the right thing to do. And if there was just a level playing field and a requirement, then we would all just figure out how to solve that problem. Right. And so, yeah,

Likewise with explainability, you know, we want to have good, reliable explanations for why an AI has made either some decision or produced some generation. And I think we can also make progress on that, even though it's super hard. Just for people who don't know, Sam Altman said he might pull ChatGPT back from Europe if it passed, but he walked that back. But there's clearly some pushback.

Europe tends to overreach, US doesn't reach at all. But Antonio Guterres, the UN Secretary-General, says he likes the idea of an international watchdog agency, kind of like the International Atomic Energy Agency. Would you like to see that? I think everyone I talk to thinks a global agency is necessary. Yeah, I definitely think that there has to be somebody with audit powers that can scrutinize the scale of the models that we're building above some threshold,

and report on the kind of safety environment that they're operating in. So we definitely can't have it, you know, suddenly like

grow the number of labs like they have with BSL-4 labs that are, you know, pretty leaky and where there's lots of accidents and stuff like that. The more we can learn from those kinds of experiences, the better. Okay, last question. I spoke to Tristan Harris a few weeks ago. He thinks we're at the beginning of an AI arms race that will have catastrophic societal consequences. Putting aside the extinction worries, if we don't get regulation, what is your biggest worry? Obviously, Tristan is more worried than yourself. But

When you think about it, do you think our regulators are up to the task of doing something about it? They haven't seemed to be able to, sometimes impacted by the money that the Googles of the world spend, the Microsofts of the world spend. I think that one of the most valuable contributions a lot of senators and congressmen and women could make

would be to step down if they don't really understand or care about or are deeply engaged in technology. I mean, we have many, many political representatives now that are in their 70s and 80s that didn't grow up with technology. And in many ways, whilst I fully respect the contributions they've made in the past, it's unlikely that they're going to be able to

upskill themselves and have a finger on the pulse here. So I think we should make way for a new generation of regulators and give them the freedom and power to operate and move quickly. We also have to have more experimentation and more risk-taking in regulation.

And we need to be more forgiving. We need to be more forgiving both of the companies and of the regulators for getting it wrong. Because if we continue this kind of adversarial polarized battle, then people will shrink into their corners and not want to be proactive and engage. And so far in the last couple of years, I think that the AI companies have been quite forthright in calling for proactive regulation. We've certainly at DeepMind been saying this for a long time.

And that's great. And we should try to keep building that trusting relationship whilst acknowledging that we're still inviting scrutiny on ourselves. All right, Mustafa, thank you so much. We really appreciate it. Sure.

I love that he has called for the gerontocracy to step down. Yes, old people go away. Make room for the young or the knowledgeable. It's not actually about ageism. It's about proximity to the trend. Yeah, absolutely. I think he's right. He said a lot of spicy things. I liked it. I liked him calling Marc Andreessen a VC troll. The ultimate VC troll. The ultimate, and he certainly is. Is that fair? Oh, really? Yes. Even more? I would have thought you would have named someone else, Kara. VC troll. David Sacks? Oh, is he a VC really? Okay, sure. Whatever.

No, because he's not good at it, actually. Marc is much better at it in terms of quality of trolliness. Oh, yeah. I think that you're attaching ultimate to VC and not ultimate to troll.

Look, they're all trolls, but Marc is the king troll. And that was great. I think him talking about Elon was interesting. I think him talking about a bunch of things. He was quite clear, like, if they don't like the way he runs his Pi, they don't have to use it. If he doesn't like it being nice to gay people, well, don't use mine. Go to Elon's truth AI.

I love that truth AI. Also, he was very circumspect and respectful of one person, though, Geoffrey Hinton. Yes, he was. I think he was kind of saying he wasn't really paying attention to what was going on if he thought this was a surprise. Yeah, or he was saying that whatever it was was inexplicable. I love the road to Damascus, which if people don't know, it's New Testament, right? Yes, Paul. Paul the Apostle gets suddenly converted to love Jesus after having persecuted Christians. Yes. The sudden and kind of transformational change in

He said he hadn't spoken up and then suddenly he did. I never heard him have a problem with it and then he had a problem with it. But, you know, people keep things to themselves. I love the point he made, which is that everybody is right. It just depends on the timeline. Yes, that is correct in a lot of ways. Such a smart way to think about it, actually, because it is about nuance and it's about timeline. And he also made another really interesting point about timeline, which is that

We think this has been really sudden with ChatGPT, but it had been incremental for so long. And I had forgotten that. We spoke before the interview about Clara AI, but I had forgotten that experience from 2014. No, they've been working on it. You know, years before there was an iPhone, there were other things that were like an iPhone. And then when iPhone came, everyone was like,

oh my God, but if you had been a student of any history, it would have been like, it's like that, it's like that. General Magic, right? Yes, that's correct. It was one of the most important ones, actually. That was a very early version of the iPhone and it was by people who would later be involved in the iPod and Android and things like that. Just like these vision glasses, this has been around and so now they're starting to reach people

or autonomous or anything else. So I think a lot of people who work in AI were expecting this. And it's not just, you know, neural networks, as he noted, have been around for 40 years, the concepts around them. He's envisioning a world where everybody has a personal AI, kind of like a dupe of a person. I think he's right, 100% right. Do you think that startups are like him or big companies will be the ultimate winners here? The startups will come up with all the really interesting applications. You'll have an AI for

everything you do. Like, you could have a shopping AI, you could have, you know, or you'd interact with a shop. You probably have a personal AI and then there'll be all kinds of companies that have AIs that will interact with the other AIs. Why bother with humans? Humans are terrible. Thanks, Kara. Yeah. And part of the reason why startups will win is because they're able to deploy this stuff. I mean, he didn't say it, but he almost said it with Google, this idea that the technology was kind of

It moves so slowly. It does. But there'll be big companies like the Googles and Microsofts, but then there'll be all kinds of startups that will do different things and then take hegemony over the bigger companies. I mean, like, there was lots of stuff before Google, and then there was Google. There was a lot of stuff before Microsoft, but then there was Microsoft. And so you forget a lot of these companies started as small ones. You just don't know what's going to happen. Yeah. Except I know everyone's going to have one.

We don't know what's going to happen except you know what's going to happen. Well, I know you're going to have a personal AI assistant whether you like it or not. And I think you'll like it. Yeah, because you basically can double yourself. Well, it reminds me a little bit of when...

I was arguing with people about the cell phone. And I had one and everyone was like, I'm not going to use that. I like my telephone by the wall. What? Yeah. What? I wrote a piece for the Wall Street Journal. You can go back and look at it saying you are not going to be at the office tethered to a phone. And it was called Cutting the Cord. And I got so much flak for it.

When you were at Georgetown, didn't you cover the end of the payphone? Yes, I did. The end of the payphone. It's going away. It doesn't make any sense. Yeah. Well, this has been a great show. Thank you, Kara AI, for hanging out with me. No problem. Kara AI is a lot nicer than regular Kara. Oh, all right. She doesn't sleep. Well, then I guess we know it was actually Kara. Just like real Kara, she doesn't sleep. Can you read us out, please? Yes. Yes.

Today's show is produced by Nayeema Raza, Blakeney Schick, Christian Castro-Rossell, and Megan Burney. Special thanks to Mary Mathis. Rick Kwan engineered this episode. Our theme music is by Trackademics. If you're already following the show, Pi will not rise up like the Terminator and destroy us.

If not, we're doomed and it's your fault. Go wherever you listen to podcasts, search for On with Kara Swisher and hit follow. Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast Network and us. We'll be back on Monday with more.