
Mayhem at OpenAI + Our Interview With Sam Altman

Publish Date: 2023/11/21

Hard Fork

Transcript

This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks. Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us.ai.

Casey, how was your weekend? There was no weekend. There was only work. That was a trick question. And there will only ever be work. Yeah. What is happening, Kevin?

I don't know, man. I am on two hours of sleep. I've been working all weekend, and I'm increasingly certain that we are, in fact, living in a simulation. I mean, it would be nice if we were living in a simulation because that would suggest that there is at least some sort of plan for what might be about to happen next. But I think recent events would suggest that actually there is not. Yeah, I had a moment this morning where I woke up and I looked at my phone from my two-hour nap, and I was like...

I'm huffing fumes. This can't be real. I mean, let's just say, like, over the course of a weekend, OpenAI as we know it ceased to exist. And by the time this podcast gets put in the air, I would believe anything you told me about the future of OpenAI, up to and including it had been purchased by Etsy and was becoming a maker of handcrafted coffee mugs.

Honestly, would not be the strangest thing that's happened this weekend. Not remotely. Wouldn't be the top five.

I'm Kevin Roose, tech columnist for The New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week on the show, one of the wildest weekends in recent memory. We'll tell you everything that happened at OpenAI and what's going on with Sam Altman. And then later in the show, we will present to you our interview with Sam Altman from last week. So before he was fired, we asked him about the future of AI. And we're going to share that conversation with you.

So this episode is going to have two parts. The first part, we're going to talk about the news that happened at OpenAI over the weekend and run down all of the latest drama and talk about where we think things are headed from here. And then we are going to play that Sam Altman interview that we discussed on our last emergency podcast, the one that we conducted last week and plan to run this week, but that has since become fascinating for very different reasons. So let's just run down

what has happened so far because there's been so much, it's like enough to fill one of those epic Russian novels or something. So on Friday, the good news, by the way, is that the events are all very easy to understand. There's no way you'll mess up while trying to describe what happened over the past three days. Yeah. Let's try this on a couple hours of sleep. Okay. So Friday, when we recorded our last emergency podcast episode, Sam Altman had just been fired by the board of OpenAI, uh,

He was fired for what were essentially vague and unspecified reasons. The board put out a statement sort of saying that he had not been candid with them, but they didn't say more about what exactly had led them to decide that he was no longer fit to run the company.

So he's fired. It's this huge deal, huge shock to all of the people at OpenAI and in the tech industry. And then it just keeps getting weirder. So Greg Brockman, the president and co-founder of OpenAI, announces that he too is quitting. Some other senior researchers resign as well. And then...

Saturday rolls around and we still don't really know what happened. Brad Lightcap, who is OpenAI's COO, sent out a memo to employees explaining that they know that

Sam was not fired for any kind of malfeasance, right? This wasn't like a financial crime or anything related to like a big data leak or anything. He says, quote, this was a breakdown in communication between Sam and the board. And let's say that by the time that Brad put that letter out, there had been reporting that at an

all hands, Ilya Sutskever, the chief scientist at the company and a member of the board, had told employees that getting rid of Sam was the only way to ensure that OpenAI could safely build AI, which led to a lot of speculation and commentary that this was an

AI safety issue driven by effective altruists on the board. So it was very significant when we then get a letter from Brad saying explicitly, this was not an AI safety issue. And of course, that only served to make us even more confused, but lucky for us, further confusion would then follow. This was actually the clearest that things would be for the rest of the next 48 hours.

So OpenAI, its executives are saying this isn't about safety or anything related to our practices. But what we...

know from reporting that I and my colleagues did over the weekend is that this actually was at least partially about AI safety and that one of the big fault lines between Sam Altman and the board was over the safety issue, was over whether he was moving too aggressively without taking the proper precautions. Yeah.

After this memo from the COO went out, there were reports that investors in OpenAI, including Sequoia Capital, Thrive Capital, and also Microsoft, which is the biggest investor in OpenAI, were exerting pressure on the board to reverse their decision and to reinstate Sam as CEO and then for the entire board to resign.

They had sort of a deadline for figuring some of this stuff out, which is 5 p.m. on Saturday. That came and went with no resolution. And then Sunday, a bunch of senior OpenAI people, including Sam Altman, who is, by the way, now no longer the CEO of this company officially, gather at the offices of OpenAI in San Francisco to try to work

through this all. That's right. There is some reporting that all of a sudden, at least some people on the board are open to the idea of Sam returning, which was one of those moments that was both shocking and not at all surprising. Shocking because they had just gotten rid of him. Not at all surprising because I think by that point, it had started to dawn on

on the world and on OpenAI in particular on what it would mean for Altman to no longer be associated with this company where he had recruited most of the star talent. Totally. And the employees of OpenAI were sort of making their feelings known as well. They did this sort of campaign on X on Saturday where they were posting, you know, heart emojis

in sort of quote posts of Sam, sort of indicating that they stood by him and that they would follow him if he decided to leave and start another company or something like that.

Yeah, it was something to behold. It was essentially a labor action aimed at the board. And what I will say was, in this moment, you realize the degree to which the odds were weirdly stacked against the board. Because on one hand, the board has all of the power when it came to firing Sam, but

Beyond that, there is still a company to run. There is still technology to build. And so now you had many employees of the company being very public in saying, hey, we do not have your back. We did not sign up for this and you're in trouble. Yeah. And so on Sunday, there was a moment where it sort of looked like Sam Altman was going to

return and sort of take his spot back as the CEO of this company. He posted a photo of himself in the OpenAI office wearing a guest badge, like one that you would give to a visitor to your office. I will say I have worn that exact badge at OpenAI headquarters before.

Yeah. And so the caption on the photo was something like, this is the first and last time I'll ever wear one of these. So it kind of sounded like he was setting the scene for a return as well. And I would say there was just a feeling among especially the company's investors, but also a lot of employees and just people who, you know, work in the industry that like this

wasn't going to stand, that there were too many mad employees, that the stakes of kind of blowing this company up over this disagreement were too high. And if there really wasn't a smoking gun, right, if there was really nothing concrete that the board was going to hold up and say, this is why we fired Sam Altman, there was this sense that that just wasn't going to work, that there was no way that the board could actually go through with this firing. Yeah.

Yeah. And I think one way that the employees and the former executives were very effective was in using social media to create this picture of the overwhelming support that was behind them, right? So if you were an observer to this situation, you're only seeing one side of the story, right? Because the board is not out there posting, they haven't issued a statement that

lays out any details about what Sam allegedly did. And so instead, you just have a bunch of people saying like, hey, Sam was a great CEO. I love working for the guy. OpenAI is nothing without him. All these posts are getting massively reshared. It's easy to look at that and think, oh yeah, he's probably going to be back in power by the end of the day.

Totally. So that was the scene as of Sunday afternoon. But then Sunday evening Pacific time, this new deadline, 5 p.m. Pacific time has been given for some kind of resolution that also comes and goes. And there is no word from OpenAI's headquarters about what the heck is going on. It sort of feels like there's like a papal conclave and everyone is waiting for the white smoke to emerge from the chimney.

And then we get word that the board of directors of OpenAI has sent a note to employees announcing that Sam Altman will not return as CEO after all, and, uh, sort of standing by its decision. They still didn't give a firm reason or a specific reason why they pushed him out, but they said that, uh,

quote,

And they announced that they have appointed a new interim CEO. Now, remember, this company already had an interim CEO, Mira Murati, the former chief technology officer of OpenAI, who had been appointed on Friday. She also signaled her support for Sam and Greg, and reporting suggests that she actually tried to have them brought back. And...

Because of that, the board decided to replace her as well. So Mira Murati's reign as the temporary CEO of OpenAI lasted, uh,

about 48 hours before she was replaced by Emmett Shear, who is the former CEO of Twitch and who was the board's choice to take over on an interim basis. The board found an alternative man, or alt man, to lead the company. So that was already mind-blowing. This happened at night on Sunday, and I thought, well, clearly things cannot get any crazier than this. That's when I went to bed, by the way.

I was like, whatever's happening with these people can wait till the morning. And then, of course, I wake up and an additional four years worth of news has happened.

Yes. So after this announcement about Sam Altman not returning and Emmett Shear being appointed as the interim CEO, there is a full-on staff revolt at OpenAI. The employees are outraged. They start sort of threatening to quit. And then just a couple of hours after this note from the board of directors comes out,

Microsoft announces that it is hiring Sam Altman and Greg Brockman to lead an advanced research lab at the company. An advanced research lab, I assume, means that Satya has just given those two a fiefdom and they will be given an unlimited budget to do whatever the heck they want. But of course, because Microsoft owns 49% of OpenAI, uh,

At this advanced research unit, Sam and Greg and all their old friends from OpenAI will have access to all of the APIs, everything that they were doing before. They will just get to pick up where they left off and build everything that they were going to do, but now firmly under the auspices of a for-profit corporation. And by the way, one of the very biggest giants in the world.

Yeah, so I think it's worth just pausing a beat on this move because it is truly a wild twist in this saga. So just to explain why this is so crazy. So Microsoft is the biggest investor in OpenAI. They've put $13 billion into the company. They're also sort of highly dependent on OpenAI because they've now built OpenAI's models into a bunch of their products that they are kind of betting the future of Microsoft on in some sense. And

this was a big bet for them that over the course of a weekend was threatening to fall apart, right? Sam Altman and Greg Brockman were the leaders of OpenAI. They were the people that Microsoft was most interested in having run the company. Microsoft did not like this new plan to have Emmett Shear take over as CEO. And they were... They said it's Shear madness, Kevin. And so they did kind of...

kind of the next best thing, which was to poach the leaders of OpenAI, the deposed leaders, and bring them into Microsoft, along with presumably many of their colleagues who will be leaving OpenAI in protest if the board sticks to this decision.

Yeah, man. So this one threw me for a loop because if you have spoken with Sam or Greg or many of the people who work at OpenAI, you got the strong impression, these people like working at a startup.

Okay, working at OpenAI is in many ways the opposite of working at a company like Microsoft, which is this massive bureaucracy with, you know, so much process for doing anything. I think they really liked working at this nimble thing, at a new thing, being able to chart their own destiny. Keep in mind, OpenAI was about to become

the sort of only big new consumer technology company that we have seen in a long time in Silicon Valley. And so initially it's like, okay, they're going to work at Microsoft. What the heck? Because Kevin, one thing you didn't mention, which is fine because we didn't have to get through that timeline, but it's like,

The instant that Sam was fired, reporting started leaking out that he was starting a new company with Greg, right? So my assumption had been these guys are going to go off back into startup land. They're going to raise an unlimited amount of money and do whatever they want.

At the same time, you think about where they were in their mission when all of this happened on Friday. And they had a very clear roadmap, I think, for the next year. And if they would have to go out, raise money, build a new team, train a large language model, think about how much time it would take them just to get back to where they were before, right? They would probably lose a year, if not more, of development.

So, and this is pure speculation, but my guess is that part of their calculus was, look, if we deal with the devil we know and we go to Microsoft, we get to play with all of our old toys, we will have an unlimited budget and we can skip the fundraising and team building stage and just get back to work. So I have to believe that that was the calculus, but that said, it still was a very unexpected outcome, at least to me. It's a crazy outcome. And it means that Microsoft now

has a hand in two essentially warring AI companies, right? They have what remains of OpenAI, and they have this long-term deal with OpenAI. And they also control, by the way, the computing power that OpenAI uses to run its models, which gives them some leverage there. So it is a fascinating position that Microsoft is now in and really makes them look even more dominant

in AI than they already did. That's right. But listen, all of that said, everything that you just said is true as we record. However, Kevin, by the end of the day, I would believe any of the following scenarios. Greg and Sam have quit Microsoft. Greg and Sam are starting their own company. Greg and Sam have returned to OpenAI.

Greg and Sam have retired from public life. Greg and Sam have opened an Etsy store. This is all within the realm of possibility to me. Okay. So if we're back doing another one of these emergency pods tomorrow, I just want to say that while I accept that everything that Kevin just said is true, I'm only 5% confident that any of it lasts to the end of the week.

Yes, we are still in the zone where anything can happen. In fact, there have been some things that have happened even since the Microsoft announcement. So super early on Monday morning, like 1 a.m. Pacific time when I was still up, but I guess you were asleep because some of us are built for the grindset.

Emmett Shear, the new interim CEO of OpenAI, put out a statement saying that he would basically be digging into what happened over the past weekend, speaking to employees and customers, and then trying to kind of restore stability at the company. And my read of this letter was that he was basically telling OpenAI employees, you know, please don't quit. I am not...

the doomer that you think I am and you can continue to work here. Because one other thing that we should say about Emmett Shear is that while we don't know a ton about his views on AI and AI progress, he has done some interviews where he's indicated that he is something of an

AI pessimist, that he doesn't think AI should be moving ahead so quickly, that he wants to actually slow it down, which is a position that is at odds with what we know Sam Altman believes. Yeah, as soon as he was named, people found a recent interview online

he gave, where he said that his p(doom), his probability that AI will cause doom, was between 5 and 50 percent. But if you listen to that interview, it sure sounds like the p(doom) is closer to 50 than to 5, I would say. The other interesting thing in that statement is that Emmett said,

Before he took the job, he checked on the reasoning behind firing Sam. And he said, quote, the board did not remove Sam over any specific disagreement on safety. Their reasoning was completely different from that. So once again, we have someone talking about the firing without telling us anything and making it even more confusing.

Totally. But that is not even the end of the timeline. We are still going, because after this 1 a.m. memo from Emmett Shear, OpenAI employees start collecting signatures on what amounts to an ultimatum.

This letter starts going around OpenAI and eventually collects the signatures of the vast majority of the company's roughly 700 employees, almost all of its senior leadership and the rank and file, saying that if the board does not resign and bring back Sam Altman as CEO, they will go work for Microsoft or just leave OpenAI.

And do you know how much you have to hate your job to go work for Microsoft? These people are pissed, Kevin. And then, as if it couldn't get any crazier, just Monday morning, Ilya Sutskever, the

OpenAI co-founder and chief scientist and board member who started all of this, who led the coup against Sam Altman and rallied the board to force him out, posted on X saying that he, quote, deeply regrets his participation with the board. He said, quote, "I never intended to harm OpenAI. I love everything we've built together, and I will do everything I can to reunite the company."

So that is it. That is the entire timeline of the weekend up to the point where we are recording this episode. Casey.

Are you okay? Do you need to lie down? Well, I do need to lie down. But, you know, sometimes, Kevin, when you're watching, like, a TV show or a movie and, like, the central antagonist has a sudden change of heart that's completely unexplained, there's no obvious motivation, I always feel like, wow, the writers really copped out on this one. You know, at least give us some sort of arc. That was the moment Ilya Sutskever had, where, as you say, after leading the charge to get rid of Sam for reasons that the board...

did not specify, but that Ilya strongly hinted had something to do with AI safety. He now spins around and says, hey, time to get the band back together. I mean, just a tremendously undermining moment for the board generally and for Ilya in particular.

Totally. So right now, as things stand, there are a lot of different factions who have different feelings and emotions about what's going on. There's the people at OpenAI, the vast majority of whom are opposed to the board's actions here and are threatening to walk out if they're not reversed. There are the investors in OpenAI who are

furious about how all this is playing out. So a lot of people with a lot of high emotions and a lot of uncertainty yelling at these, what used to be four and are now three OpenAI board members who have decided to just stand their ground and stick it out.

So let's pause there, because I think that while all of us agree that the board badly mishandled this situation, it is worth taking a beat on what this board's role is. When I listen back to the episode that we did on Friday, this is a place where I wish I had drilled down a little bit deeper.

The mission of this board is to safely develop a superintelligence absent any commercial motive. That is the goal of this board. This board was put together with the idea that if you have a big company, like let's say a Microsoft that is in charge of a superintelligence, something that is smarter than us, something that will out-compete us in natural selection, they didn't want that to be owned by a for-profit corporation.

And something happened where three of the people on this board, at one point at least four, though now it's down to three, thought: we are not achieving this mission. Sam did something, or he didn't do something, or he behaved in some way that made us feel like we cannot safely build a superintelligence, and so we need to find somebody else to run that

company. And until we know why they felt that way, there is part of me that just feels like we just can't fully process our feelings on this, right? Like, you know, I think it was actually really depressing to see how quickly polarizing this became on social media as it sort of turned into Team Sam versus Team Safety. That's actually a really bad outcome for society,

right? Because I think we do want, if we're going to build a super intelligence, I would like to see it built safely. Uh, I'm not sure that it is a for-profit corporation that is going to do the best job with that. Having watched for profit corporations, um,

create a lot of social harm in my lifetime, right? So I just want to say that, that, you know, I'm sure before the end of this podcast, we will continue to criticize the board for the way that it handled this. But at the same time, it's important to remember what their mission was and to assume that they had at least some reasons for doing what they did.

Yeah, I mean, I was talking to people all day yesterday who thought that, you know, the money would win here, basically, that these investors and Microsoft, they were powerful enough and they had, you know, enough stake in the outcome here that they would sort of by any means necessary get Sam Altman and Greg Brockman back to OpenAI. And I was very surprised when that didn't happen, but maybe I shouldn't have been, because,

as someone who was sort of involved with the situation pointed out to me when I talked to them yesterday, the board has the ultimate leverage here. This structure, this convoluted governance structure where there's a nonprofit that controls a for-profit and the nonprofit can vote to fire the CEO at any time. It was set up for this purpose.

I mean, you can argue with how they executed it, and I would say they executed it very badly, but it was meant to give the board the power to shut this all down if they determined that what was happening at OpenAI was unsafe or was not going to lead to broadly beneficial AGI. And...

It sounds like that's what happened. That's right. Another piece that I would point to, my friend Eric Newcomer wrote a good column about this, just pointing out that Sam has had abrupt breaks with folks in the past, right? He had an abrupt break with Y Combinator, where he used to lead the startup incubator. He had an abrupt break with Elon Musk, who co-founded OpenAI with him. He had an abrupt break with the folks who left

OpenAI to go start Anthropic for what they described as AI safety reasons, right? So there is a history there that suggests that, you know, right now, a lot of people think that the board is crazy, but these are not the first people to say Sam Altman is not building AI safely.

Right. Here's the thing. Like, I still think there has to have been some inciting incident, right? This does not feel to me like it was kind of a slow accumulation of worry by Ilya Sutskever and the kind of more safety-minded board members who just woke up one day and said, you know what? Like, it's just gotten a little too aggressive over there. So let's shut this thing down. I still think there had to have been some

incident, something that Ilya Sutskever and the board saw that made them think that they had to act now. So, you know,

So much is changing. We have to keep going back to this caveat of like, we still don't know what is going to happen in the next hour to say nothing of the next day or week or month. But that is the state of play right now. And I think this is, I mean, Casey, I don't know how you feel about this, but I would say this is the most fascinating and crazy story that I have ever covered in my career as a tech journalist. I cannot remember anything that made my head spin as much as this. Yeah.

Yeah, certainly in terms of the number of unexplained and unexpected twists, it's hard for me to think of another story that comes close. But I think we should look forward a little bit and talk about what this might mean for OpenAI in particular. OpenAI was described to me over the weekend by a former employee as a money incinerator.

ChatGPT does not make money. Training these models is incredibly expensive. The whole reason OpenAI became a for-profit company was because it costs so much money to build and maintain and run these models. When Sam was fired, it has been reported that he was out there raising money to put back into the incinerator.

So think about the position that that leaves the OpenAI board at. Let's say that they're able to staunch the bleeding and retain a couple hundred people who are closely associated with the mission, and the board thinks that these are the right people. Who is going to give them the money to continue their work?

After what has just happened, right? Now look, Emmett Shear is very well regarded in Silicon Valley. I was texting with sources last night who were sort of very excited that he was the guy that they chose. And so no disrespect to him, but this board has shown that it is serious when it says it does not have a...

So unless it's going to go out there and start raising money from foundations and philanthropists and kindly billionaires, I do not see how they get the money to keep maintaining the status quo. And so in a very real sense, over the weekend, OpenAI truly may have died.

It truly may have. I mean, you're right. Like, "we are going to take a bunch of money and incinerate it, and by the way, we're also going to move very slowly and not accelerate AI progress" is not a compelling pitch to investors. And so I don't think that the sort of new

OpenAI is going to have a good time when it goes out to raise its next round of funding. Or, by the way, and this is another factor that we haven't talked about, to close this tender offer, this round of secondary investment that was going to give OpenAI employees a chance to cash out some of their shares. That, I would say, is doomed.

Yeah. I'm sure that motivated a lot of the signatures on the letter demanding that Sam and Greg come back, because those people are about to get paid and not anymore. Totally. That's some of what lies ahead for Microsoft and OpenAI, although anything could change.

And that brings us to the interview that we had with Sam Altman last week. So last week, before any of this happened on Wednesday, which is two days before he was fired. It was a simpler, more innocent time. It's true. I actually do feel like that was about a year and a half ago. Yeah.

So we sat down with Sam Altman and we asked him all kinds of questions, both about sort of the year since ChatGPT was launched and what had happened since then, and also about the future and his thoughts about where AI was headed and where OpenAI was headed. So then all this news broke and we thought, well, what do we do with this interview now?

And we thought about, you know, should we even run it? Should we, you know, chop it up and just play the most relevant bits? But we ultimately decided, like, we should just put the whole thing out. Put it out there. Yeah. So I would just say to listeners, like, as you listen to this interview, you may be thinking, like, why are these guys asking about ChatGPT? Who cares about ChatGPT? We've got bigger fish to fry here, people. But just keep in mind that...

when we recorded this, none of this drama had happened yet. The biggest news in Sam Altman's world was that the one-year anniversary of ChatGPT was coming up, and we wanted to ask him to reflect on that. So just keep in mind, these are questions from Earth 1, and we are now on Earth 2, and just bear that in mind as you listen. But I would say that the...

issues that we talked about with Sam, some of the things around the acceleration of progress at OpenAI and his view of the future and his optimism about what building powerful AI could do. Those are some of the key issues that seem to have motivated this coup by the board. So I think it's still very relevant, even though the

specific facts on the ground have changed so much since we recorded with him. So in this interview, you'll hear us talk about existential risk, AI safety. If that's a subject you haven't been paying much attention to, the fear here is that as these systems

grow more powerful, and they are already growing exponentially more powerful year by year, at some point, they may become smarter than us. Their goals may become misaligned with ours. And so for folks who follow this stuff closely, there's a big debate on how seriously we should take that risk.

Right. And there's also a big debate in the tech world more broadly about whether AI and technology in general should be progressing faster or whether things are already going too fast and they should be slowed down. And so when we ask him about being an accelerationist, that's what we're talking about. And I should say, I texted Sam this morning to see if there was anything that he wanted to say or add. And as we record, have not heard back from him yet.

When we come back, our interview from last week with Sam Altman.

Indeed believes that better work begins with better hiring. So working at the forefront of AI technology and machine learning, Indeed continues to innovate with its matching and hiring platform to help employers find the people with the skills they need faster. True to Indeed's mission to make hiring simpler, faster, and more human, these efforts allow hiring managers to spend less time searching and more time doing what they do best, making real human connections with great new potential hires. Learn more at indeed.com slash hire.

I'm Julian Barnes. I'm an intelligence reporter at The New York Times. I try to find out what the U.S. government is keeping secret. Governments keep secrets for all kinds of reasons. They might be embarrassed by the information. They might think the public can't understand it. But we at The New York Times think that democracy works best when the public is informed.

It takes a lot of time to find people willing to talk about those secrets. Many people with information have a certain agenda or have a certain angle, and that's why it requires talking to a lot of people to make sure that we're not misled and that we give a complete story to our readers. If The New York Times was not reporting these stories, some of them might never come to light. If you want to support this kind of work, you can do that by subscribing to The New York Times.

Sam Altman, welcome back to Hard Fork. Thank you. Sam, it has been just about a year since ChatGPT was released, and I wonder if you have been doing some reflecting over the past year and kind of where it has brought us in the development of AI. Frankly, it has been such a busy year. There has not been a ton of time for reflection. Well, that's why we brought you in. We want you to reflect here. Great. I can do it now. I mean, I definitely think this was...

The year so far, there will be maybe more in the future, but the year so far where the general average tech person went from taking AI not that seriously to taking it pretty seriously. Yeah. And the sort of recompiling of expectations. Yeah.

Given that. So I think in some sense that's like the most significant update of the year. I would imagine that for you a lot of the past year has been watching the world catch up to things that you have been thinking about for some time. Does it feel that way? Yeah, it does. You know, we kind of always thought on like the inside of OpenAI that it was strange that

the rest of the world didn't take this more seriously. Like, it wasn't more excited about it. I mean, I think if five years ago you had explained, like, what ChatGPT was going to be, I would have thought, wow, that sounds pretty cool. And, you know, presumably I could have just looked into it more and I would have smartened myself up. But I think until I actually used it, as is often the case, it was just hard to know what it was going to be. Yeah, I actually think we could have explained it and it wouldn't have made that much of a difference. We tried. Yeah.

People are busy with their lives. They don't have a lot of time to sit there and listen to some tech people prognosticate about something that may or may not happen.

But you ship a product that people use, like get real value out of, and then it's different. Yeah. I remember reading about the early days of the run-up to the launch of ChatGPT, and I think you all have said that you did not expect it to be a hit when it launched. No, we thought it would be a hit. We didn't think it would be like this. We did it because we thought it was going to be a hit. We didn't think it was going to be like this big of a hit. Right. As we're sitting here today, I believe it's the case that you can't actually sign up for ChatGPT Plus right now. Is that right? Correct. Yeah. So what's that all about?

we have like not enough capacity always, but at some point it gets really bad. So over the last 10 days or so, we have done, you know, we've like done everything we can. We've rolled out new optimizations. We've like disabled some features and then people just keep signing up. It keeps getting slower and slower. And,

there's like a limit at some point to what you can do there and you can't. We just don't want to offer like a bad quality of service. And so it gets like slow enough that we just say, you know what, until we can make more progress either with more GPUs or more optimizations, we're going to put this on hold. Not a great

place to be in, to be honest. But, you know, it was like the least of several bad options. Sure. And I feel like in the history of tech development, there often is a moment with really popular products where you just have to close signups for a little while, right? The thing that's different about this than others is it's just, it's so much more compute intensive than the world is used to for internet services. So you don't usually have to do this. Like usually by the time you're at this scale, you've like

solved your scaling bottlenecks. Yeah. One of the interesting things for me about covering all the AI changes over the past year is that it often feels like journalists and researchers and companies are discovering properties of these systems sort of at the same time altogether. I mean, I remember when we had you and Kevin Scott from Microsoft on the show earlier this year around the Bing relaunch.

And you both said something to the effect of, well, to discover what these models are or what they're capable of, you kind of have to put them out into the world and have millions of people using them. Then we saw, you know, all kinds of crazy but also inspiring things. You had Bing Sydney, but you also had people starting to use these things in their lives. So I guess I'm curious what you feel like you have learned about

language models and your language models specifically from putting them out into the world. So what we don't want to be surprised by is the capabilities of the model. That would be bad. And we're not. You know, with GPT-4, for example, we took a long time between finishing that model and releasing it. Red-teamed it heavily, really studied it, did all of the work internally, externally. And there's

I'd say there's, at least so far, and maybe now it's been long enough that we would have, we have not been surprised by any capabilities the model had that we just didn't know about at all, in a way that we were for GPT-3, frankly, sometimes, that people found stuff. But what I think you can't do in the lab is understand how

technology and society are going to co-evolve. So you can say, here's what the model can do and not do, but you can't say like, and here's exactly how society is going to progress given that. And that's where you just have to see what people are doing, how they're using it. And that like, well, one thing is they use it a lot. Like that's one takeaway that we did not, clearly we did not appropriately plan for. But more interesting than that is that

the way in which this is transforming people's productivity, personal lives, how they're learning, and how, like, you know, one example that I think is instructive because it was the first and the loudest is what happened with ChatGPT in education. Days, at least weeks, but I think days after the release of ChatGPT, school districts were like falling all over themselves to ban ChatGPT. And that...

didn't really surprise us. Like that, we could have predicted and did predict. The thing that happened after quickly was that

you know, like weeks to months, was school districts and teachers saying, hey, actually, we made a mistake. And this is a really important part of the future of education and the benefits far outweigh the downside. And not only are we unbanning it, we're encouraging our teachers to make use of it in the classroom. We're encouraging our students to get really good at this tool because it's going to be part of the way people live. And, you know, then there was like a big discussion about what the kind of

path forward should be. And that is just not something that could have happened without releasing. And part... Can I say one more thing? Part of the decision that we made with the ChatGPT release, the original plan had been to do the chat interface and GPT-4 together in March. And we really believe in this idea of iterative deployment. And we had realized that

The chat interface plus GPT-4 was a lot. I don't think we realized quite how much it was. Like too much for society to take in. So we split it and we put it out with GPT-3.5 first, which we thought was a much weaker model. It turned out to still be powerful enough for a lot of use cases. But I think that, in retrospect, was a really good decision and helped with that process of...

gradual adaptation for society. Looking back, do you wish that you had done more to sort of, I don't know, give people some sort of a manual to say, here's how you can use this at school or at work? Two things. One, I wish we had done something intermediate between the release of 3.5 in the API and

ChatGPT. Now, I don't know how well that would have worked because I think there was just going to be some moment where it went viral in the mind of society. And I don't know how incremental that could have been. That's sort of a like, either it goes like this or it doesn't kind of thing. And I think...

I have reflected on this question a lot. I think the world was going to have to have that moment. It was better sooner than later. It was good we did it when we did. Maybe we should have tried to push it even a little earlier, but it's a little chancy about when it hits. And I think only a consumer product could have done what happened there. Now, the second thing is, should we have released more of a how-to manual? And I honestly don't,

No, I think we could have done some things there that would have been helpful. But I really believe that it's not optimal for tech companies to tell people like, here is how to use this technology and here's how to do whatever. And the organic thing that happened there actually was pretty good. I'm curious about the thing that you just said about we thought it was important to get this stuff into folks' hands sooner rather than later. Say more about what that is. More time to...

adapt for our institutions and leaders to understand, for people to think about what the next version of the model should do, what they'd like, what would be useful, what would not be useful, what would be really bad, how society and the economy need to co-evolve. Like, the thing that many people

in the field or adjacent to the field have advocated or used to advocate for, which I always thought was super bad, was, you know, this is so disruptive, such a big deal. It's got to be done in secret by the small group of us that can understand it. And then we will fully build the AGI and push a button all at once when it's ready. And I think that'd be quite bad.

Yeah, because it would just be way too much change too fast. Yeah, again, society and technology have to co-evolve and people have to decide what's going to work for them and not and how they want to use it. And we're, you know, you can criticize OpenAI about many, many things, but we do try to like really listen to people and adapt it in ways that make it better or more useful. And we're able to do that, but we wouldn't get it right without that feedback. Yeah. I want to talk about AGI and the path to AGI later on, but first I want to just

define AGI and have you talk about sort of where we are on the continuum. So I think it's a ridiculous and meaningless term. Yeah. Sorry, I apologize that I keep using it. It's like deep in the muscle memory. I mean, I just never know what people are talking about when they're talking about it. They mean like really smart AI. Yeah. So it stands for artificial

general intelligence, and you could probably ask a hundred different AI researchers and they would give you a hundred different definitions of what AGI is. Researchers at Google DeepMind just released a paper this month that sort of offers a framework. They have

levels ranging from level zero, which is no AI, all the way up to level five, which is superhuman. And they suggest that currently ChatGPT, Bard, and Llama 2 are all at level one, which is sort of equal to or slightly better than an unskilled human. Would you agree with that? Like, where are we? If you would, if you'd say this is a term that means something and you sort of define it that way, how close are we? Yeah.

I think the thing that matters is the curve and the rate of progress. And there's not going to be some milestone that we all agree like, okay, we've passed it and now it's called AGI. Like what I would say is we currently have systems that are like, there will be researchers who will write papers like that. And, you know, academics will debate it and people in the industry will debate it. And I think most of the world just cares like, is this thing useful to me or not? And yeah,

We currently have systems that are somewhat useful, clearly. And whether we want to say it's a level one or two, I don't know, but people use it a lot and they really love it. There's huge weaknesses in the current systems, but it doesn't mean that... I'm a little embarrassed by GPTs, but people still like them.

And that's good. It's nice to do useful stuff for people. So yeah, call it a level one. It doesn't bother me at all. I am embarrassed by it. We will make them much better. But at their current state, they're still delighting people and being useful to people.

Yeah. I also think it underrates them slightly to say that they're just better than unskilled humans. When I use ChatGPT, it is better than skilled humans for some things. It's better than skilled humans at some things and worse than any human at many other things. But I guess this is one of the questions that people ask me the most and I imagine ask you is like, what are today's AI systems useful and not useful for doing? I would say the main thing that they're bad at, well, many things, but one that is on my mind a lot is they're bad at reasoning.

And a lot of the valuable human things require some degree of complex reasoning. But they're good at a lot of other things. Like, you know, GPT-4 is vastly superhuman in terms of its world knowledge. It knows there's a lot of things in there. And it's just, it's like very different than how we think about evaluating human intelligence. So it can't do these basic reasoning tasks. On the other hand, it knows more than any human has ever known.

On the other hand, again, sometimes it totally makes stuff up in a way that a human would not. But if you're using it to be a coder, for example, it can hugely increase your productivity.

And there's value there, even though it has all of these other weak points. If you were a student, you can learn a lot more than you could without using this tool in some ways. Value there too. Let's talk about GPTs, which you announced at your recent developer conference. For those who haven't had a chance to use one yet, Sam, what's a GPT? It's like a custom version of ChatGPT that you can get to behave in a certain way. You can give it limited ability to do actions. You can give it knowledge to refer to. You can say like, act this way.

But it's super easy to make, and it's a first step towards more powerful AI systems and agents. We've had some fun with them on the show. There's a hard fork bot that you can sort of ask about anything that's happened on any episode of the show. It works pretty well, we found, when we did some testing. But I want to talk about where this is going. What is the GPTs that you've released a first step toward? AIs that can accomplish useful tasks. Like the...

I think we need to move towards this with great care. You know, we don't, I think it would be a bad idea to just, like, turn powerful agents free on the internet. But AIs that can act on your behalf to do something with a company, that can access your data, that can, like, help you be good at a task, I think that's, that's going to be an exciting way we use computers. Like we have this belief that we're heading towards a vision where there are

new interfaces, new user experiences possible, because finally the computer can understand you and think. And so the sci-fi vision of a computer that you just, like, tell what you want and it figures out how to do it, this is a step towards that.
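(For developers in the audience, here is a minimal sketch of the idea Sam describes. A custom GPT is configured inside ChatGPT itself, but the "act this way" part can be approximated with an ordinary system prompt through the API. This assumes the official OpenAI Python client; the model name, the instructions, and the sample question are placeholders, and real GPTs add knowledge files and actions on top of this.)

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# A system prompt stands in for a custom GPT's "act this way" instructions.
instructions = (
    "You are a podcast archivist for Hard Fork. Answer questions about past "
    "episodes, and say clearly when you don't know something."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system", "content": instructions},
        {"role": "user", "content": "What did the hosts say about custom GPTs?"},
    ],
)
print(response.choices[0].message.content)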

Right now, I think what's holding a lot of people, a lot of companies and organizations back in sort of using this kind of AI in their work is that it can be unreliable. It can make up things, it can give wrong answers, which is fine if you're doing creative writing assignments, but not if you're a hospital or a law firm or something else with big stakes.

How do we solve this problem of reliability? And do you think we'll ever get to the sort of low fault tolerance that is needed for these really high stakes applications? So first of all, I think this is like a great example of people understanding the technology, making smart decisions with it, society and the technology co-evolving together. Like what you see is that people are

using it where appropriate and where it's helpful and not using it where you shouldn't. And for all of the sort of like fear that people have had, like both users and companies seem to really understand the limitations and are making appropriate decisions about where to roll it out. It, the kind of controllability, reliability, whatever you want to call it, that is going to get much better. I think we'll see a big step forward there over the coming years. And yeah,

And I think that there will be a time, I don't know if it's like 2026, 2028, 2030, whatever, but there will be a time where we just don't talk about this anymore.

Yeah. It seems to me, though, that that is something that becomes very important to get right as you build these more powerful GPTs, right? Once I tell, like, I would love to have a GPT be my assistant, go through my emails, hey, don't forget to respond to this before the end of the day. The reliability has got to be way up before that happens. Yeah, yeah. That makes sense. You mentioned as we started to talk about GPTs that you have to do this carefully. Yeah.

For folks who haven't spent as much time reading about this, explain what are some things that could go wrong. You know, you guys are obviously going to be very careful with this. Other people are going to build GPT-like things and might not put the same kind of controls in place. So what can you imagine other people doing that, like, you as the CEO would say to your folks, hey, it's not gonna be able to do that? Well, that example that you just gave, like if you let it act as your assistant and go like,

send emails, do financial transfers for you. It's very easy to imagine how that could go wrong. But I think most people who would use this don't want that to happen on their behalf either. And so there's more resilience to this sort of stuff than people think.

I think those are, I mean, for what it's worth on the whole, on the hallucination thing, which it does feel like has maybe been the longest conversation that we've had about ChatGPT in general since it launched. I just always think about Wikipedia as a resource I use all the time. And I don't want Wikipedia to be wrong, but it doesn't matter 100 percent of the time if it is. I am not relying on it for life-saving information, right? ChatGPT for me is the same, right? It's like, hey, you know, it's, I mean, it's like great at just kind of bar trivia. Like, hey, you know, what's like the history of this conflict in the world? Yeah, I mean,

We want to get that a lot better, and we will. I think the next model will just hallucinate much less. Is there an optimal level of hallucination in an AI model? Because I've heard researchers say, well, you actually don't want it to never hallucinate because that would mean making it not creative.

That new ideas come from making stuff up. That's not necessarily tethered to... This is why I tend to use the word controllability and not reliability. You want it to be reliable when you want. You want it to... Either you instruct it or it just knows based off of the context that you are asking a factual query and you want the 100% black and white answer. But you also want it to know

When you want it to hallucinate or you want it to make stuff up, as you just said, like new discovery happens because you come up with new ideas, most of which are wrong. And you discard those and keep the good ones and sort of add those to your understanding of reality. Or if you're telling a creative story, you want that. So,

If these models didn't hallucinate at all, ever, they wouldn't be so exciting. They wouldn't do a lot of the things that they can do. But you only want them to do that when you want them to do that. And so the way I think about it is model capability, personalization, and controllability. And those are the three axes we have to push on. And controllability means no hallucinations when you don't want, lots of it when you're trying to invent something new.

Let's maybe start moving into some of the debates that we've been having about AI over the past year. And I actually want to start with something that I haven't heard as much, but that I do bump into when I use your products, which is like,

they can be quite restrictive in how you use them. I think mostly for great reasons, right? Like, I think you guys have learned a lot of lessons from the past era of tech development. At the same time, I feel like, like, I've tried to ask ChatGPT a question about sexual health. I feel like it's going to call the police on me, right? So I'm just curious how you've approached that subject. Yeah, look, one thing, no one wants to be scolded by a computer ever. Like, that is not a good feeling. And so you should never feel like you're going to

have the police called on you. It's, like, horrible, horrible, horrible. We have started very conservative, which I think is a defensible choice. Other people may have made a different one. But again, that principle of controllability, what we'd like to get to is a world where if you want some of the guardrails relaxed a lot, and that's like, you know, you're not like a child or something, then fine, we'll relax the guardrails. It should be up to you. But I think

starting super conservative here, although annoying, is a defensible decision and I wouldn't have gone back and made it differently. We have relaxed it already. We will relax it much more, but we want to do it in a way where it's user controlled. Yeah. Are there certain red lines you won't cross? Things that you will never let your models be used for other than things that are like obviously illegal or dangerous? Yeah, certainly things that are illegal and dangerous we won't. There's like a lot of other things that I

could say, but where those red lines will be depends so much on how the technology evolves that it's hard to say right now, like, here's the exhaustive set. Like, we really try to just study the models and predict capabilities as we go, but, you know, if we learn something new, we change our plans. Yeah. One other area where things have been shifting a lot over the past year is in AI regulation and governance. I think a year ago, if you'd asked, you know, the average congressperson, what do you think of AI? They would have said, what's that?

Get out of my office. Right. You know, we just recently saw the Biden White House put out an executive order about AI. You have obviously been meeting a lot with lawmakers and regulators, not just in the U.S., but around the world. What's your view of how AI regulation is shaping up? It's a really tricky point to get across. What we believe is that on the frontier systems, there does need to be proactive regulation there. But

heading into overreach and regulatory capture would be really bad. And there's a lot of amazing work that's going to happen with smaller models, smaller companies, open source efforts. And it's really important that regulation not strangle that. So it's like, I've sort of become a villain for this, but I think there was... You have? Yeah. How do you feel about this? Like annoyed, but have bigger problems in my life right now. Right. But this message of like regulate...

us, regulate the really capable models that can have significant consequences, but leave the rest of the industry alone. It's just, it's a hard message to get across. Here is an argument that was made to me by a high-ranking executive at a major tech company as some of this debate was playing out. This person said to me that there are essentially no harms that these models can have that the internet itself doesn't enable, right? And

and that to do any sort of work like what is proposed in this executive order, to have to inform the Biden administration, is just essentially pulling up the ladder behind you and ensuring that the folks who've already raised the money can sort of reap all of the profits of this new world and will leave the little people behind. So I'm curious what you make of that argument. I disagree with it on a bunch of levels. First of all, I wish the threshold for when you do have to report

was set differently and based off of like, you know, evals and capability thresholds. Not flops? Not flops. But...

There's no small company training with that many flops anyway. So that's like a little bit, you know... For the listener who maybe didn't listen to our last episode about this: listen to our flops episode. Flops are the sort of measure of the amount of computing that is used to train these models. The executive order says if you're above a certain computing threshold, you have to tell the government that you're training a model that big. But no small effort is training at 10 to the 26 flops. Currently, no big effort is either. So that's like a dishonest comment.
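(To put that 10 to the 26 figure in context, here is a rough back-of-the-envelope sketch. The "6 times parameters times tokens" approximation is a common rule of thumb for estimating training compute, not something from the episode, and the model sizes below are invented purely for illustration; only the 1e26 threshold comes from the executive order, as discussed above.)

# Rough training-compute estimate: FLOPs ~= 6 * parameters * training tokens.
# All model figures are hypothetical illustrations.
REPORTING_THRESHOLD_FLOPS = 1e26

def training_flops(parameters: float, tokens: float) -> float:
    return 6 * parameters * tokens

small_run = training_flops(parameters=7e9, tokens=2e12)    # ~8.4e22 FLOPs
large_run = training_flops(parameters=1e12, tokens=20e12)  # ~1.2e26 FLOPs

for name, flops in [("small_run", small_run), ("large_run", large_run)]:
    print(f"{name}: {flops:.1e} FLOPs, must report: {flops >= REPORTING_THRESHOLD_FLOPS}")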

The burden of just saying, like, here's what we're doing is not that great. But third of all, the underlying claim there, that there's nothing you can do here that you couldn't already do on the internet, that's the real either dishonesty or lack of understanding.

You could maybe say with GPT-4, you can't do anything you can't already do on the internet, but I don't think that's really true even at GPT-4. Like, there are some new things. And GPT-5 and 6, there will be, like, very new things. And saying that we're going to be, like, cautious and responsible and have some testing around that, uh...

I think that's going to look more prudent in retrospect than it maybe sounds right now. I'd say for me, these seem like the absolute gentlest regulations you could imagine. It's like, tell the government and report on any safety testing you did. Seems reasonable. Yeah. And people are not just saying that these fears of AI and sort of existential risk are unjustified. Some people, some of the more vocal critics of OpenAI, have said

that you are specifically lying about the risks of human extinction from AI, creating fear so that regulators will come in and make laws or give executive orders that prevent smaller competitors from being able to compete with you. Andrew Ng, who I think was one of your professors at Stanford, recently said something to this effect. What's your response to that? I'm curious if you have thoughts about that. Yeah, like, I actually...

don't think we're all gonna go extinct. I think it's gonna be great. I think we're, like, heading towards the best world ever. Um, but when we deal with a dangerous technology as a society, we often say that we have to confront and successfully navigate the risks to get to enjoy the benefits. And that's, like, a pretty consensus thing. I don't think that's, like, a radical position.

I can imagine that if this technology stays on the same curve, there are systems that are capable of significant harm in the future. And, you know, like Andrew also said not that long ago that he thought it was like totally irresponsible to talk about AGI because it was just never happening. I think he compared it to worrying about overpopulation on Mars. And I think now he might say something different. So like it's...

Humans are very bad at having intuition for exponentials. Again, I think it's going to be great. I wouldn't work on this if I didn't think it was going to be great. People love it already. I think they're going to love it a lot more. But that doesn't mean we don't need to be responsible and accountable and thoughtful about what the downsides could be. And in fact, I think the tech industry often has only talked about the good and not the bad, and

that doesn't go well either. - The exponential thing is real. I have dealt with this. I've talked about the fact that I was only using GPT-3.5 until a few months ago, and finally, at the urging of a friend, upgraded, and I thought, oh. - I would have given you a free account. Sorry you waited. - I should have asked. - But it's a real improvement.

It is a real improvement, and not just in the sense of, oh, the copy that it generates is better. It actually transformed my sense of how quickly the industry was moving. It made me think, oh, like the next generation of things is going to be sort of radically better. And so I think that part of what we're dealing with is just that it has not been widely distributed enough to get people to reckon with the implications. Yeah.

I disagree with that. I mean, I think that, like, you know, maybe the tech experts say, like, oh, this is, like, you know, not a big deal, whatever. But most of the world, anyone who has used even the free version, is like, oh man, they got real AI. Yeah.

Yeah. And you went around the world this year talking to people in a lot of different countries. I'd be curious what, you know, to what extent that informed what you just said. Significantly. I mean, I was, I had a little bit of a sample bias, right? Because the people that wanted to meet me were probably like pretty excited, but you do get a sense and there's like quite a lot of excitement. Maybe

maybe more excitement in the rest of the world than the U.S. Sam, I want to ask you about something else that people are not happy about when it comes to these language and image models, which is this issue of copyright. I think a lot of people view what OpenAI and other companies did, which is sort of

you know, hoovering up work from across the internet, using it to train these models that can, in some cases, output things that are similar to the work of living authors or writers or artists. And they just think, like, this is the original sin of the AI industry, and we are never going to forgive them for doing this.

What do you think about that? And what would you say to artists or writers who just think that this was a moral lapse? Forget about the legal, whether you're allowed to do it or not, that it was just unethical for you and other companies to do that in the first place. Well, we block that stuff. Like, you can't go to, like, DALL-E and generate some, I mean, you could, speaking of being annoyed, like, we may be too aggressive on that. But I think, I think it's the right thing to do until we figure out some sort of economic model that

works for people. And, you know, we're doing some things there now, but we've got more to do. Other people in the industry, like, do allow quite a lot of that.

And I get why artists are annoyed. I guess I'm talking less about the output question than just the act of taking all of this work, much of it copyrighted, without the explicit permission of the people who created it and using it to train these models. What would you say to the people who just say, Sam, that was the wrong move, you should have asked, and we will never forgive you for it? Yeah.

Well, first of all, I always have empathy for people who are like, hey, you did this thing and it's affecting me, and, you know, we didn't talk about it first, or it was just, like, a new thing. Like, I do think that in the same way humans can read the internet and learn, AI should be allowed to read the internet and learn. Shouldn't be regurgitating, shouldn't be, you know, violating any copyright laws. But if we're really going to say that, like, AI doesn't get to read the internet and learn,

um, if you read a physics textbook and learn how to do a physics calculation, it's not like every time you do that for the rest of your life you've got to, like, figure out how to, uh... That seems like not a good solution to me. But on individuals' private work, um,

Yeah, we try not to train on that stuff. We really don't want to be here upsetting people. Again, I think other people in the industry have taken different approaches. And we've also done some things that I think, now that we understand more, we will do differently in the future. Like what? Like, what will we do differently? Okay.

We want to figure out new economic models so that, say, if you're an artist, we don't just totally block you. We don't just not train on your data, which a lot of artists also say, no, I want this in here. I want like whatever. But we have a way to like help share revenue with you. GPTs are...

Maybe going to be an interesting first example of this because people will be able to put private data in there and say, hey, use this version and there could be a revenue share around it. I feel like that might be a good place to take a break and then come back and talk about the future. Yes. Let's take a break. ♪♪♪

Indeed believes that better work begins with better hiring. So working at the forefront of AI technology and machine learning, Indeed continues to innovate with its matching and hiring platform to help employers find the people with the skills they need faster. True to Indeed's mission to make hiring simpler, faster, and more human, these efforts allow hiring managers to spend less time searching and more time doing what they do best, making real human connections with great new potential hires. Learn more at indeed.com slash hire.

Well, I had one question about the future that kind of came out of what we were talking about before the break, which is – and it's so big, but I truly need to hear your thoughts on this – which is what is the future of the internet as ChatGPT rises? And the reason I ask is I now have a hotkey on my computer that I type when I want to know something, and it just accesses ChatGPT directly through software called Raycast.

And because of this, I am not using Google search nearly as much. I am not visiting websites nearly as much. That has implications for all the publishers and for, frankly, just the model itself, because presumably if the economics change, there'll be fewer web pages created. There's less data for ChatGPT to access. So I'm just curious what you have thought about the internet in a world where your product succeeds in the way you want it to. I do think.

If this all works, it should really change how we use the internet. There are a lot of things the current interface is perfect for. If you want to mindlessly watch TikTok videos, perfect. But if you're trying to get information or get a task accomplished, it's actually quite bad relative to what we should all aspire for. And you can totally imagine a world where

You have a task that right now takes like hours of stuff clicking around the internet and bringing stuff together. And you just ask ChatGPT to do one thing and it goes off and computes and you get the answer back. And I'll be disappointed if we don't use the internet differently. Yeah. Do you think that the economics of the internet as it is today are robust enough to withstand the challenge that AI poses? Probably. Okay. What do you think?

Well, I worry in particular about the publishers. The publishers have been having a hard time already for a million other reasons. But to the extent that they're driven by advertising and visits to web pages, and to the extent that the visits to the web pages are driven by Google search in particular, a world where web search is just no longer the front page to most of the internet, I think, does require a different kind of web economics. I think it does require

a shift. But I think the value is there. So what I thought you were asking about was, like, is there going to be enough value there for some economic model to work? And I think that's definitely going to be the case. Yeah, the model may have to shift. I would love it if ads become less a part of the internet. Like, I was thinking the other day, like, I just had this, like,

for whatever reason, like this thought in my head as I was like browsing around the internet being like, there's more ads than content everywhere. I was reading a story today, scrolling on my phone, and I managed to get it to a point where between all of the ads on my relatively large phone screen, there was one line of text from the article visible. You know, one of the reasons I think people like

ChatGPT, even if they can't articulate it, is we don't do ads. Yeah. Like, as an intentional choice, because there's plenty of ways you could imagine us putting in ads. Totally. But we made the choice that ads plus AI can get a little dystopic. We're not saying never, like, we do want to offer a free service. But a big part of our mission fulfillment, I think, is if we can continue to offer

ChatGPT for free at a high quality of service to anybody who wants it, and just say, like, hey, here's free AI, and good free AI, and no ads. Because I think that really does, especially as the AI, like, gets really smart, that really does get a little strange. Yeah, yeah. I know we talked about AGI and it not being your favorite term, but it is a term that people in the industry use as sort of a benchmark or a milestone or something that they're aiming for. And, um,

I'm curious what you think the barriers between here and AGI are. Maybe let's define AGI as sort of a computer that can do any cognitive task that a human can. Let's say we make an AI that is really good, but it can't go discover novel physics. Would you call that AGI? I probably would. You would, okay. Would you? Well, again, I don't like the term, but I wouldn't say that means we're done with the mission. I'd say we've still got a lot more work to do.

The vision is to create something that is better than humans at doing original science, that can invent, can discover. Well, I am a believer that all real sustainable human progress comes from scientific and technological progress. And if we can have a lot more of that, I think it's great. And if the system can do things that we unaided on our own can't,

just even as a tool that helps us go do that, then I will consider that a massive triumph and happily, you know, I can happily retire at that point. But before that, I can imagine that we do something that creates incredible economic value, but is not the kind of AGI, super intelligence, whatever you want to call it, thing that we should aspire to. Right. What are some of the barriers to getting to that place where we're doing novel physics research? Um,

And keep in mind, Kevin and I don't know anything about technology. That seems unlikely to be true. Well, if you start talking about retrieval augmented generation or anything, you might lose me. I'll follow, but you'll lose Casey. He'll follow, yeah. We talked earlier about just the model's limited ability to reason. And I think that's one thing that needs to be better. The model needs to be better at reasoning. Like, GPT-4...

An example of this that my co-founder Ilya uses sometimes that's really stuck in my mind is there was like a time in Newton's life where the right thing for him to do... You're talking, of course, about Isaac Newton, not my life. Isaac Newton. Yeah, okay. Well, maybe you do. But maybe my life. We'll find out. Stay tuned. Where the right thing for him to do is to read every math textbook he could get his hands on. He should talk to every smart professor, talk to his peers, do problem sets, whatever. And that's kind of what our models do today.

And at some point, he was never going to invent calculus doing that; it didn't exist in any textbook. At some point, he had to go think of new ideas and then test them out and build on them, whatever else. And that phase, that second phase, we don't do yet.

And I think you need that before it's something I want to call an AGI. Yeah. One thing that I hear from AI researchers is that a lot of the progress that has been made over the past, call it five years, in this type of AI has been just the result of things getting bigger, right? Bigger models, more compute. Obviously, there's work around the edges in how you build these things that makes them more useful. But

there hasn't really been a shift on the architectural level, you know, of the systems that these models are built on. Do you think that that is going to remain true or do you think that we need to invent some new process or new, you know, new mode or new technique to get through some of these barriers? We will need new research ideas and we have needed them. I don't think it's fair to say there haven't been any here yet.

I think a lot of the people who say that are not the people building GPT-4, but they're the people sort of opining from the sidelines. But there is some kernel of truth to it. And the answer is we have... OpenAI has a philosophy of we will just do whatever works. Like if it's time to scale the models and work on the engineering challenges, we'll go do that. If now we need a new algorithm breakthrough, we'll go work on that. If now we need a different kind of data mix, we'll go work on that. So like...

We just do the thing in front of us, and then the next one, and then the next one, and the next one. And there are a lot of other people who want to write papers about, you know, level one, two, three and whatever. And there are a lot of other people who want to say, well, it's not real progress, they just made this, like, incredible thing that people are using and loving, and it's not real science. But our belief is, like, we will just do whatever we can to usefully drive the progress forward.

We're kind of open-minded about how we do that. What is super alignment? You all just recently announced that you are devoting a lot of resources and time and computing power to super alignment, and I don't know what it is. So can you help me understand? It's alignment that comes with sour cream and guacamole at a San Francisco taco shop. That's a very San Francisco-specific joke, but it's pretty good. I'm sorry. Go ahead, Sam.

Can I leave it at that? I don't really want to follow that. I mean, that was such a good answer. So alignment is how you sort of get these models to behave in accordance with what the human who's using them wants. And super alignment is how you do that for super capable systems. So we know how to align GPT-4 pretty well.

But, like, better than people thought we were going to be able to do. Now there's this, like, when we put out GPT-2 and 3, people were like, oh, it's irresponsible research because this is always going to just, like, spew toxic shit. You're never going to get it. And it actually turns out, like,

We're able to align GPT-4 reasonably well. Maybe too well. Yeah. I mean, good luck getting it to talk about sex is my official comment about GPT-4. But that's, you know, in some sense, that's an alignment failure because it's not doing what you wanted there. But now we have that. Now we have like the social part of the problem. We can technically do it. Right. But we don't yet know

what the new challenges will be for much more capable systems. And so that's what that team's research is. So, like, what kinds of questions are they investigating, or what research are they doing? Because I, you know, I confess I sort of, I lose my grounding in reality when you start talking about super capable systems and the problems that can emerge with them. Is this sort of a theoretical future forecasting team? Well, they try to do work

useful today, but for the theoretical systems of the future. So they'll have their first result coming out, I think, pretty soon. But yeah, they're interested in these questions of, as the systems get more capable than humans, what is it going to take to

reliably solve the alignment challenge. Yeah, and I mean, this is the stuff where my brain does feel like it starts to melt as I ponder the implications, right? Because you've made something that is smarter than every human, but you, the human, have to be smart enough to ensure that it always acts in your interest, even though by definition it is way smarter than you. Yeah, we need some help there. Yeah, I do want to stick on this issue of alignment or super alignment because I think there's an unspoken assumption in there that...

Well, you just put it as alignment is sort of what the user wants it to behave like. And obviously there are a lot of users with good intentions. No, no. Yeah, it has to be like what society and the user can intersect on. There are going to have to be some rules here.

And I guess, where do you derive those rules? Because, you know, if you're Anthropic, you use, you know, the UN Declaration of Human Rights and the Apple Terms of Service. The two most important documents in rights governance. If you're not just going to borrow someone else's rules, how do you decide which values these things should align themselves to? So we're doing this thing. We've been doing this thing. We've been doing these, like, democratic input governance grants,

where we're giving different research teams money to go off and study different proposals. There's some very interesting ideas in there about how to kind of fairly decide that. The naive approach to this that I have always been interested in, maybe we'll try at some point, is what if you had hundreds of millions of ChatGPT users spend an hour, a few hours a year, answering questions about what they thought

the default settings should be, what the wide bounds should be. Eventually, you need more than just ChatGPT users. You need the whole world represented in some way because even if you're not using it, you're still impacted by it. But to start, what if you literally just had ChatGPT chat with its users? It can, I think it's very important, it would be very important in this case to let

The users make final decisions, of course, but you could imagine it saying like, hey, you answered this question this way. Here's how this would impact other users in a way you might not have thought of. If you want to stick with your answer, that's totally up to you. But are you sure given this new data? And then you could imagine like GPT-5 or whatever, just learning that collective preference set. And I think that's interesting to consider. Better than the Apple terms of service, let's say. Yeah.

I want to ask you about this feeling. Kevin and I call it AI vertigo. Is this a widespread term that people use? No, I think you invented this. It's just sort of us. So...

There is this moment when you contemplate even just kind of the medium AI future. You start to think about what it might mean for the job market, your own job, your daily life for society. And there is this kind of dizziness that I find sets in. This year I actually had a nightmare about AGI. And then I sort of asked around and I feel like people who work on this stuff, like that's not uncommon.

I wonder for you if you have had these moments of AI vertigo, if you continue to have them, or is there at some point where you think about it long enough that you feel like you get your legs underneath you? I used to have them. I mean, at some point there were these moments, some very strange, extreme vertigo moments. Mm-hmm.

particularly around the launch of GPT-3. But you do get your legs under you. Yeah. And I think the future will somehow be less different than we think. Like, it's this amazing thing to say, right? Like, we invent AGI and it matters less than we think. It doesn't sound like a sentence that parses. And yet it's what I expect to happen. Why is that? Um...

There's like a lot of inertia in society and humans are remarkably adaptable to any amount of change. One question I get a lot that I imagine you do too is from people who want to know what they can do. You mentioned adaptation as being necessary on the societal level. I think for many years, the conventional wisdom was that if you wanted to adapt to a changing world, you should learn how to code, right? That was like the classic advice. May not be such good advice anymore.

Exactly. So now, you know, AI systems can code pretty well. For a long time, the conventional wisdom was that creative work was sort of untouchable by machines. If you were a factory worker, you might get automated out of your job. But if you were an artist or a writer, that was impossible for computers to do. Now we see that's no longer safe. So where is this sort of

high ground here? Like, where can people focus their energy if they want skills and abilities that AI is not going to be able to replace? My answer, my meta answer, is it's always the right bet to just get good at the most powerful new tools, the most capable new tools. And so when computer programming was that, you did want to become a programmer. And now that AI tools, like, totally change what one person can do, you want to get really good at using AI tools. And so, like,

Having a sense for how to work with ChatGPT and other things, that is the high ground. And that's, like, we're not going back. That's going to be part of the world. And you can use it in all sorts of ways, but getting fluent at it, I think, is really important.

I want to challenge that because I think you're partially right in that I think there is an opportunity for people to embrace AI and sort of become more resilient to disruption that way. But I also think if you look back through history, it's not like we learn how to do something new and then the old way just goes away, right? We still make things by hand. There's still an artisanal market. So do you think there's going to be people who just decide, you know what, I don't want to use this stuff. Totally. And

And there's going to be something valuable in their sort of, I don't know, non-AI assisted work. I expect that if we look forward to the future, things that we want to be cheap can get much cheaper. And things that we want to be expensive are going to be astronomically expensive. Like what? Real estate, like handmade goods, art.

And so totally, like there'll be a huge premium on things like that. And there'll be many people who like really, you know, there's always been like a, even when machine-made products have been much better, there has always been a premium on handmade products. And I'd expect that to intensify. This is also a bit of a curveball. Very curious to get your thoughts. Where do you come down on the idea of AI romances? Are these net good for society? I don't want one personally. You don't want one. Okay. Yeah.

But it's clear that there is a huge demand for this, right? Yeah. Like, I think that, I mean, you know, Replika is building these. They seem like they're doing very well. I would be shocked if this is not a multi-billion dollar company, right?

Someone will make a multi-billion dollar company. Yeah, for sure. I just personally think we're going to have a big culture war. I think Fox News is going to be doing segments about the generation lost to AI girlfriends or boyfriends at some point within the next few years. But at the same time, you look at all the data on loneliness, and it seems like, well, if we can give people companions that make them happy during the day, it could be a net good thing.

It's complicated. Yeah. You know, I have misgivings, but this is not a place where I think I get to impose what I think is good on other people. Totally. Okay, but it sounds like building the boyfriend API is not at the top of your product roadmap. No. All right. You recently posted on X that you expect AI to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes. Right.

Can you expand on that? What are some things that AI might become very good at persuading us to do? And what are some of those strange outcomes you're worried about? The thing I was thinking about at that moment was the upcoming election. There's a huge focus on the US 2024 election. There's a huge focus on deepfakes and the impact of AI there. And I think that's reasonable to worry about, good to worry about. But we already have some societal issues.

towards people seeing, like, doctored photos or whatever. And yeah, they're going to get more compelling. There's going to be more, but we kind of know that those are there. There's a lot of discussion about that. There's almost no discussion about what are, like, the new things AI tools can do to influence an election. And one of those is to, like, carefully identify,

you know, one-on-one persuade individual people. Tailored messages. Tailored messages. That's like a new thing that the content farms couldn't quite do. Right. And that's not AGI, but that could still be pretty harmful. I think so, yeah. I know we are running out of time, but I do want to push us a little bit further into the future than the sort of, I don't know, maybe five-year horizon we've been talking about. If you can imagine a good post-AGI world, a world in which we have reached...

this threshold, whatever it is. What does that world look like? Does it have a government? Does it have companies? What do people do all day? Like a lot of material abundance. People continue to be very busy, but the way we define work always moves. Like if you, our jobs would not have seemed like real jobs to people several hundred years ago, right? This would have seemed like incredibly silly entertainment. It's important to me, it's important to you. And hopefully it has some value to other people as well.

There will be, and the jobs of the future may seem, I hope they seem even sillier to us, but I hope the people get even more fulfillment and I hope society gets even more fulfillment out of them. But everybody can have a really great

quality of life, like, to a degree that I think we probably just can't imagine now. Of course, we'll still have governments. Of course, people will still squabble over whatever they squabble over. You know, less different in all of these ways than someone would think. And then, like, unbelievably different in terms of what you can get a computer to do for you.

One fun thing about becoming a very prominent person in the tech industry as you are is that people have all kinds of theories about you. One fun one that I heard the other day is that you have a secret Twitter account where you are way less measured and careful. I don't anymore. I did for a while. I decided I just couldn't keep up with the OPSEC. It's so hard to lead a double life. What was your secret Twitter account? Obviously, I can't.

I mean, I had a good alt. A lot of people have good alts, but you know. Your name is literally Sam Altman. I mean, it would have been weird if you didn't have one. But I think I just got...

yeah, too, like, too well known or something to be doing that. Yeah. Well, and the sort of theory that I heard attached to this was that you are secretly an accelerationist, a person who wants AI to go as fast as possible, and that all this careful diplomacy that you're doing and asking for regulation, this is really just the sort of polite face that you put on for society, but deep down you just think we should go all gas, no brakes toward the future. No, I certainly don't think all gas, no brakes to the future, but I do think we should go to the future,

and that probably is what differentiates me from, like, most of the AI companies. I think AI is good. Like, I don't secretly hate what I do all day. I think it's going to be awesome. Like, I want to see this get built. I want people to benefit from this. So all gas, no brakes? Certainly not. And I don't even think, like, most people who say it mean it. But I am a believer that this is a tremendously beneficial technology, and that we have got to find a way

safely and responsibly to get it into the hands of the people to confront the risk so that we get to enjoy the huge rewards. And like, you know, maybe relative to the prior of most people who work on AI, that does make me an accelerationist. But compared to those like accelerationist people, I'm clearly not them. So, you know, I'm like somewhere, I think you like want the CEO of this company to be somewhere in the middle, which I think I am. You're gas and brakes. I believe that

that this will be the most important and beneficial technology humanity has ever, has yet invented. And I also believe that if we're not careful about it, it can be quite disastrous. And so we have to navigate it carefully. Yeah. Sam, thanks so much for coming on Hard Fork. Thank you guys. When we come back, we'll have some notes on that interview now with the benefit of hindsight.


So Casey, now with, you know, five days of hindsight on this interview and after everything that has transpired between the time that we originally recorded it and now, are there any things that Sam said that stuck out to you as being particularly relevant to understanding this conflict?

I keep coming back to the question that you asked him about whether he was a closet accelerationist. Is he somebody who is telling the world, "Hey, I'm trying to do this in a very gradual, iterative way," but behind the scenes is working to hit the accelerator? During the interview, he gave a very diplomatic answer, as you might expect to that question. But learning what we have learned over the past few days,

I do feel like he is on the more accelerationist side of things. And certainly all of the people rallying to his defense on social media over the weekend, a good number of them were rallying because they think he is the one who is pushing AI forward. How about you? What'd you think? Totally. I thought that was very interesting. And now, with the additional context of the last, you know,

three days, it explains a lot about the conflict between Sam Altman and the board. We still don't obviously know exactly what happened, but I can imagine Sam going around saying things like, "I think that the future is going to be amazing and I think that everything's going to be great with AI." I can see why that would land

poorly with board members who are much more concerned from the looks of things about how the future is going to look. So it seems like he's sort of an optimist who is running a company where the board of that company is less optimistic about AI. And that just seems like a fundamental tension that it sounds like they were not able to get past.

I was also struck by something else that he said. It was interesting. When we talked about GPTs, these like build-your-own chatbots that OpenAI released at Dev Day a few weeks ago, he said that he was embarrassed because they were so simple and sort of not all that functional and pretty prosaic.

And that's just such a striking contrast, because some of the reporting that came out over the weekend suggested that the GPTs were actually one of the things that scared Ilya Sutskever and the board, that giving these AIs more agency and more autonomy and allowing them to do things on the internet was, at least if you believe the reporting, part of what made the board so anxious. Yeah.

Yes, and at the same time, if it is true that the board and Ilya found out about GPTs at Dev Day, that speaks to some fundamental problems in how this company was being run. And I don't know if that is a Sam thing or a board thing or what, but you would think that by the time the keynote was being delivered, all of those stakeholders would have been looped in. Totally. And I guess my other reflection on that interview is that

It just sounded like Sam had no idea that any of this was brewing. This did not sound like someone who was trying to carefully walk the line between being optimistic and being scared of existential risk. This did not sound like someone who thought that he was on thin ice with his board. This sounded like someone who was very confidently charging ahead with his vision for the future of AI.

That's right. I really hope we are not doing more emergency podcasts on this. Could the news just give us a little break for a minute? Well, if I were you, Kevin, I would clear your Tuesday morning. Oh, God. Happy Thanksgiving. Happy Thanksgiving. Hard Fork is produced by Rachel Cohn and Davis Land. We're edited by Jen Poyant. Today's show is engineered by Rowan Niemisto. Original music by Marion Lozano, Rowan Niemisto, and Dan Powell.

Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergeson. Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. As always, you can email us at hardfork at nytimes.com.

Imagine earning a degree that prepares you with real skills for the real world. Capella University's programs teach skills relevant to your career so you can apply what you learn right away. Learn how Capella can make a difference in your life at capella.edu.