
Who should be in charge of AI?

Publish Date: 2023/12/1

Search Engine


Casey Newton, welcome to Search Engine. Hi! Also, is this another week where you're supposed to be on vacation? Uh, not really. I mean, like, today is a workday for me. I am supposed to be off starting tomorrow, but I fully expect I'll be making between three and seven emergency podcasts in the next week. Who invented the emergency podcast? You know what? It, like...

An emergency podcast is like a stupid, like self-aggrandizing name. But the point of a podcast is it's like people that you hang out with, like during these moments in your life. So when something happens in a world that you care about, you actually want to hang out with your friends who you talk about that stuff with. Yeah. And I actually, I get a real thrill when I see an emergency podcast. I do have this joke, which is like, there's certain things that if you put them in front of another word, it negates the meaning of the other word. And podcast is one of them. Like podcast famous. You're not famous. Emergency podcast is not an emergency. Yeah.

But I do get the adrenal thrill of an emergency podcast. Do you think podcast cancels out more words than like most other words? Yes. I think if you've had to put podcast in front of it, it's not that thing anymore. I'm podcast successful. This week on Search Engine, an emergency podcast. Can it be an emergency podcast two weeks after the news event? Sound off in the comments. But this week, our urgent question, who should actually be in charge of artificial intelligence? That's after some ads.

Search Engine is brought to you by Ford. As a Ford owner, there are lots of choices of where to get your vehicle serviced. You can choose to go to their place, the local dealership, your place, home, apartment, condo, your workplace, even your happy place, like your cottage on the lake. Go to your Ford dealer or choose Ford Pickup and Delivery to have your vehicle picked up, serviced, and brought right back. Or choose Mobile Service, where a technician will come to you and do routine maintenance right on the spot.

Both are complimentary and depend on your location. That's ownership built around you. Contact your participating dealer or visit FordService.com for important details and limitations. All right, here's a question to make pretty much any room uncomfortable. Get everybody's attention and ask.

Who do we think should be in charge here? Do we all agree that the right person is running things right now? Who should get to make the final decision in your family, in your workplace? Should one person really be in charge? Should power be shared? Sharing sounds good. Okay, with who? How much? According to what criteria? Look, sometimes we ask the fun questions on this show about toxic airplane coffee or the ethics of cannibalism, but these questions about power

I don't think these are the cute ones. These are the questions that start revolutions. These are the questions that transform places, or sometimes destroy them. Who should be in charge? Our country was founded as an answer to that question. We're told by the third grade that America is a democracy. The people are in charge. In junior high, they walk that back a little. They tell us it's a representative democracy, which is a bit different, much less exciting.

But just because our country's a representative democracy, that doesn't mean every institution in our country will be one. There's this word, governance, which is so boring your brain can't even save it. But ironically, it refers to the most interesting thing in the world. Who is in charge of you? Most American businesses have somewhat funky governance structures, which we stole from the British. The typical corporate governance structure goes like this. There's a boss, a CEO, with most of the power.

But they're accountable to a board above them, a small group of people who can depose them, at least in theory. And the board usually represents the shareholders. Often, the shareholders even vote to elect the board. This structure of collective decision-making, of voting, of elections, it has existed and evolved since way before American democracy. The corporate board model comes from England in the 1500s.

Back then, England was a monarchy, but its companies were not. They were like, not democracies, but democracy-esque organizations existing in a country devoted to the rule of the king. They represented a different answer to this who should be in charge question. We took that corporate structure with us when we divorced England. And in 1811, corporations really took off in America.

That year, New York State became the first to make it legal for people to form a corporation without the government's explicit permission. Over the next 200 years, corporations became very powerful. And in that time, their CEOs learned and taught one another how better to consolidate power. CEOs today, particularly the CEOs of big tech companies, are less likely to answer to their boards or to their shareholders, if they even have them.

These days, in America, our country is a democracy, and the corporations are the exceptions. Not monarchies exactly, but little monarchy-esque organizations in a country devoted to the rule of the people. Who should be in charge? In America, we know we don't trust kings, but we don't always trust the people. So for now, the people sort of run the country, and the techno kings mostly run its businesses.

But the tension about who should hold power remains unresolved. It crackles. Sometimes it erupts in minor revolutions in all sorts of places. And exactly two weeks ago, it erupted at a technology company.

Breaking news, Sam Altman is out as CEO of OpenAI, the company just announcing a leadership transition. Sam Altman, who has drawn comparisons to tech giants like Steve Jobs, was dismissed by the OpenAI board Friday. The godfather of ChatGPT kicked out of the company he founded. On November 17th, OpenAI's board, without really notifying anyone, fired the head of the company, Sam Altman.

And everyone waited for evidence of the scandalous behavior that had prompted a company KOing its own leader. What we got instead was an extremely vague statement from the board. Quote,

He was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI. No details about what was said or what wasn't said or what these communications were about. In the meantime, Sam's loyalists staged a countercoup. Nearly every employee at the company signed a petition defending him. 90% of the company's 770 employees signed a letter threatening to leave unless the current board of directors resigned and reinstated Altman as head of OpenAI. Then Microsoft...

OpenAI's biggest shareholder stepped in, also supporting Sam. He was reinstated. OpenAI posting on X that Sam Altman will now officially return as CEO. It's also overhauling the board that fired him with new directors, ending a dramatic five-day standoff that's transfixed Silicon Valley and the artificial intelligence industry. We reached out to OpenAI for comment. We wanted to talk to someone there about what had transpired at the company in the past month. We didn't get a response.

For some people, all of this just looked like more drama from Silicon Valley, the country's leading drama manufacturer in this year of Musk. But the other way to look at this, the way most AI people see it, is that the technology they're working on has a potential to be incredibly world-alteringly powerful. So this never-resolved question, who should exercise power and how, it just got even more complicated.

Because now we have to decide which people or person should be in charge of artificial intelligence, a technology designed to become smarter than human beings. Well, let's take a step back. OpenAI is the most important company of this generation. For the past two weeks, as this story has unfolded, I've been talking to Casey Newton, who publishes the excellent newsletter Platformer.

When we spoke last week, he was reminding me exactly how important the story of OpenAI is, even before this latest chapter.

It is not super young. It was founded in 2015. But with the launch of ChatGPT last year, it started down a road that very few companies get to start down, which is the road to becoming a giant consumer platform that gets mentioned in the same breath as a Google or a Facebook or a Microsoft, right?

And when you are seeing that, in the case of ChatGPT, you have a product that is being used by 100 million people a week. And you have a CEO who has become the face of the industry, right? Sam Altman has become essentially the top diplomat of the AI industry over the past year. The number of reasons that you would fire that person with no warning is just extremely small. And...

The idea that even after he was fired, you still would not say with any specificity what he did is even smaller. Those are just some of the reasons why this has just been such a crazy story. And when you saw it, how did you get the news?

I'm happy to tell you that story. My parents were in town, and they asked if we could have lunch. And I thought, I'm going to take them to a really nice lunch in San Francisco at an institution called the Zuni Cafe. Zuni Cafe is known for a roast chicken that is so good, but it does take an hour to cook.

So we order a few snacks and my parents being my parents said, hey, why don't we get a couple of cocktails and a bottle of wine? And I said, guys, it's 11:45 a.m. But you know what? Let's do it.

So a bottle of wine comes, the cocktails come, we have our snacks, and we're waiting for the chicken. And I think I'm going to use the restroom. And I get up to use the restroom and look at my phone, and I see the news because 78 people have been texting me saying, holy motherfucking shit, what is happening?

And so I go back to the table and explain to my parents everything that I know about OpenAI and everything. And then I walk outside and I get on a Google Meet with my podcast folks, because of course we're going to need to do an emergency episode. And I just stare at my parents through the window and watch the chicken arrive at the table and then start to eat it.

So you never got to eat the chicken? Well, eventually the Google Meet ended and I got to have some chicken and it was delicious. But there was a while there where I was quite hungry and jealous of them. And so you guys, the initial thing is just like, holy crap, this was nuts. And like, was your instinct, oh, there's going to be like, like the board is going to come forward and say like, hey, he's done something awful. Like, were you waiting for a shoe to drop? Absolutely. Because there's,

Then again, the number of reasons why the board would have fired him is just very small. Right. When I saw it, my thought was, it's always either money or sex when a high-profile person loses their position. Right. And the board's description didn't really lean one way or another in that direction. People just started to speculate, throw wild theories at me,

But again, because this was such a consequential move, the expectation was always that even if the board wouldn't say it in their blog post, they would at least tell their top business partners, they would tell the top executives at OpenAI, and then it would just sort of filter out to the rest of us what actually happened.

But days later, that was still not the case. Even after the company was in open revolt with 95% plus of the company threatening to walk out the door if the situation wasn't reversed, the board still wouldn't say what happened. Have you ever seen anything like that before? Um...

Well, I mean, look, CEOs get fired. There's actually an argument that CEOs don't get fired enough, right? Like we live in this Silicon Valley bubble where we have a cult of the founder and there is a very strong feeling that the founder should almost never be removed because the company cannot survive without them. And so it's always very dramatic when a founder gets removed, right? Like probably the biggest founder drama I can remember before this one was the removal of Travis Kalanick from Uber.

The difference there was that Uber had been involved in a lot of public wrongdoing before he was removed. And so there was kind of a steady drumbeat of stories and people calling for him to resign before that happened. But even then, his board members turned on him. And in Silicon Valley, that is a taboo. For someone that you appoint to your board and you say, be a good steward of my company, the expectation is you are never going to remove the founder. And in fact, we have other Silicon Valley companies where the founders have

insulated themselves against this by just designing the board differently. So Mark Zuckerberg has a board that cannot remove him. Evan Spiegel at Snap has a board that cannot remove him. So again, that's just kind of the way things operate here. And how does a founder choose their board members?

So the most common way is that if you, PJ, run a venture capital firm, which I do think you should, but I'm going to talk to you about that. So I come to you and I want to get some of your money. You say, okay, like I will buy this percentage of your company for this amount, but I want to take a seat on your board, right? And the idea is, hey, if I'm going to have a lot of money locked up in your company, I want to be able to have a say in what happens there.

I see. And normally speaking, normal company, Facebook, whatever, you've got a board, they have a little bit of a say because it's their money, but a powerful founder of a powerful company will set it up so that they don't have much of a say.

Yeah, basically, they create a different kind of stock, and they will control the majority of that stock. And that stock has some sort of super voting powers. So when the board goes to vote on something, their votes will never exceed the number of votes cast by the founder. The OpenAI board was set up very differently, which I am sure we'll talk about. And so it made this sort of thing possible, but absolutely nobody saw it coming.
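
To make the super-voting math concrete, here's a minimal sketch in Python of the dual-class structure Casey is describing. Every number in it is hypothetical, invented for illustration; it's not any real company's cap table.

```python
# A toy model of dual-class "super-voting" stock: the founder's share class
# carries extra votes per share, so a minority economic stake can still be
# a majority of the vote. All figures below are hypothetical.

VOTES_PER_CLASS_B = 10                  # assumed super-voting multiplier
founder_class_b_shares = 400_000_000    # founder's super-voting shares (assumed)
public_class_a_shares = 2_000_000_000   # ordinary shares, 1 vote each (assumed)

founder_votes = founder_class_b_shares * VOTES_PER_CLASS_B
public_votes = public_class_a_shares * 1

economic_stake = founder_class_b_shares / (founder_class_b_shares + public_class_a_shares)
voting_power = founder_votes / (founder_votes + public_votes)

print(f"Founder owns {economic_stake:.0%} of the equity "
      f"but controls {voting_power:.0%} of the vote")
# -> Founder owns 17% of the equity but controls 67% of the vote
```

Under these made-up numbers, the board's votes can never outweigh the founder's, which is the point of the design.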

After the break, the strange origin story of OpenAI and how it led to the events of this month. Search Engine is brought to you by Greenlight. A new school year is starting soon. My partner has two young kids, both of whom use Greenlight. And honestly, it's been kind of great. Greenlight is a debit card and money app for families where kids learn how to save, invest, and spend wisely. And parents can keep an eye on kids' new money habits.

There's also Greenlight's Infinity Plan, which includes the same access to financial literacy education that makes Greenlight a valuable resource for millions of parents and kids, plus built-in safety to give you peace of mind. There's even a feature that detects car crashes and will connect your young drivers to 911 dispatch and alert emergency contacts if needed. No matter which features make the most sense for your household, Greenlight is the easy, convenient way for parents to raise financially smart kids and for families to navigate life together.

Sign up for Greenlight today and get your first month free when you go to greenlight.com slash search. That's greenlight.com slash search to try Greenlight for free. Greenlight.com slash search. Search Engine is brought to you by Seed Probiotics. Whether you're off to the pool, hiking, or traveling this summer, you're bringing your microbiome with you too. The 38 trillion bacteria that live in and on you, especially your gut, are essential to whole body health.

Seed's DS01 Daily Synbiotic benefits your gut, skin, and heart health in just two capsules a day. DS01 can help in areas like digestion, skin, etc. Your body is an ecosystem, and great whole body health starts in the gut. Your gut is a central hub for various pathways through the body, and a healthy gut microbiome means benefits for digestion, skin health, heart health, your immune system, and much more. Support your gut this summer with Seed's DS01 Daily Synbiotic.

Go to seed.com slash search and use code 25search to get 25% off your first month. That's 25% off your first month of Seed's DS01 Daily Synbiotic at seed.com slash search. Code 25search. Thank you all for sticking around this afternoon. We had some great conversations and we're hoping to have another great one. It's the fall of 2015, just a couple months before OpenAI would be willed into existence.

Elon Musk and Sam Altman are on stage together at this conference on a panel called What Will They Think of Next? One of the questions they're asked is about AI, this technology that in 2015 still felt way off in the future. Sam and Elon could share with us their positive vision of AI's impact on our coming life. Sam Altman, who at the time is the head of Y Combinator,

He goes first. I think there are, the science fiction version is either that we enslave it or it enslaves us. But there's this happy symbiotic vision, which I don't think is the default case, but what we should work towards. I think already... Sam's dressed like a typical 2015 startup guy. Blazer, colorful sneakers. What I notice is his eyes, which to me always look concerned. Like someone whose car just made a weird noise at the beginning of a long road trip.

In 2015, Sam Altman has a reputation as a highly strategic, deeply ambitious person, but also someone a bit outside of the typical Silicon Valley founder mold. He's made a lot of money, but says he's donated most of it. He's very obsessed with universal basic income. The kind of person who tells the New Yorker that one day he went on a day-long hike with his friends, and during it, made peace with the idea that intelligence might not be a uniquely human trait.

He tells the magazine, quote, "There are certain advantages to being a machine. We humans are limited by our input-output rate." He says that, "To a machine, we must seem like slowed-down whale songs." But I don't think there's any human left that understands all of how Google search results are ranked on that first page. It really is— Onstage, Sam's pointing out the ways in which AI is already here. We're already relying on machine learning algorithms we don't entirely understand.

Google search results or the algorithms that run dating websites. In this case, the computer matches us and then we have babies. And so in effect, you know, you have this like machine learning algorithm breeding humans. And so really, I mean, you do. And so there's this, and then, you know, those people like work on the algorithms later. And so I think the happy vision of the future is sort of

humans and AI in a symbiotic relationship, distributed AI where it sort of empowers a lot of different individuals, not this single AI that kind of governs everything that we all do that's a million times smarter, a billion times smarter than any other entity. So I think that's what we should work towards. Elon goes next. I agree with what Sam said. I mean, we are effectively already a human-machine collective symbiote. Like, we're like a

like a giant cyborg. That's actually what society is today. No one in the room particularly reacts, as Elon softly explains that human beings are already pretty much cyborgs anyway. And I do think we need to be careful about the development of AI and make sure it is ultimately beneficial to humanity, that the future is good.

This question they're answering about the future from a stage in 2015, at the time it feels very nerdy and theoretical. But the future moves pretty fast sometimes. A couple months after this event, these two will help birth OpenAI. It's a very non-traditional company, at least for Silicon Valley, a supergroup of tech bajillionaires funding something they're describing not as a for-profit company, but as some sort of future-looking research lab.

At the beginning, there's Sam and Elon, but also billionaire libertarian Peter Thiel, LinkedIn co-founder Reid Hoffman, and Y Combinator's Jessica Livingston. These are people who are convinced that they're creating something unusually powerful and who don't just want to figure out how to build the thing. They want to figure out the most responsible kind of governance system that could control it, if it works.

They decide they want to start a company that will be the first to create an artificial general intelligence, or AGI. There is a lot of dispute and debate about what AGI really is. In fact, Sam Altman told me that he hates that word. But the basic idea is,

It is some sort of computer something that is smarter than us and can just sort of do anything that we can do better. They thought we could get there, so they wanted to set something up.

And they thought about a few different options. The first option they thought of was the government. You know, it's like the government did the Manhattan Project to invent the nuclear bomb. If we're going to create a new species of intelligence that's smarter than us, maybe the government should play a role in that. But they look around at the U.S. government in 2015 and they think, well, probably nobody's going to hand us $100 billion to go build a superintelligence. So maybe we'll just sort of put that idea to the side.

Right. Also, the idea that the government in the United States is always cool-headed and averting apocalypse instead of, like, steering wildly into it is not a thesis that survives modern times. Exactly.

And so then they think like, well, maybe we do it as a for-profit company, right? Like Sam Altman at the time was running Y Combinator, which is the most famous startup incubator in the United States. It's responsible for Stripe and Dropbox and a number of other famous companies. So the obvious thought was, well, why don't we just do it as a venture-backed startup?

But the more they think about it, they think, well, gosh, if we're, again, building a super intelligence, we don't want to put that into the hands of one company. We don't want to concentrate power in that way because we think this thing could be really beneficial. And so we want to make sure that everyone reaps the benefits of that. So that leaves them with a nonprofit, and that winds up being the direction they go in.

And this might be jumping ahead, but my guess would be, one of the reasons, as I understand it, that technology usually moves at the fastest pace it can instead of the most judicious pace it can, is because if you're moving slowly, someone else will move more quickly? More faster? Faster. Faster. Faster quickly. More faster quickly. And so why did the responsible company succeed this time? Well...

It had some advantages. One, it was probably the first mover in this space that was not connected to a giant company. So Google, for example, already had AI efforts underway. Facebook also had AI efforts underway. This was really the first serious AI company. I think that because it was...

a startup and because it was a nonprofit, it attracted talent that would be less inclined to go work for a Google or Facebook, right? There are recruiting advantages that come with telling people we do not have a profit motive. We are a research lab and our intentions are good.

And so they attracted a lot of really smart people. They also had the imprimatur of Elon Musk, who was one of the co-founders, who was a much more reliable operator in 2015 than he is today. And that served as a powerful recruiting signal.

And so all those people get together and they get to work. And they started working on a bunch of things and not everything worked out. They had a hardware division at one point. Like they were interested in robotics and it just kind of fizzled. But then they started working on this GPT thing and things got better for them. According to reporting from Semafor, in early 2018, Elon Musk makes a bid to become OpenAI's president. The board shoots him down. Soon after, Elon quits OpenAI, publicly citing a conflict of interest with Tesla.

Semafor also reported that Musk promised to invest $1 billion in OpenAI. When he left, he said he would keep the promise. He didn't. So OpenAI was short on money, which was a problem because the next year, 2019, the company announced their expensive new project, GPT-2, a much more primitive ancestor of the ChatGPT you've likely used.

Training even this model was hugely expensive, and OpenAI realized it would not be able to get by on donations alone.

One thing that we've learned over the past year, as all of us have been educating ourselves about large language models like ChatGPT, is that they're incredibly expensive to run. I talked to a former employee of OpenAI this weekend who described the company to me as a money incinerator, right? They don't even, they don't make podcasts? They don't even make podcasts. That's how expensive they are. They're losing money without even making podcasts, PJ. Can you imagine? Okay.

If you've ever used ChatGPT, you've cost OpenAI money. Some estimates are around $0.30 for you asking ChatGPT a question. And it has 100 million users a week. So you can imagine how much money they're losing on this thing. And is that $0.30 computing power? Yes.

Yes, I believe the technical term is an inference cost. So you type in your question to ChatGPT, and then it sort of has this large language model, and it generates a sort of series of predictions as to what the best answer to your question will be. And the cost of the electricity and the computing power is about $0.30.
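
For a sense of scale, here's a back-of-envelope sketch in Python built from the figures in this conversation, the roughly $0.30-per-question estimate and 100 million weekly users. The questions-per-user rate is an assumption invented for illustration, not a reported number.

```python
# Back-of-envelope inference-cost math, using the episode's figures.
# The queries-per-user rate below is hypothetical.

COST_PER_QUERY_USD = 0.30        # per-question estimate cited above
WEEKLY_USERS = 100_000_000       # weekly user figure cited above
QUERIES_PER_USER_PER_WEEK = 5    # assumed for illustration

weekly_cost = COST_PER_QUERY_USD * WEEKLY_USERS * QUERIES_PER_USER_PER_WEEK
print(f"Estimated weekly inference bill: ${weekly_cost:,.0f}")
# -> Estimated weekly inference bill: $150,000,000
```

Even with conservative assumptions, the arithmetic lands in the hundreds of millions of dollars, which is the "money incinerator" point.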

Got it. So the technology is super expensive to run. So even in the early days, they're just burning money really quickly. Yes. And so they have a problem, which is that there is no billionaire, there's no philanthropy, there's no foundation, there's no government that is going to give them $100, $200 billion to try to get their project across the finish line. So they turn back to the model that they had rejected, the for-profit model.

But instead of just converting a nonprofit into a for-profit, which is incredibly difficult to do, they take a more unusual approach,

which is that the nonprofit will create a for-profit entity, the nonprofit board will oversee the for-profit entity, and the for-profit entity will be able to raise all those billions of dollars by offering investors the usual deal. You give us some amount of money in an exchange for some percentage of the company or for our revenues or our profits, and that will enable us to get further faster.

March 2019, OpenAI publishes a blog post announcing the change. The nonprofit will now have a for-profit company attached to it, and the CEO will be Sam Altman. He will not, however, take any kind of ownership stake, an almost unheard-of move for a Silicon Valley founder. The blog post lists the names of the nonprofit board members who will keep the for-profit in check.

Sam Altman is on the OpenAI board, along with some other executives, like OpenAI's chief scientist, Ilya Sutskever. There's some Silicon Valley bigwigs, LinkedIn's Reid Hoffman, Quora's Adam D'Angelo. But also, importantly, there are some effective altruists, like Holden Karnofsky and scientist-engineer Tasha McCauley. If the idea is that this board is going to be, like, part of the idea is they are a hedge against AI going in the wrong direction, and they're going to try to get really, like,

skeptical, smart people. Like, how serious are these people as artificial intelligence thinkers? I mean, I think they do have credibility. You know, I don't know who in that year would have been considered the very best thinkers on that subject. But I would note that in the years since, Reid Hoffman left the board to start his own AI company with a co-founder. It's called Inflection AI. They've been doing good work.

Holden Karnofsky was the CEO of Open Philanthropy, which is one of the effective altruist organizations. They are a funder of funders, so they give money to scientists to research things. But Holden was essentially part of the group that were some of the very first people to worry about existential AI risk. At a time when absolutely no one was paying attention to this, Holden's organization was giving researchers money to study the potential implications of AI.

So there were people on that board who were thinking a lot about these issues before most other people were. And we can debate whether they had enough credibility, but certainly they were not just a bunch of dumb rubber stamps for Sam Altman. At this moment in 2019, OpenAI, the nonprofit company controlling a for-profit subsidiary, was a little unusual. But that unusual state of affairs would only become truly absurd a few years later.

November 2022, OpenAI releases, without much fanfare, without a very attractive name, a product called ChatGPT. Within five days, ChatGPT has a million users.

Two months after launch, it has 100 million monthly active users. At that point in time, it's the fastest growing consumer app in history. There's a new bot in town and it's taking the world by storm. ChatGPT was launched by OpenAI on the 30th of November. It's gaining popularity for its ability to craft emails, write research papers, and answer almost any question in a matter of seconds. The CEO, Sam Altman, is just 37.

OpenAI becomes the leading AI company. Sam Altman becomes not just the face of OpenAI, but for many people, the face of AI itself. He's the rock and roll star of artificial intelligence. He's raised billions of dollars from Microsoft, and his early backers include Elon Musk and Reid Hoffman.

As ChatGPT takes over the internet, Sam goes on a world tour. Israel, Jordan, the UAE, India, South Korea, Japan, Singapore, Indonesia, and the UK. You must be rushed off your feet. You're in the middle of an enormous world tour. How are you doing? It's been super great, and I wasn't sure how much fun I was going to have. By May of this year, AI has become important enough, fast enough, that Sam...

AI's chief diplomat is testifying in front of Congress. Mr. Altman, we're going to begin with you, if that's okay. Thank you. Thank you, Chairman Blumenthal, ranking member Hawley, members of the Judiciary Committee. He's dressed in a navy suit, but now with normal gray shoes. His eyes still look worried. They're registering congressional levels of worry. OpenAI was founded on the belief that artificial intelligence has the potential to improve nearly every aspect of our lives.

Thank you.

Sam is telling these congressmen, his likely future regulators, what every tech CEO has told everyone since the invention of fire. Don't worry, I have this under control.

But what is new here, what you would not see with someone like Mark Zuckerberg in Facebook's early years, is that Sam's also saying he knows that the downside risk of the thing he's creating is enormous. Casey Newton says that this tension, that AI's inventors are also the people who worry about its power, that's part of what makes this story so unusual.

Usually the way that it works in Silicon Valley is that you have the rah-rah technologists going full steam ahead and sort of ignoring all the safety warnings. And then you have the journalists and the academics and the regulator types who are like, hey, slow down. That could be bad. Think through the implication. That's sort of the story we're used to. That's the Uber story. That's the Theranos story. What's interesting with AI is

is that some of the people who are the most worried about it also identify as techno-optimists, okay? Like, they're the sort of people that are usually like, hey, technology is cool, let's build more of that. Why is that the case? Well, they've just looked at the events of the past couple years. They used GPT-3, and then they used GPT-3.5, and then they used GPT-4, and now they're using GPT-4 Turbo. We already...

basically know how to train a next generation large language model. There are some research questions that need to be solved, but we can basically see our way there, right? And what happens when this thing gets another 80% better, 100% better? What happens when the AI can start improving itself or can start doing its own research about AI, right? At that point, this stuff starts to advance much, much, much, much faster, right?

If we can see on the horizon the day that AI might teach itself, then the question of who's in charge of it right now feels pretty important. And remember, OpenAI itself had foreseen this problem. That's the very reason it had created the nonprofit board, as a safety measure.

And the problem for Sam Altman in 2023 is that while ChatGPT had been taking over the world, the composition of his nonprofit board had changed. Some of his natural allies, business-minded folks like Reid Hoffman, had left the board, which had tipped the balance of power over to the academics, towards the people associated with the effective altruism movement. And that's what set in motion the coup, the very recent attempt by the board to take out Sam.

When news of Sam's firing first broke, the reasonable guess was that he tried to push AI forward too fast in a way that had alarmed the board's safety-minded people. In the aftermath of all this, it's pretty clear that that's not what happened. According to the Wall Street Journal, here's how things broke down. The departure of some of Sam's allies had left an imbalance of power, and afterwards, the two sides began to feud.

One of the effective altruists, an academic named Helen Toner, co-authored a paper about AI safety where she criticized OpenAI, the company whose board she was sitting on. A normal enough thing to do in the spirit of academia, but an arguably passive-aggressive violation of the spirit of corporate America. Sam Altman confronted her about it. Then...

Sometime after that, some of Altman's allies got on Slack and started complaining about how these effective altruist safety people were making the company look bad in public. The company should be more independent of them, they said, on Slack. The problem is that on that Slack channel was Ilya Sutskever, a member of the board, and someone who is a sometime Altman ally but also deeply concerned with AI safety.

How many companies have been destroyed by the actually already nuclear technology that is Slack? Anyway, two days later, it's Sutskever who delivers the killing blow. Sam is in Vegas that Friday at a Formula One race trying to raise more billions for OpenAI. He's invited at noon to a Google Meet where Sutskever and the other three board members tell Altman he's been fired.

Afterwards, like any laid-off tech worker, he finds his laptop has been remotely deactivated. Over the weekend, as the company's employees and executives get angrier and angrier about the coup, they confront Helen Toner, the academic who wrote the spicy paper. They tell her that the board's actions might destroy the company. According to the Wall Street Journal, Helen Toner responds, quote, "...that would actually be consistent with the mission."

In other words, she's saying the board should kill the company if the board has decided it's the right thing to do. Casey told me that in the days after, a public consensus quickly congealed against these effective altruists, who had knowingly damaged the company but then had been unable to provide evidence that they'd done it for any good reason. Part of the reason the EAs have a really bad reputation right now is that if you have not thought that much about AI,

it's very hard for you to imagine that a killer AI is anything other than a fantasy from the Terminator movies. And you find out that out there in San Francisco, which is already a kooky town, there's a bunch of people working for some rich person's philanthropy. And all they do is they sit around all day and they think about the worst-case scenarios that could ever come out of computers. You would think...

It seems kind of weird and culty to me. You know, it's like, these are like the Goths of Silicon Valley, right? There's something almost religious about their belief that this, you know, AI god is about to come out of the machine. So these people kind of get dismissed. And so when the OpenAI...

Sam Altman firing goes down, there's a lot of discussion of like, well, here go the creepy AI kids again, the Goths of Silicon Valley and their religious belief in killer AI. They've all conspired to destroy what was a really great business. And that becomes, I would say, maybe the first big narrative to emerge in the aftermath of Sam's firing. We all know what happens next. On November 21st, five days after the shocking firing of Sam Altman,

He gets his job back. He is once again CEO of OpenAI. And while he won't get to keep his seat on the board, he seems to have defeated the Goths of Silicon Valley. There is a big party at OpenAI's headquarters. Someone pulls the fire alarm because there was a fog machine going. But by all accounts, everyone had a great time. They stay up very late. And what about the board, these people who tried and failed to stage a coup?

So three of the four members of the board are leaving it. That's Tasha McCauley, Helen Toner, and Ilya Sutskever. A fourth member, one of the people who had voted to fire Sam, Adam D'Angelo, who's the CEO of Quora, he is staying on the board. And then they have brought in Larry Summers, who is a well-known former U.S. Treasury Secretary.

After the break, we get to the question that sends us here. Who should actually be in charge of AI?

Search Engine is brought to you by PolicyGenius. It is very easy to put something important off because you do not have the time or patience to deal with it. And finding the perfect life insurance policy is a pretty good example. 40% of people with life insurance wish they'd gotten their policy at a younger age. PolicyGenius helps you get the life insurance you need fast so you can get on with your life. With PolicyGenius, you can find life insurance policies that start at just $292 per year for a million dollars of coverage.


That's policygenius.com.

Mint Mobile is here to rescue you and your squad with premium wireless plans starting at $15 a month. All plans come with high-speed data and unlimited talk and text delivered on the nation's largest 5G network. Use your own phone with any Mint Mobile plan and bring your phone number along with all your existing contacts. Ditch overpriced wireless with Mint Mobile's deal and get three months of premium wireless service for $15 a month.

To get this new customer offer and your new three-month premium wireless plan for just 15 bucks a month, go to mintmobile.com slash search. That's mintmobile.com slash search. Cut your wireless bill to 15 bucks a month at mintmobile.com slash search.

Something like once a week in America, some institution implodes. And it pretty much always goes the same way.

A confusing private conflict breaks out onto the internet. The combatants plead their versions of the story to the public. And we, reporters, gawkers, people online, render a quick, noisy verdict. The desire to participate in all this is human nature. I am doing it right now. You are doing it with me. Neither of us chose this system, but we're stuck with it. Institutions right now are fragile, the internet is powerful, and we're all addicted to being entertained.

In my wiser moments, though, what I try to remember is that whoever is actually at fault in any of these fights of the week, the truth is institutions are supposed to have conflict. People put together will disagree. A healthy institution is one capable of mediating those disagreements.

When we, the public, watch a private fight break out online, it's hard to ever really know for sure who was actually right or wrong. What we can know is that we are watching as the institution itself breaks. OpenAI was set up from the beginning to be an unusual kind of company with an unusual governance structure. As unusual as it was, I'm not convinced from the available evidence that the structure was the problem.

The faction of revolutionaries who took over OpenAI, who governed it for a little over a weekend, it just seems like they didn't know how to be in charge. They couldn't articulate what they thought was wrong. They couldn't articulate why their revolution would fix it. They never even bothered to try to win over the people in the building with their mission. They thought they saw someone acting like a king, and so they acted imperially themselves. In the aftermath, what I found myself wondering this week was this. This new version of OpenAI,

could it tolerate conflict? Could it have, productively, the fights you'd hope would take place somewhere as important as this, in the rooms we'll never see inside of? Casey Newton, who is better at spying into those rooms than you or me, he says he feels optimistic.

I think the most important thing about the new board is that Adam D'Angelo is on it. This is someone who voted to fire Sam Altman and who is still there. And he has a say on who else comes onto that board, who will have a say on who gets picked to investigate all of the circumstances. So to me, that is like, if you're somebody who is worried that like, oh no, OpenAI is just going to sort of go gas to the pedal. If you're worried that OpenAI is going to go...

Foot to the gas? Why can't I figure out this metaphor? Gas to the foot pedal. If you're worried OpenAI is going to go gas to the foot pedal, don't worry, because Adam D'Angelo is there. That's how I'm feeling about it anyway. Is that how you're feeling about it? Are you feeling like... Well...

I mean, look, let me take a step back. This might be too much for your podcast, PJ, but let me tell you something. I love when you take a step back. Okay, great. Take a step back. Okay, great. One of the big narratives that came out of this whole drama was there was the forces of corporate moneymaking and there were the forces of AI safety and the forces of AI safety kicked out Sam Altman and then the forces of corporate moneymaking stepped in to ensure that Sam Altman would be put back in his role to continue the corporate moneymaking. And...

It is true that the forces of capitalism intervened to restore the establishment. That part is true.

But from my own reporting, I truly believe that the core conflict was not really about AI safety in the sense that Sam Altman was behind the scenes saying like, we have to go accelerate all these projects while the board isn't looking. And that's why he got fired. Like, I do not think that is what happened. I think the board was actually fairly comfortable where things were from like a safety perspective. I think they were just worried about the lying.

that they say that he was doing. But they have not pointed to a single instance of it, perhaps because he's such a good liar

that you can never catch him, but you can sometimes smell the sulfurous smell of a lie that went undetected and passed by you. They do talk about him like a mischievous leprechaun, or like Rumpelstiltskin or something. And having interviewed Sam, that's not my impression of him, but maybe it's like a Keyser Söze thing where his greatest trick was convincing me that he didn't exist. Anyways, you were saying that, and this fits with my general worldview, which is that when institutions explode...

It's always described as...

you know, people represent one value versus representing another. And sometimes that's true, and often it's actually about either things that are more subtle or just sort of power. Yes. And you're saying that from your reporting, your sense is not that the board was saying, "Hey, you're careening into the apocalypse. We have to stop you." The board had some hard-to-define problems with his leadership style, and they pulled the big red lever that they're really only supposed to pull if he's inventing a Death Star.

But what you're saying is if you were worried about the AI Death Star, you don't necessarily have to feel like

the AI Death Star is coming. That's right. That's right. There's no reason to believe that now that the old board is out of the way, OpenAI can just go absolutely nuts. Like, I don't think that's what is going to happen. And also, by the way, there's going to be way more scrutiny on OpenAI as it releases next generation models and new features. And so I think there's a way in which this was very bad for the AI safety community because they were made to look like a bunch of goths who are bad at governance.

But I think it was good in the sense that now everyone is talking about AI safety. Like regulators are very interested in AI safety and regulations are being written in Europe about AI safety. So, you know, I actually don't think we have to panic just yet. Got it. Okay. And then I guess like I,

I began this episode by saying, like, one way that you can think about this as it being, like, a bunch of silly corporate drama. And, like, that is true. And at the same time... I hate that, though. Can I just say? I've been reading these stories. It's like, oh, well, looks like the Silicon Valley tech bros have gotten themselves embroiled in a little nutter drama. Tee hee. And, like, the only people who can feel that way are the people who truly do not care about the future. Like, I'm sorry. Like, if you want to just...

convince yourself that, like, there's nothing at stake here, then, like, I truly wish my brain were as smooth as yours. Because it actually does matter, like, how people will make money in the future. It matters if a machine will be able to do everyone's job. So count me on the side of those who are interested. And you do not think that this is just, like, a fun little Netflix series for us all to watch before we go to bed?

I'm with you, and I appreciate you ranting and raving because I feel the exact same way. And I'm also just like, there's this really annoying to me thing in technology, and it's not just civilians. It's like also sometimes journalists who cover it where they're like, I know what's going on. It's the thing that happened last time. So it's like people who are like, AI is just NFTs. I'm like, no, those are just...

Pieces of technology that are described by letters. They're very different. Like the future and the present are informed by the past, but it's not just a movie that you can say you saw the end of. Some journalism is just people who don't care posturing for other people who don't care. And I think that is like, we've seen so much of that during the OpenAI story. But we're right and we're smart. Good for us. We're killing it over here.

So if we agree, and we do, that whether or not there were shenanigans this week, the shenanigans were...

inspired by a real question, and that real question matters. AI is a likely transformative technology, and the question of how it should be governed is really tricky. We're focusing on OpenAI because they are the leader in the space. But if you zoom out from OpenAI, there's a ton of other companies developing artificial intelligence. There's a ton of other countries where this is happening. You know, it's being developed all over the world. And

And I don't know the right answer if this technology has the potential to be as powerful as the people developing it fear. I don't know what you do around that. And I'm curious what you think. Like, if you were king of the world, but you were leaving next year, and you had to set up a regime for artificial intelligence that everyone would actually follow, what would you do?

What do you do? Well, one, I do think this is a place where we want the government to play a role, right? Like if a technology is created that does have the effect of causing massive job losses and introduces novel new risks into like, you know,

bioweapons and cybersecurity and all sorts of other things, I think you do want the government paying attention to that. In fact, I think that there's a good case to be made that the government should be funding its own large language model. It should be doing its own fundamental research into how these models work and maybe how to build some of its own safely, because I'm not sure that the for-profit model is the one that is going to deliver us to the best result here.

In terms of what would government oversight look like, some folks I talk to talk about it just in terms of capabilities. Like, we should identify capabilities that's like, once a model is able to do this, then we would introduce some brakes on how it is distributed, how it is released into the world. Maybe there are some safety tests we make you go through. And in a world where the government can and does regulate this,

Which government? Is it the U.S.? Is it the U.N.? Like, how do you do it? It generally winds up being a mix of Western democracies that lay the blueprint. You know, the U.S. doesn't typically regulate technology very much, but Europe does. And so Europe essentially writes the rules for the Internet that the rest of us live on. And it basically works out OK because their values are basically aligned with American values.

And so, like, yes, we have to click on a little cookie pop-up on every website that we visit because Europe is making us, and we hate it, but it's also fine, you know? And so, like, AI is probably going to be the same thing where Europe is going to say, well, AI should basically be like this, and the U.S. will have hearings where they sort of gesture in similar directions and then never pass a law, and, like, that will be the medium-term future of AI.

Where I think it will change is if there is some sort of incident where, like, thousands of people die and AI plays a direct role. Like, that is when the U.S. will finally get around to doing something. Maybe. Maybe. It's weird, um...

It's weird to feel both scared and excited. Like, I'm not used to having two feelings at the same time. There's this feeling that I just call AI vertigo, which, I mean, and this is the sort of staring into the abyss feeling where you can imagine all of the good that could come with, you know, having a universal translator and an essentially omniscient assistant that is just living in every device that you have. Like, that's incredibly powerful and good.

But yes, it will also generate both a huge number of new harms and at a huge volume. And so your imagination can just run wild. And I think it's important to let your imagination run wild a little bit. And it is also possible to go too far in that direction. And sometimes you just need to chill out and go play Marvel Snap for a little bit. Casey, that's exactly what I'm going to do. Okay, that's good. Thank you. Thank you.

Casey Newton. You should subscribe to his excellent newsletter, Platformer, and to his podcast, Hard Fork, which he co-hosts with Kevin Roose. They've had some wonderful episodes on this subject. You should go check them out.

Also, just in general, this blow-up at OpenAI has been an occasion for some wonderful tech reporting. People have been all over this story explaining a very complicated situation very quickly. I'm going to put links to some of the pieces that I enjoyed and drew from for this story. You can find them, as always, at the newsletter for this show. There's a link to that newsletter in the show notes.


Search Engine is a presentation of Odyssey and Jigsaw Productions. It was created by me, PJ Vogt, and Sruthi Pinnamaneni, and is produced by Garrett Graham and Noah John. Theme, original composition, and mixing by Armin Bazarian. Our executive producers are Jenna Weiss-Berman and Leah Reese-Dennis. Thanks to the team at Jigsaw, Alex Gibney, Rich Pirello, and John Schmidt.

And to the team at Odyssey, J.D. Crowley, Rob Morandi, Craig Cox, Eric Donnelly, Matt Casey, Maura Curran, Josephina Francis, Kurt Courtney, and Hilary Sheff. Our agent is Oren Rosenbaum at UTA. Our social media is by the team at Public Opinion NYC. Follow and listen to Search Engine with PJ Vogt now for free on the Odyssey app or wherever you get your podcasts.

Also, if you would like to become a paid subscriber, you can head to pjvogt.com. There's a link in the show notes. Or another way you can help the show is to go to Apple Podcasts and rate and review us. Highly would be nice. All right, that's it for this week. Thank you for listening. We'll see you next week.