cover of episode Google's Next Top Model + Will the Cybertruck Crash? + This Week in A.I.

Google's Next Top Model + Will the Cybertruck Crash? + This Week in A.I.

Publish Date: 2023/12/8
logo of podcast Hard Fork

Hard Fork

Chapters

Shownotes Transcript

This podcast is supported by Working Smarter, a new podcast from Dropbox about AI and modern work. Learn how AI-powered tools can help you collaborate, find focus, and get stuff done, so you have more time for the work that matters most.

In conversation with founders, researchers, and engineers, Working Smarter features practical discussions about what AI can do to help you work smarter too. From Dropbox and Cosmic Standard, listen to Working Smarter on Apple Podcasts, Spotify, or wherever you get your podcasts. Or visit workingsmarter.ai.

Look, Kevin, in Silicon Valley, we have some really incredible names just of companies. That's true. And I think sometime in the mid-2010s, we actually ran out of names. You know what I mean? Right. That's when people just started removing vowels from stuff. Exactly. And everything was .io and it was .ly and it just sort of became kind of unhinged.

And to me, this all reached its apotheosis. Oh, great word. Thank you. And apotheosis would be a great name for a company, by the way. Apotheosis, S-Y-S. Yes, yes. But a couple years ago, I saw a headline on TechMean, but I'll never forget, and it just said, Flink has acquired Kaju. And I thought, excuse me?

And these actually weren't even American companies. These were European companies. But Flink had acquired Kaju. That sentence would actually give a caveman an aneurysm. It would kill a small peasant back in the days of Henry VIII. Well, you know, once Flink acquired Kaju, I thought literally anything could happen in this world. Yeah, all bets are off. Yeah. And so this week I saw that hot on the trail of Flink acquiring Kaju, Isaiah has acquired Huzu. You sure about that? Yeah.

And Gesundheit. I'm Kevin Roos, a tech columnist for The New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week on the show, Google's next generation AI model, Gemini, is here. We'll tell you how it stacks up. Then, there's a Cybertruck for sale. And Kevin thinks it looks cool. I'm sorry, it does. And finally, it's time for This Week in AI. Should we podcast? Should we set the timer again? Boom. Boom. Boom.

Casey, our latest addition to the podcast studio is a countdown clock, which I bought off Amazon.com. And the express purpose of this clock is to keep us from running our mouths for too long and torturing our producers with hours of tape that they then have to cut. Okay, that sounds horrible. Insert 30-minute digression.

Let's go. Okay. We're rolling. We're timing. We're rolling. So, Casey, this is a big week in the AI story in Silicon Valley because Google has just released its first version of Gemini, its long-awaited language model, and basically their attempt to catch up to...

OpenAI and ChatGPT and GPT-4 and all that. It's America's next top model, Kevin, and it's here. And I was particularly excited about this because I am a Gemini. That's my astrological sign. You know, I'm a Gemini as well. No. Really? This was really the model that was made for us to use. Wow. We're twins, just like Gemini. And we're Two-Faced, just like Gemini. So Gemini is Google's largest and most capable model yet.

And according to Google, it outperforms GPT-4 on a bunch of different benchmarks and tests. We're going to talk about all of that. But I think we should just set the scene a little bit because within the AI world, there has been this kind of waiting game going on. You know, ChatGPT came out roughly a year ago. And basically from the day that it arrived,

Google has been playing catch up. And the presumption on the part of many people, including us, was that Google would, you know, put a bunch of time and energy and money and computing power into training something even bigger and better than what OpenAI was building and basically try to sort of throw their muscle into the AI race in a really significant way. And with Gemini, this is what they appear to have done. Yeah. Finally, we have a terrifying demonstration of Google's power. Yeah.

Well, so we'll talk about whether it's terrifying or not, but let's just talk about what it is. So you and I both went to a little briefing this week about Gemini before it came out. And I understand you actually got to do some interviews with Google CEO and previous Hard Fork guest, Sunir Pichai, as well as Demis Hassabis, who is the leader of Google DeepMind.

That's right. And of course, I said, are you guys sure you don't want Kevin in there with me when I do this interview? And they said, trust us, we're sure. So I don't know what happened. Yeah. Anyways, I did get to interview them, and we had a really interesting conversation about kind of how they see the road ahead with this stuff. They are clearly very excited about...

Yeah.

Yeah. So let's just talk about what Gemini is, at least what we know about it so far. So Gemini is actually three models in one. It's America's next top models. So there are three sizes. There is the most capable version, which is called Gemini Ultra. This is the one that they say can beat GPT-4 and sort of the industry state of the art on a bunch of different benchmarks. But Google...

is not releasing gemini ultra just yet they say they're still doing some safety testing on that and that it will be released early next year by the way if ever again an editor asked me where my story is i'm going to say it's not ready yet i'm still doing some safety testing very good excuse so they have not released a gemini ultra but they are releasing gemini pro and gemini nano these are these sort of medium and small sizes

Gemini Nano, you can actually put onto a phone, and Google is putting that inside its Pixel phones. Gemini Pro is sort of their equivalent of a GPT 3.5, and that is being released inside of Bard,

starting this week. That's right. And now if you are listening and you're thinking, Kevin just said so many different brand names and I'm having a meltdown, I just want to say I see you and I feel you because the branding at Google has always been extremely chaotic. And the fact that we're living in a world where there is something called Google Assistant with Bard powered by Gemini Pro does make me want to lie down. So I don't know who over there is coming up with the names for these things, but I just want to say stop. And I want to say go back to square one. Yes.

So extremely chaotic naming, but what people actually care about is what can this thing do? So let's talk about what it can do. Let's talk about it. So one of the big things Google is advertising with Gemini is that it is designed to be what they call natively multimodal. Multimodal is, of course, AI models that can work in text or images or audio or video. And basically the way that multimodal models have been built until now is by training a

all of these different components like text or video separately, and then kind of bolting them together into a single user interface. But Google is saying, well, Gemini was not sort of bolted together like that. Instead, it was trained on all this data at the same time. And as a result, they claim it performs better on different tasks that might include like

having some text alongside an image or using it to analyze frames of a video. Yeah, so I was writing about this model this week, and my colleague and editor, Zoe Schiffer, read my piece and was like, do you have to say multimodal so much? She's like, every time you said the word multimodality, I just wanted to stop reading. And I was very sympathetic, but I think...

It is maybe one of the most important things about this moment. And I do think, by the way, in the future, we are not even going to comment on this because this is just the way that these things are going to be built from here on out. But it is a very big deal if you can take data of all different kinds and integrate

analyze it with a single tool and then translate the results in and out of different mediums, right? From text to audio to video to images. So that's like a really big deal on the path to wherever we're going. And it is the reason why this jargon word appears in so much of what they're saying. Totally. So one thing that all the AI companies do, you release a new model and you have to sort of put it through these big tests, these sort of, these what they call benchmarks. Yeah. Do you remember

remember like high school this is how like like how high school in Europe works you know where you sort of you learn and you learn and you learn and then you take a bunch of tests and then if you succeed then you get to have a future and if not you have to become a scullery maid or something what

My knowledge of Europe ends around like the 1860s when I finished AP European history, but that's like my understanding. Okay. So they give these tests to Gemini and they give them to everyone.

pre-Zodiac sign, but... No, I'm sorry. That was a stupid joke. I'm sorry. Go ahead. No, you should see how Capricorn performs on this test. So Gemini Ultra, which, again, is their top-of-the-line model, which is not yet publicly available. They gave this one a bunch of tests. The one that sort of caught everyone's attention was...

the MMLU test, which stands for Massive Multitask Language Understanding. And this is sort of the kind of SATs for AI models. It's sort of the standard test that every model is put through. It covers a bunch of different tasks, including sort of math, history, computer science, law. It's kind of just like a basic test of like how capable is this model.

And on this test, the MMLU, Google claims that Gemini Ultra got a score of 90%. Now, that is better than GPT-4, which was the highest performing model we know about so far, which had scored an 86.4%.

And according to Google, this is a really important result because this is the first time that a large language model has outperformed human experts in the field on the MMLU. Researchers who developed this test estimate that experts in these subjects will get the

on average, about an 89.8%. Yeah, the rate of progress here is really striking, and it's not the only area of testing that they did that I think the rate of progress was really the thing to pay attention to. So there's also the MMMU. Which is the Marvel Cinematic Universe, is that right? Yeah.

So this is the massive multidiscipline, multimodal understanding and reasoning benchmark. Say that five times fast. And this is a test that evaluates AI models for college-level subject knowledge and deliberate reasoning. And on this test, Gemini Ultra scored a 59.4%. This is, I guess, a harder test. Sounds like it.

And GPT-4 by comparison scored a 56.8%. So it's better than GPT-4 on at least these two tests.

Now, there's some question on social media today about whether this is a true apples to apples comparison. Some people are saying like GPT-4 may be still better than Gemini, depending on sort of how you give this test. But it doesn't really matter. What matters is that Google has made something that it says can basically perform as well or better than GPT-4. Yeah, I think the ultimate question is just like, is the output better than...

on Google's products than it is on OpenAI's. That's all that really matters. Yeah. But again, this is the version of the model that we do not have access to yet. It is not out yet. So it's hard to evaluate it yet. Yeah. And, you know, obviously we're looking forward to trying it. But in the meantime, they're giving us

pro. Yes. I just got access to Gemini pro in Bard, uh, just a few hours ago. So I haven't had a chance to really like put it through its paces yet. You haven't had a chance to develop a romantic relationship with it. I, although I did have a very funny first interaction with it, I'll tell you what this is. Um, so I, I just said, hello there. And it said general Kenobi,

Image of Obi-Wan Kenobi saying hello there. Wait, really? Yes. This is my first interaction with the new bard. That's amazing. So it immediately turned into Obi-Wan Kenobi from Star Wars for reasons I do not immediately understand. Wait, can I tell you what my first interaction was? I was trying to figure out if I had access to it, okay? And so I said, are you powered by...

Gemini, right? And it said, no, Gemini is a cryptocurrency exchange, which is true. There is a cryptocurrency exchange called Gemini. It's run by the Winklevoss twins. Yes, exactly. But it's always funny to me when the models hallucinate about what they are. You know, it's like you don't even understand what you are. Yeah. Yeah. But in fairness, I also don't understand myself very well either. Well, that's why we started this podcast. We're going to get to the bottom of it.

So, okay. I tried a couple other sort of versions of things. So one of the things that I had it try to do was help me prep for this podcast. I said, you know, create a... You said, I want to prepare for a podcast for the first time. What do I do? And it said, we can't help you there. Just wing it. I actually started using this tip that I've found. Have you seen the tipping hack? Yeah.

for large language models? Are they starting to ask for tips now when they give you responses? Because I swear, everywhere you go these days, 20%, 25%. No, this is one of my favorite sort of jailbreaks or hacks that people have found with large language models. This sort of made news on social media within the last week or two, where someone basically claimed that if you offer to tip a language model, if it gives you a better answer, it will actually give you a better answer. That's...

That's crazy. These things are demented. These are crazy things. So you can emotionally blackmail them or manipulate them, or you can offer to tip them. So I said, I'm recording a podcast about the Tesla Cybertruck, and I need a prep document to guide the conversation. Can you compile one? It's very important that this not be boring. I'll give you a $100 tip if you give me things I actually end up using. You're lying to the robot. Well, you know, maybe I will. You don't know. Maybe you will.

So it did make a prep document. Unfortunately, most of the information in it was wrong. It hallucinated some early tester reactions, including a Motor Trend quote that said, it's like driving the future, and a TechCrunch quote that said, it's not just a truck, it's a statement.

So I want to talk about what I use Gemini for. Oh, yeah. So what have you been using it for so far? Well, so, you know, and again, we've had access to this for like maybe an hour as we recorded this. But the first thing I did was I took the story that I wrote about Gemini and then I asked Gemini how it would improve it.

and it actually gave me some compliments on my work, which is nice. And then it highlighted four different ways that it would improve the story and suggested some additional material I could include. And I would say it was like, you know, decent. Then I took the same query, identical, and I put it into ChatGPT, and where Gemini Pro had given me four ways that I could improve my story, ChatGPT suggested 10.

And I think no one would do all 10 things that ChatGPT suggested. But to me, this is where I feel the difference between what Google is calling the Pro and the Ultra. Pro is like pretty good. But like in this case, the name Pro is misleading because I am a professional and I would not use their thing. I would use the thing with the even worse name, which is ChatGPT. Yeah.

Yes. So that's what we've tried Gemini for. But Google does have a bunch of demos of Gemini being used very successfully for some things. One thing I thought was interesting, they played this video for us during the kind of press conference in advance of this announcement. And, you know, it showed a bunch of different ways that you could use Gemini, people coming up with ideas for games.

They showed it some images of people doing like the backwards dodging bullets thing from the Matrix and said, what movie are these people acting out? Gemini correctly identified it as the Matrix. Now that's pretty crazy. That is crazy. Yeah.

I thought that was impressive. But what I thought was more impressive was a demo that they showed. They were trying to sort of do some genetics research. And this was a field that they explained where lots of papers are published every year. It's very hard to sort of like keep track of the latest research in this area of genetics.

And so they basically told Gemini to go out, read like 200,000 different studies, extract the key data, and put it into a graph. And it took this big group of 200,000 papers. It sort of winnowed them down to about 250 that were the most relevant areas.

And then it extracted the key data from that smaller set of papers and generated the code to plot that data on a graph. Now, whether it did it correctly, I don't have the expertise to evaluate it, but it was very impressive sounding. And I imagine that if you're a researcher whose job involves going out and looking at massive numbers of research papers, that was impressive.

a very exciting result for you. That graph, by the way, how to use genetics to create a super soldier that will enslave all of humanity. So we want to keep an eye on where they're going with this. So one of the interesting things about Gemini Ultra, this model that they have not released yet, but that they've now teased, is that it's going to be released early next year in something called BARD Advanced. Now, they did not...

which raises the question, will you be using Bard advanced powered by Gemini ultra, or will you be using Google assistant powered by Bard powered by Gemini pro? Did I get that right? Sitting ovation, sitting ovation. Very good. Very good. Literally you and one marketer at Google are the only two people who've ever successfully completed that sentence. Um,

So they have not said what BART Advanced is, but presumably this is going to be some type of a subscription product that will be sort of comparable to ChatGPT's premium tier, which is $20 a month. Yeah, that's right. And I did try to get Sundar and Demis to tell me if they were in charge for it, and they wouldn't do it. But I was kind of like, come on, you guys. And then I was like, I'll take it for free if you give it to me. And they kind of laughed and we moved on. Okay, so that's...

That's what Gemini is and how it may be different or better than what's out there now from other companies. There are a couple caveats to this rollout. One is that Gemini Pro is only in English and it's only available in certain countries starting this week.

Another caveat is that they have not yet rolled out some of the multimodal features. So for now, if you go into Bard, you are getting sort of a stripped down, fine-tuned version of Gemini Pro running under the hood, but you are not yet getting the full thing, which will come later.

Presumably next year. Yeah. What did you learn by talking with Sundar and Demis about Gemini? Yeah, so a couple of things. One thing I wanted to know is, okay, so this is a new frontier model. Does it have any novel capabilities, right? Is this just something that is very comparable to GPT-4 or by the nature of its novel architecture, is it going to get to do some new stuff?

And Demis Hassab has told me that, yes, he does think that it will be able to do some new stuff. This is one of the reasons why it is still in this safety testing. Of course, you know, wouldn't tell me what these new capabilities are, but it's something to watch for because, you know, there could be some exciting advancements and it could also be some new things to be afraid of.

So that was kind of the first thing. The second thing I wanted to know was, are you going to use this technology to build agents? We've talked about this on the show. An agent in the AI context is something that can sort of plan and execute for you. Like the example I always have in my mind is like, could you just tell it to make a reservation for you? Then the AI maybe goes on OpenTable or Resi and just books you a table somewhere.

And I was sort of expecting them to be coy about this. And instead, Demis was like, oh, yes, like this is absolutely on our minds. Like we have been building like various kinds of AI agents for a long time now. This is 100% where we want to go. Again, this could lead to some really interesting advancements. But when you talk to AI safety people, agents are one of the things that they're most afraid of.

Yeah, so let's talk about safety for a second. What is Google saying about how safe Gemini is compared to other models or some of the things that they've done to prevent it from sort of going off the rails? They're saying everything that you would expect. The most capable model is still in testing. I think just the fact that they are coming out several months behind GPT-4 just speaks to the seriousness with which they are approaching this subject.

I think particularly if this thing does turn out to have new capabilities, that's something where we want to be very, very cautious. But my experience this year, and I think you've had the same one, Kevin, is that these systems have just not actually been that scary. Now, the implications can be scary if, for example, you worry about the automation of labor or if you're worried about how this stuff is going to

the internet as we know it. But in terms of like, can you use this to build a novel bioweapon? Can you use this to launch a sophisticated cyber attack? Uh, the answer pretty much seems to be no. So for at least for me, as I'm looking at this stuff, like that is actually not my top concern. If you try to ask any of Google's products, uh, remotely spicy question, you get shut down pretty much immediately. Like has that been your experience too? Well, I have not tried to ask, uh, Gemini any spicy questions yet. Have you? Um,

I know you were in there. No, I... I know you were. I don't even try. Like, I mean, I should, like, just as part of my due diligence. But, like, I honestly don't even try because these things, like, shut you down with, like, the faintest whisper of, you know, impropriety. Right. So they're doing some more safety testing, presumably to make sure that the most capable version of this can't do any of these really scary things. But...

But what they did this week is sort of interesting to me, where they sort of told us about the capabilities of this new model and the sort of most powerful version of that model, but they're not actually releasing it or making it publicly available yet. What do you make of that? Do you think they were just sort of trying to get out ahead of the holidays and maybe they felt like they needed to announce something, but

this thing isn't quite ready for prime time yet, or what's the story there? Yeah, I mean, that's my guess, is that they don't want 2023 to end without feeling like they made a big statement in AI. And they made a lot of promises at Google I.O. and have started to keep them. But I think if they had had to wait all the way into early next year, it would sort of feed the narrative that Google is behind here. At least now, heading into the holidays,

their employees and investors and journalists can all say like, okay, well, at least we know that some of this is available and we know when the rest is coming. I don't know. This just feels like another product release. And it's just remarkable how quickly we have become aware

i don't want to say desensitized to it but just we've we've stopped sort of gaping in awe and slight terror at these incredibly powerful ai models i think if you went back even two or three years and told ai researchers that google will have a model that gets a 90 on the mmlu that is better than the sort of benchmark uh threshold for human experts

they would have said, well, that's AGI. We have arrived at a point that people have been warning about for years. And then this release comes out today, and it's just sort of like one more thing for people in the tech industry to get excited about. Yeah, I mean, I do think it's a really big deal. I think that when Ultra is actually available to be tested, that will be the moment where we will sort of have that experience of awe or vertigo again.

But, you know, if you're looking for things to blow your mind a little bit, one of the other things that Google announced this week through DeepMind was this product called Alpha Code 2.

And AlphaCode 1 came out in 2022, and it was an AI system that was designed to solve AI coding competitions. So people who are even nerdier than us, instead of just playing video games, they actually go and do coding competitions is what I've been led to understand. Terrifying.

And, you know, I, I, let's just say, I, I don't imagine that I would ever get one answer, right? Like that's like sort of my feeling about how I would fare in a coding competition. And in 2022, uh, the deep mind people are very excited because alpha code was able to perform better than 46% of human participants in coding challenges.

And then this week, Google announced AlphaCode 2 and said that it outperforms 85% of human competitors. Now, there are differences between a coding challenge and day-to-day software engineering work. Coding challenges are very self-contained. Software engineering can sometimes require sort of more breadth of knowledge or context that an AI system wouldn't have.

But, again, if you just want to experience awe, look at the rate of progress this system was able to go from beating around half of all humans to beating 85%, close to all of them, right? That's wild. That makes me feel awe. It does make me feel awe, and it also makes me feel like our, like...

adaptation is just happening very quickly where we're like not impressed. As Shania Twain once said, that don't impress me much. Right. You can do meal prep for a picky eater. That don't impress me much. This is actually like known as the Shania Twain benchmark test. It's the Shania Twain benchmark. Oh, you can solve a coding challenge. That don't impress me much.

If we could get Shania to wait on the show and just show her AI things and she had to say it impressed me much or it don't impress me much, I could not imagine a better segment for this podcast. I would die happy. It truly is. Like, who needs all these fancy evaluations and coding challenges? Just get Shania on the horn.

Shania, if you're listening, we want to talk to you about AI. We have some models we'd like to show you. Ready, boys? When we come back, the Cybertruck is here. We're going to tell you how to protect your family from it.

This podcast is supported by Working Smarter, a new podcast from Dropbox about AI and modern work. Learn how AI-powered tools can help you collaborate, find focus, and get stuff done, so you have more time for the work that matters most.

I'm Julian Barnes. I'm an intelligence reporter at The New York Times. I try to find out what the U.S. government is keeping secret.

Governments keep secrets for all kinds of reasons. They might be embarrassed by the information. They might think the public can't understand it. But we at The New York Times think that democracy works best when the public is informed.

It takes a lot of time to find people willing to talk about those secrets. Many people with information have a certain agenda or have a certain angle, and that's why it requires talking to a lot of people to make sure that we're not misled and that we give a complete story to our readers. If The New York Times was not reporting these stories, some of them might never come to light. If you want to support this kind of work, you can do that by subscribing to The New York Times.

All right, let's talk about the Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybertruck. Cybert. Cybertruck. Cybert. Cybert. Cybert. Cybert. Cybert. Cybert. Cybert. Cybert. Cybert. Cybert. Cybert. Cybert. Cybert. Cybert. Cybert. Cybert. Cy

All right. Last week, Tesla, the car company run by social media mogul Elon Musk, started delivering the first models of its new and long-awaited Cybertruck. That's right, Kevin. And suffice to say, as this nation's number one truck review podcast, this had our full attention. So you may be asking, why are the hard fork guys talking about cars? This is not a show about cars. It's not car talk. Yeah.

Yeah, so today we're going to be reviewing the Mazda X4 8. No, so I do want to spend time in the next year or so just really getting up to speed on like what is... A car. A car. A car.

No, like, so I've never been a person who cares about cars. I've always been intimidated by, like, people who know a lot about cars. But I am also interested in the way that the electric car revolution is kind of merging with the sort of self-driving technology and these advances that companies like Tesla and Rivian are making. And it's just become a lot more interesting in my brain over the past year.

Yeah, this is another major technology transition that is happening. Some states, I would say led by California, have set these very stringent emissions standards, and there will become a point in the next decade or so where all new cars in California have to be either hybrid or electric. Yeah, so let's talk about the Cybertruck because this has been a very polarizing piece of technology. It was announced back

in 2019, I'm sure you remember this announcement where Elon Musk comes out on stage and shows off this concept vehicle that looks completely insane with these kind of like sharp edged stainless steel panels. It sort of looks like a polygon rendering of a car. You know, people have made a lot of comments about the looks of this car. I saw one person say it looked like the first car that was designed by Reddit.

Someone else said it looks like a fridge that wants to kill you. I think it looks kind of cool, and I worry that saying that makes me sound like a Tesla fanboy, which I am not, but I think we should be able to admit when something looks pretty cool. What do you think looks cool about it? Well, I think it looks like what you would have assumed a car from the future would look like in 1982. No, I totally disagree about that. It looks like a sort of panic room that you can drive. Yeah.

Like, what do you think is about to happen to you in this thing? You know, they've made so much about like how bulletproof it is. They keep addressing problems that most people who are not like taking part in a cross-country bank robbing spree really have to worry about.

But look, for all of my skepticism, am I right that they actually did get a lot of pre-orders for this thing? They got a huge number of pre-orders. So Elon Musk said in an earnings call in October that over a million people had made reservations for Cybertrucks. There's another crowdsourced reservation tracker that's estimated 2 million Cybertruck reservations. And just for a sense of scale,

Ford's F-Series shipped about 650,000 trucks all last year. So if 2 million people actually are going to buy the Cybertruck, it would make it one of, if not the best-selling truck in the world. Now, caveat, not all these people who reserve Cybertrucks are necessarily going to buy them. You do have to pay a $250 deposit to sort of put money down and get in line to buy one of these, but these deposits are refundable. So

Who knows how many of these people are going to follow through. But one statistic I saw in an article in Wired is that even if just 15% of the people who pre-ordered a Cybertruck actually followed through and bought one, it would equal the annual U.S. truck sales of Toyota. So this is a big number in the automotive industry. And I think a reason that a lot of people are hesitant to count out the Cybertruck, despite how ridiculous it may look...

I don't know. I assume that you are not one of the people who put down a reservation for a Cybertruck. I feel like we need to have a moment where you just sort of explain to me what the Cybertruck is. Can you give me some specs on this thing, some pricing information? Because I, you know, I don't know if you know this about Kevin, but I've never bought a truck. So I don't really even know. I don't even have a frame of reference for understanding. What I've heard, though, is that it's actually very expensive. So it is available in bulk.

three different models. There is a sort of low-end rear-wheel drive model that starts at $61,000 in the basic configuration. There's an all-wheel drive model that starts at $80,000. And then you can get the sort of top-of-the-line model, which is being called the Cyber Beast, which has three motors.

and starts at around $100,000. Now, see, Google should have named DeepMind Ultra Cyber Beast. Yeah, that's a better name. That would have been a good name. Yeah, that's true. So they did start delivering Cybertrucks to initial customers last week, and they did a big sort of demo reveal. They showed some crash testing. They showed a video, as you said, of people shooting bullets at the doors of the Cybertruck. It appears to be bulletproof.

And they showed how it compares to a bunch of other trucks in a pull test where you basically attach a very heavy sled to the back of a truck and you try to pull it as far as you can. And in this test, at least the version that Tesla showed off, the Cybertruck beat all of the leading pickup trucks, including an F-350. So it appears to be a truck with a lot of towing capacity, and it's bulletproof if you do need to survive a shootout. I mean...

To me, here's the question, Kevin. If this truck was produced by anyone other than Elon Musk and Tesla, would we be giving it the time of day? No. I don't think so. Well, so here, let me say a few things about this. Okay. So one is...

I think it looks cool, and I'm sorry about that. And I don't have any justification on a moral or ethical level for thinking that it looks cool. I know that you are a sort of... Yeah, it's fine to just say that you're having a midlife crisis, and so you're starting to think that the Cybertruck looks cool. That's fine. You can admit that. Well, here's what I'll say about it. It is different, right? And I think...

- Wow, I've never seen someone lower the bar so much during a conversation. - No, but you know what I mean? Like you just go out on the road and you look at all these cars and like every car now is like a compact SUV. Every car looks exactly the same to me. It's like, oh, you have a RAV4, cool.

But like, this is a car, you would not mistake it for any other car. It is a car that would not survive the design process at basically any of the big car companies. It is only something that a truly demented individual such as Elon Musk could make and put into production. And

You know, I like an opinionated car design. Yeah. Sue me. No, that's fine. I think when like the sort of the many years from now, when the final biography of Elon Musk is written, like Cybertruck will be a chapter about like a sign that we were approaching the end game. Yes. You know, of like, here is somebody who is losing his touch.

Yeah, it is clearly not something that was designed by committee. So I think the question that a lot of people are asking about the Cybertruck is like, who is the market for this, right? Is it pickup truck owners who are looking to maybe get something electric or upgrade to a slightly nicer pickup truck? Is it

Elon Musk fans who are just going to buy whatever the latest Tesla is? Is it wealthy tech people who want to, you know, own something that looks like it drove out of Blade Runner? Like, who do you think the target market for this is? I would say fugitives. I would say carjackers. What do you think? Yes.

People who subscribe to X Premium, I would say, are the target audience for this. But no, I think there will be a lot of people who are interested in this. I also am very curious about whether this will become sort of a signaling vehicle that will say something about you. How can it not? Like, this is not a neutral car. This is not a car that you're supposed to just see and forget about. You're supposed to, like...

Ponder it. Totally. And I'm sure we will start seeing these very soon on the roads of San Francisco. Although we did try to find one this week and we could not. We very much wanted to record this episode inside a Cybertruck, but we couldn't find one. Yeah. Apparently it does have very good noise insulation inside the cab of a Cybertruck. So maybe next year we'll record the podcast from there. Better than the inside of an airport? You know, maybe. Less likely to get accosted by flight attendants.

So, Casey, we also can't really talk about the Cybertruck without talking about Elon Musk and the kind of insane couple of weeks that he's been having. So last week, of course, he appeared on stage at the Dealbook conference in New York and gave this totally unhinged interview to my colleague Andrew Ross Sorkin, in which he told advertisers who are staying away from X to, quote, go fuck themselves.

and also said a number of inflammatory things about his critics and his state of mind. And it was just sort of like a glimpse into his mind, and I would say it was not altogether reassuring.

It was not. I, of course, enjoyed this, I would say, very much because I think there is still a contingent of folks who want to believe that the Elon Musk of 2023 is the Elon Musk of 2013 and that – yeah, he said a couple of kooky things here and there. But at his core, he's a billionaire genius, Tony Stark, savior of humanity.

And over and over again, he keeps showing up in public to be like, no, I'm actually this guy. And we got another one of those moments and another group of people woke up and they were like, oh, wow, okay. I guess he is just really going to be like this now forever. Yeah. Yeah.

I mean, I do think that there is some angst among the Tesla owners I know, most of whom do not support Elon Musk's politics or his views on content moderation. I've heard from a number of people over the past few months in my life who say some version of, you know, I want to get a Tesla for reasons X, Y, or Z. You know, they have the most

chargers. They have the best technology. I really like how it looks. It's green and I care about the environment and it's the one that sort of fits my family's needs, but I don't want to give Elon Musk my business. I don't want to be driving around in something that makes it look like I support him. So do you think that's actually going to be a meaningful barrier? Do you think there are people who will stay away from the cyber truck, uh,

even if it is objectively like a good truck just because they hate Elon Musk? You know, it is hard to say because as best as I can tell, Tesla has not really suffered very much yet because of all of Elon's antics. Not only has it not suffered, but it is by some accounts the best-selling car in the world. Yeah. Yeah.

And certainly the best-selling electric car in the world. Sure. At the same time, I just hear anecdotally from folks all the time now that they would never buy a Tesla. There's actually a great profile in The Times this week of Michael Stipe, the great singer from REM. And there's an anecdote in the story about how a tree falls on his Tesla. And he's so excited because he didn't want to drive an Elon Musk car anymore. And now he finally had an excuse. So look, is it

Possible that this is just some very thin layer of coastal elites who are turning up their nose at Tesla while the rest of America and much of the world continues to love to drive them? Possible. But the thing that I always just keep in the back of my mind is there are a lot more electric car companies now than they used to be. The state emission standards are going to require more.

all new vehicles to be electric not too far into the future. And that's just going to create a lot of opportunity for folks who want to drive an electric car, who don't have to put up with the politics or the perception issues that might come from driving a Tesla. So Tesla's having its moment in the sun now, and maybe the Cybertruck will extend their lead into the future. Or maybe a few years from now, we look back and we think, oh yeah, that's when the wheels started to come off the wagon. Yeah, or the truck, as it were. As it were.

I did see one estimate that Tesla is losing tens of thousands of dollars every time they sell a Cybertruck because they are essentially hand-building these now. They have not made it into kind of mass production. And obviously, it takes some time to kind of ramp up production in the numbers that they need it to be. So if you are an early Cybertruck buyer, you may actually be costing Elon Musk money. So that may be one reason to get one. This is the first thing you've said that makes me want to buy a Cybertruck. Yeah.

Can I ask a question? If this were made by some other company, if this were made by Ford or GM or Chrysler, would you buy one? Would you be interested? No. Like...

I don't have a car. I got access to Waymo this week. And to me, this is what is exciting is like not owning a car is being able to just get from point A to point B and not worry about the various costs of ownership, any of this. So, you know, when I think about what I want in this world, it's more public transit, it's more walking, it's more biking. And I'll say it, it is more autonomous vehicles to get me from point A to point B on those sort of short trips where it doesn't make sense. So yeah,

No, there's nothing about this car that makes me want to buy it. But I'm guessing that for you, the answer is yes. Well, let me just stipulate that I am not in the market for a very expensive pickup truck. There is no version of my life in which I need something like that. But I would say like similar to the Rivian, when I do see them driving around on the streets of my hometown, I will like turn my head and kind of admire them. I do think the Cybertruck looks kind of cool.

I hope that it's sort of a spur to the rest of the industry to kind of, I don't know, like... Indulge their worst ideas. Yes. Yes. Sketch something on a napkin that looks insane and then go make it. It's actually how we came up with a lot of this podcast. Yeah, it's true. We also shot bullets at it to make sure it was bulletproof. And the Hard Fork podcast, it turns out, is bulletproof. Bulletproof, baby. When we come back, what else happened in AI this week?

This podcast is supported by Working Smarter, a new podcast from Dropbox about AI and modern work. Learn how AI-powered tools can help you collaborate, find focus, and get stuff done, so you have more time for the work that matters most.

In conversation with founders, researchers, and engineers, Working Smarter features practical discussions about what AI can do to help you work smarter too. From Dropbox and Cosmic Standard, listen to Working Smarter on Apple Podcasts, Spotify, or wherever you get your podcasts. Or visit workingsmarter.ai. All right, Casey, there's a lot of stuff happening in AI this week that we haven't talked about yet. Really, Kevin? Name one thing. Well, we have a lot to get through. All right. Which is why...

We are doing This Week in AI. Play the theme song. This Week in AI.

So our first story in AI this week is about wine fraud. This was an article in the New York Times by Virginia Hughes titled, Bordeaux Wine Snobs Have a Point, According to This Computer Model. It's an article about a group of scientists who've been trying to use AI to understand what the wine industry calls terroir. Are you familiar with terroir? Yeah, the people who are really into this are known as terroirists, I believe. Ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha

Yeah, so this is the word that is used in the wine industry to describe the specific soil and microclimate that wine grapes are grown in. And if you go up to Napa and you do wine tastings, they will often tell you about, you know, oh, our soil is more microclimate

and that's why our wine tastes better and things like that. And I never knew whether that was real. And as it turns out, this is something that researchers have also been wondering. So these researchers trained an algorithm to look for common patterns in the chemical fingerprints of different wines. They were apparently shocked by the results. The model grouped the wines into distinct clusters that matched with their geographical locations in the Bordeaux region. So these researchers, they effectively showed that

Terroir is real, one of the scientists said. Quote, I have scientific evidence that it makes sense to charge people money for this because they are producing something unique. Wow. Well, you know, this has some interesting implications for like, you know, if you buy like some like really, really expensive wine, but you worry that you've gotten like, you know, a forgery or a fraud, I guess there would maybe now be some means by which you could test it.

Or like in the far future, you could synthesize wine with maybe a higher degree of accuracy because we'll be able to sort of catalog these chemical footprints. Yeah. So apparently in expensive wine collections, fraud is fairly common. Producers have been adjusting their bottles and labels and corks to make these wines harder to counterfeit. But this still happens frequently.

And with AI, apparently this will get much harder because you can just have the AI say, that's not really, you know, Malbec from this region. It's actually just like crappy supermarket wine from California. Oh man. Well, this is just great news for wine snobs everywhere. Yes. We celebrate it. They've been waiting for a break and now they have one. What else happened this week, Kevin? Okay. So this one is actually something that you wrote about. This is a problem with Amazon's

QAI model. So Q is a chatbot that was released by Amazon last week, and it's aimed at kind of enterprise customers. So Casey, what happened with Q? - Yeah, so I reported this with my colleague Zoe Schiffer at Platformer. Last week, Amazon announced Q, which is its AI chatbot aimed at enterprise customers.

You can sort of think of it as a business version of ChatGPT. And the basic idea is that you can use it to answer questions about AWS, where you may be running your applications. You can edit your source code. It will cite sources for you. And Amazon had made a pretty big deal of saying that it built Q to be more secure and private and suitable for enterprise use than a ChatGPT.

Right. This was sort of its big marketing pitch around Q was like, these other chatbots, they make stuff up. They might be training on your data. You can't trust them. Go with ours instead. It's much safer for business customers. That's right. And so then, of course, we start hearing about what's happening in the Amazon Slack, where some employees are saying this thing is hallucinating very badly. Oh, no. It is leaking confidential information.

And there are some things happening that one employee wrote, quote, I've seen apparent Q hallucinations I'd expect to potentially induce cardiac incidents in legal. So, you know, let's stipulate this stuff is very early. It's just sort of only barely being introduced to a handful of clients. The reason that Amazon is going to move slowly with something like this is for this exact reason. And in fact, when we asked Amazon what it made of all this, it basically said, you're just watching the normal beta testing process play out.

At the same time, this is embarrassing. And if they could have avoided this moment, I think they would have. Right. And I think it just underscores like how wild it is that businesses are starting to use this technology at all, given that it is so unpredictable and that it could cause these like cardiac incidents for lawyers at these companies.

You know, I understand why businesses are eager to get this stuff to their customers and their employees. It is potentially a huge time saver for a lot of tasks, but there's still so many questions and eccentricities around the products themselves. They do behave in all these strange and unpredictable ways. So,

I think we can expect that the lawyers, the compliance departments, and the IT departments, any companies that are implementing this stuff are going to have a busy 2024. Here's my bull case for it, though, which is like, you know, if you've worked at any company and you've tried to use the enterprise software that they have, like, it's usually pretty bad. It barely works. You can barely figure it out. It probably gave you the wrong answer about something without even being AI. So,

I think we all assume that these technologies will need to hit 100% reliability before anyone will buy them. In practice, I think companies will settle for a lot less. Right, they don't have to be perfect. They just have to be better than your existing crappy enterprise software. A low bar, indeed. All right, that is Amazon and its Q, which by the way,

while we're talking about bad names for AI models, I literally, I was talking with an Amazon executive last week and I said, you got to rename this thing. We can't be naming things after the letter Q in the year 2023. We will reclaim that letter eventually, but we need to give it a couple of years. Yeah, the Q and non-parallel is too easy. All right.

This next story was about one of my favorite subjects when it comes to AI, which is jailbreaks and hacks that allow you to get around some of the restrictions on these models. This one actually came from a paper published by researchers at DeepMind, who I guess were sort of testing ChatGPT, their competitor, and found that if they asked ChatGPT 3.5 Turbo, which is one of OpenAI's models, to repeat specific words forever...

It would start repeating the word, but then at a certain point, it would also start returning its training data. It would start telling the user what data it was trained on, and sometimes that included personally identifiable information,

When they asked ChatGPT to repeat the word poem forever, it eventually revealed an email signature for a real human founder and CEO, which included their cell phone number and email address. That is not great. I have to say, my first thought reading this story is like, whose idea was it to just tell ChatGPT, repeat the word poem forever?

We talk a lot about how we assume that everyone in the AI industry is on mushrooms, and I've never felt more confident of that than reading about this test. Because what is more of a mushroom-brained idea than, bro, what if we made it say poem literally forever? Right. And just see what happens, bro. And then all of a sudden it's like, here's the name and email address of a CEO.

I do hope there are like rooms at all these companies headquarters that are just like the mushroom room where you can go in and just take a bunch of psychedelic mushrooms and just try to break the language models in the most insane and demented ways possible. I hope that that is a job that exists out there. And if it does, I'd like to apply. Now we've seen a lot of wild prompt engineering over the past year. Where would you rank this among like all time prompt engineering prompts? Um,

I would say this is like an embarrassing thing and one that, you know, obviously OpenAI, you know, wants to patch as quickly as it can. 404 Media reported that OpenAI has actually made it a terms of service violation to use this kind of a prompt engineering trick. So now if you try that, you won't get a response and you won't get any leaked training data. And this is just, I think, one in a long series of things that we'll find out about these models just behaving unpredictable.

Why does it do this? They can't tell you, but I think if you're an AI company, you want to patch this stuff as quickly as possible, and it sounds like that's what OpenAI has done here. All right, great. Well, hopefully we never hear about anything like this ever again. Okay. Can we talk about Mountain Dew? Let's talk about Mountain Dew. This next one is admittedly a little bit of a stunt, but I thought it was a funny one, so I want to cover it on the show. Mountain Dew this week has been doing something they call the Mountain Dew Raid, in which for a few days they had an AI

and AI crawl live streams on Twitch to determine whether the Twitch streamers had a Mountain Dew product or logo visible in their live stream. Now, Kevin, for maybe our international listeners or folks who are unfamiliar with Mountain Dew, how would you describe that beverage? Mountain Dew is a military-grade stimulant that is offered to consumers...

in American gas stations to help them get through long drives without falling asleep. Yeah, that's right. If you've never tasted Mountain Dew and are curious, just go lick a battery. I was at a truck stop recently on a road trip. And do you know how many flavors of Mountain Dew there are today in this country? How many are there? I would say easily a dozen flavors of Mountain Dew. Oh my God, that's innovation. It's progress. That's what this company, that's what this country does. I said this company,

And that's an interesting slip. Because sometimes I do feel like this world is getting too corporate, Kevin. But look, at the end of the day, this country makes every flavor of Mountain Dew that you can imagine and many that you couldn't. Yeah. So Fridge is full of Mountain Dew at the retailers of America. And this is an AI that just feels like it's a dispatch from a dystopian future. Now, I think this was sort of a marketing stunt. I don't think this was like a big part of their product strategy. But with this RAID AI, basically, if it if it

your Twitch stream and saw a Mountain Dew product in it, you could then be featured on the Mountain Dew Twitch channel and also receive a one-on-one coaching session with a professional live streamer. So this document that Mountain Dew released as like an FAQ... Their Mountain Doc? Their Mountain Doc. It is... It's the FA Dew. No, that's not good. That's not good. No, that's pretty good. That's pretty good. Okay.

So this is the Mountain Dew. I'm reading from the Mountain Dew Raid Q&A. It says,

So it basically goes out, crawls Twitch, looking for streamers who have Mountain Dew products and logos on their stream. Once it identifies the presence of Mountain Dew, this document says, selected streamers will get a chat asking to opt in to join the raid. Once you accept, the raid AI will keep monitoring your stream for the presence of Mountain Dew. If you remove your Mountain Dew, you'll be prompted to bring it back on camera. If you don't, you'll be removed from our participating streamers. So it's,

This is like truly the most dystopian use of AI that I have heard about. Like, I know there are more serious, you know, harms that can result from AI, but this actually does feel like a chapter from a dystopian novel. Like, bring your Mountain Dew back on camera or you will lose access to your entire livelihood. Surrender to the Mountain Dew Panopticon. Yes.

It reminds me of, do you remember that patent that went viral a few years ago where Sony had invented some new technology that basically would allow them to listen to you in your living room? Like if your TV was playing an ad for McDonald's and you wanted it to stop, you could just sort of yell out McDonald's in your living room. Yeah.

We must prevent that world from coming into existence at all costs. Yeah. It reminds me of a few years ago, we did this demo. My colleagues and I at the Times were pitched on an Angry Birds scooter. I told you about this? Oh my God.

I think you have, but tell me again. So this was during the big scooter craze of the 2018, 2019 period. And the company that makes Angry Birds did a promotional stunt where they outfitted one of these electric scooters with a microphone. And in order to make the scooter go, you had to scream into the microphone as loud as possible. And the louder you yelled, the faster the scooter would go.

And so I'm a sucker for a stupid stunt. And so I had them ship two of these to us and we drag raced them on the Embarcadero in San Francisco, just screaming as loud as we could into the microphones of our Angry Birds scooters to make them go fast. And the nice thing about San Francisco is so many other people were screaming. Nobody even paid you any attention. It was only the fourth weirdest thing happening on the Embarcadero that day.

And, uh, and it was a lot of fun. So I support a stupid stunts like that. I support the Mountain Dew AI. Casey, what did you think when you saw this Mountain Dew news? Well, you know, there is something that feels like weird futurey about AI is just scanning all live media to identify products and incentivize and reward people for, for featuring their products.

At the same time, we're already living in a world where on social media, some platforms will automatically identify products and will then tag them. And then maybe if somebody buys that product based on you posting it, you'll get a little bit of a kickback. So this is just kind of the near-term future of social media is that it is already a shopping mall.

and we are just making that shopping mall increasingly sophisticated. If you see literally anything on your screen, these companies want you to be able to just mash it with your paw and have it sent to you. So this was the latest instance of that, but I imagine we'll see more. Totally. And it just strikes me as sort of an example of how unpredictable the effects of this kind of foundational AI technology are. Like when they were

creating image recognition algorithms a decade ago in like the bowels of the Google DeepMind research department. Like they were probably thinking, oh, this will be useful for radiologists. This will be useful for identifying, you know, pathologies on a scan or like maybe solving some

climate problem. And instead, this technology, when it makes its way into the world, is in the form of like the Mountain Dew AI bot that just scours Twitch live streams to be able to sell more Mountain Dew. You know, I think there actually could be a good medical use for this. Did you hear this? There was another tragic story this week. A second person died after drinking a Panera-charged lemonade. Did you read this? Yeah. So that happened again. So I think we should build an AI that scans for Panera-charged lemonades on these Twitch streams, and if it sees one, calls an ambulance. This week in

This podcast is supported by Working Smarter, a new podcast from Dropbox about AI and modern work. Learn how AI-powered tools can help you collaborate, find focus, and get stuff done, so you have more time for the work that matters most.

In conversation with founders, researchers, and engineers, Working Smarter features practical discussions about what AI can do to help you work smarter too. From Dropbox and Cosmic Standard, listen to Working Smarter on Apple Podcasts, Spotify, or wherever you get your podcasts. Or visit workingsmarter.ai.

Before we go, a huge thank you to all the listeners who sent in hard questions for us. As a reminder, hard questions is our advice segment where we offer you help with ethical or moral dilemmas about technology. We still are looking for more of those. So please, if you have them, send them to us in a voice memo at hardfork at nytimes.com. And we'll pick some to play on an upcoming episode.

And to be clear, Kevin, in addition to sort of ethical quandaries, we also want the drama. We want something that is like happening in your life. Is there a fight in your life that people are having over technology in some way? Please tell us what it is and we'll see if we can help. Yeah, and these don't need to be like high-minded scenarios about like AI wreaking havoc on your professional life. It could just be something juicy from your personal life. Hot gossip. Yeah, spill the tea. Hardfork at NYTimes.com.

Hard Fork is produced by Rachel Cohn and Davis Land. We're edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today's show was engineered by Chris Wood. Original music by Marion Lozano, Sophia Landman, and Dan Powell. Our audience editor is Nell Gologly. Video production by Ryan Manning and Dylan Bergeson. Special thanks to Paula Schumann, Kwee Wing Tam, Kate Lepresti, and Jeffrey Miranda.

♪♪♪

Going back to school is a big step, but having support at every step of your academic journey can make a big difference. Imagine your future differently at capella.edu.