
A Trip to TikTok + ChatGPT’s Origin Story + Kevin Systrom’s Comeback

Publish Date: 2023/2/3

Hard Fork



Kevin, this week I went on a reporting trip to Los Angeles. I was there to meet with some people from TikTok, and I brought you some trinkets. Aw, what'd you bring me? Well, we have the branded TikTok jotters, which are some ballpoint pens. Oh, those are cool. Yeah. There is some refillable nourishing hand sanitizer with a TikTok logo on it. We have the classic For You notepad, of course, a riff on the TikTok For You page. Wow.

And then here is some information about how many trust and safety professionals the TikTok corporation has hired. This is a real goodie bag. Yeah. And finally, there's the Wi-Fi password for TikTok if you need it. Anyway, based on this, would you say that TikTok no longer poses a threat to our national security? Yes. Based on the notebook...

Hand sanitizer, pens, you know, honestly, a pretty good password for the Wi-Fi. I'm not going to read it out, but, you know, I wouldn't have guessed this. And this crumpled sheet of paper. I believe that my concerns have been sufficiently alleviated, and I no longer believe that TikTok poses a threat to the security of the United States. Well, you see, that's why you invite the media down. That's the kind of shift in perception you can... Well, mission accomplished.

Let's move on to the next segment. I have no more concerns about TikTok. I'm Kevin Roose. I'm a tech columnist at the New York Times. And I'm Casey Newton from Platformer. This week, Casey takes a field trip to TikTok. I report on some behind-the-scenes developments at OpenAI and ChatGPT. And Kevin Systrom, the co-founder of Instagram, tells us about his new AI-powered news app.

Okay, let's talk about this trip. So you went down to LA to do what at TikTok? So a couple years back, TikTok announced that they were going to open up this building that they were calling the Transparency Center. And this was around the time that the Trump administration was trying to force ByteDance, which owns TikTok, to sell it off to some conglomerate led by Oracle and Walmart.

And TikTok said, "Whoa, whoa, whoa, whoa, whoa. Before we go that far, we want to be more open with you than any social network has ever been with anyone. And we're going to build a room where journalists and lawmakers and regulators can come in and you can basically stare the algorithm dead in the eye and see what it's made of." And so I was very excited about this. I was ready to go down. And then there was a global pandemic.

Instead, we took a virtual tour and I'm just going to say it was not that great. It's like very hard. It was our conference room. Yeah. It's very hard to take a virtual tour of an algorithmic transparency center is what I learned.

But a couple years went by and conditions on the ground changed. And a few weeks back, I got an invitation to go to TikTok's office in Culver City. Wow. So you show up. So set the scene a little bit. Yeah. What's it like? I've never been. It looks like what you would expect a TikTok office to look like. There's sort of, you know, giant LED screens showing TikToks.

you know, sort of life-size representations of the logo, conference rooms named after viral moments on the app, right? If you've been to, like, Facebook or Twitter, like, this is kind of how they all look. Totally, yeah. Which is always weird. The conference room name thing has always weirded me out. Like...

You know, you go to a tech company and, like, usually the conference rooms are, like, it's like Chewbacca Mom. Right. I remember. Yeah. Like, named after big moments on the platform, or like Ice Bucket Challenge, which I always think is, like, you know, you're eventually going to have to lay someone off in there. Right. Like, someone is getting canned in the Chewbacca Mom room. Yeah.

It's like, that's right. The results of our investigation into your embezzlement case are in, and we need to talk to you over in the Ice Bucket Challenge. Right. Meet us in Grumpy Cat for a discussion of your performance evaluation plans. Yeah, we've got some bad news for you over in the Tide Pod Challenge Zone. So anyway, it's got conference rooms with funny names. What is the actual Transparency Center?

Describe it. Yeah, so we were sort of in two rooms. One was just sort of a big conference room where we heard some speeches from some TikTok folks, which I can talk about. And then after that moment, we crossed the street and went to the Transparency Center itself. Oh, it's a different building. It was in a different building. Was it transparent? Did it have like glass walls? Yes, you could look in on it from the street. No, you couldn't. I would actually say it was heavily secured. So we go in there and we were sort of led on a presentation and...

It's called a transparency center, but the way that I would think about it is a little bit more like a children's museum about TikTok, you know, where you sort of have a docent who is, you know, a very friendly, bright, articulate host. And she brought us through a series of exhibits that were interactive.

And there were sort of a series of these sort of large, like, smartphone-like screens where she could swipe up and swipe down, almost as if she was swiping through TikTok itself. Okay. And then we're led to the real heart of the exhibits. One of them is essentially like a guided tutorial where you could, you know, it was like...

It sounds so boring. And this part kind of was. It was like, you know, how does TikTok keep people safe? And you tap a button. And then it would, you know, sort of show you some visual media and try to answer that question. And then you could go back and they would say, well, you know, how do you ensure that this doesn't happen? And, you know, so basically kind of a fancy Q&A zone. Right. Like the terms of service in museum form. Exactly. The more interesting of the exhibits was a room where you could pretend to be a TikTok content moderator. Yeah.

And you know I was in heaven doing this, right? I love the subject of content moderation. I write a lot about content moderation, but I have not really been in a position to do any moderating of content. And what TikTok has set up is kind of a facsimile of their content moderation system where you could choose any of the bad things. So you could choose hate speech or violent extremism or nudity or bullying. And they would then show you

a handful of videos, and next to those videos, they would show you sort of relevant policies from their own community standards, like about bullying or hate speech or extremism, and then you would decide, does this content violate the policy or not? And then you would either say, like, yes, it does, and if so, which policy does it violate? Because that's something that content moderators have to do. They can't just say good or bad. They have to say, no, you broke this rule, right?
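As an aside, that two-part decision, take it down or not, and if so under which rule, maps onto a very small data structure. Here is a hypothetical sketch; the policy names are examples for illustration, not TikTok's actual community-standards categories:

```python
from dataclasses import dataclass
from typing import Optional

# Example policy categories (illustrative, not TikTok's real taxonomy).
POLICIES = {"bullying", "hate speech", "violent extremism", "nudity"}

@dataclass
class ModerationDecision:
    violates: bool
    policy: Optional[str] = None  # which rule was broken, when violates is True

    def __post_init__(self):
        # Moderators can't just say "bad": a violation must cite a policy.
        if self.violates and self.policy not in POLICIES:
            raise ValueError("a violation must cite a specific policy")
        if not self.violates and self.policy is not None:
            raise ValueError("a keep-up decision cites no policy")

ok = ModerationDecision(violates=False)
flagged = ModerationDecision(violates=True, policy="bullying")
```

The point the exhibit makes is baked into the validation: "it's bad" is not a complete answer without naming the rule.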

And then, you know, you would sort of get the result. It was a, well, you know, were you right or were you wrong? And I was right a lot of the time. Not to brag. But I was also wrong, too, in ways that surprised me. You know, and I'll just tell this story quickly because I think that, you know, we have so much...

like, heartburn over content moderation in this country. And people get so mad about having their posts removed, but we also get so mad about posts that are left up that we think should be removed. And I think everyone would benefit from, like, spending an hour in a chair just trying to decide whether posts belonged on TikTok or Facebook or Twitter or whatever.

There was this one where I've been reading these policies about bullying and harassment. And there was this TikTok that was taken from inside, you know, like a refrigerator case at a bodega where they have like the cold drinks, you know, type of thing. So somebody was sort of behind that and was pointing a squirt gun. And it was a bright green squirt gun. It was clearly a squirt gun. And some kid was going to like reach in to like get a can of Coke or something. And the squirt gun goes off and just blasts the kid in the face. And the video ends with him sort of recoiling, you know, and falling backwards.

And I'm like, this seems like pretty clear harassment of someone. Maybe the kid was in on it, but I can't tell from the video. And instead, it just kind of looks like somebody being mean to innocent people shopping in a bodega. So you clicked the "this violates the rules, take it down" button.

Yeah. And they were like, no, that actually doesn't violate our rules. Fascinating. Yeah. So look, if you want to go squirt a kid with a squirt gun anywhere in the world, put it on TikTok, go viral, make money. Well, I know what I'm doing tonight. Yeah. There's your weekend plan. Okay. So they have this simulator, this, like, you know, flight simulator, but for content moderation. What else? So those were kind of the big two exhibits. And then on the way out, they said, now there is this third exhibit, but we're not going to take you to it.

And it's actually in this room over here. And so she sort of pointed to this, you know, area where we were not allowed to go. And she said, in this room, you would pass through some metal detectors, you would sign a nondisclosure agreement, and then you would sit down and you would see the TikTok source code. And it's like a room where the source code can be inspected.

Wow. Like the Ark of the Covenant vibes. Yeah. And, you know, one of the big challenges that TikTok has is that no one trusts anything it says on a handful of subjects, right? One of the things that people don't trust it on is could the Chinese government interfere with ByteDance and sort of insert code into TikTok?

either to surveil Americans or maybe promote pro-China content, right? And could they put that into the source code itself? And so there is a room where people like regulators, academics, or whoever else TikTok allows can go in and look at that code. You know, I don't write code, so I wouldn't be able to read it, but there is such a room. So, okay, you didn't get to go in the secret code room, but...

It does exist, and other people have gone in there and can go in there, right, who do understand how to evaluate code and could go look for evidence of, you know, Chinese interference. Yeah, that's right. Now, of course, this gets tricky in a hurry because, okay, so you look at the code and it looks fine. Well, then you walk out the door. Then what happens to the source code, right? And so, you know, TikTok has a lot of these challenges, but they are starting to think through, like, okay, how could we try to organize ourselves in such a way that we address those concerns more permanently? Right. So let's just...

Back up for a second and talk about why you were at TikTok in the first place. So as we've talked about on the show, TikTok is under a lot of scrutiny from regulators and politicians. It's been banned by certain state governments, and federal government employees are now no longer allowed to have it on their work phones. And part of the reason for all the suspicion, as we've discussed, is that it's owned by ByteDance, which is a Chinese company, and Chinese companies are

you know, can be influenced by the Chinese government. Yeah, and I mean, influence is probably not strong enough a word, right? Like if the Chinese government tells a Chinese company to do something, like the company either does it or they're shut down, right? There's sort of not a lot of room for negotiating there. And so TikTok has been on this kind of furious...

quest to prove to American regulators mostly, but also journalists, that it is independent, that it is not being steered by ByteDance in ways that would make it a threat to U.S. interests. That's right. And to the extent that Americans' data is being stored in China or that Chinese employees would have access to that data, TikTok is trying to say, we are going to undertake a huge effort to ensure that that doesn't happen.

And of course, the whole goal for this is that ByteDance can continue to own TikTok and minimize the risk that someone else is either going to shut it down completely or force them to sell it. Right. And I think it's very interesting that the way that they've decided to be transparent is with this sort of, you know, children's museum, as you described it, but also this, you know, allowing people to go in and inspect the source code or a version of the source code at a moment in time.

And I think this is an area where I've changed my mind in the last few years. I used to think that sort of transparency would...

solve a lot of problems with social media, that if they just sort of opened up the algorithms and made them viewable, so you could see why certain posts are ranked higher than others, that would actually go a long way to increasing trust in these social networks. And I still think that's true in some cases, right? I think transparency is usually a good thing. But on this algorithmic transparency bit, it just seems like that's only a partial solution, right? Because only one of the worries that people have about TikTok is that ByteDance or the Chinese government could be inserting malicious code into its code base, right? A lot of the other fears are about sort of

content moderation decisions, frankly, like, you know, which posts are allowed to stay up and go viral and which posts are kind of taken down or demoted so that no one sees them. And that kind of thing typically wouldn't be in the source code, right? It's not like there's a line of code that says, you know, take down all videos of Tiananmen Square or something like that.

That's right. The first exhibit, one of the things that it did, one of the questions that it would answer for you is, why am I seeing the videos that I see in my feed? How are these videos chosen for me? The program explains that to you. As part of the explanation, they show you the code snippets.

And the sort of written explanation will say something like, well, you know, we take a set of videos and then we run it through our machine learning systems and a score is generated. And then we sort of reduce that to a smaller set of videos and we run that through, like, some more loops, right? And on one hand, I think they have explained, with a high degree of accuracy and a pretty good degree of depth, how TikTok works, right? So, like, they are being transparent about that. And yet, at the end of the day, can I really say with any specificity why I saw a video? It's like, no, some math happened in a computer, and now I'm seeing this. And I think that's the truest explanation for why you're seeing what's in your feed. And I don't think that's a very satisfying explanation for a lot of people. And it makes me think of this idea that Elon Musk had
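For what it's worth, the process described here, a big pool of candidate videos scored by a model and then repeatedly narrowed, is a standard recommender-system funnel. Here is a purely illustrative sketch; the function names and stage sizes are invented, not TikTok's:

```python
import random

def predicted_engagement(video_id, user_id):
    """Stand-in for a learned scoring model. A real system would use
    features of the video and the viewer; here it's random so the
    sketch runs."""
    return random.random()

def rank_feed(candidates, user_id, stages=(500, 50, 10)):
    """Score the candidate pool, keep the top videos, and repeat:
    a loose sketch of the score-then-narrow loops described above."""
    pool = list(candidates)
    for keep in stages:
        pool.sort(key=lambda v: predicted_engagement(v, user_id), reverse=True)
        pool = pool[:keep]
    return pool  # the handful of videos that reach the feed

feed = rank_feed(range(10_000), user_id="viewer-123")
```

Even with code like this in front of you, the answer to "why this video?" bottoms out in the model's scores, which is exactly the unsatisfying "some math happened in a computer" point.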

when he took over Twitter, that he was going to sort of open the code base and look at the code and find this sort of, you know, smoking gun evidence that Twitter had been suppressing certain kinds of posts. And instead, what you found is, like, kind of a normal algorithm, where certain posts are, you know, promoted not based on their ideology, but on how likely they are to keep you on the app. Like, that's what I think I've learned by talking with engineers who build these algorithms: it's very rare that the controversial thing is, like, hard-coded into the app itself.

The app itself is usually just trying to get you to stay on the app. Yeah, but businesses make business decisions. And we don't often talk about that when we're talking about content moderation decisions, right? These people are all trying to make money. Totally. So what else did you see? What did they say? What did the TikTok executives talk about and discuss?

I'm curious just what else happened. Yeah. So I think the kind of centerpiece of the day, aside from the visit to the Children's Museum, was a set of talks from some TikTok executives where they tried to give us the lay of the land and talk in some detail about Project Texas. Project Texas is TikTok's effort to move all American data to the United States and

to put it in what they call a secure enclave, which will be managed by Oracle. Right, Oracle sort of being the U.S. babysitter of TikTok in this case. That's right. They're setting up a subsidiary called U.S. Data Security, and that's going to be in charge of making sure that everything TikTok stays in the United States. They're saying that they have already spent $1.5 billion on this,

and that it will be a sort of massive ongoing expense for them. But that at the end of it, there will be, like, an independent board of directors who will be able to check in at any time and say, hey, did anything untoward happen with data? So what they wanted to talk about was Project Texas and their very expensive-sounding efforts to appease regulators and politicians in the U.S.

Were there any things that they notably didn't talk about or didn't want to talk about? Well, I wanted them to get into this stuff on the record a little bit. You know, they wanted to do that particular portion of the day on background, which, you know, is a term in journalism that basically means you can report what we said, but you don't attribute it to specific people.

And I imagine that's because these are very sensitive negotiations that, you know, when they are going to talk to the Biden administration, it's going to be some extremely high ranking official and not, you know, the sort of more, you know, I guess, mid-level managers that we were talking to in some cases.

So I get it. Like, it's sensitive. But at the same time, you know, you bring reporters down to the Transparency Center to talk about your transparency efforts. And you're like, well, you know, but don't attribute any of this information. And, you know, we don't want you to record this part. It does get a little frustrating. Right. My Transparency Center t-shirt is raising questions that have already been answered by my Transparency Center t-shirt. I feel the same way about, like, U.S. data security as the subsidiary name. Like,

It's a little try hard. So after this visit to TikTok and its transparency center, are you feeling any different about TikTok and sort of its prospects to avoid

punishing regulation or an outright ban in the U.S.? Are you feeling more comfortable about TikTok? I mean, on one hand, I am glad I got to go down there. I did learn a lot about how the recommendation algorithm works, about the way that they moderate content.

Also, I do think it's good to just meet the executives who are working on this stuff and you sit down with them and they're mostly American citizens and they come across like any other employee of any tech company I've ever written about. They're trying to do their job, they have their challenges, they hope that it's going to work out, they're not sure.

And there's a certain amount of comfort there. But at the same time, there are the other issues that we have talked about on this show. We had Emily Baker-White come in, and she said that, yeah, like, you know, ByteDance employees tried to figure out who her sources were by, like, using her IP information. You know, ByteDance was not particularly forthcoming about Project Texas, you know, until Emily wrote about it. And on top of all of that, the fact remains that at the end of the day, if China wants something from ByteDance, it's almost certainly going to get it. And that's always been the trap that TikTok is in. So...

I don't know, man. I don't want to sound like I'm weaseling out of answering your question, but I do find the case of TikTok really hard because I think there probably are good reasons to take really, really strict action against it. And I'm not sure that just being extra transparent about everything is going to answer our concerns. What's your take on all of this? Like, do you think a more transparent TikTok is one that should stick around or do you think we need to scrutinize it more?

I would say yes. I believe both those things. It seems like, as much as they might want to portray themselves as being independent, like, when it really matters, when there's something that the Chinese government wants, whether it's user data or to make a certain kind of video go more viral or less viral, the question is: What capacity does TikTok in the U.S. have to push back on that? And so far, I have not heard them talk about that in a convincing way.

Yeah. I would like to see them, in addition to doing all this transparency work, meaningfully and honestly reckon with some of what's out there about how they operate and how ByteDance operates, you know, with TikTok sort of kind of at arm's length. I just haven't really seen them address that stuff. When they get asked about it, they usually say, you know, we've never been asked to do anything by the Chinese government, and we would say no if we were asked. That's what they say, and okay. But to your point, it's like, you can't really tell the Chinese government no. And that is the specific thing that I feel like they've never really reckoned with. Yeah.

You know, at the same time, the reason that I struggle to say ban them is because I feel like I know with a fair degree of confidence what will happen if TikTok gets banned, which is that YouTube and Facebook will just benefit tremendously. And that you'll see all the people making TikToks right now will just be making Shorts and Reels and...

It's like, meet the new boss, same as the old boss. And 99% of our lives will be exactly the same. It's just that one more competitive force has been taken out of this already pretty anti-competitive market. Right. And as far as what they could do to kind of restore trust in the U.S.,

at least for me, if they came out and were just radically honest and said, look, sometimes ByteDance does stuff that we don't approve of and we don't know about, and it makes us really uncomfortable and it's kind of awkward, but they're our owners, and so we just have to deal with that, and here are some things that we're doing to mitigate that, and here's some of the fights that we have internally. I actually think this is a scenario where radical honesty, and not this kind of like glossy corporate transparency stuff, could go a long way, at least with me. And the absence of it makes you feel like there is something that they are afraid to say. Yes. And that, I think, is at the root of my discomfort here: that sense that there is an underlying fear at that company that is always being talked around, and it's why, like, I can't sign off fully in my brain on some of this stuff. Totally.

Well, I'm glad you got to go. That sounds very exciting. And thank you again for the gifts. Yeah. Did you do any reporting this week? Yes, I did some reporting, though. I want to tell you about it. Let's do the break.


All right, Kevin, you have finally done some reporting. I don't like this attitude. Listen, okay, it takes a long time to make a delicious meal, right? It's true. If you're just whipping out, you know, microwave Trader Joe's, you know, frozen meals, it's going to be quick. But if you want to prepare a filet mignon, it's going to take you some time. So this week you delivered a true prix fixe menu of journalism.

And it was about one of our favorite subjects, AI. Yeah. So I've been looking into OpenAI, the company that made ChatGPT and DALL-E 2 and a lot of these other AI products that we've talked about on the show.

And I talked to some people who know what's going on inside the company, including current and former employees, and really just tried to lay out the origin story of this product, ChatGPT, that has now kind of taken over the world in this incredible way. I mean, it's scaring Google. Microsoft is investing $10 billion into OpenAI.

And it felt like it came out of nowhere when it landed too, right? So I think it's a really good question. It's like, where did this thing come from? Totally. So I've been looking into this for a couple weeks and I found what I would say are three big takeaways from my reporting.

The first is that ChatGPT is just way more popular than I thought. It's got more than 30 million registered users. Wow. More than 5 million people use it every day. Wow. And for a product that is only really two months old, that is a phenomenal number of people. That's huge.

users. And that was seen as like one of the fastest growing things of all time. So getting 30 million users within two months, I just don't know that I've ever seen a software product grow that fast. Put that in context. That's actually bigger than the Hard Fork podcast. It's slightly bigger than the Hard Fork podcast. Another really interesting thing I found out is that this was a total accident.

So OpenAI's plan for most of last year, the thing that it was working on, was GPT-4, right? This new language model that they were developing. They were very excited about it. And so the plan that they had been going on was that they were going to release GPT-4, or whatever it's going to end up being called, soon,

early in 2023, along with a series of chatbots that were sort of more narrow. They were more aimed at, like, business users, and that was going to be how a lot of people interacted with GPT-4: through these sort of limited chatbots. Sort of like very tentative experiments. Totally, totally. But in November, this announcement goes out to employees and it basically says, okay, change of plans. We're going to release a chatbot now, before GPT-4 comes out.

In part because we don't want another AI company to beat us to the punch and release a chatbot before GPT-4, but also because we think releasing a chatbot now will kind of let us gather some feedback and ultimately make GPT-4 better. So they sort of decided to dust off this old chatbot that had been sitting on the shelf

and update it, and then launch it to the public for free as something called Chat with GPT-3.5, which I think we can all agree is... Well, I didn't think there could be a worse name than ChatGPT, but we found it! So they announced that they're going to do this, and they give it a two-week deadline. Wow, what is this, Elon Musk's Twitter? It's extremely hardcore vibes. I don't know if they actually literally slept on the floor of the office, but it was a sprint, and...

internally at OpenAI, there were some people who were just kind of confused by this. They said, like, you know, we've been working on GPT-4. It's getting to a place where we feel like it's almost ready. Why are we scrambling to release a chatbot based on our last language model, which has been out for two years and which really isn't the state of the art anymore? But OpenAI's leaders really wanted to put something out there. So they make this announcement and

And then 13 days later, ChatGPT comes out. So one of the reasons this is interesting is that now it is February and none of these other chatbots have emerged. So what happened? Did OpenAI just misread the market? And what do we know about these other language models? So we know that other companies are working on similar projects.

There's a company called Anthropic, which has its own chatbot in the works called Claude. It's been rumored that DeepMind, which is a Google AI subsidiary, is close to coming out with a chatbot too. But none of these chatbots have...

come out publicly yet, which makes it seem like this fear that these OpenAI executives had may have been overblown. I remember that the original premise of OpenAI was that they were not going to move that fast. They wanted to make their work "open" and safe. That's not the sort of thing that you think about being built in 13 days. So how do they reconcile those ideas?

Yeah, so OpenAI is sort of a strange beast, right? It was started in 2015 as a nonprofit. It was started by this kind of all-star group of tech people, Elon Musk, Peter Thiel, Sam Altman, Reid Hoffman. It had all of these sort of

initial sponsors who chipped in the money to get this thing off the ground. And the whole point was that it was not going to be driven by these sort of narrow commercial interests, right? They pitched it as kind of like the anti-Google or the anti-Facebook, where those companies were developing AI to suit their business needs,

Whereas OpenAI was going to be sort of half research lab and half kind of humanitarian AI organization that was going to make sure that its AI was safe and responsible and sort of steer this whole area of technological progress in a better direction. So what changed?

It's a long story. Basically, in 2019, they decide, you know, we need to raise all this money because it's very expensive to build and train these giant AI models. So they started a for-profit subsidiary. So basically, it's sort of a confusing structure, but right now there's OpenAI, the for-profit subsidiary, and then there's the sort of nonprofit umbrella organization that still has a board that kind of oversees the whole thing.

But they started behaving in some ways a lot more like...

the companies that they were competing against. They started trying to commercialize their research, trying to figure out how to make money from it. They struck a deal with Microsoft that gave Microsoft some exclusive rights to GPT-3. And this was all controversial within the AI research community where people said, wait a minute, weren't you guys supposed to be not behaving like a startup that was just trying to raise a bunch of money and sell things and push things out into the world maybe before they're ready?

And so, you know, if you talk to OpenAI, they'll say, well, we still have some of the best safety research going on in the world. We're still taking our time, doing things deliberately. We're not just being reckless and putting things out before they're ready. But I think there are also people inside the company who are uncomfortable with how fast things like ChatGPT are coming into existence, and who maybe worry that the company is cutting some corners in an attempt to get to market. So that's the second thing: that ChatGPT was not expected to be nearly as big as it is, and that it's kind of been an accidental hit.

The third thing is that ChatGPT's success within OpenAI has created all these interesting problems for them. They did not expect this to be as big as it was. Oh, come on. That's silly. Of course this was going to be popular. No, it's not because, look, Meta had released a chatbot

last August called BlenderBot that you've probably never heard of because no one used it. Well, I've heard of it, but I cover Facebook. But what I will say is I didn't use BlenderBot. In my memory, though, it's like, did BlenderBot even do half the stuff that ChatGPT does? Well, no, in part because Meta...

nerfed it to prevent it from being misused, because right after it was released, people did get it to say crazy things, like that the election was stolen, and some anti-Semitic stuff. And this kind of thing had happened before, right? That was the whole deal with Microsoft's Tay bot a few years ago. So BlenderBot from Meta comes out. It doesn't really make a splash. The reception is very lukewarm. And so there are people inside OpenAI who think, like, okay, we're slapping a new interface on this two-year-old model. It's already been out there. People have already, you know, seen this kind of thing before. It's not going to be that impressive. So they were

totally caught by surprise when it became a total cultural phenomenon. This feels like a sick case of tech people living in a bubble and not understanding how cool their own lives are. You know what I mean? And of course, I'm guilty of this too all the time, but when you have access to a tool that can do as many things as ChatGPT does and feel somehow bored or unimpressed by it, that's just funny to me. Totally. Well, and it speaks to the way that the employees of these companies, I think, get desensitized to the power of the things they build, because inside OpenAI, this is the second-best model, right? It's built on this two-year-old model, GPT-3, and they already have internal access to tools that are better than this. So for them, this was

kind of an afterthought. And they didn't realize that it would totally upend the technology industry, be used by millions of people, like wreak havoc in high schools across the country. They just did not see this coming. And so they've been kind of scrambling to address some of these issues and patch some of the flaws. So, for example, this week they put out an AI detection tool that they're sort of aiming at teachers and other people who might want to be able to detect if a given text is

written by AI or not. It's not super clear that it's all that effective, but they have been putting this out there. They've been talking with educators and trying to, you know, sort of cool tempers around this. So they're playing catch-up, but it's fair to say, I think, that this has been a chaotic couple of months inside OpenAI. Yeah, it's like a classic moment of catastrophic success. What I wonder is how much that has changed the trajectory of the company. Like, now that their second-best afterthought product has galvanized so much attention, how much has that changed what they want to do with the company? Yeah.

Well, in some ways it's made their lives easier because they now have this $10 billion investment from Microsoft.

So I think in some ways it's been very good for them. But in other ways, now there are a lot of targets on their back, right? Going first in such a competitive and controversial area as large language models and advanced AI has drawbacks as well. And you hear that not just from people at Google and other AI companies, who are clearly sort of jealous of the success and attention that ChatGPT is getting. But you hear it from, you know, less obviously biased sources too. They just say, like, with something like this, you have to really be careful, because if you're rushing this out and you're not paying enough attention to the possibility that it could be used by millions of people overnight, you're not going to build in some of the guardrails that maybe you need.

Right. And that was, of course, the lesson from the 2010s and the social media boom: that if you go really fast and you just try to grow at all costs, inevitably you'll make some mistakes. But my sense is that the people at OpenAI know that, right? I guess the question is just whether the people making those points are being listened to.

Totally. And this not only has sort of changed OpenAI's trajectory, but it's set off this total arms race, right? Where now you have Google declaring code red and trying to, you know, fast track a lot of its AI products. You have Baidu, the Chinese tech giant, which is reportedly coming out with its own AI chatbot.

And then you still have DeepMind and Anthropic and all these other companies that are gearing up to release their own. Now they're racing too. So this decision that OpenAI made to kind of rush this chatbot out, as much as it may have seemed like an afterthought at the time, really did set off kind of a race in AI that I think is dangerous. And I think OpenAI believes that it's dangerous too, because in their charter, you know, from many years ago, they specifically say that as we approach AGI, this artificial general intelligence, the sort of intelligence that's equal to that of a human, we're not going to feed into this race dynamic; that if someone else, another tech company or research lab, gets closer to AGI than we are, we will stop what we're doing and help them.

So it really is a case of, I don't want to say like mission drift, but almost like they started off as a nonprofit. They started this for-profit subsidiary several years later. And there's kind of these competing impulses that are still inside the company where a portion of the company thinks like,

whoa, whoa, whoa, why are we doing what we said we wouldn't do and racing to get this stuff out into the public's hands? And then there's this other force that's like, well, now we're a big tech company and we're competing with big tech companies. And if we don't make this stuff, someone else will and they'll get the credit and the funding and maybe their values won't be in line with ours. What you have me wondering though is, wasn't all of this inevitable? Wasn't someone always going to go first? And assuming it was good, wasn't that always going to cause an arms race?

I think there's an element of that, sure. I think it's plausible that if OpenAI had not released ChatGPT as quickly as they did, someone else would have come along and released something that had fewer guardrails or was more easily abused. And that's certainly what they would tell you at OpenAI: that they did years of safety work on the base model, GPT-3, that was used to make ChatGPT. So it's not as if they had no safeguards in place. They had lots of them. It's just that they weren't expecting this many users. So I think it was always inevitable, but I also think it's very important in the early stages of this sort of next phase of AI

to set norms in the industry that before you release something like ChatGPT, you do months or years of testing on it in a very limited setting, where it's not going to be used by every high schooler in the country overnight. You sort of allow society to slowly adapt and put in place some of the structures that allow this to not be so disruptive. You don't just flip a switch overnight. Yeah. So what's the next turn of the screw here? They've got all of this technology cooking. There was some reporting this week that Microsoft is about to announce a bunch of new integrations, which would bring OpenAI's tech even further into the mainstream. They seem like they're in the lead now, but what do you think is next?

Well, they're still working on GPT-4, right, which is their next-generation language model. That's still slated to come out, from what I hear, later this year. And they are using a lot of the feedback that they're getting on ChatGPT to make GPT-4 better. So I expect that there will be another big launch in the next few months. What we don't know yet is what GPT-4 is going to look like, whether there's going to be an easy-to-use chatbot interface like there is with ChatGPT.

But that is still sort of the company's big project this year. And in some ways, I think it's going to have an easier time. I think if GPT-4 had come out of nowhere, the way that ChatGPT came out of nowhere, we would be seeing an even bigger sort of societal response to it. And it's something that I'm really going to be looking at when GPT-4 does come out, especially this question of bias, right?

So I'm sure you've seen, but in the last few weeks, there's been this kind of conservative backlash to ChatGPT. There was an article in the National Review called ChatGPT Goes Woke that was all about how... Oh my God, no, I didn't. That sounds like an article ChatGPT would write if you said write a conservative article about ChatGPT. Right. So very predictable, but also interesting in the sense that people are, you know, trying to sort of figure out, does this chatbot have...

Has it been programmed in a way that makes it favorable to one side or the other? And so they've been testing it and saying, you know, write an admiring poem about Donald Trump and it'll say, you know, we can't do that. And then it'll say, write an admiring poem about Joe Biden and it'll do it.

And so there are people who are saying, okay. - Let's not question the wisdom of the AI here, right? This is state-of-the-art technology. If it can't write an admiring poem about Donald Trump, maybe it can't be done. - Right. It's becoming a new frontier in kind of the content moderation, free speech wars.

And I think that's something that I'll be watching very closely this year, and probably for several years to come: the emerging battle over centralization and control and censorship in these models. I think this is going to be just as controversial as some of the conversations around social media were in the last decade. Yeah. Well, so get ready. We're going to have hearings.

Love a hearing. Very good story, Kevin. And I know we already talked about one of my stories this week, but I wrote another one too. Damn it. And we'll talk about that after the break.

Indeed believes that better work begins with better hiring, and better hiring begins with finding candidates with the right skills. But if you're like most hiring managers, those skills are harder to find than you thought. Using AI and its matching technology, Indeed is helping employers hire faster and more confidently. By featuring job seeker skills, employers can use Indeed's AI matching technology to pinpoint candidates perfect for the role. That leaves hiring managers more time to focus on what's really important, connecting with candidates at a human level.

Learn more at indeed.com slash hire.

Christine, have you ever bought something and thought, wow, this product actually made my life better? Totally. And usually I find those products through Wirecutter. Yeah, but you work here. We both do. We're the hosts of The Wirecutter Show from The New York Times. It's our job to research, test, and vet products and then recommend our favorites. We'll talk to members of our team of 140 journalists to bring you the very best product recommendations in every category that will actually make your life better. The Wirecutter Show, available wherever you get podcasts.

All right, Kevin. So this week we've talked about TikTok and we've talked about AI, but now I want to talk about something that brings both of those things together. And that thing is called Artifact.

What is Artifact? So Artifact is this new app that the co-founders of Instagram are working on. And I think it's interesting in part because it is charting a path for what I think a lot of apps are going to do in a world where these large language models are now available. So the co-founders of Instagram, sort of famously, they sell their app to Facebook for something like a billion dollars. $715 million, because Facebook stock went down after the closing of the deal. It's a sensitive subject with them. Wait, I've got to find the sad trombone. Yeah, there you go. Only $715 million. So they sell this app. It famously takes off, becomes a huge cultural phenomenon, kind of

leads Facebook to a new era of growth. They leave the company. That's right. And so...

How did we get to Artifact? So on Friday, I get an email from Kevin Systrom. One of the co-founders of Instagram. Yeah. And in a lot of ways, it felt like an email that I'd been waiting to get for five years since he and his co-founder, Mike Krieger, quit Facebook. We haven't heard much from them since then, although they did work on a really cool website called RT Live, which sort of helped track the spread of the pandemic. It was just sort of a free website that anyone could visit. But aside from that, they hadn't tried any new business, and I sort of always assumed they would.

And Kevin emailed me and said, hey, we're actually ready to talk about what we're doing. And he's actually here right now to talk to us about it. Hey, Kevin. Guys, how are you? Good. How are you? Good. I'm super excited to talk. Yeah. Great name, by the way. I know, right? So welcome to Hard Fork. Tell us what Artifact is.

Yeah, so Artifact is a hyper-personalized newsfeed driven by the latest in machine learning. It's like TikTok for text. There's a lot of text out there on the web, most of it news, blogs, articles. And we take all of that, we understand it using machine learning. And then we say, hey, user, you signed up for Artifact. What are you into? And then we start matchmaking.

and we present you a feed of stuff. And at first, you're just like, okay, it's a newsreader. But if you use it for, I don't know, a week or two, depending on how heavily you use it, it becomes really different. And that's because it starts to understand what you like and what you don't like. So I've now been using it for well over a year. And it knows I'm into Japanese modern architecture. And if a really cool example of Japanese modern architecture pops up, it'll serve that to me. It also knows things that I don't tell it.

Like, obviously, I'm into Instagram. Yeah, I'm into all the drama around Instagram and where it's headed. And I don't think I would ever say that on a survey. But it clearly knows I'm into that. So it serves it to me. So I get to follow along with the history of Instagram as it gets built. So that's what Artifact is. It's a newsfeed for you. And over time, it morphs from kind of just a newsreader into, like, you know, presidents back in the day used to have these things called valets. And valets... Casey still has a valet. You do? Okay, fancy. I'm hiring for one. Okay, the job posting is up. But they read everything for you. They find the most interesting stuff in the morning, and then it's on your desk. There was an executive at Facebook who had this. I won't name names. I actually thought it was smart.

Can I ask some questions just about the app itself? Because I have not played around with it yet, although I'm on the wait list. So as soon as you feel like... Wait, wait, wait. Casey, you didn't work with him on an invite? You know what? I'm inviting him right now. But Kevin, ask your question. Casey never thinks of me in these moments. He's very selfish. We compete with each other. It's not in my interest always for him to have access to what I have access to. Right. We're frenemies. So...

This is not generative AI, right? These articles that you're showing users, they're not being written by AI. They're just being sort of ranked by AI. Is that correct? That's right. So they're coming from publishers. They're coming from...

The New York Times, The Washington Post, the BBC, wherever. The app that you're talking about, it just sort of sorts them into a feed based on personalized guesses about your interests. Yeah, we do not generate text right now, though I will say I'm very excited about generative text in the future for us. Less about replacing writing, less about replacing authors or publishers, way more about synthesizing events and being able to point you to the right sources on the events. If, like, a large event happens, Artifact launches, it's like, okay, these are the people covering the tech side of it. These are the people covering the competition side of it. Here's what they think. And it allows you to slice an event much more quickly. So to answer your question directly, Kevin, no, we're not doing generative text yet. Although, of course, that's an option.

I'm interested in the social stuff. So right now, if you start using Artifact, you'll see that core news feed and you can start to teach it about your interests. But there is a beta group of users, which I'm in this group, that sees a couple more things. There is a sort of more social feed, and then there's a direct message inbox. And, you know, I'm somebody who has sort of been

aching for a renaissance of social apps. I want to see new ideas in this space. And so this to me is one of the most interesting and exciting things about Artifact: what you can do socially when you have people interacting around the news stories that are most of interest to them. - Yeah, first I'll apologize and say this is one of these bad habits I think we learned at Instagram/Facebook, which is these feature gates, where some people have features, some people don't, and you're doing testing and all that stuff.

But what I'll say is our intention is to get it out to everyone as soon as it's ready and we're ready. But I want to point out that like when we launched Instagram, it was a filter app first for everyone. Like most people didn't even know that there was a network behind it. It was a filter app.

My intention was always for it to be a social app, and I was kind of bummed when everyone talked about it as a filter app. But what I realized in there is that the best networks are first a utility, and then they kind of piggyback on that utility and they become a network. I think Facebook's actually a great example of this. It may not be so clear, but it was clearly just a directory at the beginning.

It was about like finding people, communicating with them, right? And then it became the social network that it became once they added the news feed and all this other stuff. So Instagram and Facebook are actually not that different in their evolution. So we decided probably a couple months ago when we talked about launch

that we were just going to focus on the utility part of it first and be really good at that. Because starting a social network from scratch, I think, is the wrong order. You see a lot of people launching things today that are social networks first, and then utility is questionable. And it's because the utility in a social network only makes sense when you're at scale, when you have a lot of people. And it's very hard to bootstrap that. I'm very excited about Artifact being a place where you can discuss what you discover. So...

I don't want to be an app that is discussion first and discovery through only discussion because I think that that causes filter bubbles. Like one of the things TikTok does really well, I think, is you can be a nobody, post an awesome video and go viral. That's not possible on Twitter, really. That's not possible on Instagram or it wasn't really. And in this place, like I want a great post from an epidemiologist on Substack who nobody knows.

to become the de facto thing that everyone reads if it's great. And then I want people to be able to discuss it and post about it and say what they think about it and debate and argue or whatever. But that comes second.

So it's this way to connect around things that inspire you. But maybe backing up for a second, I think the best companies often do one slice of what existing companies do really well. When Instagram launched, everyone was like, but I do photos on Twitter. I do photos on Facebook. Why would I use Instagram? And it's like, actually, if you just focus on one job in particular,

you can optimize for that job and create a really wonderful experience around it. Yes, you can get news on Facebook. Yes, you can get news on Twitter. And yes, do I think the discussion on Twitter around news is way better than on Artifact right now? Yes, of course. But it doesn't have to be that way forever. You can create a space for these things to blossom, but you have to do it in order, and sequence matters.

I think there's this feeling in the media business, at least among the folks that I talk to, that the media has been burned by people in Silicon Valley who come along and have some, you know, new idea about how to revolutionize the news industry. And they make all these promises and they say, well, you know, we'll share profits with you, or we'll pay you to do exclusive things with your content, or we'll send a bunch of traffic to your website. And then it just always kind of ends up feeling like a bad deal, or things go sour. I think there have been a lot of sort of failed attempts to unite the media industry, which is producing all of the news, and the technology industry, which wants to distribute it to consumers. So have you found that in talking with publishers about including their stories on Artifact? Have you found that there's some

missing trust there? And how do you plan on working with publishers in a way that's productive? I think unlike a lot of these companies that serve news as a side dish, we only have our relationship with publishers. If we take advantage of that, if we screw them, if we lose trust, we're gone. I mean, gone, right? Like,

we have to have a great relationship with them. That's not to say there won't be challenges. And I'm sure, if we're successful, there are all sorts of questions about money. And right now we're a product that's trying, as a thought experiment: what would happen if you were publisher-first? What would happen if you had to have a great relationship in order to succeed? I'm not saying that's the whole reason why these relationships have turned sour before, but I think it's a big part.

The other part of this, I think, is working with publishers so far has been really fun. And it's because...

I won't name names. There's one large publisher, very well known, trusted. And I got to know the CEO of this place early on. And I got an email yesterday, which was like, I just want to thank you, because when I tap into our stories, it shows our login. It doesn't remove all of our ads. I literally don't understand why you guys are the first people to just have done that. Why was it so hard for everyone else?

Meanwhile, it was funny. I don't know, Casey, if you saw this post this morning on social that was like, finally, someone who agrees. It was in a group where we're posting about ads. And it's like, Artifact will never be successful if it doesn't strip out all the ads. And maybe I'm a contrarian here, but I'm like, well, these people have to win somehow. And by the way, Facebook's full of ads; so is Instagram. No one's saying those things won't be successful because of the ads, or that if we just got rid of the ads on those platforms, they'd be so much more successful.

No, it's this fine balance. What you don't want is pop-up over pop-up over fixed footer. And there are publishers who have done that, but maybe we can work with them to produce a better experience where you continue to make money, you continue to find subscribers. But Kevin, I don't want to be that person who overpromises the world and underdelivers. I just want to say, hey, we're working on something cool. If you want to work with us, let's do it. And I get why you wouldn't trust a lot of us. I get it.

So all you can do is take me at my word here that we want to do great stuff together, and let's go. Great. Yeah, I'm excited to try it. Casey did send me an invite code, so I'm going to sign up, although I mistakenly sent it to a different phone. I typed in my phone number wrong. So someone out there has an Artifact beta link. I hope you enjoy the app. You can do it again. It's okay. It's not a one-time use. Okay. Whew. Thank God. Well, I'm getting you additional users, who may be random people.

But I do want to ask one Instagram question. I know we're here to talk about Artifact, but I have a question about Instagram that I've been dying to ask you, which is: what do you think of the app now? I mean, when I log into Instagram, it is almost unrecognizable from the app that I remember, even from a few years ago. You've got Reels, you've got shopping, it's showing you stuff from people you don't follow. It's hard to find your friends sometimes. Yeah.

I assume you still use Instagram, this app that you've built. What do you think of what it's become? I use Instagram every day. The thing I will say is this: these things are not static. They have to change with the times. They have to evolve. They have to move. And that doesn't mean all the moves or the evolutions are good, but you've got to go with the waves of evolution, because you never know where it'll navigate. Now, what I'll say is that's a really non-answer to your question, because you're like, well, of course, right? But what do you actually think? Are there things about it that make you sad?

Sure. Like, I don't want it to become this overcomplicated mess of, you know, features. But it's funny, because when I worked there and we added features, people said the exact same thing. And some of those things worked, and some of those things didn't. Like, Instagram Direct, our messaging stuff, worked really well. Stories worked really well. IGTV, not so well, but had it been slightly different, maybe it would have been a TikTok before TikTok, right? So I look at it kind of like, I assume, with kids: how they are at three and how they are at seven and how they are at 13 just changes dramatically. And sometimes you wish they were just that three-year-old.

But you kind of just have to take it all together. And, you know, I think that Adam's doing his best with what he has. Adam Mosseri, who runs Instagram now. Yeah. But man, this stuff's hard. So without being in the chair, it's really easy to be an armchair quarterback and say, oh, I would have done all these things differently.

The one thing I know is that I'm happy to be playing in a world where I'm not... like, can I tell this quick story? Yeah. I had this meeting once with one of our executives who ran a vertical for us, and we were having this very serious conversation about Snapchat. And she goes, I just need to know, like, what's our hot dog moment? And I was like, excuse you. And this person was like, what's our hot dog moment? We need our hot dog moment. And I literally thought at that moment, like, how long can I last in this job? Wait, just so I... I vaguely remember there was a Snapchat hot dog filter, right? Yeah. It was like this AR thing where you could have a dancing hot dog. A dancing hot dog, right. Yeah. And I just remember thinking, like, is that the level of discourse and thought? Like, we have to worry about whether or not we have a dancing hot dog in our app. And...

It's okay. I guess that was the right question to ask, since everyone was talking about it. But I thought to myself, okay, not to be elitist, maybe hot dogs are interesting, but there was a moment where I thought to myself, like, I don't know if I can last. Like, I don't know if I can do this forever. The fights I want to fight are different. And it's not to say there won't be similar questions about hot dogs in the future of Artifact, I don't know. But I wanted to work on a different product, and I'm thankful that I have some time to focus on what I believe is an area that, if done well, can create meaningful value for people, because they're learning about the world, not just, like, can we get an extra minute or two because you're watching a hot dog dance. Or, you know...

I don't want to trivialize this. In Artifact, the hot dog will explain the news to you and sort of get you caught up on areas of significant interest. What an awesome Easter egg that would be. Our little Clippy is a dancing hot dog. It looks like you're trying to learn about something. Yeah.

Listen, social media is just a really hard business. But news is very easy. I'm not accusing you of being a glutton for punishment, but I will say it's an area that I think a lot of executives in tech sort of get very excited about and then sort of regret having ever dipped their toes into, because it is such a thorny and complicated world. But we also want people to try hard problems, right? Totally. Totally. So I'm excited. Yeah, go ahead. Which is, I'm interested in news, but that is not why I started this company. Why I started this company was because I believe that machine learning can serve consumers better than anything that exists today. And it just so happens that if you're going to apply that thesis to the world, text we can understand really well. There's a lot of it on the web. There's a lot of it being produced very quickly. And people have an appetite for it.

My ambition is not to remain a news company forever. It's actually to figure out how to take what we are building and apply it to all sorts of areas.

So, like, I get one of these VC emails and it's like, wait, so just want to be clear, are we talking to a news app or no? Like, that just happens to be where we're starting. And I think it's the right... like, there's this book called Crossing the Chasm. They talk about beachheads. And Geoffrey Moore, I want to say, wrote it, right? Where do you start? What's your beachhead? And I think news to me is such an interesting place to start, because it's like all the concentric circles, they intersect in the right ways.

But I agree. We may talk in a year, and I'll be one of those tech execs who says, man, what did I get into? But it's okay. I think sometimes people talk about startups or companies as like, okay, you do one, and if your next one's not successful, okay, you're off to the elephant graveyard of Sand Hill Road and you're going to go invest. And, like, you know, there are a lot of these people who try it once and say, oh, it's too hard.

I think it's more like making movies, where, like, you make a great hit day one, like Star Wars or whatever, and you might make a couple of duds. But hopefully, if you just keep making movies, you can make great stuff. And I just fundamentally believe machine learning is going to underpin all of this stuff. And maybe it's not news. I think news is a great place to start. But we're going to keep making movies.

And then you can pull up this quote when I join something on Sand Hill Road, and you're going to be like, what happened? And I'm sure there'll be a great article about it that gets served to someone via machine learning. Sure, exactly. But in the meantime, you can go to artifact.news. The app is available for Android and iOS, and you are letting people in off the wait list at what seems like a pretty good clip. Is that right? Trying my best. Trying my best. Very cool. Kevin, thanks so much for coming on Hard Fork. Yeah, thanks for stopping by. Thank you.

BP added more than $130 billion to the U.S. economy over the past two years by making investments from coast to coast. Investments like building EV charging hubs in Washington state and starting up new infrastructure in the Gulf of Mexico. It's and, not or. See what doing both means for energy nationwide at bp.com slash investing in America.

Hard Fork is produced by Davis Land, with help from a dancing hot dog. We're edited by Paula Szuchman. This episode was fact-checked by Caitlin Love. Today's show was engineered by Alyssa Moxley. Original music by Dan Powell, Elisheba Ittoop, Marion Lozano, and Sophia Lanman. Special thanks to Hannah Ingber, Nell Gallogly, Kate LoPresti, and Jeffrey Miranda. As always, you can email us at hardfork@nytimes.com.

That's all for this week. See you next time. You know, as written, actually, it's "that's all of this week." That's all of this week. See you next time. What if we had actually captured all of the week in those three segments? I mean, maybe we did. There's nothing else of importance happening. Yeah, like, turn off your podcast app now. Go outside. Who needs AI-powered news apps after all? You've got Hard Fork bringing you all the news that matters.

Since 2013, Bombas has donated over 100 million socks, underwear, and t-shirts to those facing homelessness. If we counted those on air, this ad would last over 1,157 days. But if we counted the time it takes to make a donation possible, it would take just a few clicks. Because every time you make a purchase, Bombas donates an item to someone who needs it.

Go to bombas.com slash NYT and use code NYT for 20% off your first purchase. That's bombas.com slash NYT code NYT.