
Everyone Pivots to A.I. + Bad News for Crypto

Publish Date: 2023/3/3

Hard Fork

Transcript

This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks. Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us.ai.

I had a very exciting night last night. I was out at a party with my wife and we're in San Francisco at a nice little bratwurst joint. And who walks in

But one Casey Newton. You were there by total happenstance in the same restaurant. It was a delightful meet cute. It was very fun to see you out in the wild, unanticipated. And of course, my friends were very excited to meet you, having heard so many of my complaints about you over the last several months. But they put on a brave face and took your hand and it was cool. I did just feel a little weird for the other people at the bar while we were like sitting there just gabbing. Like,

We talk a lot about the risks of big American cities these days. We've got crime, we've got break-ins, we've got open-air drug markets. They don't tell you about the biggest risk of going out in San Francisco, which is that a podcast might spontaneously break out. I introduced my wife to one of your friends and they were just like, "Sydney!"

So even when I go out, I cannot escape my AI past.

I'm Kevin Roose. I'm a tech columnist at The New York Times. And I'm Casey Newton from Platformer. And you're listening to Hard Fork. This week, we've got big AI news from Snap, Meta, and possibly Elon Musk. Then, New York Times reporter David Yaffe-Bellany is going to tell us whether crypto is dead or only mostly dead. And finally, we'll talk about the controversial new TikTok filter that may be making people dangerously hot. Oh, no.

Okay. So Casey, while we've been talking a bunch on this show about Bing and Microsoft and their AI experiments, there is a lot more happening in the world of AI. In fact, there's been a bunch of stuff happening that I think we should catch up on. Yeah. I would go so far as to say the AI news accelerated. Yeah, it sure did. We actually haven't played our Week in AI theme song in a long time because every show has been about AI, but I think we should play the AI theme song. Play the theme song.

So the first big AI news of the week is that Snapchat announced this week that it is introducing an AI chatbot named My AI into its product. And initially, it will be available to subscribers of Snapchat Plus, which is their $4-a-month subscription program. But the goal, Evan Spiegel, CEO of Snap, told The Verge, is to eventually make the bot available to all of Snapchat's users.

So I haven't been able to get access to this yet, but I've seen screenshots floating around. It is basically a version of ChatGPT that exists in your Snapchat feed. So, you know, you'd have your texts and snaps from your friends, and then you'd have this little like robot avatar thing with like kind of a

purple face and bluish-greenish hair called My AI. You open up a text with it, and you can just chat with it like you would with your friend. You can send it your nudes and say, hey, what do you think of these? Yeah, it'll send you nudes back, but it's just the inside of a server rack. So I haven't been able to use this chatbot yet, but I've been looking at other people's reactions to it, and it seems like they really have sort of

clamped down on the chatbot as far as what it will and won't talk about. A lot of people are saying this thing is boring, that it's giving the most obvious answers. I think this is on purpose. In their announcement for this product on Monday, Snap said that this is an experimental chatbot,

and that it's designed to avoid biased, incorrect, harmful, or misleading information. And this is, I should say- Which is more than I can say for most of the friends I talk with on Snapchat, but go on.

Exactly. So I haven't seen anyone saying that the Snapchat My AI bot has tried to break up their marriage yet. So that's a good thing. But it is interesting to me because this is sort of the first time that a social app has really built this into its core product like this.

I mean, I also like, I just think this is going to do such interesting things to the social world of teenagers. Like the first thing that I thought when I heard about this new Snapchat AI bot is like, I really, really hope that Snapchat did fine tuning in a way that makes it safe to unleash on a population of, let's face it, mostly teenagers and young people. Well, I mean, what does safety mean in that context to you?

I've been thinking about this a lot because on one level, these chatbots

are interesting because they will say interesting things. If a chatbot would just tell you the weather and what time it is and what the latest movies are, that's not- Then it's Alexa. Right. Then it's Alexa. Then no one's going to use it. So in some cases, I think that there is a direct correlation between how interesting these chatbots are allowed to be, how much crazy stuff or unnerving stuff or even disturbing stuff they're allowed to say, and how interested people will

be in them. But I really do worry. Social media is already such a minefield for teenagers who are self-conscious or depressed or lonely. And I just don't know what it looks like if you just all of a sudden show up with an AI chatbot that can be everything to everyone.

You know, at the same time, what if, you know, you're a kid and you're feeling lost and you go to your Snapchat AI and you just say, hey, like, what's my place in the world? And it says, well, actually, you're going to be a corporate litigator. You know, you could really save these kids a lot of existential doubt. Wait, that's supposed to make them happier, telling them that? Yeah.

Are the corporate litigators you know happy people? No, but it doesn't matter. You know, when you're 14, you think you're going to go, you know, start a nonprofit and save the world. Then by the time you get out of college, you've got $100,000 in student loans and you have to pay them off somehow. And so you become a corporate litigator. You know, why not save yourself the eight years of existential doubt when you could just sort of embrace your soulless future while you're 14? This somehow got even darker. No, I think it should...

I really hope that the companies that are doing this kind of integration are thinking really hard about not just, you know,

putting a hundred words in the do-not-say bucket for the chatbot, but in trying to steer these conversations in a positive direction. I can imagine myself as a teenager asking questions about identity and body image and friends and social life and school and stress.

And I just, I hope that these products are ready or can be made ready for that kind of interaction. Because let's face it, like they're going to be using it as a proxy friend. And for that reason, I think it's really, really important that the right safety work go into these products.

Okay, next story. Meta is another big tech company that is also throwing its hat into the ring when it comes to generative AI. Mark Zuckerberg announced on Monday that Meta now has a team that is dedicated to building tools powered by artificial intelligence, and that they are exploring putting these tools into their products like Messenger and WhatsApp. They're also experimenting with using AI for things like creative Instagram filters, as well as video and multimodal experiences.

So did this announcement surprise you by Mark Zuckerberg on Monday? I don't know that it was a huge surprise. We know that Meta has been working on AI tools for a long time, and it makes sense that they would want to create some kind of...

high-profile team within the company that was going to be working on that stuff in a very visible way. On the other hand, I think it's fair to say there's been a bit of flailing over there over the past year or so. They have announced a lot of things that kind of came and went. There was a newsletter effort. They were really interested in

audio for like a year, and podcasts and stuff. And then of course, last year, along with everyone else, they got interested in Web3, and they started building ways for people to like showcase their NFTs on Instagram, that sort of thing. So fast forward to March of 2023, and AI is the flavor of the month. And so now Meta has a big AI team, and I can smirk at that a little bit, but in the end, I think it was obviously the right thing for them to do.

Yeah, it feels a little bit like they're chasing a trend, but it's the same trend that everyone is chasing right now. And we should say, it's not as if this is Meta's first foray into AI. So for many years, they've had a very large, very well-funded AI research lab. And actually, last week, that lab introduced what they're calling LLaMA, which is a foundational large language model. It's very similar in sort of how it works to things like GPT-3 or LaMDA from Google.

And what's different about this model is that Meta actually made several different versions of it in different sizes. So basically, researchers who don't have access to a ton of computing power can still work with it. And in the research paper that Meta put out about this, they said that one version of LLaMA, quote, outperforms GPT-3 on most benchmarks, despite being 10 times smaller.

So, did you read about LLaMA? Do you have thoughts about it? Yeah, I mean, it's one of those where it's really difficult to assess how impressive it is because they won't let us try it, right? This is something that they are only allowing research groups to try out, and they have really framed it as

a tool for making AI more responsible in general. And, you know, that sounds great, but I'm not going to be the person who's using it to make AI safer. And so I guess I will just sit back and wait for researchers to tell me whether this was useful or not. Right. Interested to see what comes out of that. I also have some words for whoever at these companies is naming these models. We now have LLaMA, LaMDA,

ChatGPT, Bard. I mean, it's really, they need some help over there. What do you think would be a good name for an AI model? Hmm. I think sort of the best one out there right now is this company, Anthropic, which has a model, and they just call theirs Claude. Like C-L-A-U-D-E. Just like a Frenchman. Yeah, it sounds like a chipper butler. Yes. So I think Claude is easily the best of the names that we have so far.

Well, I can't wait until OpenAI merges its image generator model with Meta's new language model, and they call it Dalai Lama. Oh, boy! And that was our show. Okay. All right.

Here's where I think all of this gets interesting, Kevin. Let's stipulate that none of what I'm about to say is going to happen this year. But an ongoing problem that Meta has is it always wants more people creating more content in the hopes that you stumble across the thing that keeps you engaged for another two or three swipes of the thumb and maybe see an ad, maybe make a purchase. That's the model.

Right now, they are limited by what human beings can create. But if their AI gets really good, that won't be a limitation anymore. So imagine someday in the near future, you as you normally doing on an average weeknight are watching Instagram reels of people dancing and you see one that you really, really like.

And imagine you could say, show me more like this. And when you tap that button, instead of seeing a bunch of humans doing different but similar dances, the AI just creates videos of people doing kind of related dances. Like on the fly. On the fly, just generating video for you to watch based on what it already knows that you like. If Facebook can get there, then I think all of these investments wind up being really valuable to them.

them. But I guess my question about the social networks jamming AI generated content into their products is like, doesn't that kind of destroy the whole point of a social network, right? Like Facebook has had a real name policy for many years. Like it wants people to represent who they actually are. It wants the posts that

people create to represent what they actually are. If your cousin is posting photos of their baby, they want it to be actual photos of your cousin's baby. Isn't a social network that has AI-generated content all over it kind of antithetical to the whole point of a social network, which is keeping up with your family and friends? - Well, I think that that era has already started to end

for Facebook. Last year, they said, "We are going to start showing you way more suggested posts," which is to say stuff that you are not following, that their ranking algorithms are just guessing that you're gonna like seeing. This is an idea, of course, that they borrowed from TikTok, which by some measures is the biggest app in the world. So Facebook and Instagram took a look at what TikTok was doing and how much it was succeeding, and they said,

We're moving on into this new era where it's not going to be about your friends and family exclusively. They'll still be there. But the real point, as ever, is just to show you stuff that gets you to keep swiping. Well, and then what's to stop them from sort of augmenting the friends and family content with AI too? I mean, if you could... I wish they would. I mean, some of my cousins...

You've never read a more boring post. I'm just kidding. I love all of my cousins. Right. So you could have, you know, dynamically generated baby photos of, you know, non-existent babies. You know, this could take a really dark turn. But it is going to be interesting to see how they do this. You know, one word that did not appear anywhere in Mark Zuckerberg's post about AI? Metaverse. Metaverse.

The metaverse. Yeah. That word was nowhere to be found. So my question, Casey, is what the hell happened to the metaverse? Have they lost their interest in it? Are they no longer going to be spending tons of money trying to make it happen? Is generative AI just so hot right now that you can't

get enthusiasm for anything else? Like, why are they, they seem to be dropping this focus on the metaverse or at least downplaying it? Well, I think they came around to the same observation that a lot of other folks have had, which is that a true metaverse is just still a really long time away.

like more than five years, maybe 10 years. In order for it to be real, that hardware has to get a lot better. Tens of millions more people need to buy that hardware, and then the software inside it needs to get much better. I can remember one of the early episodes of this show as us talking about how their Horizon Worlds product was...

not good at all. And they were essentially going back to the drawing board with it because people would just try it and then bounce off of it immediately. So I think they're just realizing that while they still believe there is something there, it's just not going to materialize for a long time. And they're going to have to do something else in the meantime to convince their employees and investors that this is, you know, an important company right now. And, you know, we're not going to need to wait until the 2030s for it.

It just really feels to me like this company is flailing. Like, clearly, they're still making tons of money on ads. And, you know, I don't think they're anywhere near declaring bankruptcy or whatever. But it seems like they are just looking for anything that has sort of a pulse that they can kind of jump on and make their own version of. So I look at this and I see, okay, this is a company that doesn't have, you know, many new ideas of its own, that is looking for something out there that is resonating with people that they can

sort of co-opt and turn into their own. I'm a little more optimistic than you. If you look at their most recent quarterly earnings, I was surprised because they had managed to get their users up by like, you know, low single digits. It seems like their ad revenue is stabilizing. Another thing that they're doing with this AI is just essentially using it to guess things about you, even when they're not able to collect the data that they used to. And that seems to be helping them on the revenue front. Oh, that's interesting. So it could be like a workaround to some of the Apple

privacy stuff. Exactly. How does that work? They just build predictive models, but, you know, instead of trying to make it fall in love with you, they just try to say, you know, do you want this sweater? They just say, maybe you're unhappy in your marriage. Would you like a meeting with a divorce counselor? Is that...

Yeah, it's something like that. And then the last thing is that they're actually having success getting people to watch the short-form video, these Reels, not just on Instagram, by the way, but they're having success getting people to watch them on Facebook. And so that seems like it's meaningfully affecting TikTok's ability to grow. And of course, TikTok was just banned from government devices in Canada, and EU officials can no longer have it on their phones if they use those phones for work at all.

So I agree with you. It doesn't feel like they have the most focused or successful product strategy that they've ever had right now, but the company is still doing pretty good. I mean, one question I have for Facebook about their push into AI is how this is going to affect their ability

to moderate content. You know, one thing that happened last week was that there was this sci-fi magazine called Clarkesworld that actually had to shut down submissions because they were getting a massive spike of stories sent to them that just totally overwhelmed them.

The publisher, a guy named Neil Clark, he blames this on generative AI. He says that it seems very likely that of the submissions that are flooding into our inbox, a lot of them have been written by ChatGPT or other AI programs.

And he basically said that this had become a spam problem. And so it made me think, you know, these social networks that already have billions of people posting content every day, they're already struggling to moderate that amount of content. Now throw AI-generated content into the mix where you have people, you know, who can make 50 different versions of a video and post them all in hopes that one will go viral. And you just have just a massive scale problem.

So I don't know. How are you thinking about that? Well, I mean, I think it's going to be a continuation of the arms race that already exists, right? Like all of these giant world scale platforms are already dealing with hundreds of millions of posts a day. They already have to account for that. Now, if you sort of

add a 10x multiplier on top of that, then yeah, I'm sure the problem gets more difficult. And I bet we see a lot of weird things. But, you know, on balance, I think interesting stuff will still probably rise to the top. But, you know, if you are a like

sole proprietor, or like work on a very small team for a magazine, and you're used to being able to review the, I don't know, few dozen submissions that you're getting every week with relative ease. And then all of a sudden you're looking at like 10,000 stories, then yeah, that does become really difficult. And I think it speaks to the need for us to develop tools that let us know when something has been created with a ChatGPT-like tool or not.

I hope that when these platforms stick generative AI into their social networks, that they give you the option of kind of like the

the organic feed, like the actual human feed. Because like, you know, sometimes, yeah, I probably do want to be distracted with like 25 dynamically generated videos, but it would be a real bummer if that really crowded out your ability to see... I mean, it's already kind of hard to find your friends and family on these social networks because they've jammed so many videos and Reels and other things into them. I just hope they give us the option of the kind of human tab. Yeah.

Yeah, that feels like sort of the new, like, give me the chronological feed where it's like something a very vocal minority will like beg for for months, but then when it actually gets introduced, no one will actually use it. Yeah, because the AI cousins are going to be more interesting than the real cousins. Yes, exactly.

All right. Well, that is just a little slice of what happened this week. We haven't even talked about arguably the most important AI story of the week. Okay. One more big AI story this week, which is that Elon Musk has apparently been thinking about starting his own AI lab to develop an alternative to ChatGPT. So this was according to reporting by The Information.

They said that Elon Musk has approached AI researchers in recent weeks about forming a new research lab, and that he's been recruiting someone named Igor Babuschkin, a former DeepMind researcher who specializes in the kind of machine learning that powers ChatGPT and other large language models.

And this effort, according to The Information, is still in the early stages, and there are no concrete plans. But Elon Musk is thinking about this and talking with people in the field and maybe starting an OpenAI competitor. And it could be the first AI lab to run inside of a Hyperloop in an underground tunnel. And I think that's really exciting.

You know, Musk has been interested in AI for a long time. You know, he was one of the founders of OpenAI back in 2015. And he's been talking for years about how super intelligent AI could rise up and kill us all and how we should prevent that. And now it seems like he and some other folks in his orbit are very concerned about these models not expressing the right political beliefs.

You know, he's tweeting about based AI, and based is sort of like 4chan shorthand on the internet now for like the opposite of woke. And he really wants these chatbots, it sounds like, to be able to behave in

lots of ways that we might consider offensive or dangerous to have fewer guardrails around them. And that just feels to me like such a shrinking of ambition. Like you wanted to save the world from killer AI and you ended up like starting an AI lab to make sure that it can write poems about Donald Trump.

On one hand, it all seems very silly. Another way of saying that the AI is being trained to be woke is just saying the AI is being trained with some safeguards in place. There was a pretty significant backlash in the past when AI models were released and, for example, said a bunch of racist stuff. So if you're a for-profit business,

it makes sense that you would wanna create tools that were not gonna trigger that same sort of public backlash and potentially destroy the value of what you're building. Now, at the same time, it's clear that as AI takes over more and more products, people are going to want it to reflect their own beliefs and their own viewpoints.

And if they feel like they can't get the AI to talk like them or to answer questions the way that they wish it would, then I think they are going to create a marketplace for an alternative. Although we should say that, you know, Sam Altman, who runs OpenAI, has talked about over time wanting to make sure that the models...

can be adjusted to reflect a wide range of political viewpoints, right? They want to offer a very sort of politically generic model that people will be able to adjust to their liking. And, you know, that makes a fair amount of sense to me, right? I think there's like a pretty good argument that

at the point where this becomes just a program running on your computer, you should have really wide flexibility to get it to say what you want. One of the reasons why I'm comfortable with that is that all of the big social networks and places online where you might post whatever you're making with your political AI, they're still going to have their own rules. Even if you use an AI to write something really terrible, you're still going to be limited in where you can post it and how fast it can spread.

Right. I did see someone suggesting that Elon Musk's AI lab would have an advantage because he would be able to use Twitter data to train an AI chatbot, which just, I mean...

If you want an AI that behaves like a psychopath, you should train it on Twitter data. I cannot imagine a more toxic training ground to use for your new AI model. But if he wants to try it out, go for it. I do think we are going to see this kind of splintering of the AI research community along sort of,

ideological lines, right? I think you'll have your AI models that behave more like a Democrat, and your AI models that behave more like a Republican, and ones that behave more like a libertarian. And I think those may end up just coming from different communities and different companies. So it'll be interesting to see

whether there can be such a thing as a truly neutral AI model, which is I think what OpenAI and others are trying to build, or whether like a social network, you have to kind of put your foot down at some point. Of course you do. Of course you do. I mean, like, that's why I think this is going to be such an exhausting and tedious conversation because

at some point you have to decide whether you're going to let the model talk about certain subjects and in what ways it'll be allowed to talk about them. And there are just always going to be people working those refs, even though, you know, as humans, we still do retain the ability to write ourselves. And if the output of the model isn't exactly what you want, you can always just write a few sentences. But I don't...

expect that that argument is going to get very far. Totally. The other question I have about this is whether this means that he is getting bored of Twitter, whether

he is making plans to exit Twitter. I mean, I think that that would be wonderful, and I hope he is getting bored with it. On Wednesday, Twitter went down for like two hours. There was some reporting this week that the site is going down much more often than it used to, which of course we assume is connected to the fact that he keeps laying off hundreds of people, you know, every few weeks.

So, yeah, I think there is actually some pretty good evidence that Elon is starting to get bored. Or maybe he just wants all the tweets to be generated by AI. Maybe that's where this is all leading. Oh, no. Who knows? My tweets have been generated by AI for years. Is that right? Only the bad ones. The good ones I write myself. When we come back, we'll talk to Times reporter David Yaffe-Bellany about what is happening with FTX and the great crypto crackdown.

Welcome to the new era of PCs, supercharged by Snapdragon X Elite processors. Are you and your team overwhelmed by deadlines and deliverables? Copilot Plus PCs powered by Snapdragon will revolutionize your workflow. Experience best-in-class performance and efficiency with the new powerful NPU and two times the CPU cores, ensuring your team can not only do more but achieve more. Enjoy groundbreaking multi-day battery life, built-in AI for next-level experiences, and enterprise chip-to-cloud security.

Give your team the power of limitless potential with Snapdragon. To learn more, visit qualcomm.com/snapdragonhardfork.

Hello, this is Yewande Komolafe from New York Times Cooking, and I'm sitting on a blanket with Melissa Clark. And we're having a picnic using recipes that feature some of our favorite summer produce. Yewande, what'd you bring? So this is a cucumber agua fresca. It's made with fresh cucumbers, ginger, and lime.

How did you get it so green? I kept the cucumber skins on and pureed the entire thing. It's really easy to put together and it's something that you can do in advance. Oh, it is so refreshing. What'd you bring, Melissa?

Well, strawberries are extra delicious this time of year, so I brought my little strawberry almond cakes. Oh, yum. I roast the strawberries before I mix them into the batter. It helps condense the berries' juices and stops them from leaking all over and getting the crumb too soft. Mmm. You get little pockets of concentrated strawberry flavor. That tastes amazing. Oh, thanks. New York Times Cooking has so many easy recipes to fit your summer plans. Find them all at NYTCooking.com. I have sticky strawberry juice all over my fingers.

David! Hey guys, how's it going? Sorry I'm not in the studio today. You know, I had a bad cold last week, so something is going around. Clearly. They call it the hard fork curse. They're calling it the hard fork curse.

Well, I wasn't even in San Francisco last week, so clearly something's going around nationwide. Something is sweeping the nation. Exactly. And they're calling it AI fever. It's not crypto fever, I'll tell you that much. Crypto fever has passed!

You know, CZ was tweeting this morning about Binance's new AI product. It's some kind of like generative AI thing where you make yourself a profile picture and then they turn it into an NFT. Automatically evades money laundering regulations. It's just a bot that says, are you sure you wouldn't like to buy a few more Bitcoin there, David? Come on. Yes. All right.

David Yaffe-Bellany, welcome back to Hard Fork. Thanks for having me. So the last time we had you on the show for one of your patented DYB FAQs on FTX, you were telling us about Sam Bankman-Fried, who had just been arrested and was sitting in a jail somewhere in the Bahamas. Since then, Sam Bankman-Fried has been returned to the U.S., where he faces 12 counts of fraud and conspiracy charges in conjunction with the collapse of FTX.

So we've learned a little bit more through these court proceedings about FTX and what was going on. Can you catch us up on what prosecutors are now saying about Sam Bankman-Fried and FTX? Yeah, so a lot's happened since we last spoke. He's not in jail, but he's under house arrest at his parents' house in Palo Alto.

And then last week, prosecutors unveiled a new sort of revised indictment against him that added some new charges onto the ones that they had already filed. They added a bank fraud charge onto there. They added a money transmitting charge.

And they also revealed a lot of new details about the campaign finance part of the original case. So initially he was charged with campaign finance violations, but it wasn't totally clear the sort of specifics of what he allegedly did. And so a lot of that was kind of clarified and sort of expanded upon in this kind of new indictment. So now all of a sudden it's illegal to try to buy off a politician? I thought this was America. Come on. Yeah.

So, yeah, I mean, of course, there's all sorts of legal chicanery that you can do in the political process to sort of buy power and influence. But what you're really not supposed to do is funnel campaign money through other people. You're not supposed to, you know, give a million dollars to your friend, tell them who to donate to and then have them donate in your name. That's what's called a straw donation. And that is the crux of what

FTX and SBF are accused of doing. You know, not only was the money that was going to these campaigns customer money that had been deposited in the exchange, but, you know, it was basically kind of Sam pulling the strings and donations would be made in the names of

other executives, and in particular, two executives who are mentioned in this revised indictment, one who is donating a lot to Republicans and the other who is donating a lot to Democrats. You know, I think we saw one thing from SBF last year is that this was not a person who was afraid of putting his name on donations. So what is the thinking about why he would bring in these surrogates to put their names on these donations?

Particularly on the Republican front, the idea is that you can kind of play both sides in a kind of tricky way. You know, you can sort of influence the political process without having to deal with like the potential PR consequences of like donating to a lot of Republican politicians who get criticized in liberal media and that sort of thing. Also, you know, like if somebody

doesn't like SBF or doesn't want to be associated with him for whatever reason, then, you know, theoretically, you could still use your money to kind of influence the process, but, you know, strip yourself out of it. You also have reported this week that another FTX executive had pleaded guilty in this case. So who was that FTX executive and what was his role in all of this?

So the executive who pleaded guilty is a guy named Nishad Singh, who was the director of engineering at FTX and also one of the kind of original founders of the exchange. And as you may remember, shortly after SBF was arrested and charged,

two of his other kind of top lieutenants, Gary Wang and Caroline Ellison, pleaded guilty. And, you know, Nishad was kind of the fourth member of the inner circle, basically. Unlike Caroline and Gary, Nishad was also involved in the campaign finance part

of the charges. He was the guy who was making donations to Democrats that prosecutors now say were essentially SBF donations, and he was sort of used as the kind of straw donor in that scheme. But really, the charges against Nishad, you know, and his guilty plea kind of strengthened the campaign finance part of the case for the prosecution, because now they have somebody who

was a straw donor saying, I was a straw donor and I was basically acting on Sam's behalf. And so that's a powerful bit of testimony to have. Now, at this point, SBF has pleaded not guilty to all of the charges. Is that right? Yes, he's pleaded not guilty. A trial date has been set

for fall of this year. I think it's pretty unlikely that the trial will actually happen around then because these things take a long time to prepare for, but that is the kind of current state of affairs. But at this point, his top lieutenants have pled guilty and said, oh yeah, we were definitely doing a lot of crimes.

Yeah, it's not looking good for him. Not only have they pled guilty, they've agreed to full cooperation. And so they're going to get up on the witness stand at this trial and say, I committed crimes with Sam Bankman-Fried. And that's not going to look good to a jury. So in addition to the campaign finance thing, you've reported that as part of the bankruptcy process, a lot of the money that SBF had given away or invested was being clawed back in an attempt to kind of make

FTX investors and customers whole or as whole as possible. So what can you tell us about the status of that process of trying to like dig up the money that was dispersed from SBF and from FTX and give it back to the customers?

So there's some unknowns here still. We still don't know the exact amount of money that is missing, the exact size of the hole. But the rough estimate is that it's about $8 billion. And of that $8 billion, the new executive team that has taken over FTX and the lawyers working for them, that team has managed to recover about...

$5.5 billion. Which sounds like a lot. It's a pretty good return, honestly, after only a few months of work. But there are some sort of mitigating factors. One is that, you know, a lot of this is kind of like the low-hanging fruit, right, that they're able to recover really quickly. It's not likely that, you know, the next

$2.5 billion or however much will materialize super quickly. And then you kind of break down what makes up that $5.5 billion. Yes, there's some cash, there are some kind of traditional securities that are maybe easily convertible into cash, but then there are a bunch of cryptocurrencies.

And the value of some of that crypto is pretty uncertain. And that $5.5 billion includes a lot of FTT, which was the in-house FTX token that you may remember as one of the kind of prime drivers of this whole fiasco. And so, you know, customers aren't going to be super happy if...

you know, they're paid back in FTT, a now virtually worthless token. Right. You lost a million dollars when FTX collapsed, but here's a handful of magic beans. Exactly. Exactly. So $5.5 billion maybe really isn't actually $5.5 billion, and it's definitely not $8 billion. But, you know, how do you get to $8 billion? At this point, the bankruptcy team is kind of going after the money that SBF distributed to various places. So, you know, he invested hundreds of millions of dollars in other startups.

So you can ask those startups to return the money in kind of a friendly way, or you can sue them and try to kind of claw it back more aggressively. You know, the bankruptcy team is also reaching out to all of the PACs and political campaigns and politicians who got money from SBF and trying to get it back. But in a lot of these cases, when it was like, you know, basically like VC money being pumped into other startups or political donations, like the funds just aren't there anymore. Like they've been spent. Right.

And so it's not totally clear how much of that will be recoverable. And, you know, even the funds that are recoverable could take years to get back. Well, the next time you talk to the bankruptcy team, I hope you'll tell them my advice, which is to start a generative AI startup. They'll raise $8 billion in no time.

So things are not looking good for SBF, but there's so much else happening in the rest of the crypto space, in particular with crypto regulation. You've been reporting on how regulators in Washington are cracking down on crypto. You described it as a flurry of actions. And this has been something that crypto advocates have been fearing for a long time, was that the government would basically wake up and decide to go after them.

And they pinned a lot of their frustrations, at least initially, on Gary Gensler, the chair of the SEC, who actually said in an interview with New York Magazine last week that he thinks that basically every cryptocurrency except Bitcoin should be considered a security. And I immediately, when this came out, saw tons of crypto people just losing their minds, very upset about this comment.

Why are they so upset about this? What would it mean for the crypto industry if every cryptocurrency except Bitcoin was considered a security? Yeah, before I answer that, I mean, it's also just worth reflecting on like how much crypto people hate Gary Gensler. It's like really kind of remarkable. I mean, you know, there's always tension between industry and regulators. But like, you know, I have talked like on the record to crypto executives who describe him as a sociopath.

Like, days after Kraken settled with the SEC, Kraken's founder, Jesse Powell, went on Twitter and, like, posted masturbation-themed memes about Gensler that he eventually deleted. Like, the level of toxicity is kind of incredible. And I've asked Gensler about it and he's basically just like, oh, yeah, yeah, I'm doing my job sort of thing.

But anyway, to actually answer your question. Yeah, I mean, Gensler's central claim for the couple of years that he's been the SEC chair is that the vast majority of cryptocurrencies are securities, akin to, you know, shares traded on the stock market and that sort of thing.

And that's significant because there are a whole bunch of regulatory requirements that come with something being a security. You know, Gensler wants to kind of extend all those requirements to the crypto industry. So, you know, if you started your kind of random coin, you'd actually have to explain what the idea behind it was and go into more detail about the technology and that sort of thing, so that people would know what they were getting into.

The crypto industry is very resistant to that. They have all sorts of legal arguments about why cryptocurrencies don't actually meet the standard for a security. But also, you know, it would be incredibly expensive for the industry to suddenly have to, like, get all these licenses and meet all these disclosure requirements associated with securities. And so that's kind of the crux of the fight here.

You know, Gensler has long kind of acknowledged that Bitcoin is not a security. Why? What is different about Bitcoin that in his eyes makes it not a security? In essence, it's that Bitcoin is sufficiently decentralized that it's not a security. There's no central group of people that is in charge of Bitcoin, that issues Bitcoin, whose, you know, business plan will determine whether Bitcoin is a success.

That's sort of what it comes down to. So my understanding, just from talking with folks in the crypto industry, is that there's a belief that if every crypto instrument except for Bitcoin were sort of declared or treated as a security, that it would basically destroy everything that the crypto industry

has spent the last decade building. You know, all NFTs would be considered securities. You know, stablecoins, these sort of crypto coins that are supposed to behave like government-issued currency and be pegged to the value of government-issued currency, those would be considered securities. The tax implications, the regulatory implications, that it would basically just destroy the entire thing. So were they

exaggerating about that? Or is there a realistic fear that if these tokens are treated as securities, the whole industry could collapse? You know, it's a complicated question, and a lot depends on, like, how the crypto industry would respond to that state of affairs.

A lot of times, like, industry will say, oh, if you make this sort of, you know, designation or institute this rule, like, our whole industry will collapse. But then when the rule actually gets instituted, they adapt to it, they figure out ways to respond, and they're able to kind of maintain the basic technological breakthrough. I mean, Gensler would argue, like,

yeah, too bad. Like, okay, you're distributing something that meets the legal standard of a security. Like the argument that like, oh, this is really fun and great. And so we shouldn't actually have to play by the rules doesn't hold much water. That's what he would say. And it's also certainly the case that the crypto industry has done a pretty good job kind of destroying itself

over the last year without any of those rules existing. And maybe even a lot of the problems that we've seen over the course of 2022 could have been prevented if some of these kind of basic protections were in place. But, you know, it's an incredibly contentious issue in crypto land. And, you know, the industry boosters would rather have

their cryptocurrencies kind of categorized as commodities, which come with a kind of lighter touch regulatory regime. I'm just wondering if these venture firms that raise billions of dollars for crypto specific investments could have ever raised that much money in a world where

cryptocurrencies were considered these kinds of securities. And to the extent that you thought that crypto was going to be the foundation of a new internet, this to me feels like the moment where maybe we say like, well, actually, just no, like it's not going to be a new internet anymore. Like the amount of surface area to build on just got way, way smaller. And so it seems possible that this might be one of the more significant developments in the history of the industry right now that we're talking about.

Yeah, it's incredibly significant and it's going to get heavily litigated, right?

There is one case that's pending over the cryptocurrency Ripple, which the SEC claimed was a security a couple of years ago. Ripple fought back. And we're sort of waiting for the judge's ruling on that. And that's likely to come relatively soon. And so that'll be a kind of landmark legal decision in this debate over the classification of cryptocurrencies. Another thing you hear a lot, or at least that I hear a lot from people in the crypto industry, is that

If regulators crack down, as they now appear to be doing in the U.S., that these companies will just move offshore and that there'll be, you know, it won't actually have the effect of changing what crypto is. It'll just change where it happens. And the U.S. will lose out on all of this growth and, you know, these companies and these jobs. So my question for you is, like, do you see any evidence that that is true? Are companies moving offshore to get away from this new crackdown? Or was that always just kind of a scare tactic?

Yeah, I mean, I think it's probably too soon to judge whether that's, like, happening in a new way now as a result of this recent crackdown. But it is definitely the case that the biggest crypto company in the world at this point, Binance, has always been kind of offshore. And so, yeah, there are definitely crypto people who are arguing that the growth and innovation that might be unleashed if you allowed that sort of trading in the U.S. is now flowing to other countries, right?

It's definitely the case that the SEC sees its job as protecting investors. Like, it's not a concern for the SEC, like, whether, you know, the economic benefits of crypto flow to the U.S. or to another country. Also, like, the economic benefits of crypto, you know, like, I sort of want to put that whole thing in air quotes just because I feel like there have not been a lot of economic benefits

of crypto for many of the people who've been using it. Listen, the benefits to working-class Americans of having robust Bored Ape factories within our borders, I mean, those jobs are not going to create themselves. So I think this is an urgent priority. Yeah. Yeah. I mean, the crypto industry hasn't actually really demonstrated that actually there are strong benefits. So that's a big issue.

Or also like the regulatory piece seems to have a couple dimensions to me. Like,

along which it could be harmful to the industry. So one is just, it makes the investment case for crypto projects much less strong. Because if you could be sued out of existence or have your executives charged with fraud or selling unregistered securities, what VC wants to be investing in that? There's also sort of a psychological hit, I think. A lot of the people that I talk to in crypto

came from traditional finance because they wanted to be able to play around in this new wild west where you could do basically whatever you wanted, where there was no compliance person or lawyer looking over your shoulder while you did things. Just sort of the fun of being able to experiment in a totally new and untested and ungoverned industry. And a lot of that goes away if you have...

compliance people who are standing over your shoulder saying like, did you comply with this? Did you comply with that? It just makes the whole thing a lot less fun. Yeah, it's fun to play Grand Theft Auto because there are laws in that game that you can break without any consequence and you can kill people and you can fly a helicopter into a mountain and still live the next day. But at the end of the day, you are playing a video game.

Right. Yeah. And that's going to come to an end. It's also just like kind of, you know, the central like philosophical idea behind the origins of crypto is that, you know, it was a corrective to the mainstream financial system and it showed all the flaws in the kind of existing setup and that sort of thing. And now if it's just governed by the same regulatory agencies and kind of operates in the same type of way, like what's the point at all, really? You know, what does it do that's different? Well, I think the point was the friends we made along the way. And...

I hope. And the apes we bought. Hold tight to those books. Yeah. Well, David Yaffe-Bellany, as always, great to talk to you. Thank you for catching us up. Thank you, David. Yeah. Thanks for having me. When we come back, we reverse the ravages of time. If I could turn back time. Oh, I would love to get Cher on the show. I would die.

Bye.

Indeed believes that better work begins with better hiring, and better hiring begins with finding candidates with the right skills. But if you're like most hiring managers, those skills are harder to find than you thought. Using AI and its matching technology, Indeed is helping employers hire faster and more confidently. By featuring job seeker skills, employers can use Indeed's AI matching technology to pinpoint candidates perfect for the role. That leaves hiring managers more time to focus on what's really important, connecting with candidates at a human level.

Learn more at indeed.com slash hire.

Christine, have you ever bought something and thought, wow, this product actually made my life better? Totally. And usually I find those products through Wirecutter. Yeah, but you work here. We both do. We're the hosts of The Wirecutter Show from The New York Times. It's our job to research, test, and vet products and then recommend our favorites. We'll talk to members of our team of 140 journalists to bring you the very best product recommendations in every category that will actually make your life better. The Wirecutter Show, available wherever you get podcasts.

Casey, there is a big controversy brewing this week in the world of TikTok filters. Oh my. TikTok has these filters that you can apply to your videos.

And two of them are really causing a stir. The first is called Bold Glamour, and it is a feature that is ultra-realistic and that is coming under fire, according to Vice, for being too good. Vice says,

it has some users freaking out that it conveys unrealistic beauty standards without viewers realizing that the look comes from software. Now, bold glamour is, of course, one of the core values of the Hard Fork podcast, but we have never actually used this filter ourselves. Should we do it? Let's try it. Okay.

Okay, so let's open up TikTok here. So now can I just search bold glamour? Yeah, I think you have to go into effects and search bold glamour, and glamour is spelled the British way with a U. All right, I'm using this effect. Wow.

Unfortunately, I do actually look incredibly hot in this filter. Casey is winking at his TikTok. Okay. Hey, everybody. What would you say are the biggest differences? Well, one, it's really sort of like evened out my skin tone. So I have a sort of default like pasty pink complexion. And this has made it very kind of tan, I would say.

My cheekbones have really been made more prominent. And I think my jawline is extra defined. I think it also made my eyes bigger and maybe my teeth whiter. But I'll go ahead and just sort of show you hot me. Oh, yeah. You look...

You look great. Yeah. Here, I'll record a little bit of me in the bold glamour filter. Please do. And I'll show you what that looks like. Oh, it looks super handsome. It gave me like a little chisel on the jaw. Big eyes. It like made my eyebrows bigger for some reason. It did. Yeah.

And I think it made your eyebrow bones more prominent. Yes. I have a little bit of Promethean brow going on. You have that classic Prometheus brow. The look that all the Zoomers are going for. So users are very excited about this filter, but also a little spooked.

One user said, as someone who experienced body dysmorphia growing up, this makes me sick to my stomach. It's sickening for our youth. And this user said that if they had had it when they were younger, it would have, quote, emotionally destroyed me.

So Casey, why do you think this has struck a nerve with people? One thing I think we should say about these filters is that your reaction to them will probably be quite different depending on your gender, right? I think if you are a woman, you're under an enormous amount of pressure to conform to certain standards. And I can imagine opening up TikTok and all of the sort of negative feelings that that might bring up for you, right? You're thinking, oh, yeah,

here we go, you know, here is one of the most powerful companies in the world that is sort of reinforcing the idea that we should sort of all have these giant eyes and super prominent cheekbones. And that would probably be a really upsetting thing, or at least it could be. And there also is, like, a pretty robust

set of studies at this point that show that these sort of augmented photos, at least, have a direct impact on the body image of adolescent girls in particular. So there was a study a few years ago where they showed manipulated Instagram selfies to girls

between the ages of 14 and 18. And the researchers found that exposure to these manipulated photos, these filtered, retouched photos, led directly to lower body image, and that girls with higher social comparison tendencies were especially

negatively affected by seeing these manipulated photos. So I do think there's pretty good evidence at this point that the retouching and filtering of at least photos on social media has created worse mental health outcomes. Right. Although at the same time, if you grew up in the 90s, you were also bombarded with completely unrealistic body images. It just feels like this is something that kind of recurs in every generation in its own way.

It is true that there have been manipulated images of people designed to make them look prettier and more conventionally attractive for forever in every medium. But I do think that there is a difference between retouching your selfies and having kind of this dynamic and always-on thing.

I mean, I imagine that right now stuff like this is built into TikTok, but I imagine that at some point it could just be built into the camera on your iPhone, right? I mean, iPhone cameras already have AI image retouching on them. You can apply filters right in the camera itself, and maybe that's going to be popular enough. People will like the way they look so much more with these filters applied that they will just want it to be sort of the base layer of their camera.

And that could lead to a really interesting world. The other thing that I'm just thinking about looking at all of this is drag, which has really just completely taken over queer culture over the past several years, thanks to RuPaul's Drag Race. And I think so many of my gay friends are just fascinated to use this stuff to sort of have like the easiest way imaginable of just picturing themselves as a drag queen. And I think there's just a lot

of fun and self-expression in trying on those different looks. And so I don't know, like the more that I talk about this, the more I think I'm generally positive about people having access to these tools to just like play around with their identity expression. Speaking of identity and playing around with it, another TikTok filter is getting a lot of attention this week

which is the teenage look filter. Have you used this one? Actually, I did use this one because when I opened up TikTok the other day, I saw somebody using this filter and of course, immediately said, well, I need to try that for myself. So the idea is, I haven't used it. Basically, it shows you what you would look like

As a teenager, what you did look like as a teenager? The gimmick is it shows you both your current self and what you would look like, you know, as a teenager and sort of stacks those two views on top of each other. So you get to see your teenage self and then your horrifying adult self in the same frame. And people are having very strong reactions to looking at the difference. Oh, no. Okay, I have to do this. So I have teenage look and...

Wow. That's very strange. It turned me into... Here, I'll just show you. So this is my... Aww. It gave me very soft-looking skin and...

And, like, it just basically made me look very unblemished and smooth-skinned. Which I appreciate because as a teenager, I did not have smooth skin. I had acne. I know. It makes you look like kind of the Neutrogena ad version of yourself, you know? After Accutane. So I did not look like that as a teenager. But people were getting really emotional about this. Like, I saw some videos of people who...

saw their teenage self through this filter and just started crying. Yeah. Because, you know, they remembered being a teen, or it brought that back to them in a really interesting way, or it looked like their kid or something like that, where I think it's really powerful. It is. And when I was watching some of the videos, you know, it seemed like it was triggering in people

all of these questions about what their lives had been like. And maybe they happened to be in a spot that is not as far along as they hoped they would be when they were a teenager. I think there's something about being confronted with your teenage self that makes you recall all of the dreams you had for yourself at that age. And if you haven't achieved them, I think there is something very emotional. Or maybe you just had a really hard time in high school and you're confronted with that version of yourself and

all of a sudden, all of those feelings come up again. You know, that makes a lot of sense to me. And yet I never would have predicted that it was going to be a TikTok filter that got people there. Totally, totally. I mean, I love doing this kind of digital archaeology on myself. Like I...

A little while ago, I found my high school LiveJournal account that I had sort of forgotten about. And we'll include a link to that in the show notes. We will absolutely never. That is being deleted from the internet as we speak. I'm going to go immediately after this taping and make sure no listener can ever find it. I did have a very moody teenage LiveJournal. And it was fascinating to sort of see the world as I saw it at 16 or 17.

And I really... I think one of the things that I like most about the internet is that it does sort of collapse time in this way where, you know, now we've got all these apps that'll show you, you know, your photos from this date 10 years ago or your memories from this part of your life. But I worry a little bit

that preserving our digital selves is going to be harder than just throwing everything into a scrapbook because, you know, this LiveJournal, since I had it, has been, like, sold to a Russian company. And so now, like, it's basically impossible to use. And

it could go offline at any moment. And so like all these pieces of our digital pasts, I worry about how easy it's going to be to preserve them. But maybe if we can't preserve them, we can just recreate them using the TikTok filters of 2027. You know, part

of me is interested that these filters keep going viral. Before TikTok had a lot of success with this, Snapchat used to release, it felt like, one or two of these filters every year that would sort of be totally captivating. And at this point, you'd sort of think that, like, we had seen every iteration of how young or old one of these filters can make you, and how much like a girl and how much like a boy it can make you look, right? But clearly there is something left in there to be discovered. Yeah. Yeah. I remember the

one that made people look old on Snapchat. That was a big deal. Well, I mean, for me, the reason it was a big deal was that I used it and it just looked exactly like my dad, which made me feel like it was probably quite accurate. Yeah, I used it and then immediately started investing in skincare products. I can't go there. All right. We'll be right back.

BP added more than $130 billion to the U.S. economy over the past two years by making investments from coast to coast. Investments like acquiring America's largest biogas producer, Arkea Energy, and starting up new infrastructure in the Gulf of Mexico. It's and, not or. See what doing both means for energy nationwide at bp.com slash investing in America.

Before we go, so this week we talked about the man whose sci-fi journal has been overrun by AI submissions, and we want to hear more stories like that. How is AI showing up in your everyday life, at your job, at your kid's school? What are you using it for? Send us a voice memo and just email it to us at hardfork at nytimes.com. We want to hear your story. And we may turn these into an upcoming episode. We are interested in telling stories not just about

the companies that are creating AI, but in how it's changing your life for the better or the worse. So let us know. Is it my week on the credits? It's my week because you took my week last week. Sorry. It's okay. No, I asked you to. You were sick.

Hard Fork is produced by Davis Land. We're edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today's show was engineered by Alyssa Moxley. Original music by Dan Powell, Elisheba Ittoop, Marion Lozano, Sophia Lanman, and Rowan Niemisto.

Special thanks to Paula Szuchman, Pui-Wing Tam, Nell Gallogly, Kate LoPresti, and Jeffrey Miranda. That's all for next week. That's all for next week. That's all for next week. I'm getting a little ahead of myself. Did the illness do something to your brain? I am slowly falling apart, Kevin. There's nothing slow about it.

One Key Cards earn 3% in OneKeyCash for travel at grocery stores, restaurants, and gas stations. So the more you spend on groceries, dining, and gas, the sooner you can use OneKeyCash toward your next trip on Expedia, Hotels.com, and Vrbo. And get away from...

groceries, dining, and gas. And Platinum members earn up to 9% on travel when booking VIP Access properties on Expedia and Hotels.com. OneKeyCash is not redeemable for cash. Terms apply. Learn more at Expedia.com slash onekeycards.