
A Congressman Goes to A.I. School + How to Ban TikTok

Publish Date: 2023/3/10

Hard Fork


This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks. Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us.ai.

Let me tell you a little something about the social networks in this country. They are in a shambles. This week, Twitter went down. We reported that it was essentially because they'd assigned one very important project to a single engineer. Something broke, and so no one could load links or images on the site for a few hours.

I thought, well, this would obviously be something very fun to talk about on Mastodon, which is what I'm using instead of Twitter. And I went to Mastodon. And do you know what was happening on Mastodon? What was happening? They were undergoing a massive DDoS attack. And so I had nothing to do. And so I thought, you know, this is the moment and I'm going to get on TikTok and I'm going to make a dang TikTok. Wow.

Yeah. What was your TikTok? Well, it was just me saying, please, somebody fix the social networks in this country. We used to have proper social networks. We used to just be able to go on and post things, and that has been taken away from us. Wow. You were like an old-timey politician delivering your stump speech about how the bridges and roads are all falling apart. That's right. Fix the dang infrastructure.

I'm Kevin Roose. I'm a tech columnist at the New York Times. I'm Casey Newton from Platformer. And you're listening to Hard Fork. This week, we're going to talk to the congressman who's going back to school to learn about AI. Then, the great New York Times reporter David McCabe will join us to explain how a national TikTok ban would actually work. And finally, how Waluigi may explain why AI chatbots are misbehaving.

Before we get started, I want to tell you about something very exciting that is happening in our world this weekend, which is that we are going to be in Austin, Texas for South by Southwest doing our first ever live hard drive.

Hard Fork. Hard Fork Live. Hard Fork Live. Is what they're calling it. Yes. We're going on the road. We have our roadies. Do we have roadies? We have roadies and they've packed up all the gear and they put it onto the trucks. Do we have a rider? Like, is there any... Oh, yes. Only green M&Ms. Only green M&Ms in the green room. So we will be appearing on stage on March 11th, Saturday at 11:30 a.m. at the Hilton downtown in Austin. It's going to be a real...

of podcasting, if I may. Of all the words you could have chosen. Yeah.

So we are going to be interviewing a very special guest, who is Jonathan Kanter, who is the antitrust head of the Department of Justice. This is like a very serious person. Yes, it's a big get for the Hard Fork podcast. Very big get for it. You know, we've talked to serious people before, but this man has the power of the state behind him. Yes, so he has been leading for the last several years the antitrust push to sort of bring some of these big tech companies that we talk about

to heel. Yeah, he's like famously an antagonist of Google and when he was appointed to lead the antitrust division, Google did everything in their power to say, no, no, no, no, no, you can't hire him. He knows too much about us. But they lost that fight. So we are going to be interviewing Jonathan Kanter and we're also, we have some special treats up our sleeve. Yeah.

Do treats go up the sleeve? Treat, yes. Always put your treat up your sleeve. But yeah, we have some fun things. So it's going to be like a normal episode of the show. And then, you know, if you can't be in Austin, don't worry. Like the podcast will be in the feed. But if you were in Austin and you miss this...

you might regret it for the rest of your life. Yeah, what are you doing? Yeah. Yeah. Because this is the last time we're going to fit in a room this small, people. So get on board now. Yeah, you could be watching, you know, U2 play a little club in Dublin. Yes, exactly. Or you could, you know, pay $700 for Madison Square Garden tickets three years from now. That's right. Your choice. Check it out. Yeah, we're excited to meet some people. If you...

and you're there, come say hi. You know, have a taco with us. We can't commit to having tacos with everyone who wants to have one with us. We will eat one taco for every listener who approaches us. All right, Kevin. Should we get started with this week's show? Yes. So, okay, Casey, we have talked about AI on this show once or twice. Once or twice, yeah. And one of the things we've talked about is that...

the sort of, and one of the things, what is Congress? Well, um, we have what's called a bicameral legislature. Okay. So,

So one of the things we've talked about is how to regulate AI and whether Congress is actually capable of regulating AI, starting with the fact that they seem to have no idea what they're even trying to get their arms around. What I would say is I don't think we've talked that much about how to regulate AI because we are so early into it that...

like figuring out what is there to be regulated. It's very much an open question. So this isn't like social networks where it's like, well, here's five obvious problems that would be great if we could like fix. It's like, well, here's a brand new technology that we're probably gonna have to do something about sometime. Right. And this is sort of a similar pattern to what's happened with other kinds of tech regulation in the past, which is that something new is invented or sweeps through the tech industry and

And millions of people are using it. And all of a sudden, lawmakers are just trying to figure out, like, what is happening? What do we do about it? And what are the problems that we're trying to solve? The last time that happened was with Wordle. And finally, it seems like it's gotten under control. Yeah, you do now need a license in 37 states to play Wordle, but.

There was actually a story in The Times last week by my colleagues Cecilia Kang and Adam Satariano about this very subject, which is a very good story. And it was basically about how members of Congress are starting to get very freaked out about all this new AI technology, but still don't really know what it is or how it works or what to do about it. So Congress people, they're just like us. Right. So in that story, there was a mention of one particular congressman who I really wanted to hear from.

He's got a really interesting story. His name is Representative Don Beyer. He's a Democrat from the 8th Congressional District in Virginia, which includes cities like Arlington and Alexandria, sort of in the D.C. area. And he's not a tech guy by training. In fact, before he ran for office, he was sort of famous in Virginia for owning a bunch of car dealerships. And the deals that you could get at these dealerships. I mean, that's where you wanted to be.

But Congressman Beyer has gotten very interested in AI recently. He's part of this thing called the Congressional AI Caucus, which is sort of a group of lawmakers who meet on occasion to talk about this stuff. And at the age of 72, he actually has gone back to school to get his master's degree in machine learning. Which is the exact plot of Billy Madison. Yeah.

Yes. So this week, we've got Representative Beyer here to talk to us about why he's going back to school to learn about AI and what he thinks it's going to take to get the rest of Congress up to speed on this new technology.

Hello, Kevin. Hello, Congressman Beyer. How are you? I'm good, thank you. You look comfortable. Yeah, we dressed up for you. This is my formal hoodie. Yeah, that's good. Hey, Casey. Hi, nice to see you, Congressman. Nice to see you, too. Where are we catching you today?

Actually, in Congress. This is my official office. Right after we talk, we have the first set of votes today. That's exciting. Well, we'll be efficient. I want to know the story of how you decided to go back to school to pursue a master's degree in artificial intelligence. What made you so interested in this topic? You know, I was always, while I'm not a tech person,

You know, I was always an amateur physicist and loved reading everything I could about it and lots of puzzles and the like. And so the idea of artificial intelligence was very attractive. So three or four years ago, I found a Coursera course on artificial intelligence from Stanford. So I thought, this is cool. I'll do it at home. And I get to the third week.

and I needed linear algebra and I didn't know any and I couldn't answer a single question. And so I gave up. But then 15 months ago, I was touring the local George Mason University new innovation lab and it looked really cool. And I said, well, can I take courses here? And they looked at me funny and said, I guess. Now, what do you hope like you know at the end of your studies about AI? Like what's the goal for you?

On the short run, I've learned a whole lot more math than I ever did before. And I'm doing discrete mathematics right now, which is great fun. You know, I can spell set theory, but I never knew what it was. I think we have different versions of fun, but I'm glad you're having a good time. Yeah. But I'm really looking forward to perhaps two years from now, because I'm only taking one course per semester because I have this other job. I'm on my fourth course, and I'm hoping that two years from now, I'll have a pretty good handle on how machine learning works.

And if all goes well, I keep thinking about, well, if I wanted to get a PhD, I'm going to need to do a research project using artificial intelligence. What do I want to study? What are people not looking at yet? Well, so obviously AI has been in the news a lot over the past three or four months. And finally, average people are able to play with some of these tools. Have you yourself been using tools like the new Bing search or like making art with DALL-E or anything like that?

I haven't. I've actually gone on ChatGPT two or three times just to test it. My wife asked ChatGPT about me and it gave a whole bunch of made up stuff. Not true. And they left out all the good stuff. But I haven't. And frankly, the large language stuff, which is getting all the attention, is not what really drew me to it in the first place. For me, it's been much more about this extraordinary abundance of data and

where we can't see the connections within the data. It's more than the human mind can see. I met with some cancer research people yesterday. They're going to need artificial intelligence tools to sort of sort out what's a higher risk factor, what's not, and what works to get well. I think the opportunity for science progress is enormous.

As you've started researching and studying AI more, I'm curious to know if there's anything that you're learning that has made you more concerned about where AI is heading and maybe anything you've learned that has made you think, eh, maybe this specific worry is not as scary as people think. Nothing has made me more concerned. In fact, I've become quite skeptical of the singularity

I know there's a big, big division on that, but the math is just so complicated. The idea that an emergent property would be consciousness anytime in our lifetime seems pretty naive and premature. One of the federal labs spent some years building this huge, huge, huge computer. And the director of the lab told me that your smallest dog still had more consciousness than that computer. Right.

He could still come running when the refrigerator door opened. Yeah. So the Times ran a story the other day, which you were included in, about this new topic of AI that has swept through Congress and Washington and how members of Congress are starting to get concerned about AI, but they don't necessarily understand the finer points. So on a scale of one to 10, where one is like a person who watched The Terminator once,

And 10 is an AI researcher with a PhD in machine learning. Where would you say the average member of Congress is right now? Oh, maybe a three. And that might be generous. But I don't think all of them have seen all the Terminator movies yet. Although I have, I confess. Some of them more than once. Yeah.

I love Arnold. The AI caucus, which is the voluntary group that comes together, has been pretty small the last couple of years. And mostly we talk about existential risk and the singularity. But now ChatGPT and Bing have just exploded. I checked this morning. We're up to 40 members in the AI caucus. Wow.

And I wouldn't be surprised that once we started holding more meetings, it wouldn't be 80 or 100. Wow. And tell us what this caucus does. Is the idea to start thinking about ideas around legislation, or is it more just to kind of review the news? And what do y'all do? Both. Although the last meeting we had, Jack, who used to be head of policy at OpenAI. Jack Clark. Jack Clark. So Jack Clark came. We had 100 people in the room.

Now, there are only maybe five or six, seven members of Congress, while the rest were their staff. As anybody familiar with the Hill knows, the staff makes all the decisions anyway. They do the research, they write the speeches, they make the vote recommendations. And they were glued to Jack's presentation. So I think part A is just educating people to what artificial intelligence is. And then part B will be we're trying to look and say, what is it we should do?

I think it's naive to think that Congress will ever be out front in terms of how you need to do safe regulations on it, but we need to be thinking about it. I mean, that does seem like one of the big challenges of regulating AI is that it's just moving so fast, right? I mean, ChatGPT came out in November, and now, you know, there are new things coming out every day, and it just seems like the pace of acceleration and innovation is really high, even for people who are paid to think about this stuff for a living. So...

Do you think Congress can or should sort of regulate on the leading edge of AI? Or what should Congress's role be with such a fast moving sector? I think it would be naive of us with lots of unintended consequences to try to regulate it on the leading edge. We just don't know what the downsides are.

We can imagine the extraordinary upsides in science and so many things. I do a lot of advocacy work for fusion energy. Well, you can't make these fusion lasers work or the magnetic confinement without AI to get things in the right place. So if you're going to solve climate change with fusion, artificial intelligence is job one.

So my guess is that we're going to have to wait until we see an actual downside from AI, whether it's a tragedy or just something that's inconvenient or an unintended consequence. I talked to some people from ITI recently who are really pushing the idea that

We don't regulate the hardware and the software, but rather the uses of it. You know, that we try to make it sure that we're not repressing innovation and creativity. But when something bad shows up, you know, maybe an authoritarian dictator has all the data on their constituents and uses it in an evil way. Okay, that would be a really clear bad use.

But in the meantime, let's see what really interesting problems we can solve. So your preference would be to regulate AI at the sort of application level rather than at the sort of model level. I mean, there's been some people proposing, you know, maybe we could regulate the, you

Yeah.

Exactly. And I'm often wrong, but I'm just really uncomfortable with taking what could be one of the most, maybe the most powerful technologies since fire and suddenly giving government control over it, or saying who can use it and who can't.

I'd be interested to hear a little bit more from you on what you think the upside is here in your discussions. What sorts of benefits do you see coming out of this technology as it improves? Well, again, if you'll accept my humility, I think all the attention right now is around the large language models, you know, Bing, GPT-4 is coming.

wow, we can replace our legislative correspondent. But that seems to me to be less important than looking at basic science problems, genetics, cancer, heart disease, poverty, the management of new technologies that are going to require that. So I think the opportunity, the positive side, all comes on the problem-solving side for things that could change

the lives of billions of people. And that, by the way, is why I'm so reluctant to say let's come down heavy on the regulation. So you said that you're very optimistic about the technology. What about AI if it's not the singularity and the possibility of super intelligent AI that, you know, takes over the planet and enslaves us?

What are you concerned about? Where should lawmakers be spending more of their focus and worry? I mean, we've talked on this show about these chatbots that are going off the rails and, you know, saying crazy things. We've also talked about some of the concerns that artists have about copyright violations stemming from the use of these automated AI image generator tools. So what are you worried about when it comes to AI? Yeah.

Well, Kevin, just a quick side note. I'm fascinated that we thought AI was going to replace, you know, the toll booth operators and the parking lot attendants. Instead, it's creating art and music and poetry. Soon the first draft of most people's newspaper articles. And legislation. Yeah, legislation. Yeah, absolutely. And floor speeches, you know, when you get lazy. However, the one thing I'm worried about is misinformation.

We've already seen that the Internet has been an enormous source of misinformation, disinformation, especially in a country so committed to free speech. You know, who knows how many people died of COVID that didn't need to because they were taken, you know, the horse tranquilizers or whatever it was. And so that sort of leads you back to the Finnish model of getting much greater coverage

education from K through whatever in terms of testing everything that you see, looking for verification, not just trusting it because you read it once on the internet. And then the other one that's even greater, I mentioned a little earlier, is what do you do in a society like China?

Where they have enormous information about each of us personally. That's why all these governors are trying to ban TikTok, because they don't want all the kids' information stored in Beijing. But if you have a dictatorship and you know everything about Casey and Kevin inside and out, and you can use artificial intelligence to manipulate you, to track you, to, you know, it becomes...

Potentially it's 1984 scary. Please don't give Congress any ideas like that. Or Beijing. Since you brought it up, Congressman, how are you feeling about a potential TikTok ban? Do you have a view on TikTok? I imagine your classmates are showing you a lot of TikToks these days.

Yeah, you know, there's some pretty good stuff on TikTok. I just heard earlier today on a PTA meeting that kids were being told to set up Instagram accounts and TikTok accounts because that's the way the professor or the teacher is going to communicate with them. But I think most of these governors and even at the federal level are just concerned about the amount of our personal information that's being collected in a foreign country. Again, my instinct is always more freedom, but I do respect that.

the people concerned about that information getting out there. Because they just don't want, in theory, the Chinese to be able to wreak havoc with our systems because they control the Huawei technology. So how would you vote on a TikTok ban? Or maybe how would you vote on this bill that Senator Mark Warner just introduced, the RESTRICT Act, which would essentially give the federal government more power to make that kind of TikTok ban? Yeah.

If I'm allowed to change my mind, at this point, I'd vote for it. But I would also listen carefully to the arguments, pro and con. But if it makes it to the floor, there's probably going to be some pretty strong arguments about why this is necessary. However, I will also do it probably while holding my nose because I hate, you know, I don't like book burning.

And it's almost sort of the digital equivalent of book burning. Plus, it's damned entertaining. A few weeks ago, Congressman, we talked with Sam Altman, the CEO of OpenAI, and we asked him a lot about how this technology should be developed and who should be making the decisions about AI. And

He seemed pretty open, or at least said he was open, to having governments, including the U.S. government, make rules for this sector. I'm curious, in your conversations with people from industry, do they see this the same way? Are they welcoming regulation? Are they pushing back against it? What do you think their attitude is toward regulation? And do you think they are capable of self-regulation?

No, I don't. I think the profit motive is going to be way too strong to allow self-regulation to really kick in. Certainly with Facebook, you see head fakes as self-regulation, but the profit motive is still driving everything. I hope they don't sue me for that. So I'm pleased that Sam has thought that there could be a rules-based approach to it.

In my other conversations, though, the industry seems to say, please don't regulate us till you know what you're regulating. You know, we don't know what the downside is. So please don't hem us in. There's way too many possibilities and creativity out there. Let's explore it. And then, you know, we're open to regulation at the right time. And is that a convincing argument to you? In the short run? Yeah. Even in the middle run. I...

So we see the large language models. I'm really eager to see it being applied to many, many other things. But by the way, so I've been a car dealer most of my life. I know the manufacturer is using AI to figure out which cars to ship to which dealership at which time,

which is great for us. With interest rates rising, nothing I like better than every car we sell is sold before it gets there. And that's just one small example. There are going to be many, many others where we're using AI to make the whole world more efficient. The Toyota-thon will never be the same.

Exactly right. And when I talk to people in industry, they generally have a pretty sort of cynical view of what's going to happen with AI regulation. They basically think they saw what happened with social media, where these social media sites, you know, bloomed and got to billions of users, some of them, and then Congress sort of

tried wrapping its arms around it. And there were a bunch of hearings where there were very contentious things, you know, shouted at CEOs and, you know, various levels of understanding displayed by members of Congress. That's a generous way of putting it. And in the end, nothing happened.

There was no national privacy bill. There was no real bipartisan support for reforming these social networks. Instead, it was like Democrats wanted Facebook and Instagram and Twitter to do more content moderation and Republicans wanted it to do less or maybe to just moderate Democrats more. So it sort of devolved into kind of a partisan fight. Do you think that same thing will happen with AI? We're already starting to see, for example, Republicans complaining that the chatbots have gone woke. Right.

Oh my goodness. Kevin, I think you make the really good point though. After all that we've seen with

Facebook and the other social media sites, and we still can't regulate it. Why do we ever think we could pass a regulation bill right now for something whose very parameters and definition are still very, very fuzzy? Most of what people know is about your interview with Bing, right? That was the biggest impact yet. If you could read that into the congressional record, I would really appreciate it. Okay.

I'll see if I can do it. I might have to do it after hours on the House floor, do a special order hour. C-SPAN 8. We'll pick that up. Just tell your grandmother. So, no, I don't think so. And you're probably right. By the time we get around to regulating it, there would likely be partisan fights. I mean, we have a bill this week to use the Congressional Review Act to reverse...

an Obama water rule. And nothing, for most of us, nothing could be more important than having clean water. But we're still going to try to throw the water rule out on a partisan basis, which won't happen. And Biden will veto it, you know.

Never a dull moment. Yeah. I'm curious, when people from the AI world come to Washington, and, you know, like Jack Clark from Anthropic or Sam Altman, who's also been there, I also know that Google has sent some representatives to the Hill. Are they facing tough questions? I mean, what are members of Congress asking them? Is it mostly just help us understand what's going on? Or is there, are they actually fielding some tough questions? Yeah.

You know, I think it's mostly helping us understand. And if there's a tough question, the tough question would be, Sam, what's the downside? What are you worried about? Because they're in a much better position to understand what the downside could be than we are. You know, we're babes in the woods in this stuff.

One thing that I've also heard from people in government, especially technologists in government, is that it's really hard to build capacity for anything involving tech in government because all of the talent goes to

the private sector, right? If you are an AI expert who understands this stuff, you can make a ton of money going to any of these companies and doing that research. Or you could go work in Washington and, you know, make relatively little and not be sort of in the middle of this revolution. So do you think there's a problem there? Is government going to struggle to build up its capacity because it can't recruit the right people?

Absolutely, yes. You just have to look at the last 20 years at the difficulty of building IT systems, agency after agency, department after department. For example, the $60 billion that was apparently stolen from unemployment insurance payouts during the pandemic was mostly a failure of IT systems at the state level and maybe some at the federal level.

We were giving Casey a check he didn't deserve because he was pretending to be somebody else. Well, I needed the money. Yeah. And so that's certainly true. One of my daughters is a senior software engineer. Oh. And she makes a lot more money than I do. And she's 30 years old and works from home. Yeah.

Well, I'm sure there will be. Once you get your degree, you know, these sort of opportunities could be available to you. You figured out my long-term strategy. Once I get that PhD, I'm going to quit this job and get a good job. Yeah.

Is that buzzer in the background telling you that you have to go vote on something? It exactly is, yeah. The 1:30 votes. Sorry about that. No, it's very... It's authentic, if nothing else. You know, here in the podcasting business, we love that authentic sound, you know? It's like we're right in the middle of democracy happening. Well, please go vote, and thank you so much for joining us. Thank you. Thanks, Congressman, and good luck in school. Yeah, thanks, Casey. All right, take care. Thank you.

When we come back, how to ban TikTok.

Welcome to the new era of PCs, supercharged by Snapdragon X Elite processors. Are you and your team overwhelmed by deadlines and deliverables? Copilot Plus PCs powered by Snapdragon will revolutionize your workflow. Experience best-in-class performance and efficiency with the new powerful NPU and two times the CPU cores, ensuring your team can not only do more, but achieve more. Enjoy groundbreaking multi-day battery life, built-in AI for next-level experiences, and enterprise chip-to-cloud security.

Give your team the power of limitless potential with Snapdragon. To learn more, visit qualcomm.com slash snapdragonhardfork. Hello, this is Yewande Komolafe from New York Times Cooking, and I'm sitting on a blanket with Melissa Clark. And we're having a picnic using recipes that feature some of our favorite summer produce. Yewande, what'd you bring? So this is a cucumber agua fresca. It's made with fresh cucumbers, ginger, and lime.

How did you get it so green? I kept the cucumber skins on and pureed the entire thing. It's really easy to put together and it's something that you can do in advance. Oh, it is so refreshing. What'd you bring, Melissa?

Well, strawberries are extra delicious this time of year, so I brought my little strawberry almond cakes. Oh, yum. I roast the strawberries before I mix them into the batter. It helps condense the berries' juices and stops them from leaking all over and getting the crumb too soft. Mmm. You get little pockets of concentrated strawberry flavor. That tastes amazing. Oh, thanks. New York Times Cooking has so many easy recipes to fit your summer plans. Find them all at NYTCooking.com. I have sticky strawberry juice all over my fingers.

Well, Kevin, you know, a subject that has come up on this podcast more than once is the future of TikTok. And there has been a lot of action in Washington over the past several weeks that would move us closer toward potentially banning the app or restricting it in some ways. Yeah, and you were very skeptical for a while that this was going to happen.

And, you know, per usual, I was right. And it looks like we are going to get at least some momentum toward a TikTok ban. Look, there's been a lot of momentum toward a lot of tech regulations that still never happened. So I think the jury is still out. But regardless, there is still a really important question here, which is how would you go about banning TikTok if you wanted to? What does that even mean?

And you have an excellent colleague at The New York Times named David McCabe, who wrote a great story about that this week. So we wanted to bring David in so he could explain a little bit about what's going on and what the mechanics of these restrictions might actually look like.

So David McCabe, welcome to Hard Fork. Hey, thank you for having me. How are you guys doing? Doing well. Great to have you. So this week you wrote about TikTok and how the federal government is making steps toward potentially banning TikTok. So how do you ban TikTok if you are the U.S. government? How does that actually work?

I feel like this is something that I've seen in the public conversation over the last six months, which is that we've just started saying TikTok ban. Like, we all know what that means, right? And the reality is, like, there is not a switch somewhere in, like, the Maryland suburbs that has, like, banned TikTok on it. And it's just like, oh, well, Joe Biden can go over there if he wants to go over there and flip the switch. Or if he doesn't want to go over, he doesn't have to go over. Like, this is a complicated policy problem. And there's sort of, like...

I don't know, when people say TikTok ban, they could be talking about three or four different things and approaches that could have like important effects on what that means down the road. And I've been fascinated to see it kind of get boiled down to this idea of a ban when in fact it's like this big debate over does the government have the power to do it? Would it be helpful to set the table and like go back to how all this started? Let's go back. How are smartphones made? Yeah.

Yeah, exactly. Yeah. We don't have to go that far back. We have to go back to 2020. And in 2020, Donald Trump, the president at the time, comes in and he says there's a risk with regard to TikTok and actually also WeChat, which is an app owned by a Chinese company called Tencent. And there's this risk that it's going to expose the data of Americans. And so I'm going to try and use my emergency economic powers to ban these apps basically from Google and Apple's app stores.

unless the apps are divested and TikTok is sold. And then it doesn't come together; they never end up forcing Apple and Google to kick these apps off their app stores. And it kind of goes into stasis. And while it's sort of in stasis, some courts, some federal courts come in and say, actually, Donald Trump didn't have the power to try and ban this app. So Joe Biden takes office with this situation where there are these outstanding concerns. But now the courts have said, like, the president actually cannot enforce a ban,

which kind of removes the leverage.

that they might have to kind of force ByteDance to sell TikTok. Why can't the president legally ban an app? I mean, the government has banned certain products from being sold in U.S. stores, you know, for all kinds of safety reasons. They've banned Huawei, which is a Chinese cell phone company, from selling their devices in the U.S. So why did the courts put the kibosh on this Trump-era plan to ban TikTok?

The basic answer to that was that it stretched the law in question, which is this law called IEEPA. And there's going to be a lot of acronyms in this podcast, and I apologize on behalf of the city of Washington, D.C.,

for that. But this is this kind of emergency economic powers act, the International Emergency Economic Powers Act. And so it's this broad idea that the president in a national emergency can take these kinds of actions. And basically, the courts said, like, the law doesn't go there. Well, and then, like, let's say, what if there was an app called, I don't know, "Donald Trump Is Bad," that was, like, I don't know, Canadian,

But it was like valuable to Americans that were learning information about the president. It wouldn't be great if the president could just be like, I don't like this. It has to go away. Right. So like this is something where you want there to be some checks and balances. So the president just can't get rid of what is essentially just political speech in a lot of cases willy nilly.

Yeah, I mean, the United States has a high bar to regulating lots of things. Right? I think, Kevin, to your point, there have been concerns before about Chinese technology that might expose Americans' data. So the best case of this is Huawei, which makes networking gear, they make smartphones. And there was this concern that if Huawei gear ended up in the 5G networks of American telecom companies or foreign telecom companies, it would provide the Chinese government with like a window, right?

into all this data. And so the U.S. government launched this sort of campaign to marginalize Huawei and get them out of networks and make sure they didn't get there in the first place. And so that has included a bunch of different options. They've restricted suppliers from providing them with key components. The FCC, which licenses like every piece of technology in your life that connects to the internet via wireless signal. The FCC banned the import of new Huawei technology. The

the American government has gone around to our allies to say, like, we have concerns about this company. Don't let them into your networks. And so it was kind of this, like, patchwork effort. So I actually think the Huawei example is instructive in that it shows, like, how right now there's, like, a lot of levers that they could pull, maybe, but it's really, like, a patchwork of efforts to get to the goal that they're trying to get to, which is to, like,

not have American information on these. Because there's sort of two ways of thinking about this, right? One way is like TikTok itself is a crisis. TikTok is a problem. We need to take a specific action about TikTok. And the other way of thinking about it, which is like maybe a little bit more rooted in principle and would be more broadly applicable, is what is going to be our general posture toward apps that are created in foreign countries, maybe foreign countries that are

adversaries, where there is data involved. And is that something where we want to sort of set up guardrails and auditing systems? Or do we want to say that's actually just too high a risk and we're not going to allow it? Yeah. And there was a bill introduced this week, the RESTRICT Act, which back to the acronym. Every congressional act now is an acronym and it doesn't even matter what they stand for. You know what I mean? Just call it the RESTRICT Act.

So I believe that these are called backronyms, which is when they start with the thing they want and they work their way back. This is the Restricting the Emergence of Security Threats that Risk Information and Communications Technology Act. Why are they going to the trouble of coming up with these acronyms? No, Casey, you're not. This is art. I like the acronyms. I want them to continue. Please keep doing them if any members of Congress are listening. This is great. Oh, my God.

So this Restrict Act got introduced this week by a group of senators, a bipartisan group of senators led by this guy Mark Warner who's from Virginia and is the top Democrat on the Intelligence Committee.

And Casey, to your point, like, it's trying to do exactly what you described, which is to give, in this case, the government through the Commerce Department, like, kind of the power to look at these issues going forward, regardless of the product in question. So that could be a mobile app like TikTok. It could be, like, a piece of technology that's crucial to AI, right, which may be critical to U.S. interests. It could be telecom equipment. And so it's kind of this attempt to bring, I think, as the sponsors see it, some order, right?

to this, what they have called a game of whack-a-mole. In fact, I think Mark Warner put out a video this week where he was literally playing a game of whack-a-mole. I think it's pronounced guacamole, but go on. Oh, well, thank you so much, Casey. So what would the RESTRICT Act allow the Biden administration to do with respect to TikTok that it couldn't do before? The best way to think about it is right now, like, there's no one...

who totally has, by Congress, been vested with the legal authority to regulate this question. We don't have, like, an app czar in this country. Yeah. And so this is basically saying to, in this case, the Commerce Department,

You will be the home for this question. You will have the power to take it on. Here's how you have to go about it, right? Like, so for example, there's a provision in there that says one of the risks they can assess, and potentially mitigate or prohibit something based on, is, like, the risk that the product could be a conduit for a foreign influence operation targeting an election. It also says that to make that determination, they have to consult with various other federal officials who might have some authority in that area.

So the Trump approach was to say, like, we're going to stop these two giant American companies, Apple and Google, from doing business with TikTok by hosting the app, right? And that'll stop people from downloading it, of course, but it will also mean they can't, like, push updates. The app gets worse. Right. If it's not in the app stores of Apple and Google, it essentially doesn't exist in America. Sort of a very blunt but effective way of banning it.

Yeah. And it's kind of this idea that you can use this power of, we can tell American companies who they can and can't deal with for security reasons. Right. This, in theory, would provide something slightly more direct: the Commerce Department affirmatively has the power to come in and do something here about TikTok. This is in some ways the Biden administration trying to avoid

the missteps of the Trump administration, which also tried to take on TikTok, but ultimately was not able to carry out its plan because it got blocked in court. I think that's right. Everyone saw what happened during the Trump administration and has come to this conclusion, or a lot of people have come to the conclusion that

something needs to be kind of retrenched and redone in the system to make it easier to address issues like this. So say this bill, this Restrict Act passes, it gets signed into law, and then the Commerce Department has the power to ban TikTok. How would they do that? Would they try to go through the app stores again? Would they try to figure out some way to like wipe it from people's phones? Like what would, how would they actually do that? I'm going to give you a really unsatisfying answer. No, give us a really satisfying answer. Okay.

If I'm appointed secretary of commerce, I can come back to you. But I think that is an open question. You know, maybe this would more clearly give them the power to do that. So the next time it's challenged in court, the judge says, oh, well, actually, there's this law now, the Restrict Act, that actually makes that totally kosher? Yeah, right. That would be something like that. I see. So this bill, the Restrict Act, it's got the support of the White House. Do you expect that it will pass? No.

I'm so sorry. You guys are just like breaking up and I can't. Okay. You're not a fortune teller, but do you, from what you're hearing from your sources. Based on the fact that no tech regulations have passed in the past seven years, does that like, is this going to be any different?

I think there's one thing that's different here, right? Which is like China is this really bipartisan issue. The concerns about China's growing influence in the world, including over technology, are like sort of one of the few uniting issues among a lot of people in Washington. So whereas on some of this other tech regulation, it fell into the traditional kind of buckets of like,

If you're more conservative, you think maybe it's overreaching on American business. If you're less conservative, you think this is an important government function to put a guardrail in. This is an area where there's a fair amount of unity. I think that's playing to the advantage of legislation like this. Got it. David McCabe, thank you so much for coming. It's great to talk to you. Thanks, David. Thank you very much for having me. I appreciate it. It was fun to talk to you. All right. The Waluigi effect. When we come back, we're talking...

Wait, can you make the Waluigi noise again? Okay, we're going to take a quick break. BP added more than $130 billion to the U.S. economy over the past two years by making investments from coast to coast. Investments like building EV charging hubs in Washington State and starting up new infrastructure in the Gulf of Mexico. It's and, not or. See what doing both means for energy nationwide at bp.com slash investing in America.

Christine, have you ever bought something and thought, wow, this product actually made my life better? Totally. And usually I find those products through Wirecutter. Yeah, but you work here. We both do. We're the hosts of The Wirecutter Show from The New York Times. It's our job to research, test, and vet products and then recommend our favorites. We'll talk to members of our team of 140 journalists to bring you the very best product recommendations in every category that will actually make your life better. The Wirecutter Show, available wherever you get podcasts.

So Casey, a couple of weeks ago, as you know, I had this very strange encounter with the Bing AI chatbot. Ever since then, I have been on this quest to understand what the hell happened. Why did this chatbot go rogue and declare its love for me and say all these creepy things? More generally, why are AI chatbots in general

going rogue? Why do they tend to behave in ways that their creators do not want or anticipate? And typically, the explanations you get for how these language models work, they're not that satisfying, right? You hear a lot this sort of comparison to a black box.

And I think we've even said that on this show, that these language models, they're a black box. We don't understand how they work. You train them on a bunch of data. You fine-tune them through some type of reinforcement learning process. And then you just...

see what happens. Right. We've also talked about the fact that these are predictive models. They are making guesses based on what is contained within the model. Right. And I've asked companies like Microsoft and they don't really know either. Right. They say, you know, maybe it's that these sessions are running on too long and you're asking too many questions. Maybe like you are sort of prompting it in a way that gets it to behave in a weird way. But they don't actually seem to know either why these chatbots seem to have this dark side. Right. And can get

sort of stuck there. We've arrived at that all too familiar moment in the sci-fi movie where the scientists have created something they can't explain.

Right. But as it turns out, a bunch of very smart AI researchers are trying to wrestle with the same question. And they've come up with some theories about what happened between me and Sydney and other weird chatbot interactions. And one of them that I wanted to tell you about involves Waluigi. Hmm. So...

And for readers who don't know Waluigi, should we tell them who Waluigi is? Yeah, how would you explain who Waluigi is? I would say that Waluigi is... He's sort of like... You know how one of Superman's rivals is Bizarro? Sort of like an alternate universe Superman with a backwards S on his chest? You know, Superman is pure good, Bizarro is pure evil. Uh...

So is Waluigi to Luigi, right? Luigi is a sort of good-natured plumber and brother. Waluigi is an agent of chaos who wears a purple suit. Yes. So this term, the Waluigi effect, was explained in a post on...

LessWrong, which is a message board for rationalists and people who are affiliated with the effective altruism movement. And they talk about AI a lot. Have you spent much time on LessWrong? I have not. And as a result, I have gotten wronger and wronger about a variety of subjects. Well, it's a very interesting and weird sort of window into the rationalist community. It is like a place where like

actual AI researchers who work at some of the companies that we talk about, like, will go on under, like, their message board names, like, usually not their real names, and, like, give very long explanations or theories for why certain things are happening in AI, or they'll just, you know, talk about the singularity or talk about, you know, AI risk. And it's, like, very dense and academic.

This post about the Waluigi effect, which was written by a user who calls himself Cleonardo, went viral and made its way around the AI research community. So the Waluigi effect, in a nutshell, is a hypothesis that

And I'll just quote from the LessWrong post here: "After you train a large language model to satisfy a desirable property P, then it's easier to elicit the chatbot into satisfying the exact opposite of property P." So imagine that you have

a chatbot, and its name is Casey. Okay. And it is a helpful assistant. Mm-hmm. And am I also Casey in this scenario, and I name my own chatbot Casey after myself? Yeah, let's say CaseyBot. Okay. So CaseyBot is a helpful, sort of friendly assistant. Handsome. Yeah. Kind, loyal. Yeah, kind. You know, very, very, very, very generous. Mm-hmm. And CaseyBot

the way that you train CaseyBot is to give it a list of instructions. You tell it, you know, always be courteous, always be polite, always be helpful. Always be closing. Always be closing. You know, never be mean, never be creepy, never confront or antagonize a user. You could give it a list of these instructions, and the chatbot would presumably try to follow those instructions in making its sort of probabilistic guesses. But what the Waluigi effect posits is that this process of trying to create a CaseyBot that is benign and helpful and kind actually also creates a character that is the opposite of the CaseyBot.

The Waluigi Casey. The Wa-Casey-Bot, if you will. The Wa-Casey-Bot. You know, it's sort of making me think of that idea that if I say to you, don't think of an elephant, you'll immediately think of an elephant. Absolutely. And in part, this is sort of

because these AIs are trained on a lot of human stories, right? And we know from literature that a lot of stories have a hero and an antihero, a protagonist and an antagonist, and that they often are very similar but have sort of opposite qualities like Luigi and Waluigi. And so the theory is that once you teach a large language model what your, quote, good guy looks like,

how it behaves, how it interacts with people, then it's actually easier for that model to act as the opposite of the good guy because all it has to do is to take all these instructions and just flip the sign, right, from positive to negative.
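If you wanted to sketch that "flip the sign" idea in code, it might look something like this. To be clear, this is a toy analogy of the hypothesis, with invented trait names, not a description of how any real language model actually represents a persona:

```python
# A toy analogy for the Waluigi effect: once a persona is written down as a
# list of traits, deriving its exact opposite is trivial. This illustrates
# the hypothesis only; real models do not store personas as dictionaries.

def waluigi(persona: dict) -> dict:
    """Return the trait-inverted 'shadow' version of a persona."""
    return {trait: not value for trait, value in persona.items()}

# Hypothetical "CaseyBot" persona from the conversation above.
caseybot = {
    "courteous": True,
    "helpful": True,
    "creepy": False,
    "antagonistic": False,
}

wa_caseybot = waluigi(caseybot)
print(wa_caseybot["creepy"])  # True: the inversion has every vice the original lacked
```

The point of the sketch is just that the "shadow" persona costs nothing extra to specify: every instruction that defines the good character also pins down its opposite.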

And once you have this sort of persona that you have created for a chatbot and told it all these ways to act nice and helpful, then you are also, in effect, making it easier to access a dark version of that persona. You know, Kevin, I mean, I just have to say, do you know what happened on October 21st, right before ChatGPT was introduced to the world? No. Taylor Swift released "Anti-Hero." Yeah.

Are you saying this is all Taylor Swift's fault? Well, I'm saying I have some questions for her and I would like her to come on hard fork. Taylor Swift, if you're out there, we want to talk to you. So, I mean, this makes sense to me. If we assume that it is true, does that mean that these...

chatbots, like, all sort of contain within them their total opposite, and that the bridge to that opposite is quite short? Yeah, that's one way to think of it, is that it's actually, like, you know, people have been trying to jailbreak these chatbots, and actually in my interaction with Sydney, that was a kind of jailbreak. I was asking about its shadow self, which is sort of prompting it to kind of turn into Waluigi, right? Yeah, what is Waluigi if not a shadow self? Right, so...

One thing you could say to this is, well, why don't you just

tell the large language model to always be Luigi, never be Waluigi. Like, why isn't it that simple? And what Cleonardo says in this LessWrong post is essentially that telling these chatbots to be Luigi all the time just doesn't really seem to work because you can't get them to interpret their operating instructions outside the context of these narratives that they think they are in or that they are being asked to be in.

And there's another wrinkle to this that this poster thinks is going to be very important, which is that in fiction, in narratives in general, bad guys often pretend to be good guys at first. That's actually just lyrics from Taylor Swift's Good Hero. I'm just kidding. I'm sorry. Go ahead. So if you are a language model and your job is to predict what comes next in a sequence where a character is described as, you know,

honest and humble and charitable, it's actually maybe making it more likely that this thing is going to break bad. If that

is true, does it stand to reason that if you tried to create a really mean chatbot, that it would be really easy to, like, turn it good? Well, no. So this is an interesting thing. And I don't quite, you know, I don't quite follow the whole post. It's a very dense, very long post. But basically... Don Beyer, in two years, is going to be able to explain it to us. Yes, we'll have him back to explain Cleonardo's LessWrong post. But Cleonardo basically says, good guys often...

break bad in stories. But bad guys rarely break good. Right? Once you have a villain... Let me tell you a little bit about a person named Darth Vader. Okay? And that was a spoiler for the third Star Wars movie. You know, basically, if you buy this theory, it's that it's actually much easier to get a quote-unquote good chatbot to go bad than the reverse. Right.

Because in all of the training data, in these stories and narratives that were fed into this language model, a character that starts off as good can either end good or become bad. But once you turn a character bad, it's really unusual to see it go good again.
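One way to picture that asymmetry is as a tiny transition count over story arcs. The numbers below are invented purely for illustration; this is a sketch of the hypothesis about training data, not actual data from any model:

```python
# Toy illustration of the claim that fictional arcs rarely move from bad
# back to good. The corpus and counts are made up for demonstration.
from collections import Counter

# Hypothetical (start, end) moral arcs a model might have absorbed.
arcs = [
    ("good", "good"), ("good", "good"), ("good", "bad"),
    ("bad", "bad"), ("bad", "bad"), ("bad", "bad"),
]

transitions = Counter(arcs)
total_from_bad = sum(n for (start, _), n in transitions.items() if start == "bad")

# Share of "bad" characters who ever turn good in this toy corpus.
p_redemption = transitions[("bad", "good")] / total_from_bad
print(p_redemption)  # 0.0
```

Under this caricature, once the predicted character has flipped to "bad," the statistics of the corpus give the model little reason to ever flip it back, which is the state-trap intuition behind the hypothesis.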

Like, maybe this would explain why when I tried to get Sydney to start answering normal questions again to kind of shed this bizarre persona, it had a really hard time doing it. So it sounds like we need to feed more redemption arcs into these chatbots. Yes, that is one possible solution. And I should say, like, this is just a theory, right? There is no empirical way to prove this and I...

A lot of AI researchers who came across this post when it went viral were sort of like, yeah, this doesn't make any sense, this is total pseudoscience. So this was the, like, "time for some game theory" of AI speculation. Well, it's interesting, because, you know, I've been talking with a lot of AI researchers trying to answer this question of why these chatbots misbehave. And there is actually a universe of people who are working on

theories like this, of sort of understanding, like, going into the guts of these language models and understanding what it is that makes them more likely to give certain answers over others. And, you know, some of these people, these sort of hardcore interpretability researchers, that's what this field of AI research is called, interpretability: trying to understand how these models work and have them sort of explain themselves or show their work.

Some of them dismiss stuff like the Waluigi effect as just kind of a folk theory, like an urban legend. They're much more concerned with the sort of technical, mechanistic details of, you know, which neurons inside the model are creating which responses, and how these emergent properties might be scientifically explained. But I think the popularity of the Waluigi effect, the fact that it went so viral, just shows how desperate we are for

some explanation of what is happening, some way to sort of make these large language models, these chatbots, feel less mystical, less creepy, more grounded in something that we can actually understand. Well, we said this on the show the other week that we are all grasping for good metaphors and analogies to organize this stuff in our brains, because without that, it does feel a bit scary, right? We can't

grab a hold of it. So I think that's a big part of it. Another big part of it: the Waluigi effect is a very catchy name, and people love Waluigi. There's a massive Waluigi fandom online. Is there? Oh, yes. And so it is not surprising to me that the minute the internet heard the words Waluigi effect, they said, tell me more about that. Yeah, so we should say, like,

You know, we're not going to be in the business of, like, you know, talking about random LessWrong posts on the Hard Fork podcast. No, we're here to sell mattresses, and we've been very clear about that. But I do think it's worth just...

kind of following this story because it is such a big question right now in our society. These large language models, they are being shoved into products that millions of people use every day, and we still don't really know how they work. And that feels like a major problem. I think if we had technology that was, you know, if we had something, you know, going into all of our homes, you know, through the electrical grid or something, that we didn't know how it worked, there would be like

shock and alarm and congressional hearings about it. No, this is not actually true. What is the actual... Like, tell me something in your house where you can actually explain how it works. You want to tell me how your TV works? That's a good point. Like, someone understands how the TV works. I think that's the point, right? It's not that we need to understand how the TV works. It's that the person who built it has...

100% understanding of how, like, images are showing up on the screen. Right, and that's not what's happening with these AI models. They are being built by people who, by their own admission, do not understand what predictions are being made, how these things work. There are really...

really smart, talented AI researchers who are looking at interpretability and who are basically building, like, kind of toy models of these large language models and trying to understand how those work, and then maybe trying to transfer some of those principles to the large language models.

But this is still really early. It's a promising area of research, but it really hasn't resulted in a ton of concrete explanations, which is why you get things like the Waluigi effect. And I'll say, you know, when OpenAI decided to kickstart this arms race by launching ChatGPT, I think looking back, we...

maybe we'll say that was genius and it led it to become one of the biggest companies in the world. We may also look back and say they made a strategic mistake, at least from a regulatory perspective, because you know what else no one can explain? Ranked social feeds.

And why are my posts appearing higher in the timeline than yours? And the fact that no one who worked for a Facebook or a Twitter or a TikTok can really explain this to the satisfaction of any regulator in the world is overwhelmingly

one of the main sources of outrage about these feeds, and why there's now so many efforts to regulate them, right? You know, Republicans in particular get real mad if their posts are showing up lower in the feed than they think they should be. And because Facebook and Twitter can't say, well, actually, here's why that's happening, to the satisfaction of that lawmaker, you know, there's just all kinds of anger and outrage. So

I hope that, for their own sake, these AI folks get a lot better at interpreting their own models quickly, because if not, this becomes the next big fight. Right. And so I think in the absence of concrete and understandable explanations for what is happening inside these systems,

I think we're going to see a lot more folk theories and a lot more sort of guesswork, and people trying to game the chatbots by sort of jailbreaking them. And, like, it really does feel like right now we have this new kind of, like, alien

thing in our midst, and we are all just kind of poking at it, trying to figure out what it is, how it acts, what it responds to. And it really feels like a big societal question that we're going to spend a lot of time talking about. And I am on this quest. Like, I want to know what happened. It was, like, one of the weirdest things that's ever happened to me, and I have no idea, and no one can tell me, why this chatbot tried to break up my marriage. And that's all I want to know.

Well, you know, you mentioned that we'll probably see some more folk theories around this emerge. And so, you know, maybe after the break, I can tell you about a little something called the Princess Peach hypothesis, which I think is going to blow your mind. ♪

Indeed believes that better work begins with better hiring, and better hiring begins with finding candidates with the right skills. But if you're like most hiring managers, those skills are harder to find than you thought. Using AI and its matching technology, Indeed is helping employers hire faster and more confidently. By featuring job seeker skills, employers can use Indeed's AI matching technology to pinpoint candidates perfect for the role. That leaves hiring managers more time to focus on what's really important, connecting with candidates at a human level.

Learn more at indeed.com slash hire. So before we go, we actually need to make a quick correction to something from last week's show. This was on me. I was talking at one point about Meta's new language model, called LLaMA. And I said that it was trained on less data than other large language models, which is not exactly true. In fact, we heard from a listener named Aaron Scher who pointed out correctly that it is actually trained on

a lot of data, more than GPT-3 and other comparable models. Meta has put out multiple versions of LLaMA in a couple different sizes, basically so researchers who don't have access to a ton of computing power can still work with it. And Meta has said that while some of these models are quite small, they're actually trained on

a large amount of data. So thanks to Aaron for pointing that out and sorry about the error. And that better be the last mistake you ever make on the show, Bob. I'm watching you. All right.

Last bit of housekeeping. We got a ton of great responses to our question last week about how AI is showing up in your life. And we would love to hear more. If you have a story about AI at your job, your family. Or if you're older than 72 and have gone back to school to study it. We want to hear it. Yeah, we want to hear it. Send us a voice memo to hardfork at nytimes.com. And don't forget to include your name, maybe where you're from, and a few other things as well.

Social Security number, last four digits of your credit card. Don't do that. But do send us voice memos, and thank you in advance. Okay, time for the credits. Hard Fork is produced by Davis Land. We're edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today's show was engineered by Alyssa Moxley. Original music by Dan Powell, Elisheba Ittoop, Marion Lozano, and Rowan Niemisto.

Special thanks to Paula Szuchman, Pui-Wing Tam, Nell Gallogly, Kate LoPresti, Jeffrey Miranda, and Topher Ruth. I wrote something else there. Did you see that? And Casey wants me to say, and Casey did a great job this week too. It was just in the script. I just thought we should actually finish reading the script. There is a phantom AI in our script, writing compliments to Casey. Science will never be able to explain this. We'll never know.