
2024 Is The First ‘AI Election.’ What Does That Mean?

Publish Date: 2023/11/30

FiveThirtyEight Politics


Transcript

This episode is brought to you by Shopify. Forget the frustration of picking commerce platforms when you switch your business to Shopify, the global commerce platform that supercharges your selling wherever you sell. With Shopify, you'll harness the same intuitive features, trusted apps, and powerful analytics used by the world's leading brands. Sign up today for your $1 per month trial period at shopify.com slash tech, all lowercase. That's shopify.com slash tech.

It's almost unimaginable to me that we won't have major candidates with personalized chatbots: have a conversation with this candidate, a chatbot trained to represent this candidate. Wait, in fact, I'm pretty sure I received an automated text message from Nikki Haley while we are recording this podcast right now. "It's Nikki Haley. Do you have a minute to talk, Galen?" See, now with Nikki Haley, there's no way. If that was Chris Christie, it might really be him, because he doesn't have that much to do these days.

Hello and welcome to the FiveThirtyEight Politics Podcast. I'm Galen Druke. Today, we're delving into a topic that's increasingly critical in our digital age, the role of artificial intelligence in American elections. As we approach the 2024 election, the integration of AI in political strategies and campaigns is not just a possibility, it's a reality.

But this reality comes with a myriad of questions and concerns. How might AI influence voter behavior or the spread of information? What about the ethics and regulations surrounding AI in political campaigns? And if you haven't called it yet, that was all written by AI, specifically ChatGPT-4. We also asked ChatGPT to create a version of the intro that included statistics. It cited the, quote, 58% of Americans who expressed concern about the integration of AI in politics. But when we pressed on that statistic's source, it admitted that this was a, quote, hypothetical example created for the context of the podcast introduction.

So there you go. That's just a taste of some of the issues that AI could present in a political environment.

So now, here is the human-written introduction. ChatGPT launched publicly exactly one year ago today, November 30th. AI, and specifically OpenAI, the company responsible for ChatGPT, has been all over the news in the past couple of weeks for what appeared to be, at least in part, a debate over how quickly to accelerate the technology. This all comes at a time when we're gearing up for a contentious presidential election.

And Americans do have concerns about the use of AI in politics. For the record, I tracked down what might be the source for ChatGPT's statistic: according to an AP-University of Chicago poll, 58% of Americans think AI tools will increase the spread of false and misleading information during next year's election.

So today, we're going to talk about those risks. And here with me to do it is Ethan Bueno de Mesquita. He is the interim dean of the Harris School of Public Policy at the University of Chicago, and his research focuses on applying game-theoretic models to areas like political conflict and electoral accountability. He recently co-authored the white paper "Preparing for Generative AI in the 2024 Election: Recommendations and Best Practices Based on Academic Research." So Ethan, welcome to the podcast.

Thanks. Thanks for having me. It's great to be here.

Let's start with the latest in AI drama. As folks have probably heard, in mid-November the board of OpenAI fired its CEO, Sam Altman. A few days later, Altman was rehired and the board was significantly changed. And depending on who you ask, this back-and-forth was in part an argument over accelerationism versus decelerationism in AI, with Sam leading the accelerationist camp and the board maybe representing those who advocate for slower adoption and stronger guardrails. It seems like accelerationism won the day in this case. I'm curious: what, if anything, does this episode tell us about the state of AI going into an election year?

Yeah, I mean, I think we don't exactly know what happened, but it seems like maybe that was what the disagreements were about, or at least some of the disagreements. Who ever knows what's going on inside a corporation's decision-making? I do think this broad question about how aggressive and accelerationist we should be is related to the politics question, although the accelerationist debate often centers on, I don't know, from my perspective, a little bit more sci-fi, doomsday kinds of scenarios, apocalyptic scenarios, than just AI and elections. But I do think AI and elections is, in some sense, our coming AI emergency, our proximate AI emergency. It's not the one where the robots take over the world, but it's the one we're facing in the really near term: what are we going to do about creating guardrails or safeguards for the way AI might affect elections, in ways that matter a lot for the democracy, and for which we only have a few months left, really, to make any substantial decisions? So I do think it's good that we're having this conversation.

Yeah, I think one of the last times we talked about AI on this podcast, we talked about this theoretical question of if you told a machine to turn humans into paperclips, how effective could it be at accomplishing such a task? So maybe that's not exactly what we're talking about here. But I am curious from an elections perspective, what is the nightmare scenario? What is the worst case scenario you can imagine of an AI application in the 2024 election?

There's kind of two classes of things people worry about, and both of them could potentially be somewhat nightmare scenarios. I think one of them is more serious and also more likely, so let me start by focusing on that, which is the deepfake problem. The worst case, of course, could be arbitrarily bad. But the realistic worst case scenario would be something like a deepfake video or image, created by AI, which depicts a major candidate in some compromised position, or appearing with the leader of a terrorist organization, or something slightly more believable than that, but is fake, and is dropped in an October-surprise kind of way just before election day. It convinces a lot of voters who don't have time to fact-check, and the media doesn't have time to fact-check, and it seems to swing the election, seems to matter for election outcomes, in ways that make people feel as though election outcomes are, in some important sense, being manipulated by fake information that was generated by a computer in a very convincing way. And of course, we've already seen some instances of deepfakes playing a role in elections. So there's genuine reason to worry both that this is feasible and that political actors are starting to think this way.

The other kind of use case, which I think is both less likely to be deeply problematic and also a little harder to imagine radically changing election outcomes, is that people turn en masse to ChatGPT or other AI chatbot tools for hard, technical, factual information about elections, and that ChatGPT, or whichever chatbot you like, gives them the wrong information. So I ask my chatbot: I live in 60605, where's my polling place? Or, how do I register to vote? And it tells me something incorrect. If voters were sort of disenfranchised by that, that would be very worrying as well.

I'll say that there are other concerns that have been brought up that I want to get to later on, but let's focus on this deepfake question for a moment, since you say that it's the likelier and more serious potential threat.

One, you said that deepfakes have already been used in elections. What are those examples? Do they seem to have materially affected anything? Were voters able to determine that they were deepfakes? Like, what was the ultimate impact? And then, maybe more to the point, we'll get to solutions: what is to be done about the deepfake challenge?

Let me give you a few examples. In the recent Turkish elections, there were some interesting things that happened with deepfakes. There was a video put out showing a candidate in that election, who did not ultimately win, in a compromised position, in particular appearing with the leader of a terrorist organization.

The PKK, right?

With the PKK, that's right, the Kurdish terrorist organization. And that's a damaging kind of video in Turkish politics. It was fake. It was put out by political opponents. It came out that it was fake, I think, soon enough that it's unlikely it swung anybody's votes, and it seems unlikely that it changed the election outcome. But it was a genuine use of a real deepfake genuinely attempting to mislead voters, and that is the really worrying scenario.

We've had a couple other instances in the United States and elsewhere. In the Argentine election that just happened, there were deepfake videos put out that more kind of toed the line. And this is similar to the United States, where the Republican National Committee put out a video showing President Biden and Vice President Harris in kind of a dystopian future situation in a political ad. It was a deepfake in the sense that it showed what looked like President Biden and Vice President Harris in a fake situation. But in that case, as well as in the similar examples in the Argentine election, I don't think there's any sense in which voters were meant to be fooled. I don't think anybody looked at that RNC ad and thought, oh, President Biden is actually in that dystopian future, and maybe I am too. But it's the slippery slope towards using that kind of technology to depict your opponents in unfavorable conditions.

Okay, so let's toy with some hypotheticals here for a moment. Let's pretend that the 2024 election is ultimately a Biden versus Trump election, and as an October surprise, a video comes out. Say there's one of former President Donald Trump. It's the pee tape that folks were talking about back when he was elected in 2016, right? Where something happened, maybe in Russia, that ultimately got filmed, whatever. Or a video comes out of Biden: he appears to be stumbling around, doesn't really know where he is, seems like he has full-blown dementia. What do people do? And I'll say from the outset, we don't know whether or not these videos are deepfakes.

Those are the worrying kinds of scenarios, and they're worrying for two reasons that we should think about. One is that voters might be convinced by those kinds of things, and they might be fake, and that would be bad. Another thing that might happen is they might be real, and the candidates might be able to say, that's a deepfake, and voters won't know what to believe. So whether they're real or false, the fact that you and I are even having this conversation, and that we have to have this conversation, means voters' information environment has really changed. Not only can they not know whether false things are false, they also can't know whether true things are true.

And we saw that actually in the Turkish election as well, where a sex tape came out about a candidate, and it was, we believe, real. He claimed it was fake; he said, that's a deepfake. So the information environment really is in a kind of unstable condition now that it wasn't before. And so the question is, what do people do? There's a question for voters, a question for politicians, and a question for the press: how do you cover that kind of video?

How do you do fact-checking? All of these things are very, very difficult, and there's not an easy answer. If I were a voter, the kind of thing I would want to do is make sure that I'm starting to look to trusted media sources, that I maybe think about there being trusted media sources on both sides of the aisle, that I don't only rely on one side or the other. But I do think there's one kind of silver lining, if you will, relative to, say, the last 10 years of our social media landscape: if you're a voter, knowing that you can't know if false things are false and if true things are true, you might be inclined to start heading back towards seeking information from professional journalists, rather than simply trusting your Facebook feed or your Twitter feed or what have you. And to the extent that there might be some increased reliance by voters on information that is at least vetted and filtered through the eyes of professional journalists, that might be to the good.

I'm curious if there's more we can say about what's to be done about this. According to a Pew survey that I took a look at, 50% of Americans don't actually know what a deepfake is, and there are also significant divides by educational attainment. So if we're starting from a place where half the country appears to not even know what a deepfake is, just saying, well, voters will have to figure it out, is one approach. And I think there are probably people who would argue, yeah, that's just the approach we're going to have to take. Other people would argue that there's more of a role for regulation here, and that's sort of what you get into in this white paper. What are the actual things that can be done beyond just saying, well, voters, turn to traditional media outlets? Because let's be clear, a lot of voters don't trust traditional media outlets, and they're not going to start in the next 12 months.

I think there's a few steps, and I want to be clear, there is not a silver bullet. I think we are heading into an election where it's likely that it will be difficult to know whether some important material is real or fake. And I think we should be eyes open about that, that there's not a silver bullet here. But I do think there are some practical steps we should take.

There's work to be done by educational institutions, by civil society, maybe by local government, et cetera, on AI literacy. I think the most powerful tool for many people in understanding what these technologies can and can't do is half an hour playing with them, and learning how capable they are of making things up. I think every academic in the world has had the experience of asking ChatGPT to make a syllabus for them. It makes a beautiful syllabus with all these beautiful articles, and you're like, how is it possible, in this field I know so much about, that I've never read this perfect article? And then you discover it made up the names of all the articles and some of the authors. There's something very powerful about that experience, and the more voters who have it, the better.

I think there's also some work to be done here. I know there's low trust in media, and historically low trust in every institution there is. But I still think there's work to be done thinking about this problem of provenance and fact-checking, and trying to do some certification of what we have good reason to think is real or not. So you could imagine some sort of consortium of academics, civil society organizations, journalists, people from both sides of the political spectrum, trying to get together and say: we're going to at least be able to say, is there somebody whose identity we know who's willing to sign their name to this video and say, I took that video, or, I testify that I've seen that thing happen?

So I think there are some technological fixes along these lines to help track the provenance of videos, but also some set of institutions, created by well-meaning people from a variety of points of view, willing to say, we will provide some certification about the provenance of a piece of information. That's the kind of institution that, of course, not every voter is going to believe, and the most conspiracy-minded voters, of course, won't believe. But there are a lot of voters who aren't super engaged in politics, who are not super conspiratorially minded, who might be looking for some hard information. And I think that we as a society should be looking to provide that for them, and building the institutions now so that, come October, we can provide it. We don't have to rush.

Some state governments have already waded into this question. California, Texas, Minnesota, and Washington, I know, are four states that now have laws on the books addressing the publication of deceptive media relating to an election within a certain amount of time before the election takes place. To give you an example: in 2023, Minnesota enacted a statute that prohibits the publication of deepfake media intended to influence an election within 90 days of the election. How effective do you view those kinds of interventions as being?

The adjudication, as we've seen, of questions like what's a deepfake, what's misleading, et cetera, is both very difficult and very slow. So I think it's hard to see how those kinds of safeguards are going to protect election integrity. We're going to vote on the day we're going to vote, and the video gets released, and it gets released not by the campaign but by a political action committee. It's hard for me to see that that's going to have a big effect, in a practical way, on the way the politics plays out. So is there a role for government here? It's a good question.

When you ask people what kinds of interventions they support, they do support government intervention, although they're most supportive when you say things like the way that Minnesota law was written: government intervention to restrict deceptive or misleading deepfakes, or deceptive and misleading AI-generated content. And where the rubber hits the road, of course, is in the adjudication of what's deceptive or misleading and what's a deepfake.

One of the really hard problems here is that being touched by AI is certainly not sufficient to make something a deepfake, right? We use AI for all sorts of things. We use AI for red-eye reduction. We use AI to take background figures out of images. We use AI to polish text, et cetera. None of those are deceptive uses of AI; all of them are perfectly reasonable. So it won't do to simply say the government's going to mandate that if you used AI, that's against the law, because everything we do is touched by AI now.

There's a role for government to play, certainly, in things like funding civil society to do AI education. Beyond that, it's less clear to me. I don't think it's a bad thing to have regulations on the books that say certain things are against the rules, and we're going to adjudicate them later, and if you broke the rules you're going to be fined, or you're going to go to jail. That does have some deterrent effect. But it's not the kind of strong safeguard where the government is able to get the bad material out of circulation quickly.

Watermarking is something that you address in your white paper, which I know is another consideration for government intervention: requiring basically any images that are created by AI to be watermarked as such. You didn't bring that up, and I'm curious, do you think that's effective?

Yeah. So watermarking, and then labeling, are, I think, in some sense the most discussed tools. And I think they've been a little bit overhyped.

So the basic idea of watermarking, right, is that hard-coded into things that have been changed or created by AI, there'll be something that, say, a social media site can see. And so then when they post that video or that image on their social media site, it'll get a label that says, you know, generated by AI or altered by AI or edited by AI or something like that. And that's one version of tracking provenance.

It's not a terrible idea. It's a long way, again, from a silver bullet, and I think there's a couple of problems. The biggest problem is the one I already pointed to, which is that we use AI for so many things. So if I'm going to do red-eye reduction using AI, and I'm going to do picture enhancement using AI, and I'm going to remove background objects using AI, all the way up to I'm going to generate deepfakes with AI, all of those things are going to get the watermark.

And if all of those things get labeled AI for users or for voters, the label becomes meaningless because essentially every image is going to get an AI watermark and an AI label on it. And then they're not going to be able to distinguish things that are fake from things that are completely innocuous.
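To make that objection concrete, here is a minimal Python sketch of the platform-side labeling logic being described. The metadata field name and label text are hypothetical stand-ins, not any real watermarking standard's format:

```python
# A minimal sketch of the watermark-labeling idea discussed above.
# The "ai_watermark" metadata field is hypothetical, not a real standard.

def label_for_feed(image_metadata: dict) -> str | None:
    """Return the label a platform would attach to an uploaded image."""
    if image_metadata.get("ai_watermark"):  # flag set by the generating tool
        return "Made with AI"
    return None

# The objection in miniature: an innocuous edit and a deepfake carry
# the same watermark, so the label cannot distinguish between them.
red_eye_fix = {"ai_watermark": True, "edit": "red-eye reduction"}
deepfake = {"ai_watermark": True, "edit": "fabricated scene"}

assert label_for_feed(red_eye_fix) == label_for_feed(deepfake) == "Made with AI"
```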

And if we say, well, what I mean is label the things that are meaningful, deceptive uses of AI, as opposed to the innocuous uses, we're right back in the gray area where, say, a social media site has to decide: was the change in this image sufficiently substantial, or sufficiently deceptive, that it deserves the label? And that's exactly the same as all the debates we've been having about content moderation and misinformation online, where social media companies and the like have been trying to decide what counts and what doesn't count, and it becomes a political football, and nobody knows who should be in charge. Once you create those gray areas, we're right back in all those wars over content moderation.

Is your conclusion here that the solution is, don't enter the gray area?

I guess I would say that, to the extent that what's going to happen is watermarks get attached to essentially every image, I don't care whether we do it or not, because it won't do anything. It won't help anyone with anything.

This is referred to as the liar's dividend, is that correct? Where it creates so much skepticism towards all content as AI-generated that basically no one trusts content at all.

Yeah, I mean, two things could happen. I think the thing that many tech companies believe is that that's what will happen, that voters or users will say, oh, everything that says AI is deceptive, I ignore it, I just don't believe anything. Another thing that might happen is that AI-generated labels completely fade into the background of users' consciousness: everything has that label on it, and you believe things the same way you always did. I'm not convinced that what will happen is the complete degradation of all information, as opposed to the labels just not having any effect.

There's another idea, which is related to watermarking but a little bit different, which is digital signatures. That is, if I generate a video, or if I'm a journalist for a professional news outlet and I generate a video, there are ways for me to cryptographically sign it and say: look, I, reporter for the New York Times or reporter for Knox News, attest this thing is real, and my cryptographic signature is attached to it. And whenever it's shared online, that goes with it. And you can go back and ask: can I trace the provenance of this thing back to an actual person, or an actual news organization, who I could hold accountable if it turns out to be deceptive? That's less promising than watermarking in the sense that it's way harder to do at scale, and most content will never have that.
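As a rough illustration of that signing idea, here is a toy sketch using the third-party Python `cryptography` package. The reporter name, attestation text, and workflow are hypothetical; real provenance systems in this spirit are considerably more involved:

```python
# A toy sketch of the signing idea: an identified person signs a video file,
# and anyone can later verify the bytes haven't changed since attestation.
# The reporter, outlet, and workflow are illustrative, not any real pipeline.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

reporter_key = Ed25519PrivateKey.generate()  # held privately by the reporter
public_key = reporter_key.public_key()       # published by the news outlet

video_bytes = b"...raw video file contents..."
attestation = b"I took this video. -- Jane Doe, Example News"  # hypothetical
signature = reporter_key.sign(video_bytes + attestation)

# Later, a platform or reader checks the provenance claim:
try:
    public_key.verify(signature, video_bytes + attestation)
    print("Valid: the named reporter attested to exactly these bytes.")
except InvalidSignature:
    print("Invalid: the file was altered or the attestation is forged.")
```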

But it's more promising than watermarking and labeling in the sense that it has a real effect: if you can attach accountability to a piece of content, then there's real attestation that it's genuine.

So we've talked about a couple interventions here. Am I correct in concluding that your sense is that the best way to address this problem is through civil society and education?

I think we need a multi-pronged approach. I think it's a good idea to have some regulatory guardrails, so that at least after the fact there can be accountability.

I think it's a good idea to have civil society do AI education. The third piece, and I actually think it's more important than maybe the conversation has suggested, is that there's a real role for politicians and political parties and campaigns here. I would love to see every major candidate for every major office, political parties, campaigns, et cetera, take public pledges saying they won't use deceptive AI-generated materials, that they won't use any materials generated by AI to depict their opponents, things like that. Will every politician stick to such a pledge? Of course not. But dirty campaigning is not something new; we've lived with dirty campaigning for the entire history of democracy. The public statement, "I'm not going to do this," will make it more costly to do it. And I think it would just be a good thing for the democracy for every campaign and every politician to feel pressure to say, I'm not going to depict my opponent using anything made with AI.

This episode is brought to you by Experian. Are you paying for subscriptions you don't use but can't find the time or energy to cancel them? Experian could cancel unwanted subscriptions for you, saving you an average of $270 per year and plenty of time. Download the Experian app. Results will vary. Not all subscriptions are eligible. Savings are not guaranteed. Paid membership with connected payment account required.

You're a podcast listener, and this is a podcast ad. Reach great listeners like yourself with podcast advertising from Libsyn Ads. Choose from hundreds of top podcasts offering host endorsements, or run a pre-produced ad like this one across thousands of shows to reach your target audience with Libsyn Ads. Go to LibsynAds.com now. That's L-I-B-S-Y-N-Ads.com.

All right, so there's another nightmare scenario, I think, that others have entertained, that you address in your white paper but seem to pour cold water on. And that's that AI can essentially be used to hypercharge micro-targeting, and that you can basically turn your campaign over to AI. We talked about telling a machine, your goal should be to create as many paperclips as possible, and the machine ends up turning humans into paperclips. What if you tell a machine, I want to increase voter turnout amongst X or Y segment of the population? Or, perhaps more deleterious and more offensive to democracy as we think about it, what if you told the machine, I want to decrease Black turnout or evangelical turnout as much as possible? What results could you see? Is that possible using AI?

This kind of micro-targeting and manipulation fear is a real one, and it's widely discussed, as you say. We are pretty skeptical of it. Not that we don't think campaigns will try to some extent, and we're not saying that none of it's going to happen. We're skeptical that it's going to matter a ton for elections. And the reason for that is basically looking back at the last 10, 15, 20 years of social science on micro-targeting and manipulation broadly.

These fears have been alive for a long time, but I think they, and especially the micro-targeting fears, became most acute when we started seeing political advertising on social media. Because with social media, you can already micro-target, right? You have a lot of data about the users, you can target your messages to very specific groups of voters, and different voters can see different things.

The best evidence that we are aware of in this space shows a couple of things. One, it's extremely difficult to manipulate or persuade voters along these lines. There just are not big estimated effects of any of these kinds of micro-targeting approaches. Campaigns are just not good at manipulating voters. That may be because voters are, in fact, smarter, more robust, paying more attention than we think they are. It may be that they are paying way less attention than we think they are, so it's just hard to reach them. And it may also be because campaigns are adversarial: when you try to manipulate voters with misinformation, with micro-targeting, et cetera, you have a campaign opponent on the other side who's running against you and has incentives to counter-message. So you're hard-pressed to find any evidence that micro-targeting, or manipulation more broadly, is super effective.

And indeed, reflecting that, campaigns to a large extent gave up on it when we think about things like political ads on social media. They didn't give up on political ads on social media, but they gave up to a large extent on micro-targeting.

It's possible that AI is something so new and so radically better than anything we've had before that it will be different. But the evidence of the past suggests not that you can't demobilize or manipulate any voters, but that it's very hard to do it to enough voters to matter for an election outcome.

Let's address those specific examples I mentioned: instructing artificial intelligence to decrease Black voter turnout or evangelical turnout as much as possible. Does that technology exist? What would that look like? And, I mean, let's just acknowledge the uncertainty here, right? Micro-targeting through social media may not have ultimately been that effective, or we may have overhyped its effectiveness in 2012 and 2016. But we just can't really know, I think, what's possible in this scenario. So how do we prepare for it or address it? Or maybe you disagree with that.

Yeah, I mean, no, we can't know, not in the sense that I know if I drop a ball, it'll fall to the earth. But what are campaigns actually going to do? I think we shouldn't imagine the paperclip scenario, where we just tell a machine, think for yourself and make up all the strategies, et cetera. What are campaigns actually going to do? They're going to maybe tell an AI to start writing ads to post online or to email to users, and they're going to tell the machine to automate doing a bunch of A/B testing, right? Different versions of the ad, with different images, targeted to every different voter, and keep experimenting: see which ones get click-throughs, which ones get engagement, and start predicting, for each kind of voter, which kind of ad is more likely to be viewed, more likely to be responded to, more likely to result in a donation. They will be able to automate both the generation and the testing of the ads. We've already had the testing, in the sense that campaigns, at least since President Obama's first campaign, have been doing tons of A/B testing of political messaging. But now, not only will they be able to have their computers predict which ads are more effective, they'll be able to have their computers generate those ads too. You don't need a human to write them. So that's some savings in time: you can do it faster, you can do it more efficiently. But that's still the same basic technology. The computer's writing the copy, but it's still copy, and it's still an email or a Facebook ad or whatever that goes to a voter, and that voter reads it and responds to it. So it doesn't seem to me to be so radically different, other than that we can do it at higher scale for lower cost.
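For a sense of what that automated generate-and-test loop might look like, here is a minimal Python sketch. The `generate_ad_copy` stub is a hypothetical stand-in for whatever model a campaign would call, and the epsilon-greedy selection is just one simple way to automate the A/B testing described above, not anything from the white paper:

```python
import random

# Toy sketch: machine-generated ad variants, then automated A/B testing
# on click-through via epsilon-greedy selection. `generate_ad_copy` is a
# hypothetical stub standing in for an AI copywriter.

def generate_ad_copy(theme: str, variant: int) -> str:
    return f"[{theme} ad, variant {variant}]"  # stand-in for an AI writer

variants = [generate_ad_copy("turnout", i) for i in range(5)]
shown = [0] * len(variants)   # times each variant was sent
clicks = [0] * len(variants)  # clicks each variant received

def pick_variant(epsilon: float = 0.1) -> int:
    """Mostly exploit the best click-through rate, occasionally explore."""
    if random.random() < epsilon or not any(shown):
        return random.randrange(len(variants))
    return max(range(len(variants)), key=lambda i: clicks[i] / max(shown[i], 1))

def record_impression(i: int, clicked: bool) -> None:
    shown[i] += 1
    clicks[i] += clicked

# Simulated sends: in this fake audience, variant 3 happens to resonate.
for _ in range(10_000):
    i = pick_variant()
    record_impression(i, clicked=random.random() < (0.08 if i == 3 else 0.02))

best = max(range(len(variants)), key=lambda i: clicks[i])
print("Winning copy:", variants[best])
```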

So it seems pretty likely to me that the results of the past still hold: just doing that isn't that effective, voters aren't that responsive to those things, and there's just not that much room for moving voters, manipulating voters, or demobilizing voters with this kind of technology, with ads and messaging. I will say the bigger risk, to me, is that voters might believe that everyone's being manipulated. The narrative that it's the AI election, and we've all been manipulated and demobilized by the machines, is a worrying narrative for legitimacy. The risk that we all perceive ourselves to have been manipulated by the machines, and that the machines chose the election outcome, is something we should worry about more, to my mind, than the prospect of actually being manipulated in that way.

Yeah, I think part of all of this, and we saw this with the blow-up over Sam Altman, is that right now we're debating approaches and oversight to technology that doesn't yet exist. And it's very unclear what that technology will end up looking like. We're just talking about this as far as elections are concerned, but the OpenAI board, and other people who argue about this online and in boardrooms, are trying to figure out how to approach something that is just unknown. From your perspective, as somebody who's deep into the data and thinks about democracy, how do you approach something like that, where you're seeking to create regulation around a technology that doesn't really exist, at least not in the way that people worry it could exist someday?

So I would say a few things.

One, there are real trade-offs, and we should be careful. I don't want to be, I guess, a shill for accelerationism, and I'm not, but I think there are real trade-offs, and we need to think about them, and regulation should be on the table for sure. But even at the level of democracy, there are trade-offs with big restrictions. For example, one of the places I think this technology, even as it currently exists, will be good for democracy is in things like helping down-ticket candidates, challengers to entrenched incumbents, who don't have a ton of money and can't hire a media director, et cetera, generate copy and get their message out.

So I think there's some chance, for down-ticket elections and for low-funded candidates, that this will increase electoral competition in ways that we should probably value. And that seems to have been true even for things like social media political ads: underfunded candidates and down-ticket candidates used them a lot because they couldn't afford TV ads and they couldn't afford a media director, et cetera.

So I think we shouldn't only think about the negatives; there are potential positives. There's another potential positive on the voter side. A perennial problem in American democracy that we worry about is voters not knowing enough about their candidates, not knowing enough about policy. Chatbots are kind of fun, and they do, I think, a bad job of giving factual information but a pretty good job of synthesizing information. So if you start talking to a chatbot and say, here's the kinds of things I think, does it seem like I'm a Republican or a Democrat? Or, here's a platform from a candidate's website that I grabbed, can you summarize it for me in five bullet points? It'll do a decent job of that, and you might learn a lot as a voter that's useful for democracy. Maybe there'll be a virtual chatbot Ron DeSantis, and you'll be able to learn about his policy views in a way that's more engaging than listening to one of his speeches.

So wait, can I just pause for one second? I read these sort of upsides-of-AI-for-democracy examples that you've given in the white paper, and I was kind of like, I don't know, this feels a little forced, like maybe we have to find some upsides. How realistic is it that voters are going to be using ChatGPT to better understand democracy, or that ultimately this technology is going to help the underdog? Won't big campaigns have better access to better technology?

Big campaigns will absolutely have better access to this technology than small campaigns, but that doesn't mean it won't help even the playing field, right? Big campaigns have access to better everything than small campaigns. The question is, which things shrink the differential? And it seems to me that writing really compelling ad copy by hand is very expensive, so if you think about the gap between the well-funded and the poorly funded candidate, writing copy by hand really advantages the well-funded candidate. Whereas with the automated version, the well-funded campaign is still going to have people who write better prompts or whatever, but my guess is it's going to make the gap smaller. So I think that's real. And I do think that was a real effect of online ads: there were these candidates who nobody would have ever heard of. Now, they're still going to lose almost all the time, right? The entrenched incumbent is almost always going to win. But one of the ways voters learn who they should vote for for state treasurer, a thing about which they know nothing, is either they see an ad or they don't; there's nothing else they're going to learn. So if the state treasurer candidates can afford ads now and they couldn't before, I do think that's a real thing. Ads are genuinely one of the ways voters learn about candidates in a democracy.

That's not our idealized version from political philosophy class, but that's what goes on. So I think that's one thing. The other is, I don't know if it's going to happen for '24, but it's almost unimaginable to me that in the near future we won't have major candidates with personalized chatbots: have a conversation with this candidate, a chatbot trained to represent this candidate, as a kind of fun way to engage especially young voters in learning about the candidates. That seems almost certain to happen, to me.

Wait, in fact, I'm pretty sure I received an automated text message from Nikki Haley while we are recording this podcast right now. "It's Nikki Haley. Do you have a minute to talk, Galen?" Question mark. "Have you heard the news, Galen? I have a significant update on the state of my race I need you to know. Nikki." See? Now, with Nikki Haley, there's no way. If that was Chris Christie, it might really be him, because he doesn't have that much to do these days. But for Nikki Haley, that's got to be automated, right? She's busy.

So I think you could imagine, I don't know, I've got a 17-year-old, I could imagine young voters engaging with a chatbot but not engaging in politics very many other ways, and learning something. I don't know, that doesn't seem impossible to me.

I'll keep an open mind. I didn't mean to throw too much cold water on your potential upsides for AI in democracy. But I also cut you off; I think you were mentioning other aspects of regulating something when we don't know where it will ultimately end up.

Yeah. There are always lots of reasons to be worried about regulating technologies. The most important here, in some ways separate from any promise for democracy, which is of course not going to be the main use case of AI, is that certain kinds of regulation, including many of the regulations that, for example, Sam Altman has advocated for, things like licensing, or limits if you're above some amount of compute, are going to be very good for incumbents, for Microsoft and for OpenAI and for Meta, and very bad for new entrants into that market. I think that's a real thing to worry about, including in democracy.

The other thing we talk about a bit, which is a little more long-run than the '24 election, is what could really happen if, as we've been talking about, so much of our political information ends up intermediated by AIs.

Over time, content moderation decisions on AI, things like which kinds of inputs are allowed and which kinds of inputs aren't, which kinds of outputs are considered out of bounds and which kinds of outputs aren't, et cetera, those kinds of content moderation decisions by AI companies are going to become extraordinarily consequential for democracy, for the kind of information we can access, for the kinds of thoughts we can have. I don't want to sound too much like Foucault or something, but the way in which our information is filtered to us by the AIs is going to shape how we think about politics.

And if that's controlled by one or two giant tech companies, that's profoundly worrying for democracy. It seems to me the thing we really ought to be thinking about, as we start thinking about regulation of AI, is how we create an environment with guardrails that prevents a couple of CEOs from being the people making all the content moderation decisions about what we get to think about and not think about in politics.

Yeah, essentially the same debates that we've been having about social media for the past decade or so, which is, are largely conservative voices diminished on social media platforms? We could have the same conversations as pertains to AI.

Yeah, so that's one version of it. On social media, we've been having these conversations about, ought Mark Zuckerberg be the person deciding what is and is not allowed in American political discourse? Or ought Elon Musk be that person? That's one version of it. There are others that are maybe, at least at the moment, less obviously partisan-valenced and more just worrying-for-democracy issues. To what extent do new ideas get integrated? To what extent does the use of these tools prevent new, creative thoughts from entering the discourse quickly, because they're trained on old data, or whatever? There are all sorts of unintended consequences that might happen if we get too much concentration of control over political information.

As you say, we don't know what this technology will do. But there are lots of ways, beyond the Facebook content-moderation questions, are conservative voices being silenced, are left-wing voices being amplified, whatever, in which political information might become more and more concentrated, in which we might start using these things to filter our ideas and our thinking and our writing. If democracy wants anything, it wants a robust flow of information, and one or two companies controlling the thing that generates so much of our information would be very worrying, if in fact AI goes in that direction.

A theme of this podcast, to wrap up here, and of this conversation in particular, is uncertainty. And right now, as pertains to AI, we have known unknowns. We also have a lot of unknown unknowns, I imagine. Yeah.

By the nature of unknown unknowns being unknown, I can't ask you what the unknown unknowns are. I think that's the most times I've ever said unknown in one sentence. But we develop pop obsessions, I think, with aspects of AI. What are the areas that maybe I haven't asked about, or that aren't really being talked about in the public sphere today, that you think about as somebody who's closer to the source?

One thing we haven't talked about, although we've touched on things close to it, is the way in which AI itself has become the story inside politics, as opposed to politics being the story and AI being the tool. And I think that's bad. If one is a candidate for office who's, say, not at the top of the race, so fighting for attention, one can guarantee attention by doing a stunt with AI. And I think you see that with things like the DeSantis AI ad or the RNC's. It's a story not because there's anything compelling about the AI image or whatever, but because he used AI in the elections and we're worried about democracy. So I think we are creating some bad incentives to head down the slippery slope by making AI newsworthy in itself. Not, I think, you and I doing that, but...

Should we ditch this whole episode right now?

Yeah, let's just throw it out. We regret writing the white paper. We apologize to everyone. But I think the media thinking about what makes AI newsworthy, and not incentivizing AI stunts, would be a good thing. It's not a huge thing, but that's a piece we're a little bit worried about: how newsworthy AI is in and of itself.

Yeah, that's actually a different road than the one I thought you were going to go down when you first started talking about how the focus should be our politics and not AI. I've reported on democratic systems for quite some time now, and one of the big focuses of some of my reporting in the past was on gerrymandering. When you go far enough down that road, you see that we like to pretend that gerrymandering is responsible for so much of what ails our politics, but really the root goes much deeper. While some percentage of the polarization may be due to gerrymandering, it's quantifiable to the extent that it's something like 5%, and we ourselves are self-segregating more than we're drawing ourselves into segregated districts, essentially. So I wonder if, in covering AI, the concerns that we have about AI are ultimately concerns that we have about the people who are participating in our politics and the way that they're behaving, or willing to behave.

Yeah, I think that's very well put. I think that's true for many of our intuitions about polarization, primaries, gerrymandering, et cetera. So many of these things, when you look at how much they matter, matter a little bit. It's worth thinking about AI in this way, too. And one thing you said really hammered home for me a thing that I think, which is: many of the things we worry about with AI and the practice of our politics, micro-targeting and manipulating voters, demobilizing voters, putting out misleading or scandalous information about your opponent, have been features of our politics forever. Politics is a dirty business. Politicians behave very badly in campaigns all the time; always have, always will. And when a politician behaves badly, wins an election, and used AI to do so, we shouldn't be too convinced that that means AI destroyed elections or campaigns, because they were going to behave badly anyway.

They have a new tool, and maybe they can do it a little better, a little more at scale, a little more cheaply, and that's not to the good. But when we see a candidate put out a dirty trick with AI, they might have put out a dirty trick just by editing some audio before, or in lots of other ways. One of the reasons these effects are small is that there are substitutes for these technologies. So it's true they'll use AI and do bad things, but they've always done bad things, and elections have worked for a long time in lots of ways. Of course, democracy is far from perfect, and our democracy certainly is far from perfect. But the fact that we'll see AI used for some bad things doesn't mean AI has broken electoral democracy.

So your message here is, in part, don't fully disempower yourself, and in doing so disempower our democracy, by just throwing up your hands and saying AI has f***ed the whole thing.

Absolutely. I mean, absolutely, that is a message of ours throughout the white paper: we think the narrative is dangerous as well as the technology. It's important to vote, right? It's important to participate. Of course all sorts of bad things happen, and people try to manipulate, and there's misinformation and whatever. But we vote for the candidates, and in America you're often choosing between two candidates, and you can tell which one is more on your side, or which one is more honest, even if there's lots of deceptive information and noise in the process. I do think that's a real message.

All right. So we've got to hold two things in our heads at once. One, be on alert for the ways that AI, in particular deepfakes, can be used in the 2024 election to manipulate the electorate. But also, don't be so cynical as to believe that elections have been taken over by AI already. Thank you so much, Ethan. I really appreciate you joining me today.

It was my pleasure. Thank you so much.

My name is Galen Druke. Tony Chow is in the control room. Our producers are Shane McKeon and Cameron Chertavian, and our intern is Jayla Everett. You can get in touch by emailing us at podcasts@fivethirtyeight.com. You can also, of course, tweet at us with any questions or comments. If you're a fan of the show, leave us a rating or review in the Apple Podcasts store, or tell someone about us. Thanks for listening, and we will see you soon.