
It's Not Yet Time To Start Worrying About The Polls

Publish Date: 2022/9/15

FiveThirtyEight Politics


I want to clarify for the viewers that this is not wine. I'm drinking cold brew out of a wine glass. Mm-hmm. Mm-hmm. But maybe it is wine. You know, the joy of remote work is that neither you, Nate, nor our listeners will ever know what is actually in this cup. Hello, and welcome to the FiveThirtyEight Politics Podcast. I'm Galen Druk. I'm Nate Silver. And this...

It is Model Talk. Nate, as the listeners can probably hear, we are unfortunately remote again. We would have liked to be in studio, but your schedule is a little crazy. At the very least, we are recording a Model Talk. It's been a minute. Where have you been? What have you been up to? I went to Montreal for Labor Day weekend. Canada's nice, man. How is Canada not f***ed up like every other country in the world?

I don't know. Canadian listeners, please call in and answer Nate's open question. I mean, I'm sure there are problems with Canada. I don't know. I mean, you had like Rob Ford. They haven't won the Stanley Cup in a long time, right? They've got some issues. But overall, man. And the worst thing about it is they're really f***ing humble about it, which is like more annoying. Like, Canada, you should brag more about how you're like one of the most competently run companies. Countries, yeah. You're multilingual. You're multi-ethnic. You have successful businesses. You're wealthy. I mean, Canada, even though it's really cold. Maybe it's because it's cold, right? In the US, we can go to the beach, and in Canada, they have to be all, I don't know, it's terrible all winter. They have to be very industrious, I guess. Nate, it sounds like you want to move to Canada. No, I wouldn't. Yeah. Wait, so why, after all that, would you not move to Canada? I like the US and our dynamic risk-taking system.

So then you don't actually think Canada has all that much to brag about? No, it does. I'm wrong. Alright.

All right, fair enough. So I guess we should talk about the model. And we have a familiar caveat to start off this recording. It is Tuesday, September 13th. So when, dear listener, you are listening to this, this data may be a couple days old. But here's what the forecast shows today. Republicans have a 73% chance of winning control of the House, according to the Deluxe version of the model. And Democrats have a 69% chance of winning the Senate.

The governor's race is... Already out of date, Galen. It's already out of date? Did you just update it in the five minutes between when I looked at this and when we started recording? I don't personally update the model. Fivey does. But according to Fivey, Democrats still have a 71% chance of winning the Senate.

And 27 in the House. Wait, so this is just about as close to parity between the House and Senate numbers for Republicans and Democrats as we have ever gotten? That's interesting, yeah. Fivey is excited about that.

All right. Thank you for updating me with those correct numbers, Nate. If, of course, you are listening to this on Thursday or later, those numbers might have changed, so you can go check them out on the FiveThirtyEight website. When it comes to the governor's races, not all that much has changed. They haven't changed nearly as much as the Senate forecast, for example.

But here are some of the key races. Arizona and Kansas are toss-ups. Wisconsin, Oregon, and Nevada lean Democratic. Michigan and Pennsylvania are likely Democratic. And Georgia, Florida, and Alaska are likely Republican.

Nate, I should also mention that there are primaries tonight in Rhode Island, New Hampshire, and Delaware. And when I say tonight, I do mean Tuesday, September 13th. So by the time folks are listening to this, they will know the result of probably the most interesting race in this primary set, which is the Republican Senate primary in New Hampshire. If there is a significant or surprising result, we will talk about it on Monday. Fear not. But we unfortunately do not know the results of those primary races yet. We're going to record this podcast nonetheless. Are you ready, Nate? I'm prepared. You are prepared. Okay.

So we haven't done this in a while. It's been maybe a month even since we laid out the numbers. How would you describe the state of the race, beyond just, you know, Republicans have a 73 percent chance of winning the House and Democrats have a 71 percent chance of winning the Senate? Well, I kind of think we can reduce everything down to that one number, so we can end the podcast now.

But, you know, we've in past elections taken great pains to explain probability, put it in context, talk about uncertainty. You know, it's post-Labor Day now. We're less than two months away from the election. How should people be thinking about those numbers in reality? I mean, 71 is a pretty nice number. It's higher than 70. It's lower than 72. OK, I'm being goofy. Nate, are you high right now? I'm not. No, I mean, look, they're both...

If you deal with 538, it always seems like things are in the 70-30 range, right? Okay, now maybe we really should just close up shop. No, what are you asking?

30% happens all the time. Aaron Judge has like a .303 batting average, right? The Republicans' chances of winning the Senate are still an Aaron Judge base hit, right? Or Democrats holding the House. Isn't that 27, right? In a poker hand, if you have a flush draw against a made hand, it gets there about 30% of the time, a little more actually, but in that general ballpark. So these are very, very ordinary course-of-events outcomes, if you were to have Republicans win the Senate or Democrats keep the House. Those are both very, very, very plausible outcomes.

With that said, clearly you've seen movement toward the Democrats that's been quite steady, really, right? I mean, if you look at our forecast, almost every day the number has either held steady or gone up for Democrats in the Senate. So I kind of keep waiting for an opportunity to say, OK, now there's been a little bit of a tick back to Republicans, let's talk about why the fundamentals are still favorable for the GOP. And in some ways they are. I mean, we have the whole history of midterm elections being very bad for the president's party, typically. Biden's approval rating has improved, but it's still fairly poor. But you've had this steady increase, I think maybe in part because the model was a little skeptical of this summer's polling surge, and the special election surge too, for Democrats. The longer it holds, the more willing the model is to buy that maybe we will defy history, and this will be the rare case of a relatively neutral environment in a midterm year.
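Backing up to Nate's poker aside for a second: his flush-draw number is standard hold'em counting, nine outs among 47 unseen cards with the turn and river still to come. A minimal check in Python:

```python
from math import comb

# Flush draw vs. a made hand: 9 outs among 47 unseen cards,
# with both the turn and the river still to come.
p_miss_both = comb(38, 2) / comb(47, 2)  # neither remaining card completes the flush
p_hit = 1 - p_miss_both
print(f"{p_hit:.1%}")  # ~35.0%, i.e., "about 30% of the time, a little more actually"
```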

So actually, you mentioning that improvement for Democrats brings up a listener question that we got. Dylan asks, should it be surprising that the House and Senate forecasts have shown near linear improvement for Democrats throughout August? Is this because the model is slow to pick up on sharp improvements? Or has the environment really been steadily improving for Democrats week to week?

No, I think it's more the former. I mean, I don't know what slow or fast means, right? In general, polling is pretty steady in midterm elections until the summer. This summer you had major events happening, particularly the Dobbs decision, which, if you don't know by now, overturned Roe v. Wade. And so, yeah, the model is kind of slowly scaling back its skepticism, is one way to put it. It's designed to be conservative about moving, at least at this point in the cycle. It'll get more aggressive by the time we get to the last couple of weeks. But a lot of the movement in polls can be just noise, and the model doesn't know whether there's a credible cause behind it. If you knew that event X occurred on this date, you could treat it differently. We actually do this for some of our presidential forecasts now, right? After debates and conventions, we kind of reset the calendar so the model will move more aggressively after these known events. We don't do that for one-off news developments like Roe being overturned. But in principle, that would be the right way to critique the model, right? You would say, this is a major event; you shouldn't have to wait for two months before you price this in. The first five or six polls should have told you that already.

We're also, by the way, not accounting for special election results; there are different things going on there. Also, in the Deluxe forecast, we use expert ratings. I think those can be a little bit lagging, frankly, sometimes. I think the experts weren't expecting this shift back. But yeah, basically the model's been conservative. We can debate whether that's a good choice or not. But it's not so much that things are getting better and better for Democrats, I don't think, as that the model has been skeptical and now there's finally enough data to say, yeah, this looks a little bit unusual this year. Nate, you're hitting on all kinds of questions that listeners submitted.

So I do have some questions of my own that I want to get to, but Dylan has a great follow-up question given what you mentioned. Dylan writes: I'm under the impression the model doesn't use special election results in making its calculation. If that's the case, could Nate wildly speculate about how including results from this year might affect the current probabilities for the House and Senate? So we did look at this when we designed the model, I guess, four years ago, and found that using special election results didn't really improve things and added a lot of complication, in part because you already have lots of other polling data. The way I think about special elections is that they provide some cap, maybe above and beyond what the model says, on Democrats' downside. Clearly, you're going to have a lot of Democratic enthusiasm this year. You may also have Republican enthusiasm, more so than you saw in the specials. But that caps some of the scenarios, like in 2014 or 2010, where Democrats just didn't show up at the polls at all, right? So that hedges their downside a little bit. It probably ought to make you a little bit less worried, although still somewhat worried, about the polling being off in a Democratic direction again, as it was in 2020 and 2016. In the races where you've had polls, Democrats have matched or beaten the polls in these special elections, as well as in the Kansas abortion referendum, sometimes by a pretty big margin. And so that seems like important confirmatory evidence. But no, the model does not use that data directly.

On the same theme of special elections, our next question, from Matt, in a casual tone, asks: Yo Galen, I got beef with the model. Dems just won an Alaska special, but the model hasn't adjusted the odds of Dems winning Alaska at-large in the general at all. What gives, bro? Should we expect that special election outcomes will change the forecast for those elections in November?

I mean, that's the kind of thing that should show up in the expert ratings in the Alaska seat, right? So has there been no change at all? Is that true? This question may have been sent in a couple weeks ago, but I don't think there's been much change at all, if there has been any. So there is a change here, where Democrats went from like 14% to 21% in Alaska. And I assume that reflects the fact that some expert raters updated their Alaska ratings after the special election. So yeah, look, there probably is a case that if you actually had a special election in this one district or state, then that provides some extra information. We already use like 15 different types of data in this election forecast. So some of it's just a matter of, you have to draw some boundaries. But yeah, in principle, I think our model is probably underrating Democrats in Alaska, at least the Classic version, where we aren't accounting for those expert ratings. We have done deep dives on this in the past, but I think we still have yet to do one in the 2022 cycle. And so, you mentioned we have 15 different types of data in the model. For maybe new listeners, or to refresh our memories, what are the key components that go into the forecast?

Well, polls. So obviously, if you have local polls of a race, that's very nice. Otherwise, the generic ballot. The next most important input is probably historical: what we call the partisan lean index. That is, based on recent presidential and state legislative election results, how a district lines up relative to the country as a whole. Is it Republican-leaning or Democratic-leaning? Those tend to be fairly stable.

There are various indicators pertinent to the candidates: how much money they've raised, how much electoral experience they have, and, if they are a current member of Congress, what their voting record is. We find that candidates who are more moderate tend to overperform. If there's redistricting, there's information on how much overlap there is between their old district and their new district. So it's a lot of data.
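For a sense of the shape here, a minimal sketch of how inputs like these could blend for a single district. Every weight below is invented for illustration; FiveThirtyEight's actual method is far more involved.

```python
# Toy district forecast: blend local polls with a "fundamentals" baseline
# built from the partisan lean index and the national environment.
def district_margin(poll_avg_d, n_polls, partisan_lean_d,
                    generic_ballot_d, candidate_quality_d):
    baseline = partisan_lean_d + generic_ballot_d  # fundamentals anchor
    w = min(0.2 * n_polls, 0.9)                    # more polls, more weight on them
    blended = w * poll_avg_d + (1 - w) * baseline
    # Candidate indicators (fundraising, experience, moderation), capped.
    return blended + max(min(candidate_quality_d, 2.0), -2.0)

# An R+5 district in a D+1 national environment, two local polls at D+1:
print(district_margin(poll_avg_d=1.0, n_polls=2, partisan_lean_d=-5.0,
                      generic_ballot_d=1.0, candidate_quality_d=0.5))  # about -1.5
```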

One question that we got was about the incumbency factor, which you just described. Marcus asks: how much does tenure as an incumbent matter in the model? Would a long-sitting senator's chances be priced differently than someone like Warnock, who's only been in for a couple of years? I think we actually do not use tenure, the number of terms, anymore. I think a previous version did, and the current version of the midterm model doesn't. What we do do is look at the incumbent's margin in their most recent election, including whether they were facing another incumbent. If they beat another incumbent, it's more impressive than winning reelection as an incumbent. So we look at how their previous race went. That's adjusted in some complicated ways if they're in a new district.

We also look at congressional approval ratings, that is, how popular Congress is overall. That tends to dictate the degree of the incumbency advantage. The incumbency advantage, by the way, is much smaller than it once was. There are also characteristics of a state or district that tend to predict a stronger incumbency advantage. Senators from small states, meaning low-population states, maybe not geographically small ones, tend to get reelected more often. They're better known to their constituents. There are more constituent services. They can do pork-barrel spending more effectively. If you're a state like Alaska or West Virginia, you can get a lot of gifts because you're an influential vote, right? And voters at home like that to some degree. So yeah, of course the model accounts for incumbency, but in a pretty complicated, sophisticated, quote-unquote, way.
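In the same hedged spirit, a toy rendering of those incumbency ingredients, with made-up coefficients rather than anything from the actual model:

```python
# Illustrative only: the previous margin carries over (more if it came
# against another incumbent), the advantage scales with congressional
# approval, and small-state senators get a constituent-service bump.
def incumbency_bonus(prev_margin, beat_incumbent, congress_approval,
                     pop_millions):
    bonus = 0.25 * prev_margin
    if beat_incumbent:
        bonus *= 1.5                   # beating an incumbent is more impressive
    bonus *= congress_approval / 0.20  # scale against a ~20% approval baseline
    if pop_millions < 2.0:
        bonus += 1.0                   # small-state bump
    return bonus

print(incumbency_bonus(5.0, False, 0.18, 0.7))  # about 2.1 points
```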

So we got some broad questions about the forecast as well.

And one question that I think a lot of people have asked some version of is this question from Kyle: at what point will the model stop assuming a reversion toward Republicans in the generic ballot? For example, what if we're 30 days out from the election and the generic ballot average is still at D plus one, a one-point advantage for Democrats in the generic ballot, which is when you ask voters if they would vote for Democrats or Republicans for Congress in general. The question is, will the model start to assume that 2022 will be a neutral or Democratic-leaning environment at that point? So let me back up a second, right? Right now, the generic ballot has Democrats up by 1.2 percentage points. In our forecast of the popular vote, Republicans are ahead by 2.8 percentage points. So that's a four-point gap, right? So what explains that?

One factor that is not much discussed is that there are more races this year with no Democratic candidate than races with no Republican candidate. And so that causes a skew: even if everyone goes to the polls intending to vote for a Democrat or a Republican in line with the numbers in the generic ballot, sometimes you actually don't have a Democrat to vote for in your district. And so we think the GOP will do better in the popular vote than in the generic ballot per se. That's one source of difference. Another source is that many of these generic ballot polls are still among registered voters, or even all adults. Based on historical trends, as well as data so far this year, the model assumes that registered-voter polls are likely to be more favorable to Democrats than likely-voter polls. So that's another factor. And then, yes, the model still has a prior, an initial expectation that it was going to be a good year for Republicans. Every day that you don't see movement toward Republicans on the generic ballot, that prior gets a little bit weaker. But it won't phase out until literally the day before the election.
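Here's the arithmetic Nate just walked through, as a toy function. The adjustment sizes and the linear prior decay are invented for illustration (the actual forecast shows R+2.8); the point is how a D+1.2 generic ballot can coexist with a Republican popular-vote forecast:

```python
# Toy popular-vote forecast: adjust the generic-ballot average for
# uncontested races and registered-voter samples, then blend with a
# pro-GOP fundamentals prior whose weight decays to zero by Election Day.
def popular_vote_margin_d(generic_ballot_d, days_to_election,
                          uncontested_skew=-1.5, lv_adjustment=-1.0,
                          prior_d=-4.0, prior_horizon_days=365):
    adjusted_polls = generic_ballot_d + uncontested_skew + lv_adjustment
    w_prior = max(min(days_to_election / prior_horizon_days, 1.0), 0.0)
    return (1 - w_prior) * adjusted_polls + w_prior * prior_d

# Democrats up 1.2 on the generic ballot, about eight weeks out:
print(round(popular_vote_margin_d(1.2, days_to_election=56), 1))  # -1.7, i.e., R+1.7
```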

Interesting. You know, you wrote a piece recently for the website called The Asterisk Election, basically saying that there are certain circumstances in which the midterm curse doesn't hold. They're well known at this point. We've talked about them on this podcast: post-September 11th, during the Bill Clinton impeachment, post-Cuban Missile Crisis, during the Great Depression, et cetera.

So in an ideal world, would there be a lever you could pull to say, hey, these are unprecedented times and this might be an asterisk election, so maybe you shouldn't weight that prior the way a normal midterm model would? Well, I don't know. I mean, I think there will be plenty of explanations if Democrats do have a relatively good year, right? I mean, there's Dobbs, and Trump's involvement, and we're in the aftermath of a pandemic.

I guess we're still in a pandemic, technically. Wait, when do pandemics end? Sorry, I'm getting us off track, but when are we going to say that we're not in a pandemic anymore? I think when the WHO has some committee say we're not in a pandemic anymore. Okay, all right. We'll follow up on that in a future Model Talk, but go ahead. Anyway, the ahistoric nature of 2022. No, I mean, you don't want to assume it in advance. I mean, look, a model incorporates a great deal of information. There can always be subjective or hard-to-quantify information that exists outside the model, right? And so I might say, hey, we have our forecast, and maybe I lean a little more optimistic about this state or that result, for this party or that party, for this or that reason, right? But yeah, I mean, again, the model is trying to be pretty comprehensive while avoiding factors that don't generalize well, like subjective one-off factors. I mean, who knows? In 2020, we made a bunch of adjustments to our presidential model that were inspired in part by COVID, because it was clearly such a weird circumstance, right? It literally affected the mechanics of voting: the most severe pandemic, the most deaths in a year, in 100 years or something, right? That was something where it does require you to step back and say, okay, I think we actually have to be adults here and acknowledge that this is not a normal circumstance. This is more speculation, but people will have no shortage of reasons if Democrats somehow keep the House, or lose five House seats and win two Senate seats. There'll be no shortage of explanations for that. And so I don't know if it's an argument that you should be more bullish on Democrats' chances than the model, or just an explanation of the bullishness the model already reflects. Because at this point, the model is kind of saying that it's more likely than not that Democrats gain a seat in the Senate, right? What's the average forecast? And the average forecast has them ending up with 51 seats. So the model is already saying that something unusual is likely to happen this year, right? And so it's kind of explaining why that number is 72% and not 25% or something.

Nate, there is some perhaps negative news for Democrats that came out this morning that I do want to get to. So let's talk about the latest inflation data.

You're a podcast listener, and this is a podcast ad. Reach great listeners like yourself with podcast advertising from Libsyn Ads. Choose from hundreds of top podcasts offering host endorsements, or run a reproduced ad like this one across thousands of shows to reach your target audience with Libsyn Ads. Go to LibsynAds.com now. That's L-I-B-S-Y-N-Ads.com.

I love sports. I love them so much, I never want them to stop. But as the playoffs wind down, we get fewer games, and the sports aren't sporting like I want them to. But FanDuel lets me keep the sports going whenever I want. All I have to do is open the app and dream up bets anytime I'm in the mood.

And this summer, FanDuel is hooking up all customers with a boost or a bonus daily. That's right. There's something for everyone every day all summer long. So head over to FanDuel.com slash sports fan and start making the most out of your summer. FanDuel, official sports betting partner of Major League Baseball.

Must be 21 plus and present in Virginia. First online real money wager only. $10 first deposit required. A bonus issued is non-withdrawable bonus bets that expire seven days after receipt. Restrictions apply. See terms at sportsbook.fanduel.com. Gambling problem? Call 1-800-GAMBLER.

The most recent inflation data came out on Tuesday and showed prices rising 8.3% compared with a year ago. Overall, inflation was down slightly from last month, but core inflation, which strips out gas and food prices, was up to 6.3% from 5.9% last month.

As gas prices have fallen, I think focus has shifted away from inflation a little bit, and Biden's approval rating has risen noticeably over the past month. So how should we think, Nate, about this latest inflation data in the context of the election and Biden's approval? I mean, it depends in part on whether voters are looking at the short-term trend or the long-term trend. So in the short run, last month you had a big decline in gas prices that's reducing headline inflation, but you had an increase in non-gas prices relative to last month that was, in fact, a little bit bigger than expected. So voters may be noticing that back-to-school supplies are expensive. They're noticing that restaurant dining is still quite expensive, or things like that. Maybe they traveled for Labor Day; that sector of the economy tends to be very pricey right now. And inflation is up a lot over the past year. It's not up nearly as much over the past month. And so how voters weigh those things, I think, is not quite certain. Gas prices, I think, have a singular impact, because they're literally advertised on giant signs every time you go driving, and because it's an expense that people feel at the margin a lot. So the fact that gas prices seem to be steady or declining is maybe bullish for Biden. But yeah, I mean, Biden would rather have had a better inflation report today. The market was off quite a lot today over concerns that the Fed will raise interest rates.

It's considered cringe to talk about the stock market, but people notice that. It's a very visible indicator. So yeah, that news is incremental. It wasn't a terrible report, but it was worse than economists expected. That's probably the most important news of the week from an election forecasting standpoint.

But I'm trying to make sense of this, I guess, in the context of Biden's approval rating improving over the past month. Like those rises in core inflation, if people were experiencing them, they experienced them during the month that Biden's approval rating actually improved, right? Like we're looking at data from a month that already happened.

So are people just reacting more intensely to gas prices than they are to prices at restaurants or for new vehicles or rent? I'm trying to recite the sectors of core inflation that rose the most based on what I saw this morning, right? These two things happened at the same time: core inflation sped up, and Biden's approval rating improved by like five points. Well, core inflation is important to economists because energy prices are more cyclical and can dominate the index, right? A regular voter, I think, probably puts more weight on energy prices, because they're just more visible. And so from a consumer standpoint, in the consumer-weighted bucket of goods, they may perceive prices as declining. Because prices did decline. The whole bucket, including energy prices, did decline month over month. And in a consumer-weighted index, where maybe they double-weight energy prices, that might outweigh the fact that food and stuff like that is still really expensive. So yeah, from a consumer standpoint, I think it looks a little bit better than from an economist's standpoint. All right. That makes sense.
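The mechanism Nate is describing is just a weighted average over the basket. An illustration with entirely made-up weights and monthly price changes:

```python
# Illustrative only: the whole basket can fall month-over-month while the
# ex-energy ("core") portion rises, if the energy drop is big enough.
weights = {"energy": 0.09, "food": 0.13, "core": 0.78}
mom_change = {"energy": -8.0, "food": 0.7, "core": 0.6}  # percent, month-over-month

headline = sum(weights[k] * mom_change[k] for k in weights)
print(f"headline: {headline:+.2f}%  core: {mom_change['core']:+.2f}%")
# headline comes out slightly negative even though core is up
```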

So speaking of, you know, potential bad news for Democrats: our friend Nate Cohn at the Upshot poured some cold water on Democrats' hopes in his most recent report, essentially drawing folks' attention to a trend that he sees in the 2022 polling. And I'd like to get your take on some of this. So here's what he writes, quote: What looks good for Wisconsin Democrats might be too good to be true. The state was ground zero for survey error in 2020, when pre-election polls proved to be too good to be true for Mr. Biden. In the end, the polls overestimated Mr. Biden by about eight percentage points. Eerily enough, Mr. Barnes (he's talking about Mandela Barnes, the Senate candidate who's running against Ron Johnson there) is faring better than expected by a similar margin. So, eight percentage points again.

And he goes on to say: the Wisconsin data is just one example of a broader pattern across the battlegrounds. The more the polls overestimated Mr. Biden last time, the better Democrats seem to be doing relative to expectations. And conversely, Democrats are posting less impressive numbers in some of the states where the polls were fairly accurate two years ago, like Georgia.

So we have talked about polling error before on this podcast. We have even talked specifically about Wisconsin and Ohio. In fact, on the most recent Model Talk, we really dug into this question. But we didn't get quite this precise about it. Nate Cohn sees a trend where, you know, he's not saying that all the polling is off, but it looks like there might be a 2020-style polling error pattern repeating itself. How much credence do you give to this sort of red flag that he's waving when it comes to the polling industry? I basically think people have the wrong prior on this question. I think people should have a fairly strong initial view that the direction of polling error is hard to predict. Meaning, yes, it's very possible that polls could, again, overrate Democrats. It's also possible they could overrate Republicans. So I'm going to treat the abstract version of this first. So why is this a good belief to have? There are a few reasons. Number one, historically, if you look at polls, the error bounces around from year to year. So it's not very predictable in that sense.

Number two, if you look at pundit conventional wisdom about which direction the polling error will go, that does not tend to be very accurate. It was accurate, I guess, in 2020, but there are lots of years like 2014, when there were big debates about how polls would underrate Democrats in the midterms. They wound up overrating Democrats instead, right? This type of thing happens a lot. So people's guesswork has not been very good. Number three, pollsters have pretty big incentives to be accurate. So yes, if you notice that the thermometer in your room says the temperature is 70 when the temperature is really 75, you probably don't leave that unchanged, right? You probably get a new thermometer, or adjust it to work properly, and then you might not have the same problem going forward. You can also overcompensate: you tinker with three things to make sure Republicans have better numbers, and it turns out you only needed to do one of them; the other two create a bias in the other direction. That's another argument. Number four, the market for polling is a market, in the sense that polling is not cheap. Which people do polling depends on which ones are successful or not. We also have pollster ratings that look at their past success. You now have a larger number of conservative-leaning pollsters who have a higher weight in the FiveThirtyEight model, based on the fact that they generally had good years in 2020. So there are already a lot of checks and balances in the model. And I don't know, I think people are a little bit superstitious about this; if you're really trying to step back and be rigorous, I don't know that that's a safe presumption. People go, oh, the presumption is that the polls are going to be biased, and maybe they won't be. I think the wrong part of that is foregrounded and the right part backgrounded. The presumption should be that polling can be biased, but that the bias is unpredictable. But maybe here are some reasons to worry that there'll be, again, a bias toward Democrats this year.

So we have talked a good amount on this podcast about how pollsters do have incentives to change their methods if they're repeatedly getting errors in the same direction. But Cohn, in his most recent article, writes that most pollsters

haven't made significant methodological changes since the last election. The major polling-community post-mortem declared that it was, quote, impossible to definitively ascertain what went wrong in the 2020 election. He does then go on to say that some pollsters are making efforts to deal with the challenge. One pollster in particular, he points out, says that they're restricting the number of Democratic primary voters, early voters, and other super-engaged Democrats in their surveys, which Cohn also says the New York Times and Siena College polls are doing.

Let's talk about both of those things. One, do you find it concerning that most pollsters aren't making methodological changes? And among those that are, like the Times and Siena College, and perhaps other pollsters that might be limiting Democratic primary voters in their samples, is that the way to go about things? I don't necessarily accept the premise that most pollsters aren't making changes. I would want to see that. I mean, you know, look, there are reasons to think that maybe 2020 was a quirk. The best reason is COVID: this was at a point in the pandemic, pre-vaccine, when people were still locking down and being very careful, and Democrats were much more careful than Republicans. Therefore, they were much more likely to be at home answering phone calls. That, to me, seems like a pretty valid reason why you might have had a polling error in 2020 that might not apply now. Also, in elections without Trump on the ballot, there hasn't been a Republican-leaning polling error, right? In 2018, there was no systematic error. There was in some races, right? And those... But I do want to say: those polling errors that we did see in 2018 happened in places like Ohio and Pennsylvania, to some extent Michigan, and in Florida. You know, some of these are key states with competitive races in this cycle. And so, given that, you would start to see maybe there was an underestimation of Republicans' performance in 2016, 2018, and 2020. Yeah, I don't necessarily know inherently why that would be the case. It's worth pointing out, I think. I mean, look, I should preface all this by saying that our model actually does assume that Democrats will not perform as well

as the polls currently show, right? That's based on, in some of these races, the quote-unquote fundamentals being worse for Democrats than the polls show. It's based on the fact that it's September, not November; you still have almost two months to go, and there's some reason to suspect there'll be some reversion to the historical mean. You still have some polls that are among registered voters; we think when everything converts to likely voters, that might hurt Democrats by a point or so, although clearly you don't have the sort of major enthusiasm gap that you've had in some past election cycles. So it is true that our model doesn't buy that the polls are giving the best snapshot of the Wisconsin race, right? But I don't think the right assumption is that polls have a systematic Democratic bias. They haven't over the long run. They haven't in elections... I mean, they did in 2016 and 2020, and 2014 for that matter, but in 2012 you had a Republican bias in the polls. So far in these special elections since Dobbs, you've had a Republican bias, if anything. I'm just holding onto that prior as a good macro rule, more strongly than other people might. What would have to happen for you to give up that prior? If I were just convinced that the polls were systematically biased and that it couldn't be fixed, then I probably wouldn't run a polling website anymore. Oh, interesting. But what would have to happen in the real world? What would be an indicator to you that there was a systematic bias? How many election cycles would we have to get through?

At least two more? At least two more. Okay. So you're sticking around for at least two more, Nate. The thing about elections, man, is like... That face that you just made when I said that was priceless. You only have one election every two years, you know what I mean? Or every four years if you don't care about midterms. And so people dwell on the fact that, oh my God, this happened twice in a row. Or actually, if you do look at the midterms, it would be the last three times, right? But something happening two times out of three is not a particularly robust trend, right? It's just that elections are very consequential and people like to cast blame. People like to moralize about elections, right? Nate, it's all your fault, actually. Okay, fair enough.
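A quick back-of-the-envelope check on that "two times out of three" point (this is the editor's arithmetic, not anything from the model): if the direction of polling error really were a fair coin flip each cycle, a miss in a given direction in at least two of three cycles would still happen half the time.

```python
from math import comb

# P(pro-Democratic miss in at least 2 of 3 cycles) under a fair coin flip
p = sum(comb(3, k) for k in (2, 3)) / 2 ** 3
print(p)  # 0.5
```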

We actually got a question along those lines. Well, not really along those lines. It's a question that we have answered before, but we'll answer it again. Eve asks: does the model factor in bias introduced by voters or poll respondents reacting to the 538 model? I've got to say, Eve, I think she might be giving the 538 model a little too much credit. But this is the idea that we ourselves, the 538 website, the forecast, maybe this podcast, are shaping voter behavior. Sure, in the sense that maybe any media can shape voter behavior, obviously. Yeah.

Right. Any media can. The standard answer here is that in a two-way race, where you have a Democrat and a Republican running, there's not much evidence that media coverage systematically affects things. In a multi-way race, like a primary, where you want to get media coverage, the volume and tone of media coverage can matter, because voters are strategic. If voters see a poll showing Elizabeth Warren falling behind Bernie Sanders, they may say, well, I like Warren more than Sanders, but I like Sanders more than Biden, so I'll switch down to Bernie. That accelerates her decline in the polls; it becomes self-fulfilling. So those dynamics of media coverage and horse-race coverage are very important in the multi-way, tactical, strategic-voting context. I think they're less important in a two-way race. And it's not clear directionally which way they would go, because we get accused of both: oh, you're trying to suppress turnout by having this candidate do well, or the reverse, you're trying to lower morale. It's not very clear in which direction this goes. To be frank, there have been some academic papers on this, some of which are okay and find not much of an effect, and some of which are dubious, because a lot of papers are dubious, let's be honest. So yeah, I think the short answer is I don't think it affects things much. But I mean, this year was weird in that the 538 model kind of came out right at a time when there was a shift in conventional wisdom, because all these things started happening for Democrats, right? Inflation started to abate to some degree. The Dobbs decision was handed down right when the model was coming out, basically. And then you have these various legislative successes from Biden that were not necessarily preordained. The COVID situation's been relatively tame. So they've kind of had like a best-case scenario. And this does not seem to me like one of those times when there are spurious shifts in the narrative for no reason. Big news events have happened. They all more or less happened to favor Democrats. The polls have shifted as a result. This seems to be a pretty robust conclusion, and not the conventional wisdom run amok.

Zach asks a question related to what you were just saying: if you did want to shape voter behavior with a polling result, how would you do it in the first place? Zach asks this: Should polls released by campaigns be interpreted differently based on who's ahead? For example, people behind might want to emphasize that they're real contenders, while people ahead might want to claim the race is quite tight to motivate voters to show up. So do we always assume that internal polls are inflating that candidate's chances, or do they sometimes deflate that candidate's chances? I mean, I can think of some instances, right? Sometimes a campaign will release an internal poll if it's worried about voter complacency. I think the Obama campaign did that once in Pennsylvania, right? Sometimes, if campaigns are competing for national fundraising dollars, maybe a candidate who is assumed to be way ahead but is behind in fundraising will release a poll, or leak a rumor of a poll that may or may not actually exist, to try to create a perception that that race needs more money. So I can think of times it happens, but it's unusual. I mean, are there some patterns? You know, probably when you're further behind, you need to catch up more, right? If you're eight points ahead, you don't have to exaggerate that margin necessarily. Right. But we just use a one-size-fits-all adjustment for partisan polls and don't try to get too cute.

All right, let's enter the rapid-fire portion of this Model Talk. This question harkens back to the conversation we were just having, and it comes from a Twitter user named Special Puppy: in which race do the model's expectations and your personal expectations diverge the most? Asking you to second-guess your model, Nate. Let me look at our actual up-to-the-minute forecast. I mean, some of the ones the other Nate mentioned. Like, personally... Wisconsin, I don't think I buy that's 50-50. Maybe it's 60-40 Johnson, right? You know, I'd probably say Nevada, which is a state where polls have done pretty well, but nonetheless I think that's probably more of a toss-up than 60-40 Democratic. And we're also looking at the Deluxe. I think if we looked at the Lite, you might have more second-guessing. I would, for sure, yeah. Because the Deluxe is kind of building in some conventional wisdom in the form of the expert ratings. In Georgia, I actually think I'd have Warnock as a heavier favorite than 53%. Walker certainly seems to have a lot of flaws as a candidate. I'm not sure I buy that Democrats are at 37% in North Carolina. I'd take the under on that.

So in many of the races now, I guess I have somewhat different opinions than the forecast, right. Yeah. I mean, well, one of the interesting things happening in Georgia is that the governor, Brian Kemp, is pretty heavily favored to win re-election. But if you were going to say that you have even more confidence in Raphael Warnock than our forecast shows, you would be talking about a situation where there's an even bigger ticket split going on, potentially. That's fair. That's fair. People have pointed to that and said, I'm not really sure that that will be the case; ticket splitting has declined significantly, et cetera, et cetera. But I think actually maybe Texas in 2018 is a good example, where you saw Greg Abbott, who was not in a competitive race. I think he won the governor's race in Texas by like 14 points or something, whereas Ted Cruz won by, what, two and a half points against Beto O'Rourke. And Texas isn't a particularly elastic state. So it's something that you could imagine happening. Yeah. Again, these are subtle shifts, but yeah, I'd have Wisconsin, North Carolina, and Nevada, where I'm personally a little bit more bullish on Republicans than the model shows. In Georgia, I'm a little bit more bullish on Warnock. So record that. Probably not guaranteed at all to be any more accurate than our model. But for posterity's sake, those are the ones that look a little bit different to me than to our model. There you go. Well indulged. I remember the days when you refused to second-guess your forecast. Thank you for playing along with the listeners, Nate. Okay. Okay. Next question.

Why is there not a forecast of all of the governor's races combined, i.e., which party is forecasted to control the governor's mansions for the majority of the country's population? There is that, I think, actually. It's just not in the cute Fivey graphical part of the forecast. I think if you scroll down to the very bottom, where it says download the data, there's polls and model outputs. I believe the model outputs, like the top-line output file, give you information on what the likelihood is that each party controls a majority of governorships. And what I think is a better metric, actually, is which party will have governorships covering a majority of the population. It's not like the Senate; there's no Senate of governors, right? Some states matter more than others. The governor of California probably has a lot more influence, on a larger share of the population, than the governor of Wyoming. And so that's a metric that we actually calculate. But you have to go down to the very bottom of the interactive and download the actual data. News you can use, Nate.
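A hypothetical sketch of that population-weighted metric, with three invented rows; a full version would tally majorities across the model's actual simulation output rather than averaging probabilities:

```python
# Hypothetical illustration only; the real model-output download may be
# structured differently.
states = {  # state: (population in millions, P(Democratic governor))
    "CA": (39.0, 0.95),
    "TX": (29.5, 0.10),
    "WY": (0.6, 0.05),
}
total_pop = sum(pop for pop, _ in states.values())
dem_share = sum(pop * p for pop, p in states.values()) / total_pop
print(f"expected share of population under Democratic governors: {dem_share:.0%}")
```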

Next question: what is the probability that the balance of power in the Senate will once again be decided by a Georgia runoff? Oh, God. Let me look at this, right? I mean, how often does it come down to one seat is the first question, right? So 25% of the time, it comes down to one seat. Of that 25% of the time, how much is Georgia? You know, my guess is not more than 20% of that 25%. And then how likely is Georgia to go to a runoff? We probably do calculate that somewhere. Let me see... It looks like Georgia's in a runoff about 25% of the time. I think it's actually less likely than people think, right? Because you need that three-way parlay: that it comes down to one seat, that Georgia is decisive, and that Georgia goes to a runoff. I'm trying to think if I'm thinking about this correctly on the fly, but it's probably in the realm of 10% or something like that, or maybe a little lower. It's not like, oh, this is inevitably going to happen again. There will be a lot of elections where you have a Georgia runoff; it happens, again, I'm trying to look at the sims here, 25% of the time or something. But some of those will be, oh, Democrats blew it, so the best they can do is 49 seats. Or, oh, the GOP blew it, and they're at best going to have 49 of their own. So then it's just a matter of the size of the Democrats' majority. So, you know, it's a material possibility, but not as high as people might assume.
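Nate's three-way parlay, multiplied out naively. Treating the three pieces as independent gives a floor rather than an estimate, since a close Georgia race makes a runoff and decisiveness more likely to occur together, which is why his on-the-fly figure lands higher:

```python
# Back-of-the-envelope, using the rough numbers quoted above.
p_one_seat = 0.25           # Senate control comes down to a single seat
p_georgia_given_one = 0.20  # ...and that seat is Georgia
p_runoff = 0.25             # Georgia goes to a runoff at all

naive = p_one_seat * p_georgia_given_one * p_runoff
print(f"naive independent parlay: {naive:.2%}")  # 1.25%, a floor, not an estimate
```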

Another rapid-fire question. This listener asks: would you ever consider creating a knob that would allow you to simulate polling error and how it would affect the forecast? You can kind of do this with the make-your-own-path thing, but go ahead, Nate.

Yeah, you can kind of do it with the make-your-own-path thing. I mean, it's not a bad thought. What's a little tricky is that it's unclear where the uncertainty comes from then. Because if you assume you know in which direction the polls are going to be wrong, then there's kind of nothing to predict, right? You've just made your own prediction; there's no uncertainty, really. I mean, I think what the New York Times did, asking what happens if the polling errors are as significant as they were in 2020 (I think that's what the other Nate's newsletter did), seems like a useful mental experiment. You'd probably want to do it in both directions, right? What if there's a 2012-style error, I guess, where Democrats overperform their polls? I mean, it does seem like, if you look at the polls-only version, Democrats have enough of a lead in some of these states that, kind of like Biden in 2020, they could probably withstand a polling error, I think, right? Look at all the Democrats... I don't have the actual forecast in front of me, but in Nevada, Wisconsin, Arizona, in all these states, the Lite, meaning polls-only, model has Democrats with a 70% chance or higher, right? So if you assume there's a polling error, I guess they lose Georgia, but then they somehow win Wisconsin and they win Pennsylvania, right? So even if polls are off by like three points, I think they actually still keep it 50-50 or even gain a seat, right? So in that sense, their lead is more robust in the Senate than you might think.
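A sketch of what that "knob" could look like: shift every polled margin by a fixed polling error and re-simulate. The states, margins, and the four-point race-level standard deviation are all invented for illustration:

```python
import random

polled_margins_d = {"NV": 2.0, "WI": 1.5, "AZ": 3.0, "PA": 5.0, "GA": 1.0}

def p_dem_majority(shift, n_sims=50_000, race_sd=4.0):
    """Share of simulations where Democrats win at least 3 of these 5
    races after shifting every poll by `shift` points (negative means
    the polls overrated Democrats)."""
    wins = 0
    for _ in range(n_sims):
        seats = sum(random.gauss(margin + shift, race_sd) > 0
                    for margin in polled_margins_d.values())
        wins += seats >= 3
    return wins / n_sims

for shift in (0.0, -3.0):  # no error vs. a 2020-style three-point miss
    print(shift, round(p_dem_majority(shift), 2))
```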

We got... I don't know if this is a serious question. I don't think it's a serious question. Ask it anyway. Would you say that Provincetown is real America? I am guessing that this is a reference to Max Boot's opinion piece, in which he argued that Provincetown and, I think, Martha's Vineyard are in fact real America, or just as much real America as the Rust Belt diners where reporters go to do election reporting. Give me as serious of an answer as you can on this one, Nate. Provincetown is probably the most unusual town of 5,000 or more people in the entire United States.

Right. It's extremely gay. It's extremely wealthy. It's extremely liberal. It's extremely white. It's extremely middle-aged. Yeah. I mean, it's even unusual for a gay population, how ethnically homogenous it is in some ways. So this is like the worst example you could have picked. You know what? Don't get me wrong. I can name a place that's even less real America. Where? Where I am right now.

Oh, that's true. That's true. I think I'm currently on Fire Island. Listeners have gotten mad at me about my use of prepositions here. I'm currently in the Pines. I think it's probably even less real than Provincetown.

No, I mean, no. I mean, can you even drive there? Are there cars? You cannot drive. There are no cars. There aren't even bicycles because there are only boardwalks and sand and the boardwalks are about three feet wide. So you can only walk. There isn't even a grocery store that's open year round. The grocery store closes in October. Okay, that is more unusual for sure. I agree. I agree.

So if you had to pick a place to represent, quote-unquote, real America, where would you pick? I think Nathaniel Rakich did this once and said New Haven, Connecticut. But first of all, Yale is extremely weird, right? And New England is weird. So I'm not sure I can accept that. I mean, you know...

Kansas City is probably a pretty good approximation, but maybe there aren't enough Hispanic voters there. I mean, you said Las Vegas once, which, you know, it's not Las Vegas itself, because of the level of tourism, but I think Nevada in general, the level of suburban sprawl and the ethnic and racial diversity... You know, we talk about how American elections are decided basically in the suburbs, exurbs, whatever. That's where so, so many Americans live. And yes, America is, whatever the census says, like 75% urbanized, but mostly what they're talking about in those statistics are places that people would recognize as suburbs. Yeah. I mean, Las Vegas off the Strip is not a bad version of the quote-unquote real America. I mean, a lot of these growing cities are. Houston's a fascinating city that's very ethnically diverse, right? Atlanta, I guess, demographically is more Black than the country as a whole, though Atlanta has lots of every kind of person, pretty much. What about New Jersey? I feel like New Jersey is pretty mean America, mean as in the average. If you look at just the racial demographics of New Jersey, it's, I think, a very close match for the US. However... Oh, is it too wealthy? It's very wealthy and suburban, right? It's unusual for a state to be... I mean, there are some rural parts of New Jersey, in the south and west of the state, but it's unusual to have so few major cities but still high population density. By the way, New England is also weird for that reason too, right? You still have a lot of small towns in New England that are close to one another, which is not at all true for most of the rest of the U.S. Okay. So to answer this listener's question, Provincetown is not real America, but that doesn't mean that we should necessarily spend all of the time we do at Rust Belt diners. There are lots of other places to go. In fact, Nate and I will be making a reporting trip to Las Vegas for exactly this reason. Of course.

We'll compare calendars once we get off the podcast. Nate, that's it. Any words of wisdom as we depart? We're going to be doing more of these Model Talks in the run-up to the election. I just have to nail your schedule down. And hopefully we'll be back in studio soon. I always like being in studio. My advice is: don't waste your time debating individual polls or pollsters. That's amateur hour, folks. That's amateur hour. Develop a process for how you systematically deal with different types of pollsters or data points. Don't do special pleading on a one-off basis: oh, I don't trust this poll, I don't like this poll. Don't do that. Don't do that. All right, Nate, great last words. We'll leave it there. Thank you so much for another Model Talk. Okay. Thank you, Galen.

My name is Galen Druk. Sophia Leibowitz is in the control room. Chadwick Matlin is our editorial director and Emily Vanesky is our intern. You can get in touch by emailing us at podcasts at 538.com. You can also, of course, tweet at us with any questions or comments. If you're a fan of the show, leave us a rating or a review in the Apple podcast store or tell someone about us. Thanks for listening and we will see you soon. Bye.