COVID made polls too Democratic
Brian Stryker / Oren Savir
Polling error in 2020, while within the historical range, has caused a lot of heartburn among Democratic strategists, pollsters, and data analysts. We’re still at work understanding the 2020 election’s polling misses, but one theory that looks accurate is that the COVID-19 pandemic caused polls to overstate Democrats’ vote share. For a state like Wisconsin, with a heavy COVID outbreak near the election, the added COVID error was as much as 7 points.
Simply put, people more worried about COVID were more likely to take polls, and those people were also more likely to vote for Joe Biden regardless of their age, education, race, gender, or party identification. Early in the pandemic, pollsters saw that people stuck at home were picking up the phone more, and this analysis shows that people who took the pandemic seriously kept that up through election day. Here we show that this had a direct impact on the polling results. In particular, how bad a state’s mid-October outbreak was pretty accurately predicted how much the polls erred in Joe Biden’s direction in that state.
What we did
To analyze the link between COVID-19 and polling misses, we:
1) Calculated the polling error in each of the 50 states as the difference between FiveThirtyEight’s final polling average for that state and its final election result. Source: FiveThirtyEight, New York Times
2) Estimated the severity of each state’s outbreak using the daily increase in positive cases two weeks before the election (per 100,000 residents) and the daily increase in deaths two days before the election (also per 100,000 residents). We used rolling seven-day averages to minimize noise in COVID-19 data collection. Source: The COVID Tracking Project
We then plotted deaths and cases per person in a state on one axis and polling error on another.
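As a rough sketch of that pipeline (the file names, column names, and the sign convention for polling error here are illustrative assumptions for the example, not our actual data layout):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical CSV extracts; column names are assumptions, not the real files.
polls = pd.read_csv("fivethirtyeight_final_averages.csv")   # state, biden_avg, trump_avg
results = pd.read_csv("certified_results.csv")              # state, biden_pct, trump_pct
covid = pd.read_csv("covid_tracking_daily.csv",             # state, date, new_cases, population
                    parse_dates=["date"])

# 1) Polling error: final polling margin minus final result margin
#    (positive = polls too pro-Biden).
df = polls.merge(results, on="state")
df["poll_margin"] = df["biden_avg"] - df["trump_avg"]
df["result_margin"] = df["biden_pct"] - df["trump_pct"]
df["polling_error"] = df["poll_margin"] - df["result_margin"]

# 2) Outbreak severity: 7-day rolling average of daily new cases per 100k residents,
#    taken two weeks before Election Day (Oct. 20, 2020).
covid = covid.sort_values(["state", "date"])
covid["cases_7day"] = (covid.groupby("state")["new_cases"]
                            .transform(lambda s: s.rolling(7).mean()))
snapshot = covid[covid["date"] == "2020-10-20"].copy()
snapshot["cases_per_100k"] = snapshot["cases_7day"] / snapshot["population"] * 100_000

# Plot outbreak severity against polling error, one point per state.
df = df.merge(snapshot[["state", "cases_per_100k"]], on="state")
plt.scatter(df["cases_per_100k"], df["polling_error"])
plt.xlabel("New cases per 100k (7-day avg, Oct. 20)")
plt.ylabel("Polling error (Biden margin overstated, pts)")
plt.show()
```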
What we found
Visually, the story is clear: the downward slope means that the worse the outbreak, the more the polls underestimated Trump’s support (the larger the text, the more public polling in the state).
These relationships are strong and statistically significant (for the stats folks: R² of 0.3895 for cases and 0.3352 for deaths, with each variable taken individually, and well, well past the 95% confidence level). This means the number of confirmed COVID-19 cases two weeks before the election explained nearly 40% of the error in a state’s polling. It also means that for every additional one in 17,000 people with a confirmed coronavirus case in a state, the polls were off an additional percentage point. Or, to look at it another way, if there were a COVID-free state where nobody was worried about catching COVID, this model expects there would have been only two points of polling error.
This is a pretty striking relationship, especially since both of these datasets are noisy and error-prone themselves. COVID reporting varies in quality and method by state, COVID worry isn’t perfectly driven by deaths or cases, we’re missing positive cases in many states due to lack of testing, and polling is subject to a margin of sampling error and other forms of error.
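For readers who want to see where numbers like these come from, here is a minimal sketch of the one-variable regression, continuing from the hypothetical data frame above (the commented values restate what we report in the text; they are not guaranteed output of this exact code):

```python
from scipy.stats import linregress

# Simple one-variable regression of polling error on mid-October case rates.
fit = linregress(df["cases_per_100k"], df["polling_error"])

print(f"R^2       = {fit.rvalue**2:.4f}")   # ~0.39 for cases (~0.34 for the deaths version)
print(f"slope     = {fit.slope:.3f}")       # extra points of error per daily case per 100k
print(f"intercept = {fit.intercept:.2f}")   # implied error in a hypothetical COVID-free state (~2 pts)
print(f"p-value   = {fit.pvalue:.4f}")      # well past the 95% confidence threshold

# Arithmetic behind the "one in 17,000" framing: 100,000 / 17,000 is about 5.9 daily
# cases per 100k, so a slope of roughly 1 / 5.9 = 0.17 points of error per daily case
# per 100k corresponds to one extra point of error per 1-in-17,000 residents infected.
```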
To throw out potential outliers, we narrowed this down to just the 10 most-polled states. Again, the trend is visually striking, with polling error and COVID-19 outbreaks running closely together:
The Midwest is instructive, particularly Minnesota. It had a very low prevalence of COVID-19 in mid-October (it has a Democratic governor who was not hamstrung by the courts), but aside from that it’s demographically similar enough to its neighbors North Dakota, South Dakota, Iowa, and Wisconsin: heavily white and majority non-college. However, the polling error was remarkably different across these states, in part because Minnesota had the lowest COVID-19 rates of all five:
One thing we had to untangle was a state’s COVID rate versus its Trump support, because those two are intimately intertwined (states with a higher Trump vote share had more COVID in mid-October). When we put Trump’s vote share and COVID rates into a regression analysis to separate them, it’s clear both drove polling error: polls were more off in states where Trump did well (quite possibly due to turnout dynamics), and they were also more off in states where COVID was spreading the most, even accounting for Trump support.
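A sketch of that two-predictor regression, again using the hypothetical data frame built above (statsmodels is our choice for the example; the exact software and variable coding are assumptions, not a description of our production analysis):

```python
import statsmodels.formula.api as smf

# Regress polling error on both Trump's certified vote share and mid-October case rates,
# so each coefficient is read holding the other variable constant.
model = smf.ols("polling_error ~ trump_pct + cases_per_100k", data=df).fit()
print(model.summary())

# If both coefficients come out positive and significant, polls were more off both where
# Trump ran stronger and where COVID was spreading fastest, each effect net of the other.
# A trump_pct coefficient of roughly 1/15 (about 0.07) would match the finding below that
# a 15-point jump in Trump's vote share added only about one point of error.
```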
This helps explain why the polling miss looked so different this year than 2016, in two key ways:
1) The national polls were off this year while they weren’t in 2016. This year polling was too pro-Biden virtually everywhere, whereas 2016 polling overestimated Trump’s vote share in states like Texas and Arizona while understating it in Wisconsin, Iowa, Pennsylvania, and Michigan.
2) Of the four states where 2016 polling missed so badly, polling got a bit better from 2016 to 2020 in Michigan and Pennsylvania, two key swing states that had relatively little COVID-19 in mid-October. It got no better in Wisconsin or Iowa, two higher-coronavirus states.
One thing that heavily drove the 2016 error was the swing from Obama to Trump in a state: a state’s Trumpiness, beyond its Republican-ness, made polls more wrong. Trump’s support did drive some polling error this time too. When we put Trump support and COVID-19 rates in a regression model, we found that a 15-point increase in Trump’s vote share from state to state added only one point to the polling error. While we can’t yet analyze voter files, we suspect higher-than-assumed GOP turnout was another source of error in polls. However, it just wasn’t the dominant factor it was in 2016: neither the swing from Clinton to Trump (2016 to 2020) nor the swing from Romney to Trump (2012 to 2020) drove error as clearly and dominantly.
What this means
The pandemic didn’t cause 100% of polling error in 2020. But it is clear that Trump outperformed his polls the most in places with bad outbreaks. He also polled worst in a given state when its outbreak was at its peak, which was probably some combination of real-world changes and artifacts in polling.
We take a few things from this at the same time:
· Getting partisanship right alone can’t save polls. In many cases, we were making sure our polls were weighted to partisan measures like party registration and the share of people saying they voted for Trump in 2016, thinking we were accounting for samples where we saw too many Democrats. But in fact, we know people who were more worried about COVID, regardless of partisanship or past behavior, were more pro-Biden, more supportive of COVID restrictions, and less approving of the job Trump was doing on the pandemic.
· Some of the error was a (please God) one-time issue. It is our sincere, sincere hope we never get another election where we can test this theory, that a pandemic ripping through our country that only one party takes seriously causes a miss in polling. There was also no realistic way to be sure this would be a problem pre-election, and there was no realistic way to account for it — how would one have even begun to guess that COVID added 5 points more error in Wisconsin than Minnesota, for example?
· This doesn’t mean pollsters are off the hook. Part of what let COVID-19 throw polls off is that response rates to polls are so low to begin with. If only 1 in 100 people are answering the phone, a 1-point gap between Democratic and Republican response rates (or between COVID-fearing and non-COVID-fearing rates) can massively impact polls at a level that just couldn’t happen half a century ago, when most Americans took polls (a quick back-of-the-envelope illustration follows this list). Even as we root out additional sources of 2020 polling error, we’re going to need to keep thinking about ways to deal with declining response rates. That may mean mixing modes like SMS, online, and phone to spread out the risk of any one mode leading to more non-response, though our analysis of internal and public polling doesn’t show any one mode doing better than another.
· We’re all going to have to rely on polls as guides, not crystal balls. The angst caused by polling error underscores how reliant we’ve become on polls in politics. Polling drives campaign spending decisions, it shifts donations across Senate races, and analyzing it made Nate Silver famous. We’re obviously believers in polls, in particular well-crafted research that helps campaigns plan their strategy. But at the same time, even as pollsters we’d admit that using polling as the only metric to drive campaign strategy and resource allocation doesn’t make sense. We don’t know what will cause the poll error next time it’s an issue, but we know it’s coming — in both 2000 and 2012, the national polls were more off than in 2020. For campaigns in particular, we need to better price in the fact that every poll can be wrong in the same direction and devise our strategy with that in mind.
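As promised above, here is a back-of-the-envelope illustration of why low response rates make even small amounts of differential non-response so damaging. The response rates are made-up numbers chosen only to show the arithmetic, not measured values from any poll:

```python
# How differential response rates distort a 50/50 electorate in a raw sample.
def observed_split(dem_share, dem_response_rate, rep_response_rate):
    """Share of completed interviews that come from Democrats."""
    dem = dem_share * dem_response_rate
    rep = (1 - dem_share) * rep_response_rate
    return dem / (dem + rep)

# At a 1% baseline, a 1-point gap in response rates (2% vs 1%) makes an evenly
# split electorate look roughly two-to-one Democratic before weighting.
print(observed_split(0.50, 0.02, 0.01))   # ~0.67
# At mid-century response rates, the same 1-point gap barely registers.
print(observed_split(0.50, 0.61, 0.60))   # ~0.50
```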