
Pollsters should be happy with Donald Trump

On election night, I was nervous. I am a pollster and former political data journalist. I knew that if the polls missed again, as they did in 2020, Americans would view the polls as irreparably broken.

Now there is enough data to reach a verdict, and, despite what you may have heard, the polls did well. No, the data wasn’t perfect, and the industry still faces long-term challenges. But we have proven that we can get close to the mark, which is the best we can reasonably expect from a poll.

You don’t have to take my word for it. Let’s compare the averages of pre-election polls, as calculated by FiveThirtyEight and RealClearPolitics, to the latest results in swing states where NBC News projected a winner.

In the states that decided the election, polls were generally off by 1 to 3 points. In the national popular vote, the RCP average had Trump ahead by 0.1, and he will probably win by 1 or 2 points. For polls, crude instruments that typically use fewer than a thousand interviews to estimate how an entire state or nation feels, an error of 1 to 3 percentage points is excellent.

Polls in competitive Senate races were only slightly less accurate. Some results have not yet been finalized, but so far the major Senate races have produced only two uncomfortable misses: overestimating Democratic Sen. Jacky Rosen of Nevada and underestimating Republican Sen. Ted Cruz of Texas, by about 5 points each. In the other races where 538 and RCP published averages, the polls missed by just a few percentage points. Again, it is unrealistic to expect polls to nail every result: a 1-to-3-point error is as good as it gets.

  • In the Nevada Senate race, the 538 average had Rosen up 5.7 over Republican Sam Brown, and the RCP average had her up 4.9. The latest results show Rosen, the projected winner, up 1.4.
  • In the Michigan Senate race, the 538 average had Democrat Elissa Slotkin up 3.6 over Republican Mike Rogers, and the RCP average had her up 2.3. The results show Slotkin, the projected winner, up 0.3.
  • In Ohio, the 538 average had Republican Bernie Moreno up 0.8 over Democrat Sherrod Brown, and the RCP average had him up 1.7. The results show Moreno, the projected winner, up 4.
  • In Wisconsin, the 538 average had Democrat Tammy Baldwin up 2.2 over Republican Eric Hovde, and the RCP average had her up 1.8. The results show Baldwin, the projected winner, up 0.9.
  • In Montana, the 538 average had Republican Tim Sheehy up 6.9 over Democrat Jon Tester, and the RCP average had him up 7.7. The results show Sheehy, the projected winner, up 7.4.
  • And in Texas, the 538 average had Cruz up 4 over Democrat Colin Allred, and the RCP average had him up 4.4. The results show Cruz, the projected winner, up 8.6.
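As a quick sanity check, the race-by-race figures above can be turned into average errors. A minimal Python sketch using only the numbers quoted in the list (results were not yet final when the averages were compared):

```python
# Signed margins (in points) for each projected winner, taken from the
# race summaries above: (538 average, RCP average, latest result).
races = {
    "Nevada (Rosen)":      (5.7, 4.9, 1.4),
    "Michigan (Slotkin)":  (3.6, 2.3, 0.3),
    "Ohio (Moreno)":       (0.8, 1.7, 4.0),
    "Wisconsin (Baldwin)": (2.2, 1.8, 0.9),
    "Montana (Sheehy)":    (6.9, 7.7, 7.4),
    "Texas (Cruz)":        (4.0, 4.4, 8.6),
}

# Error = polling average minus actual margin; positive means the
# projected winner was overestimated, negative means underestimated.
errors_538 = {r: p538 - result for r, (p538, _, result) in races.items()}
errors_rcp = {r: prcp - result for r, (_, prcp, result) in races.items()}

mae_538 = sum(abs(e) for e in errors_538.values()) / len(races)
mae_rcp = sum(abs(e) for e in errors_rcp.values()) / len(races)

print(f"538 mean absolute error: {mae_538:.1f} points")
print(f"RCP mean absolute error: {mae_rcp:.1f} points")
```

Both averages land in the 2-to-3-point range the author describes as about as good as polling gets.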

These results are strong and should keep the survey industry alive. But we do not yet have a clean bill of health.

Surveys are still plagued by non-response: Almost 99% of people selected for a survey do not respond. Some of the groups we most need to learn about — young voters, Latino voters, the politically disengaged — are the hardest to survey.

Some pollsters may also be “herding.” This happens when an unprincipled pollster gets an unexpected result and either throws it in the trash or tinkers with the statistical models until the poll matches the average. This might explain why a suspicious number of Pennsylvania polls showed Trump and Harris exactly tied.

And, while the polls weren’t off by much, they consistently underestimated Trump by a few points. Ideally, polls would be unbiased, underestimating Harris about half the time rather than missing only in Trump’s direction.

Pollsters have partial answers to each of these problems. We use statistical tools, such as weighting, to make sure less responsive groups get the right amount of influence. These same weights can create the appearance of “herding”: some pollsters adjusted their samples to include the right number of 2020 Trump and Biden voters, which naturally pulled their results toward the average. And although the polls missed part of Trump’s 2024 advantage, the error was significantly smaller than in 2020.
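The weighting idea can be illustrated with a toy example. This is a generic post-stratification sketch with invented numbers (hypothetical group shares and support levels), not any pollster’s actual model:

```python
# Toy post-stratification: reweight a sample so each group matches its
# known population share. All figures below are invented for illustration.

# Observed sample composition vs. known population composition.
sample_share = {"young": 0.10, "old": 0.90}      # young voters rarely respond
population_share = {"young": 0.30, "old": 0.70}  # what the electorate looks like

# Hypothetical candidate support within each group of the sample.
support = {"young": 0.60, "old": 0.45}

# Unweighted estimate uses the raw, biased sample composition.
unweighted = sum(sample_share[g] * support[g] for g in support)

# Each group's weight is population share divided by sample share, so
# underrepresented groups count for more.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)

print(f"unweighted estimate: {unweighted:.3f}")
print(f"weighted estimate:   {weighted:.3f}")
```

In this made-up case the unweighted sample understates the candidate because the more supportive group is underrepresented; the weights restore each group to its population share, which is the basic correction for non-response the author describes.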

We don’t have complete solutions to these problems. No one knows how to turn America into a nation of eager poll respondents, or how to stop anxious, underfunded pollsters from unconsciously nudging their results toward the average. And we haven’t found that last slice of the Trump vote. But I hope that on Tuesday we bought our industry a little more time to solve these problems. I’d like to think we’ve earned it.