What the Polls Got Right This Year, and Where They Went Wrong

It was a good year for polls. This time, they got the basic story of the election right: a Democratic House and a Republican Senate. And on average, the final polls were closer to the results than any election in a decade. Best of all, the polls were relatively unbiased, meaning that one party didn’t systematically overperform or underperform its final poll results.

But while the big picture is much better than in 2016, when the polls systematically underestimated Donald J. Trump in the battleground states, some details are eerily similar. The geographic distribution of polling error was much like in 2016, even though the average poll wasn’t particularly biased at all.

It was enough for election night to briefly feel reminiscent of 2016, as polls underestimated Republicans in several key states and races. It raises questions about whether polls remain vulnerable to a 2016-like error in 2020, when the race promises to be tighter and focused on the kinds of predominantly white, working-class states where the polls underestimated Republicans.

The results aren’t yet final in some races, but at the moment, the average Senate poll over the final three weeks of the race was off by 4.3 percentage points, about a point better than the longer-term average. In the House, the average error was six points, close to average.

On average, the polls were biased toward Democrats (meaning the Democrats did worse in the elections than polls indicated they would) by 0.4 points, making this year’s polls the least biased since 2006 and nothing like the polls in 2016, which were three points more Democratic than the results.
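The distinction the article draws between average error and bias can be sketched with toy numbers. The margins below are purely hypothetical, not the actual 2018 polls; the point is only that a set of polls can miss by several points apiece while still showing little systematic lean toward either party:

```python
# Hypothetical final poll margins (Dem minus Rep, in points) and the
# corresponding election results -- illustrative values, not real data.
polls   = [3.0, -1.5, 6.0, 2.0, -4.0]
results = [1.0, -3.0, 7.5, -1.0, -5.5]

errors = [p - r for p, r in zip(polls, results)]

# Average absolute error: how far off the polls were, ignoring direction.
avg_error = sum(abs(e) for e in errors) / len(errors)

# Signed bias: a positive value means the polls leaned Democratic on average,
# a negative value means they leaned Republican.
bias = sum(errors) / len(errors)

print(avg_error, bias)  # 1.9 1.3
```

Because misses in opposite directions cancel in the bias figure but not in the error figure, the bias can be near zero even in a year when individual polls were off by a lot, which is why the article reports both numbers.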

Even though the polls were pretty accurate in the aggregate, there were points during election night — as the Republicans beat the polls in Indiana, Missouri, Florida, Tennessee and Ohio — that briefly felt like 2016 all over again.

The geographic distribution was similar; so was the party that did better than expected. Less significant, but still notable, is that the polls underestimated Democrats in several states where they also underestimated Democrats in 2016, like California, New York and Nevada.

It’s still too early to unpack exactly why.

The New York Times sponsored 43 polls with Siena College over the final 21 days of the race, mostly of House districts. It will be months before we have the data necessary to fully analyze our own polls, though in general they don’t show much evidence of the same pattern. (Our final polls were particularly accurate, differing from the results by an average of three points, well below the six-point average.)

Our polls did not overestimate Democrats in the less educated states and districts of the East, where one might expect the phenomenon to show up, though they did underestimate Democrats in California and the Southwest.

Across all polls, some of the misses seem likely to defy the convenient explanations: Late, high-quality polls of Missouri and Florida showed Democratic candidates over 50 percent, for instance, precluding the possibility that late shifts among undecided voters were responsible.

There are other possible explanations.

Some polls still don’t properly represent less educated voters, a failure thought to be one of the major drivers of error in 2016. On the other hand, plenty of high-quality pollsters do weight for education and still produced overly Democratic results in critical states.

The higher-than-expected turnout might have inadvertently contributed to a 2016-like pattern, since lower-turnout voters in the big urban states tend to be nonwhite and Democratic, while lower-turnout voters in rural, less educated states tend to be white working-class voters.

In the Times Upshot/Siena polls, undecided voters tended to follow a similar pattern: In the Sun Belt, the undecided voters tended to be nonwhite Democrats; in the North, they were more likely to be white voters without a degree.

These kinds of errors would actually be acceptable for pollsters. There’s nothing they can do about late movement among undecided voters or an unexpectedly high turnout. And neither error would call the underlying methodology of polling into question.

An unacceptable error — and the worst-case possibility — would be if the polls had some kind of underlying, deeper challenge in reaching Mr. Trump’s supporters.

There has long been a theory that Mr. Trump’s supporters have lower social trust, which is correlated with not responding to pollsters, but there’s little public data to support it. And most of the efforts to validate the methodology of public polling are occurring at the national level, even though the challenge appears concentrated in particular parts of the country.

It’s important to emphasize that this pattern is less pronounced than it was in 2016. The polls were just fine in Pennsylvania, for instance. But it’s hard to say whether the polls have earned a clean bill of health for 2020, even if they were still good enough to get the job done this fall.
