  • How did the pollsters get it so wrong?

    There was always something disquieting for me about the 2016 election. I started to get seriously uncomfortable with the public polling back in August. There were a few reasons. Mostly it was little things: the swings in Clinton's winning margins, or polls in swing states like Ohio and Pennsylvania coming up with results like 0% of African Americans voting for Trump.

    But the biggest one was a gut feeling: there was all of this volatility going on in the polls, but when you stopped to think about it, were there really that many people standing between Hillary and Donald, scratching their heads, weighing up the pros and cons, and trying to decide really and truly between these two candidates? I mean, really?

    In other election years, yes: perhaps I’d believe it as far out as the conventions – at least a bit. But this was a year unlike any other for American voters: we had two celebrities with shockingly low favourability running against each other. Voter emotions were running very high on both sides. There was no model to compare it to. And there wasn’t really much getting to know the candidates left to do.

    I believed that the volatility that we saw in the polls since the conventions wasn’t shifts in undecided voters, but instead had to do with the willingness of voters to admit who they were planning on voting for. This is a pretty well-established thing in the academic literature. Regardless of what the 24-hour news cycle tells you, the convention bump isn’t usually flakey undecided voters suddenly getting on board with a candidate and then drifting off again. It has much more to do with pride in (and acceptability of) one’s candidate of choice. You’re more likely to take a pollster’s call if you’re feeling like your candidate is doing well. If they aren’t doing well, you are less likely to take the call or fill in an online survey. And because of, well, the infamy of both candidates, this felt like a year in which that would be even more the case.
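    The mechanism above – apparent swings produced purely by who answers the phone, not by anyone changing their mind – can be illustrated with a toy simulation. Everything here is an assumption for illustration: a fixed 50/50 electorate and a made-up response model in which a supporter's willingness to respond tracks how well their candidate seems to be doing.

    ```python
    import random

    random.seed(42)

    TRUE_SUPPORT = 0.50     # assumed fixed electorate: nobody ever changes their mind
    N_VOTERS = 100_000

    def simulate_poll(candidate_morale):
        """Poll the fixed electorate, letting response rates track morale.

        candidate_morale (0..1): how good our candidate's supporters feel.
        Supporters respond more when morale is high; opponents respond
        more when it is low. Preferences themselves never change.
        The 0.2/0.3 response-rate coefficients are illustrative guesses.
        """
        supporter_rate = 0.2 + 0.3 * candidate_morale
        opponent_rate = 0.2 + 0.3 * (1 - candidate_morale)
        responses = []
        for _ in range(N_VOTERS):
            is_supporter = random.random() < TRUE_SUPPORT
            rate = supporter_rate if is_supporter else opponent_rate
            if random.random() < rate:       # did this voter take the call?
                responses.append(is_supporter)
        return sum(responses) / len(responses)

    # A "convention bump": morale rises, then settles back. The measured
    # number swings by many points while true support stays at exactly 50%.
    for period, morale in [("before", 0.5), ("convention", 0.9), ("after", 0.5)]:
        print(f"{period}: measured support {simulate_poll(morale):.1%} (true: 50.0%)")
    ```

    With these assumed rates, the "convention" reading comes out well above 60% even though not a single voter has switched sides – the entire bump is differential nonresponse.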

    This circles back to my issue with the polling stories in August saying that no African-Americans in the rust belt were going to vote for Trump. That just didn’t make sense. There has historically been friction between some African-American communities and some immigrant communities, particularly where there is a scarcity of employment. Like, one can assume, in the rust belt. At the end of the day, it just didn’t sit right with me: it was much easier to believe that minority voters in swing states were having a hard time admitting that they planned on voting for Trump than that Trump’s anti-immigration message resonated with literally not a single member of the African-American community. And it couldn’t just be that one subgroup being missed in pollsters’ analyses.
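    A bit of arithmetic shows why a 0% reading points to reluctant respondents rather than genuine zero support. The figures below (8% true support, an 80-person subgroup sample, a one-in-four willingness to admit support) are purely hypothetical assumptions chosen for illustration, not measured data.

    ```python
    def prob_of_observing_zero(true_support, admit_rate, n_respondents):
        """Chance that a poll records exactly 0% support in a subgroup.

        true_support:  actual share planning to vote for the candidate
        admit_rate:    share of those supporters willing to say so
        n_respondents: subgroup sample size in the poll
        """
        admitted = true_support * admit_rate       # share who support AND admit it
        return (1 - admitted) ** n_respondents     # every single respondent denies it

    # If everyone answered honestly, a 0% reading would be a freak event:
    honest = prob_of_observing_zero(0.08, 1.0, 80)   # roughly 0.1%
    # If only a quarter of supporters admit it, 0% becomes quite plausible:
    shy = prob_of_observing_zero(0.08, 0.25, 80)     # roughly 20%
    print(f"honest respondents: {honest:.2%}, shy respondents: {shy:.2%}")
    ```

    Under these assumptions, reluctance to admit support makes an "impossible" 0% reading over a hundred times more likely – no exotic modelling required.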

    So what went wrong?

    This was an election unlike any other in American history. And, as far as I am aware, American pollsters asked their questions this year like it was indeed any other year. They didn’t approach it as the outlier that it was. Then, they built their models using previous performance and the same old questions around likelihood to vote, favourability, and the infamous horse race. At the end of the day, Nate Silver can be a genius in crunching his numbers, but if you ask the wrong questions, no algorithm can fix the data.

    This debacle needs to result in a serious soul searching about the role of research in public discourse and on the campaign trail – but not whether or not research is useful. We fundamentally need to recognise that not all research is good research.

    Good research starts with good questions and good survey design. And perhaps most of all, a recognition of the moment that the research is taking place in. What do I mean by that? Not all moments are the same. Politics does not happen in a vacuum. Often, that perfect, unbiased questionnaire touted as best practice by the industry does not take into account the human taking the survey. Survey design is frankly both an art and a science. You need to understand human behaviour before you can quantify it – and that is what seems to have been lost in the numbers recently: the humanity sitting behind the numbers, the actual individual people answering each question posed to them.

    It is really starting to look like a lot of people in the West are angry, economically disenfranchised, and segregated on socio-economic as well as racial lines. A generation ago, you could be a janitor at a secondary school and afford to buy a home for your family and hope to send your kids to university – not in London, not in New York, not in Paris, not in Rome, but pretty much everywhere else. And that’s gone. So people are angry.

    And we need to talk with people about it. We need to understand their language, their needs and their drives. We need to not discount the moment. We need to understand the anger underlying the language, the conversations, the emotions, impacting people’s everyday decisions. Last June it was Brexit, yesterday it was Trump. And this anger behind everything isn’t going away.
