Why the pollsters got it wrong

We have a new tradition in politics in which some people glory in the fact that the polls got it wrong. It might begin with ‘all these handsome experts with all their fancy laptops and they can’t even tell us exactly how an election will turn out’, and it sometimes ends with ‘yet, I knew it all along’. I think the people who say it most are the ones who are pleased with the result and want to stick it to the people who didn’t predict it: ‘if, like me, they’d looked up from their laptops and spoken to real people, they’d have seen what would happen’.

To my mind, it’s always surprising when so many polls seem to do so well. Think for a second about what ‘pollsters’ do: they know they can’t ask everyone how they will vote (and why), so they take a small sample and use it as a proxy for the real world. To make sure the sample isn’t biased by selection, they develop methods to select respondents randomly. To make the most of their resources, and to make their knowledge cumulative, they use what they think they know about the population to ensure they get enough responses from a ‘representative’ sample. In many cases, that knowledge comes from things like focus groups or one-to-one interviews, which give richer (qualitative) information than we can get from asking everyone the same question, often super-quickly, in a larger survey.
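
One common way to get to that ‘representative’ sample – weighting is a standard survey technique, though every number here is made up for illustration – is to adjust the responses after the fact so they match what you believe the population looks like. A minimal sketch, with one crude demographic split:

```python
# Made-up example: our sample over-represents graduates relative to
# what we believe the population looks like, so we weight it back.
population_share = {"graduate": 0.30, "non-graduate": 0.70}  # assumed census-style figures
sample_share     = {"graduate": 0.50, "non-graduate": 0.50}  # who actually answered us

weights = {group: population_share[group] / sample_share[group]
           for group in population_share}
print(weights)  # {'graduate': 0.6, 'non-graduate': 1.4}
```

The catch, of course, is that the weights are only as good as your beliefs about the population – which is where the focus groups and interviews come in.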

This process involves all sorts of compromises and unintended consequences when we have a huge population but limited resources: we’d like to ask everyone in person, but it’s cheaper to (say) get a 4-figure response online or on the phone; and, if we need to do it quickly, our sample will be biased towards people willing to talk to us.* So, on top of a profound problem – the possibility of people not telling the truth in polls – we have a potentially less profound but more important problem: the people we need to talk to us aren’t talking to us. So, we get a misleading read because we’re asking an unrepresentative sample (although it is nothing like as unrepresentative as proxy polls from social media, the ‘word on the doorstep’, or asking your half-drunk mates how they’ll vote).

Sensible ‘pollsters’ deal with such problems by admitting that they might be a bit off: highlighting the ‘margin of error’ estimated from the size of their sample, then maybe crossing their fingers behind their backs if asked about the likelihood of further errors from non-random sampling. So, ignore this possibility of error at your peril. Yet, people do ignore it despite the peril! Here are two reasons why.
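
For the curious, here is where that ‘margin of error’ figure comes from: a back-of-envelope sketch using the textbook formula for a simple random sample – an assumption real polls only approximate, so treat the output as illustrative:

```python
import math

# 95% margin of error for a sample proportion, assuming a simple
# random sample (real polls only approximate this assumption).
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# A typical '4-figure' online or phone poll:
print(f"n = 1,000:  +/- {margin_of_error(1000):.1%}")   # ~3.1%
print(f"n = 10,000: +/- {margin_of_error(10000):.1%}")  # ~1.0%
```

Note the brutal arithmetic: halving the error takes four times the sample, which is why nobody pays for it.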

  1. Being sensible is boring.

In a really tight-looking two-horse race, the margin of error alone might suggest that either horse could win. So, a sensible interpretation of a poll might be (say), ‘either Clinton or Trump will get the most votes’ (the toy sketch after this list shows why). Who wants to hear or talk about that?! You can’t fill a 24-hour news cycle and keep up shite Twitter conversations by saying ‘who knows?’ and then going quiet. Nor will anyone pay much attention to a quietly sensible ‘pollster’ or academic telling them about the importance of embracing uncertainty. You’re in the studio to tell us what will happen, pal. Otherwise, get lost.

  2. Recognising complexity and uncertainty is boring.

You can heroically/stupidly break down the social scientific project into two competing ideas: (1) the world contains general and predictable patterns of behaviour that we can identify with the right tools; or (2) the world is too complex and unpredictable to produce general laws of behaviour, and maybe your best hope is to try to make sense of how other people try to make sense of it. Then, maybe (1) sounds quite exciting and comforting, while (2) sounds like the mantra of a sandal-wearing, beansprout-munching hippy academic. People seem to want a short, confidently stated message that is easy to understand. You can stick your caveats.
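
And here is the toy sketch promised above – made-up numbers, but in the right ballpark for a 1,000-person poll – showing why ‘either could win’ really was the sensible read:

```python
# Hypothetical poll shares and a ~3-point margin of error (made-up numbers).
clinton, trump, moe = 0.46, 0.44, 0.031

print(f"Clinton: {clinton - moe:.1%} to {clinton + moe:.1%}")  # 42.9% to 49.1%
print(f"Trump:   {trump - moe:.1%} to {trump + moe:.1%}")      # 40.9% to 47.1%
# The intervals overlap, so the poll alone cannot separate the two horses -
# and that's before we worry about non-random samples and shy voters.
```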

Can we take life advice from this process?

These days I’m using almost every topic as a poorly constructed segue into a discussion about the role of evidence in politics and policy. This time, the lesson is about using evidence for the correct purpose. In our example, we can use polls effectively for their entertainment value. Or, campaigners can use them as the best-possible proxies during their campaigns: if their polls tell them they are lagging in one area, give it more attention; if they seem to have a big lead in another area, give it less attention. The evidence won’t be totally accurate, but it gives you enough to generate a simple campaigning strategy. Academics can also use the evidence before and after a campaign to talk about how it’s all going. Really, the only thing you shouldn’t expect poll evidence to do is predict the result. For that, you need the Observers from Fringe.

The same goes for evidence in policymaking: people use rough-and-ready evidence because they need to act on what they think is going on. There will never be enough evidence to make the decision for you, or to tell you exactly what will happen next. Instead, you combine good judgement with your values, sprinkle in some evidence, and off you go. It would be silly to expect a small sample of evidence – a snapshot of one part of the world – to tell you exactly what will happen in the much larger world. So, let’s not kid ourselves about the ability of science to tell us what’s what and what to do. It’s better, I think, to recognise life’s uncertainties and act accordingly than to blame other people for not knowing what will happen next.

 

*I say ‘we’ and ‘us’ but I’ve never conducted a poll in my life. I interview elites in secret and promise them anonymity.


One response to “Why the pollsters got it wrong”

  1. Very briefly, I’m not so certain that the pollsters called the US presidential election entirely wrong. They told us Clinton was slightly ahead, and she was. They said it was close, and it was. What they didn’t predict was how the key states would vote. Two out of three correct, and quite a few pollsters hesitated to call the key states. I’d say that’s not bad. Or did I miss something?
