
Why the pollsters got it wrong

We have a new tradition in politics in which some people glory in the fact that the polls got it wrong. It might begin with ‘all these handsome experts with all their fancy laptops and they can’t even tell us exactly how an election will turn out’, and sometimes it ends with, ‘yet I knew it all along’. I think the people who say it most are the ones who are pleased with the result and want to stick it to the people who didn’t predict it: ‘if, like me, they’d looked up from their laptops and spoken to real people, they’d have seen what would happen’.

To my mind, it’s always surprising when so many polls seem to do so well. Think for a second about what ‘pollsters’ do: they know they can’t ask everyone how they will vote (and why), so they take a small sample and use it as a proxy for the real world. To make sure the sample isn’t biased by selection, they develop methods to select respondents randomly. To make the most of their resources, and to make sure their knowledge is cumulative, they use what they think they know about the population to make sure they get enough responses from a ‘representative’ sample. In many cases, that knowledge comes from things like focus groups or one-to-one interviews, which give richer (qualitative) information than we can get from asking everyone the same question, often super-quickly, in a larger survey.

This process involves all sorts of compromises and unintended consequences when we have a huge population but limited resources: we’d like to ask everyone in person, but it’s cheaper to (say) get a four-figure number of responses online or on the phone; and, if we need to do it quickly, our sample will be biased towards people willing to talk to us.* So, on top of a profound problem – the possibility of people not telling the truth in polls – we have a potentially less profound but more important problem: the people we need to talk to us aren’t talking to us. So, we get a misleading read because we’re asking an unrepresentative sample (although it is nothing like as unrepresentative as proxy polls from social media, ‘the word on the doorstep’, or asking your half-drunk mates how they’ll vote).
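For a flavour of how pollsters try to correct for an unrepresentative sample, here is a minimal sketch of post-stratification weighting. Every figure in it is invented for the example, and real polls weight on many more characteristics than one age split:

```python
# Minimal sketch of post-stratification weighting: if a sample over-represents
# a group, re-weight its answers to match the group's known population share.
# All figures below are invented for illustration.
population_share = {"under_40": 0.40, "over_40": 0.60}  # assumed census shares
sample_share = {"under_40": 0.25, "over_40": 0.75}      # who actually responded
support_in_group = {"under_40": 0.58, "over_40": 0.44}  # candidate support by group

raw = sum(support_in_group[g] * sample_share[g] for g in sample_share)
weighted = sum(support_in_group[g] * population_share[g] for g in population_share)
print(f"Raw estimate: {raw:.1%}; weighted estimate: {weighted:.1%}")
# Raw: 47.5%; weighted: 49.6% -- the correction matters, but only if the
# assumed population shares (and who responds within each group) are right.
```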

Sensible ‘pollsters’ deal with such problems by admitting that they might be a bit off: highlighting the ‘margin of error’ estimated from the size of their sample, then maybe crossing their fingers behind their backs if asked about the likelihood of further errors from non-random sampling. So, ignore this possibility of error at your peril. Yet, people do ignore it despite the peril! Here are two reasons why (with a quick sketch of the margin-of-error arithmetic after them).

  1. Being sensible is boring.

In a really tight-looking two-horse race, the margin of error alone might suggest that either horse might win. So, a sensible interpretation of a poll might be (say), ‘either Clinton or Trump will get the most votes’. Who wants to hear or talk about that?! You can’t fill a 24-hour news cycle and keep up shite Twitter conversations by saying ‘who knows?’ and then being quiet. Nor will anyone pay much attention to a quietly sensible ‘pollster’ or academic telling them about the importance of embracing uncertainty. You’re in the studio to tell us what will happen, pal. Otherwise, get lost.

  2. Recognising complexity and uncertainty is boring.

You can heroically/stupidly break down the social scientific project into two competing ideas: (1) the world contains general and predictable patterns of behaviour that we can identify with the right tools; or (2) the world is too complex and unpredictable to produce general laws of behaviour, and maybe your best hope is to try to make sense of how other people try to make sense of it. Then, maybe (1) sounds quite exciting and comforting while (2) sounds like the mantra of a sandal-wearing, beansprout-munching hippy academic. People seem to want a short, confidently stated message that is easy to understand. You can stick your caveats.
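As promised, a quick sketch of the margin-of-error arithmetic, assuming the simple-random-sampling ideal that real polls only approximate (so treat the output as a floor, not a ceiling, on the true error):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical 'four-figure' poll: 1,000 respondents, candidate on 48%
moe = margin_of_error(0.48, 1000)
print(f"Margin of error: +/- {moe:.1%}")  # roughly +/- 3.1 points
# So a 48-52 race is a statistical tie: 'either horse might win'.
```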

Can we take life advice from this process?

These days I’m using almost every topic as a poorly constructed segue into a discussion about the role of evidence in politics and policy. This time, the lesson is about using evidence for the correct purpose. In our example, we can use polls for their entertainment value. Or, campaigners can use them as the best possible proxies during their campaigns: if their polls tell them they are lagging in one area, give it more attention; if they seem to have a big lead in another area, give it less attention. The evidence won’t be totally accurate, but it gives you enough to generate a simple campaigning strategy. Academics can also use the evidence before and after a campaign to talk about how it’s all going. Really, the only thing you don’t expect poll evidence to do is predict the result. For that, you need the Observers from Fringe.

The same goes for evidence in policymaking: people use rough and ready evidence because they need to act on what they think is going on. There will never be enough evidence to make the decision for you, or let you know exactly what will happen next. Instead, you combine good judgement with your values, sprinkle in some evidence, and off you go. It would be silly to expect a small sample of evidence – a snapshot of one part of the world – to tell you exactly what will happen in the much larger world. So, let’s not kid ourselves about the ability of science to tell us what’s what and what to do. It’s better, I think, to recognise life’s uncertainties and act accordingly. It’s better than blaming other people for not knowing what will happen next.


*I say ‘we’ and ‘us’ but I’ve never conducted a poll in my life. I interview elites in secret and promise them anonymity.


We all want ‘evidence based policy making’ but how do we do it?

Here are some notes for my talk to the Scottish Government on Thursday as part of its inaugural ‘evidence in policy week’. The advertised abstract is as follows:

A key aim in government is to produce ‘evidence based’ (or ‘informed’) policy and policymaking, but it is easier said than done. It involves two key choices about (1) what evidence counts and how you should gather it, and (2) the extent to which central governments should encourage subnational policymakers to act on that evidence. Ideally, the principles we use to decide on the best evidence should be consistent with the governance principles we adopt to use evidence to make policy, but what happens when they seem to collide? Cairney provides three main ways in which to combine evidence and governance-based principles to help clarify those choices.

I plan to use the same basic structure of the talks I gave to the OSF (New York) and EUI-EP (Florence) in which I argue that every aspect of ‘evidence based policy making’ is riddled with the necessity to make political choices (even when we define EBPM):

[Image: ‘EBPM: 5 things to do’]

I’ll then ‘zoom in’ on points 4 and 5 regarding the relationship between EBPM and governance principles. They are going to videotape the whole discussion to use for internal discussions, but I can post the initial talk here when it becomes available. Please don’t expect a TED talk (especially the E part of TED).

EBPM and good governance principles

The Scottish Government has a reputation for taking certain governance principles seriously, promoting high stakeholder ‘ownership’ and ‘localism’ in policy, and producing the image of a:

  1. Consensual consultation style in which it works closely with interest groups, public bodies, local government organisations, voluntary sector and professional bodies, and unions when making policy.
  2. Trust-based implementation style, indicating a relative ability or willingness to devolve the delivery of policy to public bodies, including local authorities, in a meaningful way.

Many aspects of this image were cultivated by former Permanent Secretaries: Sir John Elvidge described a ‘Scottish Model’ focused on joined-up government and outcomes-based approaches to policymaking and delivery, and Sir Peter Housden labelled the ‘Scottish Approach to Policymaking’ (SATP) as an alternative to the UK’s command-and-control model of government, focusing on the ‘co-production’ of policy with local communities and citizens.

The ‘Scottish Approach’ has implications for evidence based policy making

Note the major implication for our definition of EBPM. One possible definition, derived from ‘evidence based medicine’, refers to a hierarchy of evidence in which randomised control trials and their systematic review are at the top, while expertise, professional experience and service user feedback are close to the bottom. An uncompromising use of RCTs in policy requires that we maintain a uniform model, with the same basic intervention adopted and rolled out across many areas. The focus is on identifying an intervention’s ‘active ingredient’, applying the correct dosage, and evaluating its success continuously.
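To make that evaluation logic concrete, here is a minimal, entirely hypothetical sketch of the RCT ideal: a uniform intervention trialled against a control group, with success judged by the difference in average outcomes. All numbers are invented for the example:

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# Hypothetical trial: the same intervention ('active ingredient' at a fixed
# 'dosage') delivered uniformly to a treatment group; outcomes are compared
# with a control group that did not receive it.
control = [random.gauss(50, 10) for _ in range(500)]
treated = [random.gauss(53, 10) for _ in range(500)]  # assume a 3-point true effect

effect = statistics.mean(treated) - statistics.mean(control)
print(f"Estimated effect: {effect:.1f} points")  # close to the assumed 3
```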

This approach seems to challenge the commitment to localism and ‘co-production’.

At the other end of the spectrum is a storytelling approach to the use of evidence in policy. In this case, we begin with key governance principles – such as valuing the ‘assets’ of individuals and communities – and invite people to help make and deliver policy. Practitioners and service users share stories of their experiences and invite others to learn from them. There is no model of delivery and no ‘active ingredient’.

This approach seems to challenge the commitment to ‘evidence based policy’.

The Goldilocks approach to evidence based policy making: the improvement method

We can understand the Scottish Government’s often-preferred method in that context. It has made a commitment to:

Service performance and improvement underpinned by data, evidence and the application of improvement methodologies

So, policymakers use many sources of evidence to identify promising interventions, make broad recommendations to practitioners about the outcomes they seek, and train practitioners in the improvement method (a form of continuous learning summed up by a ‘Plan-Do-Study-Act’ cycle, sketched schematically below).

Table 1: Three ideal types of EBPM

This approach appears to offer the best of both worlds: just the right mix of central direction and local discretion, with the promise of combining well-established evidence from sources including RCTs with evidence from local experimentation and experience.
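As promised, a highly schematic sketch of that ‘Plan-Do-Study-Act’ logic. Every name and number here is hypothetical, a toy rather than a real improvement-science tool:

```python
# A highly schematic Plan-Do-Study-Act loop; all names are hypothetical.
def pdsa(practice, measure, propose_change, cycles=10):
    """Trial small changes, keeping those that improve the measured outcome."""
    for _ in range(cycles):
        candidate = propose_change(practice)        # Plan: predict an improvement
        # Do: trial the candidate on a small scale (here, just measure it)
        if measure(candidate) > measure(practice):  # Study: did it beat current practice?
            practice = candidate                    # Act: adopt the change
        # ...otherwise abandon it and try a different change next cycle
    return practice

# Toy usage: 'practice' is a dosage-like number whose outcome peaks at 7
best = pdsa(
    practice=1.0,
    measure=lambda x: -(x - 7) ** 2,
    propose_change=lambda x: x + 1.0,
)
print(best)  # settles at 7.0
```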

Four unresolved issues in decentralised evidence-based policy making

Not surprisingly, our story does not end there. I think there are four unresolved issues in this process:

  1. The Scottish Government often indicates a preference for improvement methods but actually supports all three of the methods I describe. This might reflect an explicit decision to ‘let a thousand flowers bloom’ or the inability to establish a favoured approach.
  2. There is not a single way of understanding ‘improvement methodology’. I describe something akin to a localist model here, but other people describe a far more research-led and centrally coordinated process.
  3. Anecdotally, I hear regularly that key stakeholders do not like the improvement method. One could interpret this as a temporary problem, before people really get it and it starts to work, or as a fundamental difference between some people in government and many of the local stakeholders so important to the ‘Scottish approach’.

4. The spectre of democratic accountability and the politics of EBPM

The fourth unresolved issue is the biggest: it’s difficult to know how this approach connects with the most important reference point in Scottish politics: the need to maintain Westminster-style democratic accountability, through periodic elections and more regular reports by ministers to the Scottish Parliament. This requires a strong sense of central government and ministerial control – if you know who is in charge, you know who to hold to account, or to reward or punish at the next election.

In principle, the ‘Scottish approach’ provides a way to bring together key aims into a single narrative. An open and accessible consultation style maximises the gathering of information and advice and fosters group ownership. A national strategic framework, with cross-cutting aims, reduces departmental silos and balances an image of democratic accountability with the pursuit of administrative devolution, through partnership agreements with local authorities, the formation of community planning partnerships, and the encouragement of community and user-driven design of public services. The formation of relationships with public bodies and other organisations delivering services, based on trust, fosters the production of common aims across the public sector, and reduces the need for top-down policymaking. An outcomes-focus provides space for evidence-based and continuous learning about what works.

In practice, a government often needs to appear to take quick and decisive action from the centre, demonstrate policy progress and its role in that progress, and intervene when things go wrong. So, alongside localism it maintains a legislative, financial, and performance management framework which limits localism.

How far do you go to ensure EBPM?

So, when I describe the ‘5 things to do’, usually the fifth element is about how far scientists may want to go, to insist on one model of EBPM when it has the potential to contradict important governance principles relating to consultation and localism. For a central government, the question is starker:

Do you have much choice about your model of EBPM when the democratic imperative is so striking?

I’ll leave it there on a cliffhanger, since these are largely questions to prompt discussion in specific workshops. If you can’t attend, there is further reading on the EBPM and EVIDENCE tabs on this blog, and specific papers on the Scottish dimension:

The ‘Scottish Approach to Policy Making’: Implications for Public Service Delivery

Paul Cairney, Siabhainn Russell and Emily St Denny (2016) “The ‘Scottish approach’ to policy and policymaking: what issues are territorial and what are universal?” Policy and Politics, 44, 3, 333-50

The politics of evidence-based best practice: 4 messages

