
Evidence-based policymaking: political strategies for scientists living in the real world

Note: I wrote the following discussion (last year) to be a Nature Comment but it was not to be!

Nature articles on evidence-based policymaking often present what scientists would like to see: rules to minimise bias caused by the cognitive limits of policymakers, and a simple policy process in which we know how and when to present the best evidence.[1]  What if neither requirement is ever met? Scientists will despair of policymaking while their competitors engage pragmatically and more effectively.[2]

Alternatively, if scientists learned from successful interest groups, or used insights from policy studies, they could develop three ‘take home messages’: understand and engage with policymaking in the real world; learn how and when evidence ‘wins the day’; and decide how far you should go to maximise the use of scientific evidence. Political science helps explain this process[3], and new systematic and thematic reviews add new insights.[4] [5] [6] [7]

Understand and engage with policymaking in the real world

Scientists are drawn to the ‘policy cycle’ because it offers a simple – but misleading – model for engagement with policymaking.[3] It identifies a core group of policymakers at the ‘centre’ of government, perhaps giving the impression that scientists should identify the correct ‘stages’ in which to engage (such as ‘agenda setting’ and ‘policy formulation’) to ensure the best use of evidence at the point of authoritative choice. This is certainly the image generated most frequently by health and environmental scientists when they seek insights from policy studies.[8]

Yet, this model does not describe reality. Many policymakers, in many levels and types of government, adopt and implement many measures at different times. For simplicity, we call the result ‘policy’ but almost no modern policy theory retains the linear policy cycle concept. In fact, it is more common to describe counterintuitive processes in which, for example, by the time policymaker attention rises to a policy problem at the ‘agenda setting’ stage, it is too late to formulate a solution. Instead, ‘policy entrepreneurs’ develop technically and politically feasible solutions then wait for attention to rise and for policymakers to have the motive and opportunity to act.[9]

Experienced government science advisors recognise this inability of the policy cycle image to describe real world policymaking. For example, Sir Peter Gluckman presents an amended version of this model, in which there are many interacting cycles in a kaleidoscope of activity, defying attempts to produce simple flow charts or decision trees. He describes the ‘art and craft’ of policy engagement, using simple heuristics to deal with a complex and ‘messy’ policy system.[10]

Policy studies help us identify two such heuristics or simple strategies.

First, respond to policymaker psychology by adapting to the short cuts they use to gather enough information quickly: ‘rational’, via trusted sources of oral and written evidence, and ‘irrational’, via their beliefs, emotions, and habits. Policy theories describe many interest group or ‘advocacy coalition’ strategies, including a tendency to combine evidence with emotional appeals, romanticise their own cause and demonise their opponents, or tell simple emotional stories with a hero and moral to exploit the biases of their audience.[11]

Second, adapt to complex ‘policy environments’ including: many policymakers at many levels and types of government, each with their own rules of evidence gathering, network formation, and ways of understanding policy problems and relevant socioeconomic conditions.[2] For example, advocates of international treaties often find that the evidence-based arguments their international audience takes for granted become hotly contested at national or subnational levels (even if the national government is a signatory), while the same interest groups presenting the same evidence of a problem can be key insiders in one government department but ignored in another.[3]

Learn the conditions under which evidence ‘wins the day’ in policymaking

Consequently, the availability and supply of scientific evidence, on the nature of problems and effectiveness of solutions, is a necessary but insufficient condition for evidence-informed policy. Three others must be met: actors use scientific evidence to persuade policymakers to pay attention to, and shift their understanding of, policy problems; the policy environment becomes broadly conducive to policy change; and, actors exploit attention to a problem, the availability of a feasible solution, and the motivation of policymakers, during a ‘window of opportunity’ to adopt specific policy instruments.[10]

Tobacco control represents a ‘best case’ example (Box 1) from which we can draw key lessons for ecological and environmental policies, giving us a sense of perspective by highlighting the long term potential for major evidence-informed policy change. However, unlike their colleagues in public health, environmental scientists have not developed a clear sense of how to produce policy instruments that are technically and politically feasible, so the delivery of comparable policy change is not inevitable.[12]

Box 1: Tobacco policy as a best case and cautionary tale of evidence-based policymaking

Tobacco policy is a key example – and useful comparator for ecological and environmental policies – since it represents a best case scenario and cautionary tale.[13] On the one hand, the scientific evidence on the links between smoking, mortality, and preventable death forms the basis for modern tobacco control policy. Leading countries – and the World Health Organisation, which oversees the Framework Convention on Tobacco Control (FCTC) – frame tobacco use as a public health ‘epidemic’ and allow their health departments to take the policy lead. Health departments foster networks with public health and medical groups at the expense of the tobacco industry, and emphasise the socioeconomic conditions – reductions in (a) smoking prevalence, (b) opposition to tobacco control, and (c) economic benefits to tobacco – most supportive of tobacco control. This framing, and conducive policymaking environment, helps give policymakers the motive and opportunity to choose policy instruments, such as bans on smoking in public places, which would otherwise seem politically infeasible.

On the other hand, even in a small handful of leading countries such as the UK, it took twenty to thirty years to go from the supply of the evidence to a proportionate government response: from the early evidence on smoking in the 1950s prompting major changes from the 1980s, to the evidence on passive smoking in the 1980s prompting public bans from the 2000s onwards. In most countries, the production of a ‘comprehensive’ set of policy measures is not yet complete, even though most signed the FCTC.

Decide how far you’ll go to maximise the use of scientific evidence in policymaking

These insights help challenge the naïve position that, if policymaking can change to become less dysfunctional[1], scientists can be ‘honest brokers’[14] and expect policymakers to use their evidence quickly, routinely, and sincerely. Even in the best case scenario, evidence-informed change takes hard work, persistence, and decades to achieve.

Since policymaking will always appear ‘irrational’ and ‘complex’[3], scientists need to think harder about their role, then choose to engage more effectively or accept their lack of influence.

To deal with ‘irrational’ policymakers, they should combine evidence with persuasion, simple stories, and emotional appeals, and frame their evidence to make the implications consistent with policymakers’ beliefs.

To deal with complex environments, they should engage for the long term to work out how to form alliances with influencers who share their beliefs, understand in which ‘venues’ authoritative decisions are made and carried out, the rules of information processing in those venues, and the ‘currency’ used by policymakers when they describe policy problems and feasible solutions.[2] In other words, develop skills that do not come with scientific training, avoid waiting for others to share your scientific mindset or respect for scientific evidence, and plan for the likely eventuality that policymaking will never become ‘evidence based’.

This approach may be taken for granted in policy studies[15], but it raises uncomfortable dilemmas about how far scientists should go to maximise the use of scientific evidence in policy, using persuasion and coalition-building.

These dilemmas are too frequently overshadowed by claims – more comforting to scientists – that politicians are to blame because they do not understand how to generate, analyse, and use the best evidence. Scientists may only become effective in politics if they apply the same critical analysis to themselves.

[1] Sutherland, W.J. & Burgman, M. Nature 526, 317–318 (2015).

[2] Cairney, P. et al. Public Administration Review 76, 3, 399–402 (2016)

[3] Cairney, P. The Politics of Evidence-Based Policy Making (Palgrave Springer, 2016).

[4] Langer, L. et al. The Science of Using Science (EPPI, 2016)

[5] Breckon, J. & Dodson, J. Using Evidence. What Works? (Alliance for Useful Evidence, 2016)

[6] Palgrave Communications series The politics of evidence-based policymaking (ed. Cairney, P.)

[7] Practical lessons from policy theories (eds. Weible, C & Cairney, P.) Policy and Politics April 2018

[8] Oliver, K. et al. Health Research Policy and Systems, 12, 34 (2016)

[9] Kingdon, J. Agendas, Alternatives and Public Policies (Harper Collins, 1984)

[10] Gluckman, P. Understanding the challenges and opportunities at the science-policy interface

[11] Cairney, P. & Kwiatkowski, R. Palgrave Communications.

[12] Biesbroek et al. Nature Climate Change, 5, 6, 493–494 (2015)

[13] Cairney, P. & Yamazaki, M. Journal of Comparative Policy Analysis

[14] Pielke Jr, R. originated the specific term The honest broker (Cambridge University Press, 2007) but this role is described more loosely by other commentators.

[15] Cairney, P. & Oliver, K. Health Research Policy and Systems 15:35 (2017)


Filed under Evidence Based Policymaking (EBPM), public policy

The role of evidence in UK policymaking after Brexit

We are launching a series of papers on evidence and policy in Palgrave Communications. Of course, we used Brexit as a hook, to tap into current attention to instability and major policy change. However, many of the issues we discuss are timeless and about surprising levels of stability and continuity in policy processes, despite periods of upheaval.

In my day, academics would build their careers on being annoying, and sometimes usefully annoying. This would involve developing counterintuitive insights, identifying gaps in analysis, and challenging a ‘common wisdom’ in political studies. Although not exactly common wisdom, the idea of ‘post truth’ politics, a reduction in respect for ‘experts’, and a belief that Brexit is a policymaking game-changer, are great candidates for some annoyingly contrary analysis.

In policy studies, many of us argue that things like elections, changes of government, and even constitutional changes are far less important than commonly portrayed. In media and social media accounts, we find hyperbole about the destabilising and changing impact of the latest events. In policy studies, we often stress stability and continuity. My favourite old example regards the debates from the 1970s about electoral reform. While some were arguing that first-past-the-post was a disastrous electoral system since it produced swings of government, instability, and incoherent policy change, Richardson and Jordan would point out surprisingly high levels of stability and continuity.


In part, this is because the state is huge, policymakers can only pay attention to a tiny part of it, and therefore most of it is processed as a low level of government, out of the public spotlight.


These insights still have profound relevance today, for two key reasons.

  1. The role of experts is more important than you think

This larger process provides far more opportunities for experts than we’d associate with ‘tip of the iceberg’ politics.

Some issues are salient. They command the interest of elected politicians, and those politicians often have firm beliefs that limit the ‘impact’ of any evidence that does not support their beliefs.

However, most issues are not salient. They command minimal interest, they are processed by other policymakers, and those policymakers are looking for information and advice from reliable experts.

Indeed, a lot of policy studies highlight the privileged status of certain experts, at the expense of most members of the public (which is a useful corrective to the story, associated with Brexit, that the public is too emotionally driven, too sceptical of experts, and too much in charge of the future of constitutional change).

So, Brexit will change the role of experts, but expect that change to relate to the venue in which they engage, and the networks of which they are a part, more than the practices of policymakers. Much policymaking is akin to an open door to government for people with useful information and a reputation for being reliable in their dealings with policymakers.

  2. Provide less evidence for more impact

If the problem is that policymakers can only pay attention to a tiny proportion of their responsibilities, the solution is not to bombard them with a huge amount of evidence. Instead, assume that they seek ways to ignore almost all information while still managing to make choices. The trick may be to provide just enough information to prompt demand for more, not oversupply evidence on the assumption that you have only one chance for influence.

With Richard Kwiatkowski, I draw on policy and psychology studies to help us understand how to supply evidence to anyone using ‘rational’ and ‘irrational’ ways to limit their attention, information processing, and thought before making decisions.

Our working assumption is that policymakers need to gather information quickly and effectively, so they develop heuristics to allow them to make what they believe to be good choices. Their solutions often seem to be driven more by their emotions than a ‘rational’ analysis of the evidence, partly because we hold them to a standard that no human can reach. If so, and if they have high confidence in their heuristics, they will dismiss our criticism as biased and naïve. Under those circumstances, restating the need for ‘evidence-based policymaking’ is futile, and naively ‘speaking truth to power’ counterproductive.

Instead, try out these strategies:

  1. Develop ways to respond positively to ‘irrational’ policymaking

Instead of automatically bemoaning the irrationality of policymakers, let’s marvel at the heuristics they develop to make quick decisions despite uncertainty. Then, let’s think about how to respond pragmatically, to pursue the kind of evidence-informed policymaking that is realistic in a complex and constantly changing policymaking environment.

  2. Tailor framing strategies to policymaker cognition

The usual advice is to minimise the cognitive burden of your presentation, and to use strategies tailored to the ways in which people pay attention to, and remember, information.

The less usual advice includes:

  • If policymakers are combining cognitive and emotive processes, combine facts with emotional appeals.
  • If policymakers are making quick choices based on their values and simple moral judgements, tell simple stories with a hero and a clear moral.
  • If policymakers are reflecting a ‘group emotion’, based on their membership of a coalition with firmly-held beliefs, frame new evidence to be consistent with the ‘lens’ through which actors in those coalitions understand the world.
  3. Identify the right time to influence individuals and processes

Understand what it means to find the right time to exploit ‘windows of opportunity’.

‘Timing’ can refer to the right time to influence an individual, which involves how open they are to, say, new arguments and evidence.

Or, timing refers to a ‘window of opportunity’ when political conditions are aligned. I discuss the latter in a separate paper on effective ‘policy entrepreneurs’.

  4. Adapt to real-world organisations rather than waiting for an orderly process to appear

Politicians may appear confident and in command of the facts and details of policy, but they are (a) often vulnerable, and therefore defensive or closed to challenging information, and/or (b) unskilled in organisational politics, or unable to change the rules of their organisations.

So, develop pragmatic strategies: form relationships in networks, coalitions, or organisations first, then supply challenging information second. To challenge without establishing trust may be counterproductive.

  5. Recognise that the biases we ascribe to policymakers are present in ourselves and our own groups.

Identifying only the biases in our competitors may help mask academic/scientific examples of group-think, and it may be counterproductive to use euphemistic terms like ‘low information’ to describe actors whose views we do not respect. This is a particular problem for scholars if they assume that most people do not live up to their own imagined standards of high-information-led action (often described as a ‘deficit model’ of engagement).

It may be more effective to recognise that: (a) people’s beliefs are honestly held, and policymakers believe that their role is to serve a cause greater than themselves; and, (b) a fundamental aspect of evolutionary psychology is that people need to get on with each other, so showing simple respect – or going further, to ‘mirror’ that person’s non-verbal signals – can be useful even if it looks facile.

This leaves open the ethical question of how far we should go to identify our biases, accept the need to work with people whose ways of thinking we do not share, and secure their trust without lying about our own beliefs.

At the very least, we do not suggest these 5 strategies as a way to manipulate people for personal gain. They are better seen as ways to use psychology to communicate well. They are also likely to be important to policy engagement regardless of Brexit. Venues may change quickly, but the ways in which people process information and make choices may not.



Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, UK politics and policy

I know my audience, but does my other audience know I know my audience?

‘Know your audience’ is a key phrase for anyone trying to convey a message successfully. To ‘know your audience’ is to understand the rules they use to make sense of your message, and therefore the adjustments you have to make to produce an effective message. Simple examples include:

  • The sarcasm rules. The first rule is fairly explicit. If you want to insult someone’s shirt, you (a) say ‘nice shirt, pal’, but also (b) use facial expressions or unusual speech patterns to signal that you mean the opposite of what you are saying. Otherwise, you’ve inadvertently paid someone a compliment, which is just not on. The second rule is implicit. Sarcasm is sometimes OK – as a joke or as some nice passive aggression – and a direct insult (‘that shirt is shite, pal’) as a joke is harder to pull off.
  • The joke rule. If you say that you went to the doctor because a strawberry was growing out of your arse and the doctor gave you some cream for it, you’d expect your audience to know you were joking because it’s such a ridiculous scenario and there’s a pun. Still, there’s a chance that, if you say it quickly, with a straight face, your audience is not expecting a joke, and/or your audience’s first language is not English, your audience will take you seriously, if only for a second. It’s hilarious if your audience goes along with you, and a bit awkward if your audience asks kindly about your welfare.
  • Keep it simple stupid. If someone says KISS, or some modern equivalent – ‘it’s the economy, stupid’, the rule is that, generally, they are not calling you stupid (even though the insertion of the comma, in modern phrases, makes it look like they are). They are referring to the value of a simple design or explanation that as many people as possible can understand. If your audience doesn’t know the phrase, they may think you’re calling them stupid, stupid.

These rules can be analysed from various perspectives: linguistics, focusing on how and why rules of language develop; and philosophy, to help articulate how and why rules matter in sense making.

There is also a key role for psychological insights, since – for example – a lot of these rules relate to the routine ways in which people engage emotionally with the ‘signals’ or information they receive.

Think of the simple example of twitter engagement, in which people with emotional attachments to one position over another (say, pro- or anti- Brexit), respond instantly to a message (say, pro- or anti- Brexit). While some really let themselves down when they reply with their own tweet, and others don’t say a word, neither audience is immune from that emotional engagement with information. So, to ‘know your audience’ is to anticipate and adapt to the ways in which they will inevitably engage ‘rationally’ and ‘irrationally’ with your message.

I say this partly because I’ve been messing around with some simple ‘heuristics’ built on insights from psychology, including Psychology Based Policy Studies: 5 heuristics to maximise the use of evidence in policymaking.

Two audiences in the study of ‘evidence based policymaking’

I also say it because I’ve started to notice a big unintended consequence of knowing my audience: my one audience doesn’t like the message I’m giving the other. It’s a bit like gossip: maybe you only get away with it if only one audience is listening. If they are both listening, one audience seems to appreciate some new insights, while the other wonders if I’ve ever read a political science book.

The problem here is that two audiences have different rules to understand the messages that I help send. Let’s call them ‘science’ and ‘political science’ (please humour me – you’ve come this far). Then, let’s make some heroic binary distinctions in the rules each audience would use to interpret similar issues in a very different way.

I could go on with these provocative distinctions, but you get the idea. A belief taken for granted in one field will be treated as controversial in another. In one day, you can go to one workshop and hear the story of objective evidence, post-truth politics, and irrational politicians with low political will to select evidence-based policies, then go to another workshop and hear the story of subjective knowledge claims.

Or, I can give the same presentation and get two very different reactions. If these are the expectations of each audience, they will interpret and respond to my messages in very different ways.

So, imagine I use some psychology insights to appeal to the ‘science’ audience. I know that, to keep it onside and receptive to my ideas, I should begin by being sympathetic to its aims. So, my implicit story is along the lines of, ‘if you believe in the primacy of science and seek evidence-based policy, here is what you need to do: adapt to irrational policymaking and find out where the action is in a complex policymaking system’. Then, if I’m feeling energetic and provocative, I’ll slip in some discussion about knowledge claims by saying something like, ‘politicians (and, by the way, some other scholars) don’t share your views on the hierarchy of evidence’, or inviting my audience to reflect on how far they’d go to override the beliefs of other people (such as the local communities or service users most affected by the evidence-based policies that seem most effective).

The problem with this story is that key parts are implicit and, by appearing to go along with my audience, I provoke a reaction in another audience: don’t you know that many people have valid knowledge claims? Politics is about values and power, don’t you know?

So, that’s where I am right now. I feel like I ‘know my audience’ but I am struggling to explain to my original political science audience that I need to describe its insights in a very particular way to have any traction in my other science audience. ‘Know your audience’ can only take you so far unless your other audience knows that you are engaged in knowing your audience.

If you want to know more, see:

Kathryn Oliver and I have just published an article on the relationship between evidence and policy

How far should you go to secure academic ‘impact’ in policymaking? From ‘honest brokers’ to ‘research purists’ and Machiavellian manipulators

Why doesn’t evidence win the day in policy and policymaking?

The Science of Evidence-based Policymaking: How to Be Heard

When presenting evidence to policymakers, engage with the policy process that exists, not the process you wish existed




Filed under Academic innovation or navel gazing, agenda setting, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling

Why the pollsters got it wrong

We have a new tradition in politics in which some people glory in the fact that the polls got it wrong. It might begin with ‘all these handsome experts with all their fancy laptops and they can’t even tell us exactly how an election will turn out’, and sometimes it ends with, ‘yet, I knew it all along’. I think that the people who say it most are the ones that are pleased with the result and want to stick it to the people who didn’t predict it: ‘if, like me, they’d looked up from their laptops and spoken to real people, they’d have seen what would happen’.

To my mind, it’s always surprising when so many polls seem to do so well. Think for a second about what ‘pollsters’ do: they know they can’t ask everyone how they will vote (and why), so they take a small sample and use it as a proxy for the real world. To make sure the sample isn’t biased by selection, they develop methods to generate respondents randomly. To try to make the most of their resources, and make sure that their knowledge is cumulative, they use what they think they know about the population to make sure that they get enough responses from a ‘representative’ sample of the population. In many cases, that knowledge comes from things like focus groups or one-to-one interviews to get richer (qualitative) information than we can achieve from asking everyone the same question, often super-quickly, in a larger survey.

This process involves all sorts of compromises and unintended consequences when we have a huge population but limited resources: we’d like to ask everyone in person, but it’s cheaper to (say) get a 4-figure response online or on the phone; and, if we need to do it quickly, our sample will be biased towards people willing to talk to us.* So, on top of a profound problem – the possibility of people not telling the truth in polls – we have a potentially less profound but more important problem: the people we need to talk to us aren’t talking to us. So, we get a misleading read because we’re asking an unrepresentative sample (although it is nothing like as unrepresentative as proxy polls from social media, ‘the word on the doorstep’, or asking your half-drunk mates how they’ll vote).
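To see how much this can matter, here is a minimal simulation (all the figures are illustrative assumptions, not taken from any real poll): a population where 52% support one side, a random sample of 1,000, and then the same sample after one side proves half as willing to answer.

```python
import random

random.seed(42)

TRUE_P = 0.52   # assumed true share supporting side A (illustrative)
N = 1000        # a typical 4-figure sample

# A simple random sample: each person drawn supports A with probability TRUE_P.
sample = [random.random() < TRUE_P for _ in range(N)]
estimate = sum(sample) / len(sample)

# Now suppose A-supporters answer half as often as B-supporters (50% vs 70%).
# The respondents who remain are no longer representative of the population.
respondents = [v for v in sample
               if random.random() < (0.5 if v else 0.7)]
biased_estimate = sum(respondents) / len(respondents)

print(f"Random sample estimate:     {estimate:.1%}")
print(f"With differential response: {biased_estimate:.1%}")
```

The first estimate lands close to 52%; the second can miss by several points – not because anyone lied, but because the people the pollster needed to talk to weren’t talking.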

Sensible ‘pollsters’ deal with such problems by admitting that they might be a bit off: highlighting their estimated ‘margin of error’ from the size of their sample, then maybe crossing their fingers behind their backs if asked about the likelihood of more errors based on non-random sampling. So, ignore this possibility for error at your peril. Yet, people do ignore it despite the peril! Here are two reasons why.

  1. Being sensible is boring.

In a really tight-looking two-horse race, the margin of error alone might suggest that either horse might win. So, a sensible interpretation of a poll might be (say), ‘either Clinton or Trump will get the most votes’. Who wants to hear or talk about that?! You can’t fill a 24-hour news cycle and keep up shite Twitter conversations by saying ‘who knows?’ and then being quiet. Nor will anyone pay much attention to a quietly sensible ‘pollster’ or academic telling them about the importance of embracing uncertainty. You’re in the studio to tell us what will happen, pal. Otherwise, get lost.

  2. Recognising complexity and uncertainty is boring.

You can heroically/stupidly break down the social scientific project into two competing ideas: (1) the world contains general and predictable patterns of behaviour that we can identify with the right tools; or (2) the world is too complex and unpredictable to produce general laws of behaviour, and maybe your best hope is to try to make sense of how other people try to make sense of it. Then, maybe (1) sounds quite exciting and comforting while (2) sounds like it is the mantra of a sandal-wearing beansprout-munching hippy academic. People seem to want a short, confidently stated, message that is easy to understand. You can stick your caveats.
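As a back-of-the-envelope sketch of that tight-looking two-horse race (the sample size and vote shares below are illustrative assumptions, not any real poll), the standard 95% margin of error for a sample proportion is roughly 1.96 × √(p(1−p)/n):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000                        # a typical 4-figure sample
leader, trailer = 0.52, 0.48    # illustrative poll shares in a tight race

moe = margin_of_error(trailer, n)   # about 3.1 percentage points
print(f"Margin of error: +/- {moe:.1%}")

# If the two intervals overlap, the sensible headline is: either could win.
overlap = (trailer + moe) > (leader - margin_of_error(leader, n))
print("Either horse might win:", overlap)
```

With 52% against 48% on a sample of 1,000, the intervals overlap, so the only honest headline is ‘who knows?’ – and that is before we add the non-random sampling error that the margin of error does not even try to capture.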

Can we take life advice from this process?

These days I’m using almost every topic as a poorly-constructed segue into a discussion about the role of evidence in politics and policy. This time, the lesson is about using evidence correctly for the correct purpose. In our example, we can use polls effectively for their entertainment value. Or, campaigners can use them as the best-possible proxies during their campaigns: if their polls tell them they are lagging in one area, give it more attention; if they seem to have a big lead in another area; give it less attention. The evidence won’t be totally accurate, but it gives you enough to generate a simple campaigning strategy. Academics can also use the evidence before and after a campaign to talk about how it’s all going. Really, the only thing you don’t expect poll evidence to do is predict the result. For that, you need the Observers from Fringe.

The same goes for evidence in policymaking: people use rough and ready evidence because they need to act on what they think is going on. There will never be enough evidence to make the decision for you, or let you know exactly what will happen next. Instead, you combine good judgement with your values, sprinkle in some evidence, and off you go. It would be silly to expect a small sample of evidence – a snapshot of one part of the world – to tell you exactly what will happen in the much larger world. So, let’s not kid ourselves about the ability of science to tell us what’s what and what to do. It’s better, I think, to recognise life’s uncertainties and act accordingly. It’s better than blaming other people for not knowing what will happen next.


*I say ‘we’ and ‘us’ but I’ve never conducted a poll in my life. I interview elites in secret and promise them anonymity.


Filed under Academic innovation or navel gazing, Folksy wisdom, Uncategorized