
Policy Analysis in 750 Words: Who should be involved in the process of policy analysis?

This post forms one part of the Policy Analysis in 750 words series overview.

Think of two visions for policy analysis. It should be primarily:

  1. ‘evidence-based’, built on the best available evidence of ‘what works’, or
  2. ‘co-produced’, built on respectful conversations between a wide range of policymakers and citizens.

These choices are not mutually exclusive, but there are key tensions between them that should not be ignored, such as when we ask:

  • how many people should be involved in policy analysis?
  • whose knowledge counts?
  • who should control policy design?

Perhaps we can only produce a sensible combination of the two if we clarify their often very different implications for policy analysis. Let’s begin with one story for each and see where they take us.

A story of ‘evidence-based policymaking’

One story of ‘evidence based’ policy analysis is that it should be based on the best available evidence of ‘what works’.

Often, the description of the ‘best’ evidence relates to the idea that there is a notional hierarchy of evidence according to the research methods used.

At the top would be the systematic review of randomised control trials, and nearer the bottom would be expertise, practitioner knowledge, and stakeholder feedback.

This kind of hierarchy has major implications for policy learning and transfer, such as when importing policy interventions from abroad or ‘scaling up’ domestic projects.

Put simply, the experimental method is designed to identify the causal effect of a very narrowly defined policy intervention. Its importation or scaling up would be akin to the administration of medicine, in which the evidence identifies the causal effect of a specific active ingredient, to be administered at the correct dosage. A very strong commitment to a uniform model precludes the processes we might associate with co-production, in which many voices contribute to a policy design to suit a specific context (see also: the intersection between evidence and policy transfer).
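To make this notional hierarchy concrete, here is a minimal sketch in Python. The top and bottom entries come from the description above; the middle rung is my own illustrative assumption, and the ranking itself is the contested idea under discussion rather than a recommendation.

    # A notional 'hierarchy of evidence': lower index = higher rank.
    # Top and bottom entries follow the text above; the middle rung
    # ('single randomised control trial') is an illustrative assumption.
    EVIDENCE_HIERARCHY = [
        "systematic review of randomised control trials",
        "single randomised control trial",
        "expertise",
        "practitioner knowledge",
        "stakeholder feedback",
    ]

    def outranks(method_a: str, method_b: str) -> bool:
        """True if method_a sits higher in the notional hierarchy than method_b."""
        return EVIDENCE_HIERARCHY.index(method_a) < EVIDENCE_HIERARCHY.index(method_b)

    # Under this (contested) logic, an RCT-based review trumps practitioner knowledge:
    assert outranks("systematic review of randomised control trials",
                    "practitioner knowledge")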

A story of co-production in policymaking

One story of ‘co-produced’ policy analysis is that it should be ‘reflexive’ and based on respectful conversations between a wide range of policymakers and citizens.

Often, the description is of the diversity of valuable policy-relevant information, with scientific evidence considered alongside community voices and normative values.

This rejection of a hierarchy of evidence also has major implications for policy learning and transfer. Put simply, a co-production method is designed to identify the positive effect – widespread ‘ownership’ of the problem and commitment to a commonly-agreed solution – of a well-discussed intervention, often in the absence of central government control.

Its use would be akin to a collaborative governance mechanism, in which the causal mechanism is perhaps the process used to foster agreement (including to produce the rules of collective action and the evaluation of success) rather than the intervention itself. A very strong commitment to this process precludes the adoption of a uniform model that we might associate with narrowly-defined stories of evidence based policymaking.

Where can you find these stories in the 750-words series?

  1. Texts focusing on policy analysis as evidence-based/ informed practice (albeit subject to limits) include: Weimer and Vining, Meltzer and Schwartz, Brans, Geva-May, and Howlett (compare with Mintrom, Dunn)
  2. Texts on being careful while gathering and analysing evidence include: Spiegelhalter
  3. Texts that challenge the ‘evidence based’ story include: Bacchi, T. Smith, Hindess, Stone

 

How can you read further?

See the EBPM page and special series ‘The politics of evidence-based policymaking: maximising the use of evidence in policy’

There are 101 approaches to co-production, but let’s see if we can get away with two categories:

  1. Co-producing policy (policymakers, analysts, stakeholders). Some key principles can be found in Ostrom’s work and studies of collaborative governance.
  2. Co-producing research to help make it more policy-relevant (academics, stakeholders). See the Social Policy and Administration special issue ‘Inside Co-production’ and Oliver et al’s ‘The dark side of coproduction’ to get started.

To compare ‘epistemic’ and ‘reflexive’ forms of learning, see Dunlop and Radaelli’s ‘The lessons of policy learning: types, triggers, hindrances and pathologies’

My interest has been to understand how governments juggle competing demands, such as to (a) centralise and localise policymaking, (b) encourage uniform and tailored solutions, and (c) embrace and reject a hierarchy of evidence. What could possibly go wrong when they entertain contradictory objectives? For example:

  • Paul Cairney (2019) “The myth of ‘evidence based policymaking’ in a decentred state”, forthcoming in Public Policy and Administration (Special Issue, The Decentred State) (accepted version)
  • Paul Cairney (2019) ‘The UK government’s imaginative use of evidence to make policy’, British Politics, 14, 1, 1-22 Open Access PDF
  • Paul Cairney and Kathryn Oliver (2017) ‘Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy?’ Health Research Policy and Systems (HARPS), DOI: 10.1186/s12961-017-0192-x PDF
  • Paul Cairney (2017) “Evidence-based best practice is more political than it looks: a case study of the ‘Scottish Approach’”, Evidence and Policy, 13, 3, 499-515 PDF

 


Policy Analysis in 750 words: William Dunn (2017) Public Policy Analysis

Please see the Policy Analysis in 750 words series overview before reading the summary. This book is a whopper, with almost 500 pages and 101 (excellent) discussions of methods, so 800 words over budget seems OK to me. If you disagree, just read every second word. By the time you reach the ‘hang in there, baby’ cat, you are about 300 (150) words from the end.

[Image: cover of Dunn (2017) Public Policy Analysis, 6th edition]

William Dunn (2017) Public Policy Analysis 6th Ed. (Routledge)

‘Policy analysis is a process of multidisciplinary inquiry aiming at the creation, critical assessment, and communication of policy-relevant knowledge … to solve practical problems … Its practitioners are free to choose among a range of scientific methods, qualitative as well as quantitative, and philosophies of science, so long as these yield reliable knowledge’ (Dunn, 2017: 2-3).

Dunn (2017: 4) describes policy analysis as pragmatic and eclectic. It involves synthesising policy relevant (‘usable’) knowledge, and combining it with experience and ‘practical wisdom’, to help solve problems with analysis that people can trust.

This exercise is ‘descriptive’, to define problems, and ‘normative’, to decide how the world should be and how solutions get us there (as opposed to policy studies/ research seeking primarily to explain what happens).

Dunn contrasts the ‘art and craft’ of policy analysts with other practices, including:

  1. The idea of ‘best practice’ characterised by 5-step plans.
  • In practice, analysis is influenced by: the cognitive shortcuts that analysts use to gather information; the role they perform in an organisation; the time constraints and incentive structures in organisations and political systems; the expectations and standards of their profession; and, the need to work with teams consisting of many professions/ disciplines (2017: 15-6)
  • The cost (in terms of time and resources) of conducting multiple research and analytical methods is high, and highly constrained in political environments (2017: 17-8; compare with Lindblom)
  2. The too-narrow idea of evidence-based policymaking
  • The naïve attachment to ‘facts speak for themselves’ or ‘knowledge for its own sake’ undermines a researcher’s ability to adapt well to the evidence-demands of policymakers (2017: 68; compare with Why don’t policymakers listen to your evidence?).

To produce ‘policy-relevant knowledge’ requires us to ask five questions before (Qs1-3) and after (Qs4-5) policy intervention (2017: 5-7; 54-6):

  1. What is the policy problem to be solved?
  • For example, identify its severity, urgency, cause, and our ability to solve it.
  • Don’t define the wrong problem, such as by oversimplifying or defining it with insufficient knowledge.
  • Key aspects of problems include ‘interdependency’ (each problem is inseparable from a host of others, and all problems may be greater than the sum of their parts), ‘subjectivity’ and ‘artificiality’ (people define problems), ‘instability’ (problems change rather than being solved), and ‘hierarchy’ (which level or type of government is responsible) (2017: 70; 75).
  • Problems vary in terms of how many relevant policymakers are involved, how many solutions are on the agenda, the level of value conflict, and the unpredictability of outcomes (high levels suggest ‘wicked’ problems, and low levels ‘tame’) (2017: 75)
  • ‘Problem-structuring methods’ are crucial, to: compare ways to define or interpret a problem, and ward against making too many assumptions about its nature and cause; produce models of cause-and-effect; and make a problem seem solve-able, such as by placing boundaries on its coverage. These methods foster creativity, which is useful when issues seem new and ambiguous, or new solutions are in demand (2017: 54; 69; 77; 81-107).
  • Problem definition draws on evidence, but is primarily the exercise of power to reduce ambiguity through argumentation, such as when defining poverty as the fault of the poor, the elite, the government, or social structures (2017: 79; see Stone).
  2. What effect will each potential policy solution have?
  • Many ‘forecasting’ methods can help provide ‘plausible’ predictions about the future effects of current/ alternative policies (Chapter 4 contains a huge number of methods).
  • ‘Creativity, insight, and the use of tacit knowledge’ may also be helpful (2017: 55).
  • However, even the most effective expert/ theory-based methods to extrapolate from the past are flawed, and it is important to communicate levels of uncertainty (2017: 118-23; see Spiegelhalter).
  3. Which solutions should we choose, and why?
  • ‘Prescription’ methods help provide a consistent way to compare each potential solution, in terms of its feasibility and predicted outcome, rather than decide too quickly that one is superior (2017: 55; 190-2; 220-42).
  • They help to combine (a) an estimate of each policy alternative’s outcome with (b) a normative assessment.
  • Normative assessments are based on values such as ‘equality, efficiency, security, democracy, enlightenment’ and beliefs about the preferable balance between state, communal, and market/ individual solutions (2017: 6; 205; see Weimer & Vining, Meltzer & Schwartz, and Stone on the meaning of these values).
  • For example, cost benefit analysis (CBA) is an established – but problematic – economics method based on finding one metric – such as a $ value – to predict and compare outcomes (2017: 209-17; compare Weimer & Vining, Meltzer & Schwartz, and Stone)
  • Cost effectiveness analysis uses a $ value for costs, but other units of measurement for benefits, such as outputs per $ (2017: 217-9) – see the sketch after this list
  • Although such methods help us combine information and values to compare choices, note the inescapable role of power to decide whose values (and which outcomes, affecting whom) matter (2017: 204)
  4. What were the policy outcomes?
  • ‘Monitoring’ methods help identify (say): levels of compliance with regulations, if resources and services reach ‘target groups’, if money is spent correctly (such as on clearly defined ‘inputs’ such as public sector wages), and if we can make a causal link between the policy inputs/ activities/ outputs and outcomes (2017: 56; 251-5)
  • Monitoring is crucial because it is so difficult to predict policy success, and unintended consequences are almost inevitable (2017: 250).
  • However, the data gathered are usually no more than proxy indicators of outcomes. Further, the choice of indicators reflects what is available, ‘particular social values’, and ‘the political biases of analysts’ (2017: 262)
  • The idea of ‘evidence based policy’ is linked strongly to the use of experiments and systematic review to identify causality (2017: 273-6; compare with trial-and-error learning in Gigerenzer, complexity theory, and Lindblom).
  5. Did the policy solution work as intended? Did it improve policy outcomes?
  • Although we frame policy interventions as ‘solutions’, few problems are ‘solved’. Instead, try to measure the outcomes and the contribution of your solution, and note that evaluations of success and ‘improvement’ are contested (2017: 57; 332-41).  
  • Policy evaluation is not an objective process in which we can separate facts from values.
  • Rather, values and beliefs are part of the criteria we use to gauge success (and even their meaning is contested – 2017: 322-32).
  • We can gather facts about the policy process, and the impacts of policy on people, but this information has little meaning until we decide whose experiences matter.
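As promised above, here is a minimal sketch of the difference between cost benefit analysis and cost effectiveness analysis, in Python. The two options and all figures are invented for illustration.

    # CBA converts everything to one metric ($); CEA compares $ costs
    # with a non-monetary unit of benefit. All figures are invented.
    options = {
        "option_a": {"cost": 1_000_000, "benefit_in_dollars": 1_500_000, "outputs": 400},
        "option_b": {"cost": 600_000, "benefit_in_dollars": 800_000, "outputs": 350},
    }

    for name, o in options.items():
        net_benefit = o["benefit_in_dollars"] - o["cost"]   # CBA: $ on both sides
        cost_per_output = o["cost"] / o["outputs"]          # CEA: $ per unit of output
        print(f"{name}: net benefit ${net_benefit:,}; ${cost_per_output:,.0f} per output")

Note that the two methods can rank the same options differently (option_a has the higher net benefit; option_b has the lower cost per output), which reinforces the point that the choice of metric is itself political.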

Overall, the idea of ‘ex ante’ (forecasting) policy analysis is a little misleading, since policymaking is continuous, and evaluations of past choices inform current choices.

Policy analysis methods are ‘interdependent’, and ‘knowledge transformations’ describes the impact of knowledge regarding one question on the other four (2017: 7-13; contrast with Meltzer & Schwartz, Thissen & Walker).

Developing arguments and communicating effectively

Dunn (2017: 19-21; 348-54; 392) argues that ‘policy argumentation’ and the ‘communication of policy-relevant knowledge’ are central to policymaking (see Chapter 9 and Appendices 1-4 for advice on how to write briefs, memos, and executive summaries and prepare oral testimony).

He identifies seven elements of a ‘policy argument’ (2017: 19-21; 348-54), including:

  • The claim itself, such as a description (size, cause) or evaluation (importance, urgency) of a problem, and prescription of a solution
  • The things that support it (including reasoning, knowledge, authority)
  • Incorporating the things that could undermine it (including any ‘qualifier’, the communication of uncertainty about current knowledge, and counter-arguments).
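To see how these elements fit together, here is a minimal sketch of a policy argument as a data type, in Python. The field names and example values are my own shorthand for illustration, not Dunn’s labels.

    from dataclasses import dataclass, field

    # Illustrative shorthand for the elements of a 'policy argument'.
    @dataclass
    class PolicyArgument:
        claim: str                # description, evaluation, or prescription
        support: list = field(default_factory=list)      # reasoning, knowledge, authority
        qualifier: str = ""       # communicated uncertainty about current knowledge
        counter_arguments: list = field(default_factory=list)

    example = PolicyArgument(
        claim="Problem X is urgent, and solution Y should be adopted",
        support=["evaluation evidence", "expert testimony"],
        qualifier="moderate confidence: evidence comes from two regions only",
        counter_arguments=["costs may fall disproportionately on one group"],
    )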

The key stages of communication (2017: 392-7; 405; 432) include:

  1. ‘Analysis’, focusing on ‘technical quality’ (of the information and methods used to gather it), meeting client expectations, challenging the ‘status quo’, albeit while dealing with ‘political and organizational constraints’ and suggesting something that can actually be done.
  2. ‘Documentation’, focusing on synthesising information from many sources, organising it into a coherent argument, translating from jargon or a technical language, simplifying, summarising, and producing user-friendly visuals.
  3. ‘Utilization’, by making sure that (a) communications are tailored to the audience (its size, existing knowledge of policy and methods, attitude to analysts, and openness to challenge), and (b) the process is ‘interactive’ to help analysts and their audiences learn from each other.

 

[Image: ‘hang in there, baby’ cat poster]

 

Policy analysis and policy theory: systems thinking, evidence based policymaking, and policy cycles

Dunn (2017: 31-40) situates this discussion within a brief history of policy analysis, which culminated in new ways to express old ambitions, such as to:

  1. Use ‘systems thinking’, to understand the interdependence between many elements in complex policymaking systems (see also socio-technical and socio-ecological systems).
  • Note the huge difference between (a) policy analysis discussions of ‘systems thinking’ built on the hope that if we can understand them we can direct them, and (b) policy theory discussions that emphasise ‘emergence’ in the absence of central control (and presence of multi-centric policymaking).
  • Also note that Dunn (2017: 73) describes policy problems – rather than policymaking – as complex systems. I’ll write another post (short, I promise) on the many different (and confusing) ways to use the language of complexity.
  2. Promote ‘evidence based policy’, as the new way to describe an old desire for ‘technocratic’ policymaking that accentuates scientific evidence and downplays politics and values (see also 2017: 60-4).

In that context, see Dunn’s (2017: 47-52) discussion of comprehensive versus bounded rationality:

  • Note the idea of ‘erotetic rationality’ in which people deal with their lack of knowledge of a complex world by giving up on the idea of certainty (accepting their ‘ignorance’), in favour of a continuous process of ‘questioning and answering’.
  • This approach is a pragmatic response to the lack of order and predictability of policymaking systems, which limits the effectiveness of a rigid attachment to ‘rational’ 5-step policy analyses (compare with Meltzer & Schwartz).

Dunn (2017: 41-7) also provides an unusually useful discussion of the policy cycle. Rather than seeing it as a mythical series of orderly stages, Dunn highlights:

  1. Lasswell’s original discussion of policymaking functions (or functional requirements of policy analysis, not actual stages to observe), including: ‘intelligence’ (gathering knowledge), ‘promotion’ (persuasion and argumentation while defining problems), ‘prescription’, ‘invocation’ and ‘application’ (to use authority to make sure that policy is made and carried out), and ‘appraisal’ (2017: 42-3).
  2. The constant interaction between all notional ‘stages’ rather than a linear process: attention to a policy problem fluctuates, actors propose and adopt solutions continuously, actors are making policy (and feeding back on its success) as they implement, evaluation (of policy success) is not a single-shot document, and previous policies set the agenda for new policy (2017: 44-5).

In that context, it is no surprise that the impact of a single policy analyst is usually minimal (2017: 57). Sorry to break it to you. Hang in there, baby.

[Image: ‘hang in there, baby’ cat poster]

 


Understanding Public Policy 2nd edition

All going well, it will be out in November 2019. We are now at the proofing stage.

I have included below the summaries of the chapters (and each chapter should also have its own entry (or multiple entries) in the 1000 Words and 500 Words series).

[Images: 2nd edition cover, title page, and summaries of chapters 1-13]

 


Evidence-informed policymaking: context is everything

I thank James Georgalakis for inviting me to speak at the inaugural event of IDS’ new Evidence into Policy and Practice Series, and the audience for giving extra meaning to my story about the politics of ‘evidence-based policymaking’. The talk (using powerpoint) and Q&A are here:

 

James invited me to respond to some of the challenges raised to my talk – in his summary of the event – so here it is.

I’m working on a ‘show, don’t tell’ approach, leaving some of the story open to interpretation. As a result, much of the meaning of this story – and, in particular, the focus on limiting participation – depends on the audience.

For example, consider the impact of the same story on audiences primarily focused on (a) scientific evidence and policy, or (b) participation and power.

Normally, when I talk about evidence and policy, my audience is mostly people with scientific or public health backgrounds asking: why do policymakers ignore scientific evidence? I am usually invited to ruffle feathers, mostly by challenging a – remarkably prevalent – narrative that goes like this:

  • We know what the best evidence is, since we have produced it with the best research methods (the ‘hierarchy of evidence’ argument).
  • We have evidence on the nature of the problem and the most effective solutions (the ‘what works’ argument).
  • Policymakers seem to be ignoring our evidence or failing to act proportionately (the ‘evidence-policy barriers’ argument).
  • Or, they cherry-pick evidence to suit their agenda (the ‘policy based evidence’ argument).

In that context, I suggest that there are many claims to policy-relevant knowledge, policymakers have to ignore most information before making choices, and they are not in control of the policy process of which they are ostensibly in charge.

Limiting participation as a strategic aim

Then, I say to my audience that – if they are truly committed to maximising the use of scientific evidence in policy – they will need to consider how far they will go to get what they want. I use the metaphor of an ethical ladder in which each rung offers more influence in exchange for dirtier hands: tell stories and wait for opportunities, or demonise your opponents, limit participation, and humour politicians when they cherry-pick to reinforce emotional choices.

It’s ‘show don’t tell’ but I hope that the take-home point for most of the audience is that they shouldn’t focus so much on one aim – maximising the use of scientific evidence – to the detriment of other important aims, such as wider participation in politics beyond a reliance on a small number of experts. I say ‘keep your eyes on the prize’ but invite the audience to reflect on which prizes they should seek, and the trade-offs between them.

Limited participation – and ‘windows of opportunity’ – as an empirical finding

[Image: NASA launch]

I did suggest that most policymaking happens away from the sphere of ‘exciting’ and ‘unruly’ politics. Put simply, people have to ignore almost every issue almost all of the time. Each time they focus their attention on one major issue, they must – by necessity – ignore almost all of the others.

For me, the political science story is largely about the pervasiveness of policy communities and policymaking out of the public spotlight.

The logic is as follows. Elected policymakers can only pay attention to a tiny proportion of their responsibilities. They delegate the rest to bureaucrats at lower levels of government. Bureaucrats lack specialist knowledge, and rely on other actors for information and advice. Those actors trade information for access. In many cases, they develop effective relationships based on trust and a shared understanding of the policy problem.

Trust often comes from a sense that everyone has proven to be reliable. For example, they follow norms or the ‘rules of the game’. One classic rule is to contain disputes within the policy community when actors don’t get what they want: if you complain in public, you draw external attention and internal disapproval; if not, you are more likely to get what you want next time.

For me, this is key context in which to describe common strategic concerns:

  • Should you wait for a ‘window of opportunity’ for policy change? Maybe. Or, maybe it will never come because policymaking is largely insulated from view and very few issues reach the top of the policy agenda.
  • Should you juggle insider and outsider strategies? Yes, some groups seem to do it well and it is possible for governments and groups to be in a major standoff in one field but close contact in another. However, each group must consider why they would do so, and the trade-offs between each strategy. For example, groups excluded from one venue may engage (perhaps successfully) in ‘venue shopping’ to get attention from another. Or, they become discredited within many venues if seen as too zealous and unwilling to compromise. Insider/outsider may seem like a false dichotomy to experienced and well-resourced groups, who engage continuously, and are able to experiment with many approaches and use trial-and-error learning. It is a more pressing choice for actors who may have only one chance to get it right and do not know what to expect.

Where is the power analysis in all of this?

[Image: the policy process]

I rarely use the word power directly, partly because – like ‘politics’ or ‘democracy’ – it is an ambiguous term with many interpretations (see Box 3.1). People often use it without agreeing its meaning and, if it means everything, maybe it means nothing.

However, you can find many aspects of power within our discussion. For example, insider and outsider strategies relate closely to Schattschneider’s classic discussion in which powerful groups try to ‘privatise’ issues and less powerful groups try to ‘socialise’ them. Agenda setting is about using resources to make sure issues do, or do not, reach the top of the policy agenda, and most do not.

These aspects of power sometimes play out in public, when:

  • Actors engage in politics to turn their beliefs into policy. They form coalitions with actors who share their beliefs, and often romanticise their own cause and demonise their opponents.
  • Actors mobilise their resources to encourage policymakers to prioritise some forms of knowledge or evidence over others (such as by valuing scientific evidence over experiential knowledge).
  • They compete to identify the issues most worthy of our attention, telling stories to frame or define policy problems in ways that generate demand for their evidence.

However, they are no less important when they play out routinely:

  • Governments have standard operating procedures – or institutions – to prioritise some forms of evidence and some issues routinely.
  • Many policy networks operate routinely with few active members.
  • Certain ideas, or ways of understanding the world and the nature of policy problems within it, become so dominant that they are unspoken and taken for granted as deeply held beliefs. Still, they constrain or facilitate the success of new ‘evidence based’ policy solutions.

In other words, the word ‘power’ is often hidden because the most profound forms of power often seem to be hidden.

In the context of our discussion, power comes from the ability to define some evidence as essential and other evidence as low quality or irrelevant, and therefore define some people as essential or irrelevant. It comes from defining some issues as exciting and worthy of our attention, or humdrum, specialist and only relevant to experts. It is about the subtle, unseen, and sometimes thoughtless ways in which we exercise power to harness people’s existing beliefs and dominate their attention as much as the transparent ways in which we mobilise resources to publicise issues. Therefore, to ‘maximise the use of evidence’ sounds like an innocuous collective endeavour, but it is a highly political and often hidden use of power.

See also:

I discussed these issues at a storytelling workshop organised by the OSF:

[Image: storytelling workshop, New York, 1.11.16]

See also:

Policy in 500 Words: Power and Knowledge

The politics of evidence-based policymaking

Palgrave Communications: The politics of evidence-based policymaking

Using evidence to influence policy: Oxfam’s experience

The UK government’s imaginative use of evidence to make policy

 


Teaching evidence based policy to fly: how to deal with the politics of policy learning and transfer

This post provides (a generous amount of) background for my ANZSOG talk Teaching evidence based policy to fly: transferring sound policies across the world.

The event’s description sums up key conclusions in the literature on policy learning and policy transfer:

  1. technology and ‘entrepreneurs’ help ideas spread internationally, and domestic policymakers can use them to be more informed about global policy innovation, but
  2. there can be major unintended consequences to importing ideas, such as the adoption of policy solutions with poorly-evidenced success, or a broader sense of failed transfer caused by factors such as a poor fit between the aims of the exporter/importer.

In this post, I connect these conclusions to broader themes in policy studies, which suggest that:

  1. policy learning and policy transfer are political processes, not ‘rational’ or technical searches for information
  2. the use of evidence to spread policy innovation requires two interconnected choices: what counts as good evidence, and what role central governments should play.
  3. the following ’11 question guide’ to evidence based policy transfer serves more as a way to reflect than a blueprint for action.

As usual, I suggest that we focus less on how we think we’d like to do it, and more on how people actually do it.


Policy transfer describes the use of evidence about policy in one political system to help develop policy in another. Taken at face value, it sounds like a great idea: why would a government try to reinvent the wheel when another government has shown how to do it?

Therefore, wouldn’t it be nice if I turned up to the lecture, equipped with a ‘blueprint’ for ‘evidence based’ policy transfer, and declared how to do it in a series of realistic and straightforward steps? Unfortunately, there are three main obstacles:

  1. ‘Evidence based’ is a highly misleading description of the use of information in policy.
  2. To transfer a policy blueprint completely, in this manner, would require all places and contexts to be the same, and for the policy process to be technocratic and apolitical.
  3. There are general academic guides on how to learn lessons from others systematically – such as Richard Rose’s ‘practical guide’  – but most academic work on learning and transfer does not suggest that policymakers follow this kind of advice.

[Image: Richard Rose’s 10 lessons]

Instead, policy learning is a political process – involving the exercise of power to determine what and how to learn – and it is difficult to separate policy transfer from the wider use of evidence and ideas in policy processes.

Let’s take each of these points in turn, before reflecting on their implications for any X-step guide:

3 reasons why ‘evidence based’ does not describe policymaking

In a series of ANZSOG talks on ‘evidence based policymaking’ (EBPM), I describe three main factors, all of which are broadly relevant to transfer:

  1. There are many forms of policy-relevant evidence and few policymakers adhere to a strict ‘hierarchy’ of knowledge.

Therefore, it is unclear how one government can, or should, generate evidence of another government’s policy success.

  2. Policymakers must find ways to ignore most evidence – such as by combining ‘rational’ and ‘irrational’ cognitive shortcuts – to be able to act quickly.

The generation of policy transfer lessons is a highly political process in which actors adapt to this need to prioritise information while competing with each other. They exercise power to: prioritise some information and downplay the rest, define the nature of the policy problem, and evaluate the success of another government’s solutions. There is a strong possibility that policymakers will import policy solutions without knowing if, and why, they were successful.

  3. They do not control the policy process in which they engage.

We should not treat ‘policy transfer’ as separate from the policy process in which policymakers and influencers engage. Rather, the evidence of international experience competes with many other sources of ideas and evidence within a complex policymaking system.

The literature on ‘policy learning’ tells a similar story

Studies of the use of evaluation evidence (perhaps to answer the question: was this policy successful?) have long described policymakers using the research process for many different purposes, from short term problem-solving and long-term enlightenment, to putting off decisions or using evidence cynically to support an existing policy.

We should therefore reject the temptation to (a) equate ‘policy learning’ with a simplistic process that we might associate with teachers transmitting facts to children, or (b) assume that adults simply change their beliefs when faced with new evidence. Rather, Dunlop and Radaelli describe policy learning as a political process in the following ways:

1. It is collective and rule-bound

Individuals combine cognition and emotion to process information, in organisations with rules that influence their motive and ability to learn, and in wider systems, in which many actors cooperate and compete to establish the rules of evidence gathering and analysis, or policymaking environments that constrain or facilitate their action.

2. ‘Evidence based’ is one of several types of policy learning

  • Epistemic. Primarily by scientific experts transmitting knowledge to policymakers.
  • Reflection. Open dialogue to incorporate diverse forms of knowledge and encourage cooperation.
  • Bargaining. Actors learn how to cooperate and compete effectively.
  • Hierarchy. Actors with authority learn how to impose their aims; others learn the limits to their discretion.

3. The process can be ‘dysfunctional’: driven by groupthink, limited analysis, and learning how to dominate policymaking, not improve policy.

Their analysis can produce relevant take-home points such as:

  • Experts will be ineffective if they assume that policy learning is epistemic. The assumption will leave them ill-prepared to deal with bargaining.
  • There is more than one legitimate way to learn, such as via deliberative processes that incorporate more perspectives and forms of knowledge.

What does the literature on transfer tell us?

‘Policy transfer’ can describe a spectrum of activity:

  • driven voluntarily, by a desire to learn from the story of another government’s policy success. In such cases, importers use shortcuts to learning, such as by restricting their search to systems with which they have something in common (such as geography or ideology), learning via intermediaries such as ‘entrepreneurs’, or limiting their searches for evidence of success.
  • driven by various forms of pressure, including encouragement by central (or supranational) governments, international norms or agreements, ‘spillover’ effects causing one system to respond to innovation by another, or demands by businesses to minimise the cost of doing business.

In that context, some of the literature focuses on warning against unsuccessful policy transfer caused by factors such as:

  • Failing to generate or use enough evidence on what made the initial policy successful
  • Failing to adapt that policy to local circumstances
  • Failing to back policy change with sufficient resources

However, other studies highlight some major qualifications:

  • If the process is about using ideas about one system to inform another, our attention may shift from ‘transfer’ to ‘translation’ or ‘transformation’, and the idea of ‘successful transfer’ makes less sense
  • Transfer success is not the same as implementation success, which depends on a wider range of factors
  • Nor is it the same as ‘policy success’, which can be assessed by a mix of questions to reflect political reality: did it make the government more re-electable, was the process of change relatively manageable, and did it produce intended outcomes?

The use of evidence to spread policy innovation requires a combination of profound political and governance choices

When encouraging policy diffusion within a political system, choices about (a) what counts as ‘good’ evidence of policy success connect strongly to choices about (b) what counts as good governance.

For example, consider these ideal-types or models in table 1:

[Table 1: three ideal types of evidence-based best practice]

In one scenario, we begin by relying primarily on RCT evidence (multiple international trials) and import a relatively fixed model, to ensure ‘fidelity’ to a proven intervention and allow us to measure its effect in a new context. This choice of good evidence limits the ability of subnational policymakers to adapt policy to local contexts.

In another scenario, we begin by relying primarily on governance principles, such as to respect local discretion as well as incorporate practitioner and user experience as important knowledge claims. The choice of governance model relates closely to a less narrow sense of what counts as good evidence, but also a more limited ability to evaluate policy success scientifically.

In other words, the political choice to privilege some forms of evidence is difficult to separate from another political choice to privilege the role of one form of government.

Telling a policy transfer story: 11 questions to encourage successful evidence based policy transfer  

In that context, these steps to evidence-informed policy transfer serve more to encourage reflection than provide a blueprint for action. I accept that 11 is less catchy than 10.

  1. What problem did policymakers say they were trying to solve, and why?
  2. What solution(s) did they produce?
  3. Why?

Points 1-3 represent the classic and necessary questions from policy studies: (1) ‘what is policy?’ (2)  ‘how much did policy change?’ and (3) why? Until we have a good answer, we do not know how to draw comparable lessons. Learning from another government’s policy choices is no substitute for learning from more meaningful policy change.

4. Was the project introduced in a country or region which is sufficiently comparable? Comparability can relate to the size and type of country, the nature of the problem, the aims of the borrowing/ lending government and their measures of success.

5. Was it introduced nationwide, or in a region which is sufficiently representative of the national experience (it is not an outlier)?

6. How do we account for the role of scale, and the different cultures and expectations in each policy field?

Points 4-6 inform initial background discussions of case study reports. We need to focus on comparability when describing the context in which the original policy developed. It is not enough to state that two political systems are different. We need to identify the relevance and implications of the differences, from another government’s definition of the problem to the logistics of their task.

7. Has the project been evaluated independently, subject to peer review and/ or using measures deemed acceptable to the government?

8. Has the evaluation been of a sufficient period in proportion to the expected outcomes?

9. Are we confident that this project has been evaluated the most favourably – i.e. that our search for relevant lessons has been systematic, based on recognisable criteria (rather than reputations)?

10. Are we identifying ‘Good practice’ based on positive experience, ‘Promising approaches’ based on positive but unsystematic findings, ‘Research-based’ based on ‘sound theory informed by a growing body of empirical research’, or ‘Evidence-based’ when ‘the programme or practice has been rigorously evaluated and has consistently been shown to work’?

Points 7-10 raise issues about the relationships between (a) what evidence we should use to evaluate success or potential, and (b) how long we should wait to declare success.

11. What will be the relationship between evidence and governance?

Should we identify the same basic model and transfer it uniformly, tell a qualitative story about the model and invite people to adapt it, or focus pragmatically on an eclectic range of evidential sources and focus on the training of the actors who will implement policy?

In conclusion

Information technology has allowed us to gather a huge amount of policy-relevant information across the globe. However, it has not solved the limitations we face in defining policy problems clearly, gathering evidence on policy solutions systematically, and generating international lessons that we can use to inform domestic policy processes.

This rise in available evidence is not a substitute for policy analysis and political choice. These choices range from how to adjudicate between competing policy preferences, to how to define good evidence and good government. A lack of attention to these wider questions helps explain why – at least from some perspectives – policy transfer seems to fail.

Paul Cairney Auckland Policy Transfer 12.10.18

 

 


The Politics of Evidence-Based Policymaking: ANZSOG talks

This post introduces a series of related talks on ‘the politics of evidence-based policymaking’ (EBPM) that I’m giving as part of a larger series of talks during this ANZSOG-funded/organised trip.

The EBPM talks begin with a discussion of the same three points: what counts as evidence, why we must ignore most of it (and how), and the policy process in which policymakers use some of it. However, the framing of these points, and the ways in which we discuss the implications, vary markedly by audience. So, in this post, I provide a short discussion of the three points, then show how the audience matters (referring to the city as a shorthand for each talk).

The overall take-home points are highly practical, in the same way that critical thinking has many practical applications (in other words, I’m not offering a map, toolbox, or blueprint):

  • If you begin with (a) the question ‘why don’t policymakers use my evidence?’ I like to think you will end with (b) the question ‘why did I ever think they would?’.
  • If you begin by taking the latter as (a) a criticism of politics and policymakers, I hope you will end by taking it as (b) a statement of the inevitability of the trade-offs that must accompany political choice.
  • We may address these issues by improving the supply and use of evidence. However, it is more important to maintain the legitimacy of the politicians and political systems in which policymakers choose to ignore evidence. Technocracy is no substitute for democracy.

3 ways to describe the use of evidence in policymaking

  1. Discussions of the use of evidence in policy often begin as a valence issue: who wouldn’t want to use good evidence when making policy?

However, it only remains a valence issue when we refuse to define evidence and justify what counts as good evidence. After that, you soon see the political choices emerge. A reference to evidence is often a shorthand for scientific research evidence, and good often refers to specific research methods (such as randomised control trials). Or, you find people arguing very strongly in the almost-opposite direction, criticising this shorthand as exclusionary and questioning the ability of scientists to justify claims to superior knowledge. Somewhere in the middle, we find that a focus on evidence is a good way to think about the many forms of information or knowledge on which we might make decisions, including: a wider range of research methods and analyses, knowledge from experience, and data relating to the local context with which policy would interact.

So, what begins as a valence issue becomes a gateway to many discussions about how to understand profound political choices regarding: how we make knowledge claims, how to ‘co-produce’ knowledge via dialogue among many groups, and the relationship between choices about evidence and governance.

  2. It is impossible to pay attention to all policy-relevant evidence.

There is far more information about the world than we are able to process. A focus on evidence gaps often gives way to the recognition that we need to find effective ways to ignore most evidence.

There are many ways to describe how individuals combine cognition and emotion to limit their attention enough to make choices, and policy studies (to all intents and purposes) describe equivalent processes – described, for example, as ‘institutions’ or rules – in organisations and systems.

One shortcut between information and choice is to set aims and priorities; to focus evidence gathering on a small number of problems or one way to define a problem, and identify the most reliable or trustworthy sources of evidence (often via evidence ‘synthesis’). Another is to make decisions quickly by relying on emotion, gut instinct, habit, and existing knowledge or familiarity with evidence.

Either way, agenda setting and problem definition are political processes that address uncertainty and ambiguity. We gather evidence to reduce uncertainty, but first we must reduce ambiguity by exercising power to define the problem we seek to solve.

  3. It is impossible to control the policy process in which people use evidence.

Policy textbooks (well, my textbook at least!) provide a contrast between:

  • The model of a ‘policy cycle’ that sums up straightforward policymaking, through a series of stages, over which policymakers have clear control. At each stage, you know where evidence fits in: to help define the problem, generate solutions, and evaluate the results to set the agenda for the next cycle.
  • A more complex ‘policy process’, or policymaking environment, of which policymakers have limited knowledge and even less control. In this environment, it is difficult to know with whom to engage, the rules of engagement, or the likely impact of evidence.

Overall, policy theories have much to offer people with an interest in evidence-use in policy, but primarily as a way to (a) manage expectations, in order to (b) produce more realistic strategies and less dispiriting conclusions. It is useful to frame our aim as to analyse the role of evidence within a policy process that (a) we don’t quite understand, rather than (b) we would like to exist.

The events themselves

Below, you will find a short discussion of the variations of audience and topic. I’ll update and reflect on this discussion (in a revised version of this post) after taking part in the events.

Social science and policy studies: knowledge claims, bounded rationality, and policy theory

For Auckland and Wellington A, I’m aiming for an audience containing a high proportion of people with a background in social science and policy studies. I describe the discussion as ‘meta’ because I am talking about how I talk about EBPM to other audiences, then inviting discussion on key parts of that talk, such as how to conceptualise the policy process and present conceptual insights to people who have no intention of deep dives into policy theory.

I often use the phrase ‘I’ve read it, so you don’t have to’ partly as a joke, but also to stress the importance of disciplinary synthesis when we engage in interdisciplinary (and inter-professional) discussion. If so, it is important to discuss how to produce such ‘synthetic’ accounts.

I tend to describe key components of a policymaking environment quickly: many policy makers and influencers spread across many levels and types of government, institutions, networks, socioeconomic factors and events, and ideas. However, each of these terms represents a shorthand to describe a large and diverse literature. For example, I can describe an ‘institution’ in a few sentences, but the study of institutions contains a variety of approaches.

Background post: I know my audience, but does my other audience know I know my audience?

Academic-practitioner discussions: improving the use of research evidence in policy

For Wellington B and Melbourne, the audience is an academic-practitioner mix. We discuss ways in which we can encourage the greater use of research evidence in policy, perhaps via closer collaboration between suppliers and users.

Discussions with scientists: why do policymakers ignore my evidence?

Sydney UNSW focuses more on researchers in scientific fields (often not in social science).  I frame the question in a way that often seems central to scientific researcher interest: why do policymakers seem to ignore my evidence, and what can I do about it?

Then, I tend to push back on the idea that the fault lies with politics and policymakers, to encourage researchers to think more about the policy process and how to engage effectively in it. If I’m trying to be annoying, I’ll suggest to a scientific audience that they see themselves as ‘rational’ and politicians as ‘irrational’. However, the more substantive discussion involves comparing (a) ‘how to make an impact’ advice drawn from the personal accounts of experienced individuals, giving advice to individuals, and (b) the sort of advice you might draw from policy theories which focus more on systems.

Background post: What can you do when policymakers ignore your evidence?

Early career researchers: the need to build ‘impact’ into career development

Canberra UNSW is more focused on early career researchers. I think this is the most difficult talk because I don’t rely on the same joke about my role: to turn up at the end of research projects to explain why they failed to have a non-academic impact.  Instead, my aim is to encourage intelligent discussion about situating the ‘how to’ advice for individual researchers into a wider discussion of policymaking systems.

Similarly, Brisbane A and B are about how to engage with practitioners, and communicate well to non-academic audiences, when most of your work and training is about something else entirely (such as learning about research methods and how to engage with the technical language of research).

Background posts:

What can you do when policymakers ignore your evidence? Tips from the ‘how to’ literature from the science community

What can you do when policymakers ignore your evidence? Encourage ‘knowledge management for policy’

See also:

  1. A similar talk at LSHTM (powerpoint and audio)

2. European Health Forum Gastein 2018 ‘Policy in Evidence’ (from 6 minutes)

https://webcasting.streamdis.eu/Mediasite/Play/8143157d976146b4afd297897c68be5e1d?catalog=62e4886848394f339ff678a494afd77f21&playFrom=126439&autoStart=true

 

See also:

Evidence-based policymaking and the new policy sciences

 


Managing expectations about the use of evidence in policy

Notes for the #transformURE event hosted by Nuffield, 25th September 2018

I like to think that I can talk with authority on two topics that, much like a bottle of Pepsi and a pack of Mentos, you should generally keep separate:

  1. When talking at events on the use of evidence in policy, I say that you need to understand the nature of policy and policymaking to understand the role of evidence in it.
  2. When talking with students, we begin with the classic questions ‘what is policy?’ and ‘what is the policy process’, and I declare that we don’t know the answer. We define policy to show the problems with all definitions of policy, and we discuss many models and theories that only capture one part of the process. There is no ‘general theory’ of policymaking.

The problem, when you put together those statements, is that you need to understand the role of evidence within a policy process that we don’t really understand.

It’s an OK conclusion if you just want to declare that the world is complicated, but not if you seek ways to change it or operate more effectively within it.

Put less gloomily:

  • We have ways to understand key parts of the policy process. They are not ready-made to help us understand evidence use, but we can use them intelligently.
  • Most policy theories exist to explain policy dynamics, not to help us adapt effectively to them, but we can derive general lessons with often-profound implications.

Put even less gloomily, it is not too difficult to extract/ synthesise key insights from policy theories, explain their relevance, and use them to inform discussions about how to promote your preferred form of evidence use.

The only remaining problem is that, although the resultant advice looks quite straightforward, it is far easier said than done. The proposed actions are more akin to the Labours of Hercules than [PAC: insert reference to something easier].

They include:

  1. Find out where the ‘action’ is, so that you can find the right audience for your evidence. Why? There are many policymakers and influencers spread across many levels and types of government.
  2. Learn and follow the ‘rules of the game’. Why? Each policymaking venue has its own rules of engagement and evidence gathering, and the rules are often informal and unwritten.
  3. Gain access to ‘policy networks’. Why? Most policy is processed at a low level of government, beyond the public spotlight, between relatively small groups of policymakers and influencers. They build up trust as they work together, learning who is reliable and authoritative, and converging on how to use evidence to understand the nature and solution to policy problems.
  4. Learn the language. Why? Each venue has its own language to reflect dominant ideas, beliefs, or ways to understand a policy problem. In some arenas, there is a strong respect for a ‘hierarchy’ of evidence. In others, the key reference point may be value for money. In some cases, the language reflects the closing-off of some policy solutions (such as redistributing resources from one activity to another).
  5. Exploit windows of opportunity. Why? Events, and changes in socioeconomic conditions, often prompt shifts of attention to policy issues. ‘Policy entrepreneurs’ lie in wait for the right time to exploit a shift in the motive and opportunity of a policymaker to pay attention to and try to solve a problem.

So far so good, until you consider the effort it would take to achieve any of these things: you may need to devote the best part of your career to these tasks with no guarantee of success.

Put more positively, it is better to be equipped with these insights, and to appreciate the limits to our actions, than to think we can use top tips to achieve ‘research impact’ in a more straightforward way.

Kathryn Oliver and I describe these ‘how to’ tips in this post and, in this article in Political Studies Review, use a wider focus on policymaking environments to produce a more realistic sense of what individual researchers – and research-producing organisations – could achieve.

There is some sensible-enough advice out there for individuals – produce good evidence, communicate it well, form relationships with policymakers, be available, and so on – but I would exercise caution when it begins to recommend being ‘entrepreneurial’. The opportunities to be entrepreneurial are not shared equally, most entrepreneurs fail, and we can likely better explain their success with reference to their environment than their skill.

[Image: ‘hang in there, baby’ cat poster]


Evidence-based policymaking: political strategies for scientists living in the real world

Note: I wrote the following discussion (last year) to be a Nature Comment but it was not to be!

Nature articles on evidence-based policymaking often present what scientists would like to see: rules to minimise bias caused by the cognitive limits of policymakers, and a simple policy process in which we know how and when to present the best evidence.[1]  What if neither requirement is ever met? Scientists will despair of policymaking while their competitors engage pragmatically and more effectively.[2]

Alternatively, if scientists learned from successful interest groups, or by using insights from policy studies, they could develop three ‘take home messages’: understand and engage with policymaking in the real world; learn how and when evidence ‘wins the day’; and, decide how far you should go to maximise the use of scientific evidence. Political science helps explain this process[3], and new systematic and thematic reviews add new insights.[4] [5] [6] [7]

Understand and engage with policymaking in the real world

Scientists are drawn to the ‘policy cycle’, because it offers a simple – but misleading – model for engagement with policymaking.[3] It identifies a core group of policymakers at the ‘centre’ of government, perhaps giving the impression that scientists should identify the correct ‘stages’ in which to engage (such as ‘agenda setting’ and ‘policy formulation’) to ensure the best use of evidence at the point of authoritative choice. This is certainly the image generated most frequently by health and environmental scientists when they seek insights from policy studies.[8]

Yet, this model does not describe reality. Many policymakers, in many levels and types of government, adopt and implement many measures at different times. For simplicity, we call the result ‘policy’ but almost no modern policy theory retains the linear policy cycle concept. In fact, it is more common to describe counterintuitive processes in which, for example, by the time policymaker attention rises to a policy problem at the ‘agenda setting’ stage, it is too late to formulate a solution. Instead, ‘policy entrepreneurs’ develop technically and politically feasible solutions then wait for attention to rise and for policymakers to have the motive and opportunity to act.[9]

Experienced government science advisors recognise this inability of the policy cycle image to describe real world policymaking. For example, Sir Peter Gluckman presents an amended version of this model, in which there are many interacting cycles in a kaleidoscope of activity, defying attempts to produce simple flow charts or decision trees. He describes the ‘art and craft’ of policy engagement, using simple heuristics to deal with a complex and ‘messy’ policy system.[10]

Policy studies help us identify two such heuristics or simple strategies.

First, respond to policymaker psychology by adapting to the short cuts they use to gather enough information quickly: ‘rational’, via trusted sources of oral and written evidence, and ‘irrational’, via their beliefs, emotions, and habits. Policy theories describe many interest group or ‘advocacy coalition’ strategies, including a tendency to combine evidence with emotional appeals, romanticise their own cause and demonise their opponents, or tell simple emotional stories with a hero and moral to exploit the biases of their audience.[11]

Second, adapt to complex ‘policy environments’ including: many policymakers at many levels and types of government, each with their own rules of evidence gathering, network formation, and ways of understanding policy problems and relevant socioeconomic conditions.[2] For example, advocates of international treaties often find that the evidence-based arguments their international audience takes for granted become hotly contested at national or subnational levels (even if the national government is a signatory), while the same interest groups presenting the same evidence of a problem can be key insiders in one government department but ignored in another.[3]

Learn the conditions under which evidence ‘wins the day’ in policymaking

Consequently, the availability and supply of scientific evidence, on the nature of problems and effectiveness of solutions, is a necessary but insufficient condition for evidence-informed policy. Three others must be met: actors use scientific evidence to persuade policymakers to pay attention to, and shift their understanding of, policy problems; the policy environment becomes broadly conducive to policy change; and, actors exploit attention to a problem, the availability of a feasible solution, and the motivation of policymakers, during a ‘window of opportunity’ to adopt specific policy instruments.[10]

Tobacco control represents a ‘best case’ example (box 1) from which we can draw key lessons for ecological and environmental policies, giving us a sense of perspective by highlighting the long term potential for major evidence-informed policy change. However, unlike their colleagues in public health, environmental scientists have not developed a clear sense of how to produce policy instruments that are technically and politically feasible, so the delivery of comparable policy change is not inevitable.[12]

Box 1: Tobacco policy as a best case and cautionary tale of evidence-based policymaking

Tobacco policy is a key example – and useful comparator for ecological and environmental policies – since it represents a best case scenario and cautionary tale.[13] On the one hand, the scientific evidence on the links between smoking, mortality, and preventable death forms the basis for modern tobacco control policy. Leading countries – and the World Health Organisation, which oversees the Framework Convention on Tobacco Control (FCTC) – frame tobacco use as a public health ‘epidemic’ and allow their health departments to take the policy lead. Health departments foster networks with public health and medical groups at the expense of the tobacco industry, and emphasise the socioeconomic conditions – reductions in (a) smoking prevalence, (b) opposition to tobacco control, and (c) economic benefits to tobacco – most supportive of tobacco control. This framing, and conducive policymaking environment, helps give policymakers the motive and opportunity to choose policy instruments, such as bans on smoking in public places, which would otherwise seem politically infeasible.

On the other hand, even in a small handful of leading countries such as the UK, it took twenty to thirty years to go from the supply of the evidence to a proportionate government response: from the early evidence on smoking in the 1950s prompting major changes from the 1980s, to the evidence on passive smoking in the 1980s prompting public bans from the 2000s onwards. In most countries, the production of a ‘comprehensive’ set of policy measures is not yet complete, even though most signed the FCTC.

Decide how far you’ll go to maximise the use of scientific evidence in policymaking

These insights help challenge the naïve position that, if policymaking can change to become less dysfunctional[1], scientists can be ‘honest brokers’[14] and expect policymakers to use their evidence quickly, routinely, and sincerely. Even in the best case scenario, evidence-informed change takes hard work, persistence, and decades to achieve.

Since policymaking will always appear ‘irrational’ and ‘complex’[3], scientists need to think harder about their role, then choose to engage more effectively or accept their lack of influence.

To deal with ‘irrational’ policymakers, they should combine evidence with persuasion, simple stories, and emotional appeals, and frame their evidence to make the implications consistent with policymakers’ beliefs.

To deal with complex environments, they should engage for the long term to work out how to form alliances with influencers who share their beliefs, understand in which ‘venues’ authoritative decisions are made and carried out, learn the rules of information processing in those venues, and identify the ‘currency’ used by policymakers when they describe policy problems and feasible solutions.[2] In other words, develop skills that do not come with scientific training, avoid waiting for others to share your scientific mindset or respect for scientific evidence, and plan for the likely eventuality that policymaking will never become ‘evidence based’.

This approach may be taken for granted in policy studies[15], but it raises uncomfortable dilemmas regarding how far scientists should go, using persuasion and coalition-building, to maximise the use of scientific evidence in policy.

These dilemmas are too frequently overshadowed by claims – more comforting to scientists – that politicians are to blame because they do not understand how to generate, analyse, and use the best evidence. Scientists may only become effective in politics if they apply the same critical analysis to themselves.

[1] Sutherland, W.J. & Burgman, M. Nature 526, 317–318 (2015).

[2] Cairney, P. et al. Public Administration Review 76, 3, 399–402 (2016).

[3] Cairney, P. The Politics of Evidence-Based Policy Making (Palgrave Springer, 2016).

[4] Langer, L. et al. The Science of Using Science (EPPI, 2016).

[5] Breckon, J. & Dodson, J. Using Evidence. What Works? (Alliance for Useful Evidence, 2016).

[6] Palgrave Communications series The politics of evidence-based policymaking (ed. Cairney, P.).

[7] Weible, C. & Cairney, P. (eds.) Practical lessons from policy theories, Policy and Politics (April 2018).

[8] Oliver, K. et al. Health Research Policy and Systems 12, 34 (2014).

[9] Kingdon, J. Agendas, Alternatives and Public Policies (Harper Collins, 1984).

[10] Gluckman, P. Understanding the challenges and opportunities at the science-policy interface.

[11] Cairney, P. & Kwiatkowski, R. Palgrave Communications (2017).

[12] Biesbroek et al. Nature Climate Change 5, 6, 493–494 (2015).

[13] Cairney, P. & Yamazaki, M. Journal of Comparative Policy Analysis.

[14] Pielke Jr, R. The Honest Broker (Cambridge University Press, 2007) originated the specific term, but the role is described more loosely by other commentators.

[15] Cairney, P. & Oliver, K. Health Research Policy and Systems 15, 35 (2017).


Why don’t policymakers listen to your evidence?

Since 2016, my most common academic presentation to interdisciplinary scientist/ researcher audiences is a variant of the question, ‘why don’t policymakers listen to your evidence?’

I tend to provide three main answers.

1. Many policymakers have many different ideas about what counts as good evidence

Few policymakers know or care about the criteria developed by some scientists to describe a hierarchy of scientific evidence. For some scientists, at the top of this hierarchy is the randomised control trial (RCT) and the systematic review of RCTs, with expertise much further down the list, followed by practitioner experience and service user feedback near the bottom.

Yet, most policymakers – and many academics – prefer a wider range of sources of information, combining their own experience with information ranging from peer reviewed scientific evidence and the ‘grey’ literature, to public opinion and feedback from consultation.

While it may be possible to persuade some central government departments or agencies to privilege scientific evidence, they also pursue other key principles, such as fostering consensus-driven policymaking or shifting from centralist to localist practices.

Consequently, they often only recommend interventions rather than impose one uniform evidence-based position. If local actors favour a different policy solution, the same type of evidence may have more or less effect in different parts of government.

2. Policymakers have to ignore almost all evidence and almost every decision taken in their name

Many scientists articulate the idea that policymakers and scientists should cooperate to use the best evidence to determine ‘what works’ in policy (in forums such as INGSA, the European Commission, and the OECD). Their language is often reminiscent of 1950s discussions of the pursuit of ‘comprehensive rationality’ in policymaking.

The key difference is that EBPM is often described as an ideal by scientists, to be compared with the more disappointing processes they find when they engage in politics. In contrast, ‘comprehensive rationality’ is an ideal-type, used to describe what cannot happen, and the practical implications of that impossibility.

The ideal-type involves a core group of elected policymakers at the ‘top’, identifying their values or the problems they seek to solve, and translating their policies into action to maximise benefits to society, aided by neutral organisations gathering all the facts necessary to produce policy solutions. Yet, in practice, they are unable to: separate values from facts in any meaningful way; rank policy aims in a logical and consistent manner; gather information comprehensively; or possess the cognitive ability to process it.

Instead, Herbert Simon famously described policymakers addressing ‘bounded rationality’ by using ‘rules of thumb’ to limit their analysis and produce ‘good enough’ decisions. More recently, punctuated equilibrium theory uses bounded rationality to show that policymakers can only pay attention to a tiny proportion of their responsibilities, which limits their control of the many decisions made in their name.

More recent discussions focus on the ‘rational’ short cuts that policymakers use to identify good enough sources of information, combined with the ‘irrational’ ways in which they use their beliefs, emotions, habits, and familiarity with issues to identify policy problems and solutions (see this post on the meaning of ‘irrational’). Or, they explore how individuals communicate their narrow expertise within a system of which they have almost no knowledge. In each case, ‘most members of the system are not paying attention to most issues most of the time’.

This scarcity of attention helps explain, for example, why policymakers ignore most issues in the absence of a focusing event, why policymaking organisations conduct searches for information that routinely miss key elements, and why organisations fail to respond proportionately to events or changing circumstances.

In that context, attempts to describe a policy agenda focusing merely on ‘what works’ are based on misleading expectations. Rather, we can describe key parts of the policymaking environment – such as institutions, policy communities/ networks, or paradigms – as a reflection of the ways in which policymakers deal with their bounded rationality and lack of control of the policy process.

3. Policymakers do not control the policy process (in the way that a policy cycle suggests)

Scientists often appear to be drawn to the idea of a linear and orderly policy cycle with discrete stages – such as agenda setting, policy formulation, legitimation, implementation, evaluation, policy maintenance/ succession/ termination – because it offers a simple and appealing model which gives clear advice on how to engage.

Indeed, the stages approach began partly as a proposal to make the policy process more scientific and based on systematic policy analysis. It offers an idea of how policy should be made: elected policymakers in central government, aided by expert policy analysts, make and legitimise choices; skilful public servants carry them out; and, policy analysts assess the results with the aid of scientific evidence.

Yet, few policy theories describe this cycle as useful, while most – including the advocacy coalition framework and the multiple streams approach – are based on a rejection of the explanatory value of orderly stages.

Policy theories also suggest that the cycle provides misleading practical advice: you will generally not find an orderly process with a clearly defined debate on problem definition, a single moment of authoritative choice, and a clear chance to use scientific evidence to evaluate policy before deciding whether or not to continue. Instead, the cycle exists as a story for policymakers to tell about their work, partly because it is consistent with the idea of elected policymakers being in charge and accountable.

Some scholars also question the appropriateness of a stages ideal, since it suggests that there should be a core group of policymakers making policy from the ‘top down’ and obliging others to carry out their aims, which does not leave room for, for example, the diffusion of power in multi-level systems, or the use of ‘localism’ to tailor policy to local needs and desires.

Now go to:

What can you do when policymakers ignore your evidence?

Further Reading

The politics of evidence-based policymaking

The politics of evidence-based policymaking: maximising the use of evidence in policy

Images of the policy process

How to communicate effectively with policymakers

Special issue in Policy and Politics called ‘Practical lessons from policy theories’, which includes how to be a ‘policy entrepreneur’.

See also the 750 Words series to explore the implications for policy analysis


The Politics of Evidence revisited

This is a guest post by Dr Justin Parkhurst, responding to a review of our books by Dr Joshua Newman, and my reply to that review.

I really like that Joshua Newman has done this synthesis of 3 recent books covering aspects of evidence use in policy. Too many book reviews these days just describe the content, so some critical comments are welcome, as is the comparative perspective.

I’m also honoured that my book was included in the shortlist (it is available here, free as an ebook: bit.ly/2gGSn0n for interested readers) and I’d like to follow on from Paul to add some discussion points to the debate here – with replies to both Joshua and Paul (hoping first names are acceptable).

Have we heard all this before?

Firstly, I agree with Paul that saying ‘we’ve heard this all before’ risks speaking only about a small community of active researchers who study these issues, and not the wider community. But I’d also add that what we’ve heard before is a starting point for many of these books, not where they end up.

In terms of where we start: I’m sure many of us who work in this field are somewhat frustrated at meetings when we hear people making statements that are well established in the literature. Some examples include:

  • “There can be many types of evidence, not just scientific research…”
  • “In the legal field, ‘evidence’ means something different…”
  • “We need evidence-based policy, not policy-based evidence…”
  • “We need to know ‘what works’ to get evidence into policy…”

Thus, I do think there is still a need to cement the foundations of the field more strongly – in essence, to establish a disciplinary baseline that people weighing in on a subject should be expected to know about before providing additional opinions. One way to help do this is for scholars to continue to lay out the basic starting points in our books – typically in the first chapter or two.

Of course, other specialist fields and disciplines have managed to establish their expertise to the point that individuals with opinions on a subject typically have some awareness that there is a field of study out there which they don’t necessarily know about. This is most obvious in the natural sciences (and perhaps in economics). E.g. most people (current presidents of some large North American countries aside?) are aware that they don’t know a lot about engineering, medicine, or quantum physics – so they won’t offer speculative or instinctive opinions about why airplanes stay in the air, how to do bypass surgery, or what was wrong with the ‘Ant-Man’ film. Or when individuals do offer views, they are typically expected to know the basics of the subject.

For the topic of evidence and policy, I often point people to Huw Davies, Isabel Walter, and Sandra Nutley’s book Using Evidence, which is a great introduction to much of this field, as well as Carol Weiss’ insights from the late 70s on the many meanings of research utilisation. I also routinely point people to read The Honest Broker by Roger Pielke Jr. (which I, myself, failed to read before writing my book and, as such, ended up repeating many of his points – I’ve apologised to him personally).

So yes, I think there is space for work like ours to continue to establish a baseline, even if some of us know this, because the expertise of the field is not yet widely recognised or established. Yet I think it is not accurate for Joshua to argue that we end up repeating what is known, considering our books diverge in key ways after laying out some of the core foundations.

Where do we go from there?

More interesting for this discussion, then, is to reflect on what our various books try to do beyond simply laying out the basics of what we know about evidence use and policy. It is here that I would disagree with Joshua’s claim that we don’t give a clear picture about the ‘problem’ that ‘evidence-based policy’ (his term – one I reject) is meant to address. Speaking only for my own book, I lay out the problem of bias in evidence use as the key motivation driving both advocates of greater evidence use and policy scholars critical of (oversimplified) knowledge translation efforts. But I distinguish between two forms of bias: technical bias – whereby evidence is used in ways that do not adhere to scientific best practice and thus produce sub-optimal social outcomes; and issue bias – whereby pieces of evidence, or mechanisms of evidence use, can obscure the important political choices in decision making, skewing policy choices towards those things that have been measured, or are conducive to measurement. Both of these forms of bias are violations of widely held social values – values of scientific fidelity on the one hand, and of democratic representation on the other. As such, these are the problems that I try to consider in my book, exploring the political and cognitive origins of both, in order to inform thinking on how to address them.

That said, I think Joshua is right in some of the distinctions he makes between our works in how we try to take this field forward, or move beyond current challenges in differing ways. Paul takes the position that researchers need to do something, and one thing they can do is better understand politics and policymaking. I think Paul’s writings about policy studies for students are superb (see his book and blog posts about policy concepts). But in terms of applying these insights to evidence use, this is where we most often diverge. I feel that keeping the focus on researchers puts too much emphasis on achieving ‘uptake’ of researchers’ own findings. In my view, I would point to three potential (overlapping) problems with this.

  • First – I do not think it is the role or responsibility of researchers to do this; the problem is rather a failure to establish the right system of evidence provision;
  • Second – I feel it leaves unstated the important but oft-ignored normative question of how evidence ‘should’ be used to inform policy;
  • Third – I believe these calls rest on often unstated assumptions about the answer to the second point, which we may wish to challenge.

In terms of the first point: I’m more of an institutionalist (as Joshua points out). My view is that the problems around non-use or misuse of evidence can be seen as resulting from a failure to establish appropriate systems that govern the use of evidence in policy processes. As such, the solution would have to lie with institutional development and changes (my final chapter advocates for this) that establish systems which serve to achieve the good governance of evidence.

Paul’s response to Joshua says that researchers are demanding action, so he speaks to them. He wants researchers to develop “useful knowledge of the policy process in which they might want to engage” (as he says above). Yet while some researchers may wish to engage with policy processes, I think it needs to be clear that doing so is inherently a political act – and can take on a role of issue advocacy by promoting those things you researched or measured over other possible policy considerations (points made well by Roger Pielke Jr. in The Honest Broker). The alternative I point towards is to consider what good systems of evidence use would look like. This is the difference between arguing for more uptake of research and arguing for systems through which all policy-relevant evidence can be seen and considered in appropriate ways – regardless of the political savvy, networking, or activism of any given researcher (in my book I have chapters reflecting on what appropriate evidence for policy might be, and what a good process for its use might be, based on particular widely shared values).

In terms of the second and third points – my book might be the most explicit in its discussion of the normative values guiding efforts to improve evidence use, and I am more critical than some about the assumption that getting researchers’ work ‘used’ by policymakers is a de facto good thing. This is why I disagree with Joshua’s conclusion that my work frames the problem as ‘bridging the gap’. Rather, I’d say I frame the problem as asking the question of ‘what does a better system of evidence use look like from a political perspective?’ My ‘good governance of evidence’ discussion presents an explicitly normative framework based on the two sets of values mentioned above – those around democratic accountability and around fidelity to scientific good practice – both of which have been raised as important in discussions about evidence use in political processes.

Is the onus on researchers?

Finally, I would also argue against Joshua’s conclusion that my work places the burden of resolving the problems on researchers. Paul argues above that he does this, but with good reason. I try not to do this. This is again because my book is not making an argument for more evidence to be ‘used’ per se (and I don’t expect policymakers to just want to use it either). Rather, I focus on identifying principles by which we can judge systems of evidence use, calling for guided incremental changes within national systems.

While I think academics can play an important role in establishing ‘best practice’ ideas, I explicitly argue that the mandate to establish, build, or incrementally change evidence advisory systems lies with the representatives of the people. Indeed, I include ‘stewardship’ as a core principle of my good governance of evidence framework to show that it should be those individuals who are accountable to the public that build these systems in different countries. Thus, the burden lies not with academics, but rather with our representatives – and, indirectly with all of us through the demands we make on them – to improve systems of evidence use.

 


Evidence based medicine provides a template for evidence based policy, but not in the way you expect

Guest post by Dr Kathryn Oliver and Dr Warren Pearce to celebrate the publication of their new Open Access article ‘Three lessons from evidence-based medicine and policy‘ in Palgrave Communications, part of the Open Access series ‘politics of evidence based policymaking‘ (for which we still welcome submissions).

Evidence-based medicine (EBM) is often described as a ‘template’ for evidence-based policymaking (EBPM).

Critics of this idea would be 100% right if EBM lived up to its inaccurate caricature, in which there is an inflexible ‘hierarchy of evidence’ which dismisses too much useful knowledge and closes off the ability of practitioners to use their judgement.

In politics, this would be disastrous because there are many sources of legitimate knowledge and ‘the evidence’ cannot and should not become an alternative to political choice. And, of course, politicians must use their judgement, as – unlike medicine – there is no menu of possible answers to any problem.

Yet, modern forms of EBM – or, at least, sensible approaches to it – do not live up to this caricature. Instead, EBM began as a way to support individual decision-makers, and has evolved to reflect new ways of thinking about three main dilemmas. The answers to these dilemmas can help improve policymaking.

How to be more transparent

First, evidence-informed clinical practice guidelines lead the way in transparency. There is a clear process to frame a problem, gather and assess evidence, and, through deliberative discussion with relevant stakeholders, decide on clinical recommendations. Alongside other tools and processes, this transparency increases trust in the system.

How to balance research and practitioner knowledge

Second, dialogues in EBM help us understand how to balance research and practitioner knowledge. EBM has moved beyond the provision of research evidence, towards recognising and legitimising a negotiation between individual contexts, the expertise of decision-makers, and technical advice on interpreting research findings for different settings.

How to be more explicit about how you balance evidence, power, and values

Third, EBM helps us think about how to share power to co-produce policy and to think about how we combine evidence, values, and our ideas about who commands the most legitimate sources of power and accountability. We know that new structures for dialogue and decision-making can formalise and codify processes, but they do not necessarily lead to inclusion of a diverse set of voices. Power matters in dictating what knowledge is produced, for whom, and what is done with it. EBM has offered as many negative as positive lessons so far, particularly when sources of research expertise have been reluctant to let go enough to really co-produce knowledge or policy, but new studies and frameworks are at least keeping this debate alive.

Overall, our discussion of EBM challenges critics to identify its real-world application, not the old caricature. If they do, it can help show how one of the most active research agendas, on the relationship between high quality evidence and effective action, provides lessons for politics. In the main, the lesson is that our aim is not simply to maximise the use of evidence in policy, but to maximise the credibility of evidence, and the legitimacy of evidence advocates, when so many other people have a legitimate claim to knowledge and authoritative action.


What do we need to know about the politics of evidence-based policymaking?

Today, I’m helping to deliver a new course – Engaging Policymakers Training Programme – piloted by the Alliance for Useful Evidence and UCL. Right now, it’s for UCL staff (and mostly early career researchers). My bit is about how we can better understand the policy process so that we can engage in it more effectively. I have reproduced the brief guide below (for my two 2-hour sessions as part of a wider block). If anyone else is delivering something similar, please let me know. We could compare notes.

This module will be delivered in two parts to combine theory and practice

Part 1: What do we need to know about the politics of evidence-based policymaking?

Policy theories provide a wealth of knowledge about the role of evidence in policymaking systems. They prompt us to understand and respond to two key dynamics:

  1. Policymaker psychology. Policymakers combine rational and irrational shortcuts to gather information and make good enough decisions quickly. To appeal to rational shortcuts and minimise cognitive load, we reduce uncertainty by providing syntheses of the available evidence. To appeal to irrational shortcuts and engage emotional interest, we reduce ambiguity by telling stories or framing problems in specific ways.
  2. Complex policymaking environments. These processes take place in the context of a policy environment out of the control of individual policymakers. Environments consist of: many actors in many levels and types of government; engaging with institutions and networks, each with their own informal and formal rules; responding to socioeconomic conditions and events; and, learning how to engage with dominant ideas or beliefs about the nature of the policy problem. In other words, there is no policy cycle or obvious stage in which to get involved.

In this seminar, we discuss how to respond effectively to these dynamics. We focus on unresolved issues:

  1. Effective engagement with policymakers requires storytelling skills, but do we possess them?
  2. It requires a combination of evidence and emotional appeals, but is it ethical to do more than describe the evidence?
  3. The absence of a policy cycle, and presence of an ever-shifting context, requires us to engage for the long term, to form alliances, learn the rules, and build up trust in the messenger. However, do we have the time, and how should we invest it?

The format will be relatively informal. Cairney will begin by making some introductory points (not a powerpoint-driven lecture) and encourage participants to relate the three questions to their research and engagement experience.

Gateway to further reading:

  • Paul Cairney and Richard Kwiatkowski (2017) ‘How to communicate effectively with policymakers: combine insights from psychology and policy studies’, Palgrave Communications
  • Paul Cairney and Kathryn Oliver (2017) ‘Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy?’ Health Research Policy and Systems (HARPS), DOI: 10.1186/s12961-017-0192-x
  • Paul Cairney, Kathryn Oliver, and Adam Wellstead (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review, Early View (forthcoming) DOI:10.1111/puar.12555 PDF

Part 2: How can we respond pragmatically and effectively to the politics of EBPM?

In this seminar, we move from abstract theory and general advice to concrete examples and specific strategies. Each participant should come prepared to speak about their research and present a theoretically informed policy analysis in 3 minutes (without the aid of powerpoint). Their analysis should address:

  1. What policy problem does my research highlight?
  2. What are the most technically and politically feasible solutions?
  3. How should I engage in the policy process to highlight these problems and solutions?

After each presentation, each participant should be prepared to ask questions about the problem raised and the strategy to engage. Finally, to encourage learning, we will reflect on the memorability and impact of presentations.

Powerpoint: Paul Cairney A4UE UCL 2017


A 5-step strategy to make evidence count

Let’s imagine a heroic researcher, producing the best evidence and fearlessly ‘speaking truth to power’. Then, let’s place this person in four scenarios, each of which combines a discussion of evidence, policy, and politics in different ways.

  1. Imagine your hero presents to HM Treasury an evidence-based report concluding that a unitary UK state would be far more efficient than a union state guaranteeing Scottish devolution. The evidence is top quality and the reasoning is sound, but the research question is ridiculous. The result of political deliberation and electoral choice suggests that your hero is asking a research question that does not deserve to be funded in the current political climate. Your hero is a clown.
  2. Imagine your hero presents to the Department of Health a report based on the systematic review of multiple randomised control trials. It recommends that you roll out an almost-identical early years or public health intervention across the whole country. We need high ‘fidelity’ to the model to ensure the correct ‘dosage’ and to measure its effect scientifically. The evidence is of the highest quality, but the research question is not quite right. The government has decided to devolve this responsibility to local public bodies and/ or encourage the co-production of public service design by local public bodies, communities, and service users. So, to focus narrowly on fidelity would be to ignore political choices (perhaps backed by different evidence) about how best to govern. If you don’t know the politics involved, you will ask the wrong questions or provide evidence with unclear relevance. Your hero is either a fool, naïve to the dynamics of governance, or a villain willing to ignore governance principles.        
  3. Imagine two fundamentally different – but equally heroic – professions with their own ideas about evidence. One favours a hierarchy of evidence in which RCTs and their systematic review are at the top, and service user and practitioner feedback is near the bottom. The other rejects this hierarchy completely, identifying the unique, complex relationship between practitioner and service user, which requires high discretion to make choices in situations that will differ each time. Trying to resolve a debate between them with reference to ‘the evidence’ makes no sense. This is about a conflict between two heroes with opposing beliefs and preferences that can only be resolved through compromise or political choice. This is, oh I don’t know, Batman v Superman, saved by Wonder Woman.
  4. Imagine you want the evidence on hydraulic fracturing for shale oil and gas. We know that ‘the evidence’ follows the question: how much can we extract? How much revenue will it produce? Is it safe, from an engineering point of view? Is it safe, from a public health point of view? What will be its impact on climate change? What proportion of the public supports it? What proportion of the electorate supports it? Who will win and lose from the decision? It would be naïve to think that there is some kind of neutral way to produce an evidence-based analysis of such issues. The commissioning and integration of evidence has to be political. To pretend otherwise is a political strategy. Your hero may be another person’s villain.

Now, let’s use these scenarios to produce a 5-step way to ‘make evidence count’.

Step 1. Respect the positive role of politics

A narrow focus on making the supply of evidence count, via ‘evidence-based policymaking’, will always be dispiriting because it ignores politics or treats political choice as an inconvenience. If we:

  • begin with a focus on why we need political systems to make authoritative choices between conflicting preferences, and take governance principles seriously, we can
  • identify the demand for evidence in that context, then be more strategic and pragmatic about making evidence count, and
  • be less dispirited about the outcome.

In other words, think about the positive and necessary role of democratic politics before bemoaning post-truth politics and policy-based-evidence-making.

Step 2. Reject simple models of evidence-based policymaking

Policy is not made in a cycle containing a linear series of separate stages, and we won’t ‘make evidence count’ simply by using that image to inform our practices.


You might not want to give up the cycle image because it presents a simple account of how you should make policy. It suggests that we elect policymakers who then: identify their aims, identify policies to achieve those aims, select a policy measure, ensure that the selection is legitimised by the population or its legislature, identify the necessary resources, implement, and then evaluate. Or, policymakers aided by expert policy analysts make and legitimise choices, skilful public servants carry them out, and policy analysts assess the results using evidence.

One compromise is to keep the cycle, then show how messy it is in practice.

However, there comes a point when there is too much mess, and the image no longer helps you explain (a) to the public what you are doing, or (b) to providers of evidence how they should engage in political systems. By this point, simple messages from more complicated policy theories may be more useful.

Or, we may no longer want a cycle to symbolise a single source of policymaking authority. In a multi-level system, with many ‘centres’ possessing their own sources of legitimate authority, a single and simple policy cycle seems too artificial to be useful.

Step 3. Tell a simple story about your evidence

People are ‘cognitive misers’ seeking ‘rational’ and ‘irrational’ shortcuts to gather information for action, so you won’t get far if you bombard them with too much evidence. Policymakers already have too much evidence and they seek ways to reduce their cognitive load, relying on: (a) trusted sources of concise evidence relevant to their aims, and (b) their own experience, gut instinct, beliefs, and emotions.

The implication of both shortcuts is that we need to tell simple and persuasive stories about the substance and implications of the evidence we present. To say that ‘the evidence does not speak for itself’ may seem trite, but I’ve met too many people who assume naively that it will somehow ‘win the day’. In contrast, civil servants know that the evidence-informed advice they give to ministers needs to relate to the story that government ministers tell to the public.


Step 4.  Tailor your story to many audiences

In a complex or multi-level environment, one story to one audience (such as a minister) is not enough. If there are many key sources of policymaking authority – including public bodies with high autonomy, organisations and practitioners with the discretion to deliver services, and service users involved in designing services – there are many stories being told about what we should be doing and why. We may convince one audience and alienate (or fail to inspire) another with the same story.

Step 5. Clarify and address key dilemmas with political choice, not evidence

Let me give you one example of the dilemmas that must arise when you combine evidence and politics to produce policy: how do you produce a model of ‘evidence based best practice’ which combines evidence and governance principles in a consistent way? Here are 3 ideal-type models which answer the question in very different ways:

Table 1: Three ideal types of evidence-based best practice (EBBP)

The table helps us think through the tensions between models, built on very different principles of good evidence and governance.

In practice, you may want to combine different elements, perhaps while arguing that the loss of consistency is lower than the gain from flexibility. Or, the dynamics of political systems limit such choice or prompt ad hoc and inconsistent choices.

I built a lot of this analysis on the experiences of the Scottish Government, which juggles all three models, including a key focus on the improvement method in its Early Years Collaborative.

However, Kathryn Oliver and I show that the UK government faces the same basic dilemma and addresses it in similar ways.

The example freshest in my mind is Sure Start. Its rationale was built on RCT evidence and systematic review. However, its roll-out was built more on local flexibility and service design than insistence on fidelity to a model. More recently, the Troubled Families programme initially set the policy agenda and criteria for inclusion, but increasingly invites local public bodies to select the most appropriate interventions, aided by the Early Intervention Foundation which reviews the evidence but does not insist on one-best-way. Emily St Denny and I explore these issues further in our forthcoming book on prevention policy, an exemplar case study of a field in which it is difficult to know how to ‘make evidence count’.

If you prefer a 3-step take home message:

  1. I think we use phrases like ‘impact’ and ‘make evidence count’ to reflect a vague and general worry about a decline in respect for evidence and experts. Certainly, when I go to large conferences of scientists, they usually tell a story about ‘post-truth’ politics.
  2. Usually, these stories do not acknowledge the difference between two different explanations for an evidence-policy gap: (a) pathological policymaking and corrupt politicians, versus (b) complex policymaking and politicians having to make choices despite uncertainty.
  3. To produce evidence with ‘impact’, and know how to ‘make evidence count’, we need to understand the policy process and the demand for evidence within it.

*Background. This is a post for my talk at the Government Economic Service and Government Social Research Service Annual Training Conference (15th September 2017). This year’s theme is ‘Impact and Future-Proofing: Making Evidence Count’. My brief is to discuss evidence use in the Scottish Government, but it faces the same basic question as the UK Government: how do you combine principles of evidence quality and governance? In other words, if you were in a position to design (a) an evidence-gathering system and (b) a political system, you’d soon find major points of tension between them. Resolving those tensions involves political choice, not more evidence. Of course, you are not in a position to design both systems, so the more complicated question is: how do you satisfy principles of evidence and governance in a complex policy process, often driven by policymaker psychology, over which you have little control? Here are 7 different ‘answers’.

Powerpoint Paul Cairney @ GES GSRS 2017


The role of ‘standards for evidence’ in ‘evidence informed policymaking’

Key points:

  • Maintaining strict adherence to evidence standards is like tying your hands behind your back
  • There is an inescapable trade-off between maintaining scientific distance for integrity and using evidence pragmatically to ensure its impact
  • So, we should not divorce discussions of evidence standards from evidence use

I once spoke with a policymaker from a health unit who described the unintended consequences of their self-imposed evidence standards. They held themselves to such a high standard of evidence that very few studies met their requirements. So, they often had a very strong sense of ‘what works’ but, by their own standards, could not express much confidence in their evidence base.

As a result, their policy recommendations were tentative and equivocal, and directed at a policymaker audience looking for strong and unequivocal support for (often controversial) policy solutions before putting their weight behind them. Even if evidence advocates had (what they thought to be) the best available evidence, they would not make enough of it. Instead, they valued their reputations, based on their scientific integrity, producing the best evidence, and not making inflated claims about the policy implications. Let’s wait for more evidence, just to be sure. Let’s not use suboptimal evidence, even if it’s all we have.

Your competitors do not tie their own hands behind their backs in this way

I say this because I have attended many workshops, in the last year, in which we discuss principles for science advice and guidelines or standards for the evidence part of ‘evidence-based’ or ‘evidence-informed’ policymaking.

During such discussions, it is common for people to articulate the equivalent of crossing their fingers and hoping that they can produce rules for the highest evidence standards without the unintended consequences. If you are a fan of Field of Dreams, we can modify the slogan: if you build it (the evidence base), they will come (policymakers will use it sincerely, and we’ll all be happy).*


Or, if you are more of a fan of Roger Pielke Jr, you can build the evidence base while remaining an ‘honest broker’, providing evidence without advocacy. Ideally, we’d want to maintain scientific integrity and have a major impact on policy (akin to me wanting to eat chips all day and lose weight) but, in the real world, may settle for the former.

If so, perhaps a more realistic way of phrasing the question would be: what rules for evidence should a small group of often-not-very-influential people agree among themselves? In doing so, we recognise that very few policy actors will follow these rules.

What happens when we don’t divorce a discussion of (a) standards of evidence from (b) the use of evidence for policy impact?

The latter depends on far more than evidence, such as the usual factors we discuss in these workshops, including trust in the messenger, and providing a ‘timely’ message. Perhaps a high-standard evidence base helps the former (providing a Kite Mark for evidence) and one aspect of the latter (the evidence is there when you demand it). However, policy studies-inspired messages go much further, such as in Three habits of successful entrepreneurs, which describes the strategies people use for impact:

  1. They tell simple and persuasive stories to generate demand for their evidence
  2. They have a technically and politically feasible (evidence-based) policy solution ready to chase policy problems
  3. They adapt their strategies to the scale of their policy environments, akin to surfers in large and competitive political systems, but more like Poseidon in less competitive ‘policy communities’ or subnational venues.

In such cases, the availability of evidence becomes secondary to:

  1. the way you use evidence to frame a policy problem, which is often more about the way you connect information to policymaker demand than the quality of the evidence.


  2. your skills in being able to spot the right time to present evidence-based solutions, which is not about a mythical policy cycle, and not really about the availability of evidence or speed of delivery.


So, when we talk about any guidance for evidence advocates, such as that pursued by INGSA, I think you will always find these tensions between evidence quality and scientific integrity on the one hand, and ‘timeliness’ or impact on the other. You don’t address the need for timely evidence simply by making sure that the evidence exists in a database.

I discuss these tensions further on the INGSA website: Principles of science advice to government: key problems and feasible solutions


*Perhaps you’d like to point out that when Ray Kinsella built it (the baseball field in his cornfield), he did come (the ghost of Shoeless Joe Jackson appeared to play baseball there). I’m sorry to have to tell you this, but actually that was Ray Liotta pretending to be Jackson.

 


Kathryn Oliver and I have just published an article on the relationship between evidence and policy

Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy?

“There is extensive health and public health literature on the ‘evidence-policy gap’, exploring the frustrating experiences of scientists trying to secure a response to the problems and solutions they raise and identifying the need for better evidence to reduce policymaker uncertainty. We offer a new perspective by using policy theory to propose research with greater impact, identifying the need to use persuasion to reduce ambiguity, and to adapt to multi-level policymaking systems”.

In the article, we use a table to describe how the policy process works, how effective actors respond, and the dilemmas that arise for advocates of scientific evidence: should they act this way too?

We summarise this argument in two posts for:

The Guardian If scientists want to influence policymaking, they need to understand it

Sax Institute The evidence policy gap: changing the research mindset is only the beginning

The article is part of a wider body of work in which one or both of us considers the relationship between evidence and policy in different ways, including:

Paul Cairney, Kathryn Oliver, and Adam Wellstead (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review PDF

Paul Cairney (2016) The Politics of Evidence-Based Policy Making (PDF)

Oliver, K., Innvær, S., Lorenc, T., Woodman, J. and Thomas, J. (2014a) ‘A systematic review of barriers to and facilitators of the use of evidence by policymakers’, BMC Health Services Research, 14 (1), 2. http://www.biomedcentral.com/1472-6963/14/2

Oliver, K., Lorenc, T., & Innvær, S. (2014b) ‘New directions in evidence-based policy research: a critical analysis of the literature’, Health Research Policy and Systems, 12, 34 http://www.biomedcentral.com/content/pdf/1478-4505-12-34.pdf

Paul Cairney (2016) Evidence-based best practice is more political than it looks in Evidence and Policy

Many of my blog posts explore how people like scientists or researchers might understand and respond to the policy process:

The Science of Evidence-based Policymaking: How to Be Heard

When presenting evidence to policymakers, engage with the policy process that exists, not the process you wish existed

Policy Concepts in 1000 Words: ‘Evidence Based Policymaking’

‘Evidence-based Policymaking’ and the Study of Public Policy

How far should you go to secure academic ‘impact’ in policymaking?

Political science improves our understanding of evidence-based policymaking, but does it produce better advice?

Psychology Based Policy Studies: 5 heuristics to maximise the use of evidence in policymaking

What 10 questions should we put to evidence for policy experts?

Why doesn’t evidence win the day in policy and policymaking?

We all want ‘evidence based policy making’ but how do we do it?

How can political actors take into account the limitations of evidence-based policy-making? 5 key points

The Politics of Evidence Based Policymaking: 3 messages

The politics of evidence-based best practice: 4 messages

The politics of implementing evidence-based policies

There are more posts like this on my EBPM page

I am also guest editing a series of articles for the Open Access journal Palgrave Communications on the ‘politics of evidence-based policymaking’ and we are inviting submissions throughout 2017.

There are more details on that series here.

And finally, if you’d like to read about the policy theories underpinning these arguments, see Key policy theories and concepts in 1000 words and 500 words.

 

 


How far should you go to secure academic ‘impact’ in policymaking? From ‘honest brokers’ to ‘research purists’ and Machiavellian manipulators

Long read for Political Studies Association annual conference 2017 panel Rethinking Impact: Narratives of Research-Policy Relations. There is a paper too, but I’ve hidden it in the text like an Easter Egg hunt.

I’ve watched a lot of film and TV dramas over the decades. Many have the same basic theme, characters, and moral:

  1. There is a villain getting away with something, such as cheating at sport or trying to evict people to make money on a property deal.
  2. There are some characters who complain that life is unfair and there’s nothing they can do about it.
  3. A hero emerges to inspire the other characters to act as a team/ fight the system and win the day. Think of a range from Wyldstyle to Michael Corleone.

For many scientists right now, the villains are people like Trump or Farage, Trump’s election and Brexit symbolise an unfairness on a grand scale, and there’s little they can do about it in a ‘post-truth’ era in which people have had enough of facts and experts. Or, when people try to mobilise, they are unsure about what to do or how far they are willing to go to win the day.

These issues are playing out in different ways, from the March for Science to the conferences informing debates on modern principles of government-science advice (see INGSA). Yet, the basic question is the same when scientists are trying to re-establish a particular role for science in the world: can you present science as (a) a universal principle and (b) unequivocal resource for good, producing (c) evidence so pure that it speaks for itself, regardless of (d) the context in which specific forms of scientific evidence are produced and used?

Of course not. Instead, we are trying to privilege the role of science and scientific evidence in politics and policymaking without always acknowledging that these activities are political acts:

(a) selling scientific values rather than self-evident truths, and

(b) using particular values to cement the status of particular groups at the expense of others, either within the scientific profession (in which some disciplines and social groups win systematically) or within society (in which scientific experts generally enjoy privileged positions in policymaking arenas).

Politics is about exercising power to win disputes, from visible acts to win ‘key choices’, to less visible acts to keep issues off agendas and reinforce the attitudes and behaviours that systematically benefit some groups at the expense of others.

To deny this link between science, politics and power – in the name of ‘science’ – is (a) silly, and (b) not scientific, since there is a wealth of policy science out there which highlights this relationship.

Instead, academic and working scientists should make better use of their political-thinking-time to consider this basic dilemma regarding political engagement: how far are you willing to go to make an impact and get what you want?  Here are three examples.

  1. How energetically should you give science advice?

My impression is that most scientists feel most comfortable with the unfortunate idea of separating facts from values (rejected by Douglas), and living life as ‘honest brokers’ rather than ‘issue advocates’ (a pursuit described by Pielke and critiqued by Jasanoff). For me, this is generally a cop-out since it puts the responsibility on politicians to understand the implications of scientific evidence, as if they were self-evident, rather than on scientists to explain the significance in a language familiar to their audience.

On the other hand, the alternative is not really clear. ‘Getting your hands dirty’, to maximise the uptake of evidence in politics, is a great metaphor but a hopeless blueprint, especially when you, as part of a notional ‘scientific community’, face trade-offs between doing what you think is the right thing and getting what you want.

There are 101 examples of these individual choices that make up one big engagement dilemma. One of my favourite examples from table 1 is as follows:

One argument stated frequently is that, to be effective in policy, you should put forward scientists with a particular background trusted by policymakers: white men in their 50s with international reputations and strong networks in their scientific field. This way, they resemble the profile of key policymakers who tend to trust people already familiar to them. Another is that we should widen out science and science advice, investing in a new and diverse generation of science-policy specialists, to address the charge that science is an elite endeavour contributing to inequalities.

  2. How far should you go to ensure that the ‘best’ scientific evidence underpins policy?

Kathryn Oliver and I identify the dilemmas that arise when principles of evidence-production meet (a) principles of governance and (b) real world policymaking. Should scientists learn how to be manipulative, to combine evidence and emotional appeals to win the day? Should they reject other forms of knowledge, and particular forms of governance, if they think they get in the way of the use of the best evidence in policymaking?

Table 1 (Cairney and Oliver, 2017)

  3. Is it OK to use psychological insights to manipulate policymakers?

Richard Kwiatkowski and I mostly discuss how to be manipulative if you make that leap. Or, to put it less dramatically, how to identify relevant insights from psychology, apply them to policymaking, and decide how best to respond. Here, we propose five heuristics for engagement:

  1. developing heuristics to respond positively to ‘irrational’ policymaking
  2. tailoring framing strategies to policymaker bias
  3. identifying the right time to influence individuals and processes
  4. adapting to real-world (dysfunctional) organisations rather than waiting for an orderly process to appear, and
  5. recognising that the biases we ascribe to policymakers are present in ourselves and our own groups

Then there is the impact agenda, which describes something very different

I say these things to link to our PSA panel, in which Christina Boswell and Katherine Smith sum up (in their abstract) the difference between the ways in which we are expected to demonstrate academic impact, and the practices that might actually produce real impact:

“Political scientists are increasingly exhorted to ensure their research has policy ‘impact’, most notably in the form of REF impact case studies, and ‘pathways to impact’ plans in ESRC funding. Yet the assumptions underpinning these frameworks are frequently problematic. Notions of ‘impact’, ‘engagement’ and ‘knowledge exchange’ are typically premised on simplistic and linear models of the policy process, according to which policy-makers are keen to ‘utilise’ expertise to produce more effective policy interventions”.

I then sum up the same thing but with different words in my abstract:

“The impact agenda prompts strategies which reflect the science literature on ‘barriers’ between evidence and policy: produce more accessible reports, find the right time to engage, encourage academic-practitioner workshops, and hope that policymakers have the skills to understand and the motive to respond to your evidence. Such strategies are built on the idea that scientists serve to reduce policymaker uncertainty, with a linear connection between evidence and policy. Yet, the literature informed by policy theory suggests that successful actors combine evidence and persuasion to reduce ambiguity, particularly when they know where the ‘action’ is within complex policymaking systems”.

The implications for the impact agenda are interesting, because there is a big difference between (a) the fairly banal ways in which we might make it easier for policymakers to see our work, and (b) the more exciting and sinister-looking ways in which we might make more persuasive cases. Yet, our incentive remains to produce the research and play it safe, producing examples of ‘impact’ that, on the whole, seem more reportable than remarkable.

15 Comments

Filed under Evidence Based Policymaking (EBPM), Public health, public policy

Why doesn’t evidence win the day in policy and policymaking?

Politics has a profound influence on the use of evidence in policy, but we need to look ‘beyond the headlines’ for a sense of perspective on its impact.

It is tempting for scientists to identify the pathological effect of politics on policymaking, particularly after high profile events such as the ‘Brexit’ vote in the UK and the election of Donald Trump as US President. We have allegedly entered an era of ‘post-truth politics’ in which ideology and emotion trump evidence and expertise (a story told many times at events like this), particularly when issues are salient.

Yet, most policy is processed out of this public spotlight, because the flip side of high attention to one issue is minimal attention to most others. Science has a crucial role in this more humdrum day-to-day business of policymaking, which is far more important than it is visible. Indeed, this lack of public visibility can help many actors secure a privileged position in the policy process (and further exclude citizens).

In some cases, experts are consulted routinely. There is often a ‘logic’ of consultation with the ‘usual suspects’, including the actors most able to provide evidence-informed advice. In others, scientific evidence is often so taken for granted that it is part of the language in which policymakers identify problems and solutions.

In that context, we need better explanations of an ‘evidence-policy’ gap than the pathologies of politics and egregious biases of politicians.

To understand this process, and the apparent contradiction between excluded and privileged experts, consider the role of evidence in politics and policymaking from three different perspectives.

The perspective of scientists involved primarily in the supply of evidence

Scientists produce high quality evidence, only for politicians to ignore it or, even worse, distort its message to support their ideologically-driven policies. If they expect ‘evidence-based policymaking’ they soon become disenchanted and conclude that ‘policy-based evidence’ is more likely. This perspective has long been expressed in scientific journals and commentaries, but has taken on new significance following ‘Brexit’ and Trump.

The perspective of elected politicians

Elected politicians are involved primarily in managing government and maximising public and organisational support for policies. So, scientific evidence is one piece of a large puzzle. They may begin with a manifesto for government and, if elected, feel an obligation to carry it out. Evidence may play a part in that process but the search for evidence on policy solutions is not necessarily prompted by evidence of policy problems.

Further, ‘evidence based policy’ is one of many governance principles that politicians feel the need to juggle. For example, in Westminster systems, ministers may try to delegate policymaking to foster ‘localism’ and/ or pragmatic policymaking, but also intervene to appear to be in control of policy, to foster a sense of accountability built on an electoral imperative. The likely mix of delegation and intervention seems almost impossible to predict, and this dynamic has a knock-on effect for evidence-informed policy. In some cases, central governments roll out the same basic policy intervention and limit local discretion; in others, they identify broad outcomes and invite other bodies to gather evidence on how best to meet them. These differences in approach can have profound consequences for the models of evidence-informed policy available to us (see the example of Scottish policymaking).

Political science and policy studies provide a third perspective

Policy theories help us identify the relationship between evidence and policy by showing that a modern focus on ‘evidence-based policymaking’ (EBPM) is one of many versions of the same fairy tale – about ‘rational’ policymaking – that have developed in the post-war period. We talk about ‘bounded rationality’ to identify key ways in which policymakers or organisations could not achieve ‘comprehensive rationality’:

  1. They cannot separate values and facts.
  2. They have multiple, often unclear, objectives which are difficult to rank in any meaningful way.
  3. They have to use major shortcuts to gather a limited amount of information in a limited time.
  4. They can’t make policy from the ‘top down’ in a cycle of ordered and linear stages.

Limits to ‘rational’ policymaking: two shortcuts to make decisions

We can sum up the first three bullet points with one statement: policymakers have to try to evaluate and solve many problems without the ability to understand what they are, how they feel about them as a whole, and what effect their actions will have.

To do so, they use two shortcuts: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, habits, and the familiar to make decisions quickly.

Consequently, the focus of policy theories is on the links between evidence, persuasion, and framing issues to produce or reinforce a dominant way to define policy problems. Successful actors combine evidence and emotional appeals or simple stories to capture policymaker attention, and/ or help policymakers interpret information through the lens of their strongly-held beliefs.

Scientific evidence plays its part, but scientists often make the mistake of trying to bombard policymakers with evidence when they should be trying to (a) understand how policymakers understand problems, so that they can anticipate their demand for evidence, and (b) frame their evidence according to the cognitive biases of their audience.

Policymaking in ‘complex systems’ or multi-level policymaking environments

Policymaking takes place in a less ordered, less hierarchical, and less predictable environment than suggested by the image of the policy cycle. Such environments are made up of:

  1. a wide range of actors (individuals and organisations) influencing policy at many levels of government
  2. a proliferation of rules and norms followed by different levels or types of government
  3. close relationships (‘networks’) between policymakers and powerful actors
  4. a tendency for certain beliefs or ‘paradigms’ to dominate discussion
  5. shifting policy conditions and events that can prompt policymaker attention to lurch at short notice.

These five properties – plus a ‘model of the individual’ built on a discussion of ‘bounded rationality’ – make up the building blocks of policy theories (many of which I summarise in 1000 Word posts). I say this partly to aid interdisciplinary conversation: of course, each theory has its own literature and jargon, and it is difficult to compare and combine their insights, but if you are trained in a different discipline it’s unfair to ask you to devote years of your life to studying policy theory to end up at this point.

To show that policy theories have a lot to offer, I have been trying to distil their collective insights into a handy guide – using this same basic format – that you can apply to a variety of different situations, from explaining painfully slow policy change in some areas but dramatic change in others, to highlighting ways in which you can respond effectively.

We can use this approach to help answer many kinds of questions. With my Southampton gig in mind, let’s use some examples from public health and prevention.

Why doesn’t evidence win the day in tobacco policy?

My colleagues and I try to explain why it takes so long for the evidence on smoking and health to have a proportionate impact on policy. Usually, at the back of my mind, is a public health professional audience trying to work out why policymakers don’t act quickly or effectively enough when presented with unequivocal scientific evidence. More recently, they wonder why there is such uneven implementation of a global agreement – the WHO Framework Convention on Tobacco Control – that almost every country in the world has signed.

We identify three conditions under which evidence will ‘win the day’:

  1. Actors are able to use scientific evidence to persuade policymakers to pay attention to, and shift their understanding of, policy problems. In leading countries, it took decades to command attention to the health effects of smoking, reframe tobacco primarily as a public health epidemic (not an economic good), and generate support for the most effective evidence-based solutions.
  2. The policy environment becomes conducive to policy change. A new and dominant frame helps give health departments (often in multiple venues) a greater role; health departments foster networks with public health and medical groups at the expense of the tobacco industry; and, they emphasise the socioeconomic conditions – reductions in smoking prevalence, in opposition to tobacco control, and in the economic benefits of tobacco – supportive of tobacco control.
  3. Actors exploit ‘windows of opportunity’ successfully. A supportive frame and policy environment maximises the chances of high attention to a public health epidemic and provides the motive and opportunity for policymakers to select relatively restrictive policy instruments.

So, scientific evidence is a necessary but insufficient condition for major policy change. Key actors do not simply respond to new evidence: they use it as a resource to further their aims, to frame policy problems in ways that will generate policymaker attention, and underpin technically and politically feasible solutions that policymakers will have the motive and opportunity to select. This remains true even when the evidence seems unequivocal and when countries have signed up to an international agreement which commits them to major policy change. Such commitments can only be fulfilled over the long term, when actors help change the policy environment in which these decisions are made and implemented. So far, this change has not occurred in most countries (or, in other aspects of public health in the UK, such as alcohol policy).

Why doesn’t evidence win the day in prevention and early intervention policy?

UK and devolved governments draw on health and economic evidence to make a strong and highly visible commitment to preventive policymaking, in which the aim is to intervene earlier in people’s lives to improve wellbeing and reduce socioeconomic inequalities and/ or public sector costs. This agenda has existed in one form or another for decades without the same signs of progress we now associate with areas like tobacco control. Indeed, the comparison is instructive, since prevention policy rarely meets the three conditions outlined above:

  1. Prevention is a highly ambiguous term and many actors make sense of it in many different ways. There is no equivalent to a major shift in problem definition for prevention policy as a whole, and little agreement on how to determine the most effective or cost-effective solutions.
  2. A supportive policy environment is far harder to identify. Prevention policy cross-cuts many policymaking venues at many levels of government, with little evidence of ‘ownership’ by key venues. Consequently, there are many overlapping rules on how and from whom to seek evidence. Networks are diffuse and hard to manage. There is no dominant way of thinking across government (although the Treasury’s ‘value for money’ focus is key currency across departments). There are many socioeconomic indicators of policy problems but little agreement on how to measure them or which measures to privilege (particularly when predicting future outcomes).
  3. The ‘window of opportunity’ was to adopt a vague solution to an ambiguous policy problem, providing a limited sense of policy direction. There have been several ‘windows’ for more specific initiatives, but their links to an overarching policy agenda are unclear.

These limitations help explain slow progress in key areas. The absence of an unequivocal frame, backed strongly by key actors, leaves policy change vulnerable to successful opposition, especially in areas where early intervention has major implications for redistribution (taking from existing services to invest in others) and personal freedom (encouraging or obliging behavioural change). The vagueness and long term nature of policy aims – to solve problems that often seem intractable – makes them uncompetitive, and often undermined by more specific short term aims with a measurable pay-off (as when, for example, funding for public health loses out to funding to shore up hospital management). It is too easy to reframe existing policy solutions as preventive if the definition of prevention remains slippery, and too difficult to demonstrate the population-wide success of measures generally applied to high risk groups.

What happens when attitudes to two key principles – evidence based policy and localism – play out at the same time?

A lot of discussion of the politics of EBPM assumes that there is something akin to a scientific consensus on which policymakers do not act proportionately. Yet, in many areas – such as social policy and social work – there is great disagreement on how to generate and evaluate the best evidence. Broadly speaking, a hierarchy of evidence built on ‘evidence based medicine’ – which has randomised control trials and their systematic review at the top, and practitioner knowledge and service user feedback at the bottom – may be completely subverted by other academics and practitioners. This disagreement helps produce a spectrum of ways in which we might roll-out evidence based interventions, from an RCT-driven roll-out of the same basic intervention to a storytelling driven pursuit of tailored responses built primarily on governance principles (such as to co-produce policy with users).

At the same time, governments may be wrestling with their own governance principles, including EBPM but also regarding the most appropriate balance between centralism and localism.

If you put both concerns together, you have a variety of possible outcomes (and a temptation to ‘let a thousand flowers bloom’) and a set of competing options (outlined in table 1), all under the banner of ‘evidence based’ policymaking.

Table 1: Three ideal types of EBBP

What happens when a small amount of evidence goes a very long way?

So, even if you imagine a perfectly sincere policymaker committed to EBPM, you’d still not be quite sure what they took it to mean in practice. If you assume this commitment is a bit less sincere, and you add in the need to act quickly to use the available evidence and satisfy your electoral audience, you get all sorts of responses based in some part on a reference to evidence.

One fascinating case is the UK Government’s ‘troubled families’ programme, which combined bits and pieces of evidence with ideology and a Westminster-style accountability imperative, to produce:

  • The argument that the London riots were caused by family breakdown and bad parenting.
  • The use of proxy measures to identify the most troubled families
  • The use of superficial performance management to justify notionally extra expenditure for local authorities
  • The use of evidence in a problematic way, from exaggerating the success of existing ‘family intervention projects’ to sensationalising neuroscientific images related to brain development in deprived children …

[image: ‘normal brain’]

…but also

In other words, some governments feel the need to dress up their evidence-informed policies in a language appropriate to Westminster politics. Unless we understand this language, and the incentives for elected policymakers to use it, we will fail to understand how to act effectively to influence those policymakers.

What can you do to maximise the use of evidence?

When you ask this generic question, you can generate a set of transferable strategies to engage in policymaking:

[image: how to be heard]

[image: EBPM – 5 things to do]

Yet, as these case studies of public health and social policy suggest, the question lacks sufficient meaning when applied to real world settings. Would you expect the advice that I give to (primarily) natural scientists (primarily in the US) to be identical to advice for social scientists in specific fields (in, say, the UK)?

No, you’d expect me to end with a call for more research! See for example this special issue in which many scholars from many disciplines suggest insights on how to maximise the use of evidence in policy.


13 Comments

Filed under Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, tobacco, tobacco policy

The Science of Evidence-based Policymaking: How to Be Heard

I was interviewed in Science, on the topic of evidence-based policymaking, and we discussed some top tips for people seeking to maximise the use of evidence in a complex policy process (or, perhaps, feel less dispirited about the lack of EBPM in many cases). If it sparks your interest, I have some other work on this topic:

I am editing a series of forthcoming articles on maximising the use of scientific evidence in policy, and the idea is that health and environmental scientists can learn from many other disciplines about how to, for example, anticipate policymaker psychology, find the right policymaking venue, understand its rules and ‘currency’ (the language people use, to reflect dominant ways of thinking about problems), and tell effective stories to the right people.


I have also completed a book, some journal articles (PAR, E&P), and some blog posts on the ‘politics of evidence-based policymaking’.


Two posts appear in the Guardian political science blog (me, me and Kathryn Oliver).

One post, for practitioners, has ‘5 things you need to know’, and it links to presentations on the same theme to different audiences (Scotland, US, EU).


In this post, I’m trying to think through in more detail what we do with such insights.

The insights I describe come from policy theory, and I have produced 25 posts which introduce each of them in 1000 words (or, if you are super busy, 500 words). For example, the Science interview mentions a spirograph of many cycles, which is a reference to the idea of a policy cycle. Also look out for the 1000-word posts on framing and narrative and think about how they relate to the use of storytelling in policy.

If you like what you see, and want to see more, have a look at my general list of offerings (home page) or list of books and articles with links to their PDFs (CV).


5 Comments

Filed under Evidence Based Policymaking (EBPM), public policy, Storytelling

We all want ‘evidence based policy making’ but how do we do it?

Here are some notes for my talk to the Scottish Government on Thursday as part of its inaugural ‘evidence in policy week’. The advertised abstract is as follows:

A key aim in government is to produce ‘evidence based’ (or ‘informed’) policy and policymaking, but it is easier said than done. It involves two key choices about (1) what evidence counts and how you should gather it, and (2) the extent to which central governments should encourage subnational policymakers to act on that evidence. Ideally, the principles we use to decide on the best evidence should be consistent with the governance principles we adopt to use evidence to make policy, but what happens when they seem to collide? Cairney provides three main ways in which to combine evidence and governance-based principles to help clarify those choices.

I plan to use the same basic structure of the talks I gave to the OSF (New York) and EUI-EP (Florence) in which I argue that every aspect of ‘evidence based policy making’ is riddled with the necessity to make political choices (even when we define EBPM):

[image: EBPM – 5 things to do]

I’ll then ‘zoom in’ on points 4 and 5 regarding the relationship between EBPM and governance principles. They are going to videotape the whole discussion to use for internal discussions, but I can post the initial talk here when it becomes available. Please don’t expect a TED talk (especially the E part of TED).

EBPM and good governance principles

The Scottish Government has a reputation for taking certain governance principles seriously, to promote high stakeholder ‘ownership’ of policy and ‘localism’, and to produce the image of a:

  1. Consensual consultation style in which it works closely with interest groups, public bodies, local government organisations, voluntary sector and professional bodies, and unions when making policy.
  2. Trust-based implementation style indicating a relative ability or willingness to devolve the delivery of policy to public bodies, including local authorities, in a meaningful way.

Many aspects of this image were cultivated by former Permanent Secretaries: Sir John Elvidge described a ‘Scottish Model’ focused on joined-up government and outcomes-based approaches to policymaking and delivery, and Sir Peter Housden labelled the ‘Scottish Approach to Policymaking’ (SATP) as an alternative to the UK’s command-and-control model of government, focusing on the ‘co-production’ of policy with local communities and citizens.

The ‘Scottish Approach’ has implications for evidence based policy making

Note the major implication for our definition of EBPM. One possible definition, derived from ‘evidence based medicine’, refers to a hierarchy of evidence in which randomised control trials and their systematic review are at the top, while expertise, professional experience and service user feedback are close to the bottom. An uncompromising use of RCTs in policy requires that we maintain a uniform model, with the same basic intervention adopted and rolled out within many areas. The focus is on identifying an intervention’s ‘active ingredient’, applying the correct dosage, and evaluating its success continuously.

This approach seems to challenge the commitment to localism and ‘co-production’.

At the other end of the spectrum is a storytelling approach to the use of evidence in policy. In this case, we begin with key governance principles – such as valuing the ‘assets’ of individuals and communities – and invite people to help make and deliver policy. Practitioners and service users share stories of their experiences and invite others to learn from them. There is no model of delivery and no ‘active ingredient’.

This approach seems to challenge the commitment to ‘evidence based policy’.

The Goldilocks approach to evidence based policy making: the improvement method

We can understand the Scottish Government’s often-preferred method in that context. It has made a commitment to:

Service performance and improvement underpinned by data, evidence and the application of improvement methodologies

So, policymakers use many sources of evidence to identify promising interventions, make broad recommendations to practitioners about the outcomes they seek, and train practitioners in the improvement method (a form of continuous learning summed up by a ‘Plan-Do-Study-Act’ cycle).

Table 1: Three ideal types of EBBP

This approach appears to offer the best of both worlds; just the right mix of central direction and local discretion, with the promise of combining well-established evidence from sources including RCTs with evidence from local experimentation and experience.

Four unresolved issues in decentralised evidence-based policy making

Not surprisingly, our story does not end there. I think there are four unresolved issues in this process:

  1. The Scottish Government often indicates a preference for improvement methods but actually supports all three of the methods I describe. This might reflect an explicit decision to ‘let a thousand flowers bloom’ or the inability to establish a favoured approach.
  2. There is not a single way of understanding ‘improvement methodology’. I describe something akin to a localist model here, but other people describe a far more research-led and centrally coordinated process.
  3. Anecdotally, I hear regularly that key stakeholders do not like the improvement method. One could interpret this as a temporary problem, before people really get it and it starts to work, or a fundamental difference between some people in government and many of the local stakeholders so important to the ‘Scottish approach’.

4. The spectre of democratic accountability and the politics of EBPM

The fourth unresolved issue is the biggest: it’s difficult to know how this approach connects with the most important reference in Scottish politics: the need to maintain Westminster-style democratic accountability, through periodic elections and more regular reports by ministers to the Scottish Parliament. This requires a strong sense of central government and ministerial control – if you know who is in charge, you know who to hold to account or reward or punish in the next election.

In principle, the ‘Scottish approach’ provides a way to bring together key aims into a single narrative. An open and accessible consultation style maximises the gathering of information and advice and fosters group ownership. A national strategic framework, with cross-cutting aims, reduces departmental silos and balances an image of democratic accountability with the pursuit of administrative devolution, through partnership agreements with local authorities, the formation of community planning partnerships, and the encouragement of community and user-driven design of public services. The formation of relationships with public bodies and other organisations delivering services, based on trust, fosters the production of common aims across the public sector, and reduces the need for top-down policymaking. An outcomes-focus provides space for evidence-based and continuous learning about what works.

In practice, a government often needs to appear to take quick and decisive action from the centre, demonstrate policy progress and its role in that progress, and intervene when things go wrong. So, alongside localism it maintains a legislative, financial, and performance management framework which limits localism.

How far do you go to ensure EBPM?

So, when I describe the ‘5 things to do’, usually the fifth element is about how far scientists may want to go, to insist on one model of EBPM when it has the potential to contradict important governance principles relating to consultation and localism. For a central government, the question is starker:

Do you have much choice about your model of EBPM when the democratic imperative is so striking?

I’ll leave it there on a cliffhanger, since these are largely questions to prompt discussion in specific workshops. If you can’t attend, there is further reading on the EBPM and EVIDENCE tabs on this blog, and specific papers on the Scottish dimension:

The ‘Scottish Approach to Policy Making’: Implications for Public Service Delivery

Paul Cairney, Siabhainn Russell and Emily St Denny (2016) “The ‘Scottish approach’ to policy and policymaking: what issues are territorial and what are universal?” Policy and Politics, 44, 3, 333-50

The politics of evidence-based best practice: 4 messages


4 Comments

Filed under ESRC Scottish Centre for Constitutional Change, Evidence Based Policymaking (EBPM), public policy, Scottish politics, Storytelling

How can political actors take into account the limitations of evidence-based policy-making? 5 key points

These notes are for my brief panel talk at the European Parliament-European University Institute ‘Policy Roundtable’: Evidence and Analysis in EU Policy-Making: Concepts, Practice and Governance. As you can see from the programme description, the broader theme is about how EU institutions demonstrate their legitimacy through initiatives such as stakeholder participation and evidence-based policymaking (EBPM). So, part of my talk is about what happens when EBPM does not exist.

The post is a slightly modified version of my (recorded) talk for Open Society Foundations (New York) but different audiences make sense of these same basic points in very different ways.

  1. Recognise that the phrase ‘evidence-based policy-making’ means everything and nothing

The main limitation to ‘evidence-based policy-making’ is that no-one really knows what it is or what the phrase means. So, each actor makes sense of EBPM in different ways and you can tell a lot about each actor by the way in which they answer these questions:

  • Should you use restrictive criteria to determine what counts as ‘evidence’? Some actors equate evidence with scientific evidence and adhere to specific criteria – such as evidence-based medicine’s hierarchy of evidence – to determine what is scientific. Others have more respect for expertise, professional experience, and stakeholder and service user feedback as sources of evidence.
  • Which metaphor, ‘evidence based’ or ‘informed’, is best? ‘Evidence based’ is often rejected as unrealistic by experienced policy participants, who prefer ‘informed’ to reflect pragmatism about mixing evidence and political calculations.
  • How far do you go to pursue EBPM? It is unrealistic to treat ‘policy’ as a one-off statement of intent by a single authoritative actor. Instead, it is made and delivered by many actors in a continuous policymaking process within a complicated policy environment (outlined in point 3). This is relevant to EU institutions with limited resources: the Commission often makes key decisions but relies on Member States to make and deliver policy, and the Parliament may only have the ability to monitor ‘key decisions’. It is also relevant to stakeholders trying to ensure the use of evidence throughout the process, from supranational to local action.
  • Which actors count as policymakers? Policymaking is done by ‘policymakers’, but many are unelected and the division between policymaker/ influencer is often unclear. The study of policymaking involves identifying networks of decision-making by elected and unelected policymakers and their stakeholders, while the actual practice is about deciding where to draw the line between influence and action.
  2. Respond to ‘rational’ and ‘irrational’ thought.

‘Comprehensive rationality’ describes the absence of ambiguity and uncertainty when policymakers know what problem they want to solve and how to solve it, partly because they can gather and understand all information required to measure the problem and determine the effectiveness of solutions.

Instead, we talk of ‘bounded rationality’ and how policymakers deal with it. They employ two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, habits, and familiarity to make decisions quickly.

I say ‘irrational’ provocatively, to raise a key strategic question: do you criticise emotional policymaking (describing it as ‘policy based evidence’) and try somehow to minimise it, adapt pragmatically to it, or see ‘fast thinking’ more positively in terms of ‘fast and frugal heuristics’? Regardless, policymakers will think that their heuristics make sense to them, and it can be counterproductive to simply criticise their alleged irrationality.

  3. Think about how to engage in complex systems or policy environments.

‘Policy cycle’ describes the idea that there is a core group of policymakers at the ‘centre’, making policy from the ‘top down’, and pursuing their goals in a series of clearly defined and well-ordered stages, such as: agenda setting, policy formulation, legitimation, implementation, and evaluation. In this context, one might identify how to influence a singular point of central government decision.

However, a cycle model does not describe policymaking well. Instead, we tend to identify the role of less ordered and more unpredictable complex systems, or policy environments containing:

  • A wide range of actors (individuals and organisations) influencing policy at many levels of government. Scientists and practitioners are competing with many actors to present evidence in a particular way to secure a policymaker audience.
  • A proliferation of rules and norms maintained by different levels or types of government. Support for particular ‘evidence based’ solutions varies according to which organisation takes the lead and how it understands the problem.
  • Important relationships (‘networks’) between policymakers and powerful actors. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour particular sources of evidence and some participants over others, and there is a language – indicating what ways of thinking are in good ‘currency’ – that takes time to learn.
  • A tendency for certain ‘core beliefs’ or ‘paradigms’ to dominate discussion. Well-established beliefs provide the context for policymaking: new evidence on the effectiveness of a policy solution has to be accompanied by a shift of attention and successful persuasion.
  • Policy conditions and events that can reinforce stability or prompt policymaker attention to lurch at short notice. In some cases, social or economic ‘crises’ can prompt lurches of attention from one issue to another, and some forms of evidence can be used to encourage that shift, but major policy change is rare.

For stakeholders, an effective engagement strategy is not straightforward: it takes time to know ‘where the action is’, how and where to engage with policymakers, and with whom to form coalitions. For the Commission, it is difficult to know what will happen to policy after it is made (although we know the end point will not resemble the starting point). For the Parliament, it is difficult even to know where to look.

  4. Recognise that EBPM is only one of many legitimate ‘good governance’ principles.

There are several principles of ‘good’ policymaking and only one is EBPM. Others relate to the value of pragmatism and consensus building, combining science advice with public values, improving policy delivery by generating ‘ownership’ of policy among key stakeholders, and sharing responsibility with elected national and local policymakers.

Our choices of which principles and forms of evidence to privilege are inextricably linked. For example, some forms of evidence gathering seem to require uniform models and limited local or stakeholder discretion to modify policy delivery. The classic example is a programme whose value is established using randomised control trials (RCTs). Others begin with local discretion, seeking evidence from stakeholders, professional groups, service user and local practitioner experience. This principle seems to rule out the use of RCTs, at least as a source of a uniform model to be rolled out and evaluated. Of course, one can try to pursue both approaches and a compromise between them, but the outcome may not satisfy advocates of either approach to EBPM or help produce the evidence that they favour.

  5. Decide how far you’ll go to achieve EBPM.

These insights should prompt us to consider how far we are willing to go, and how far we should go, to promote the use of certain forms of evidence in policymaking:

  • If policymakers and the public are emotional decision-makers, should we seek to manipulate their thought processes by using simple stories with heroes, villains, and clear but rather simplistic morals?
  • If policymaking systems are so complex, should stakeholders devote huge amounts of resources to make sure they’re effective at each stage?
  • Should proponents of scientific evidence go to great lengths to make sure that EBPM is based on a hierarchy of evidence? There is a live debate on science advice to government about the extent to which scientists should be more than ‘honest brokers’.
  • Should policymakers try to direct the use of evidence in policy as well as policy itself?

Where we go from there is up to you

The value of policy theory to this topic is to help us reject simplistic models of EBPM and think through the implications of more sophisticated and complicated processes. It does not provide a blueprint for action (how could it?), but instead a series of questions that you should answer when you seek to use evidence to get what you want. They are political choices based on value judgements, not issues that can be resolved by producing more evidence.


5 Comments

Filed under Evidence Based Policymaking (EBPM), public policy