Tag Archives: hierarchy of evidence

Evidence Based Policy Making: 5 things you need to know and do

These are some opening remarks for my talk on EBPM at Open Society Foundations (New York), 24th October 2016. The OSF recorded the talk, so you can listen below, externally, or by right clicking and saving. Please note that it was a lunchtime talk, so the background noises are plates and glasses.

‘Evidence based policy making’ is a good political slogan, but not a good description of the policy process. If you expect to see it, you will be disappointed. If you seek more thoughtful ways to understand and act within political systems, you need to understand five key points, then decide how to respond.

  1. Decide what it means.

EBPM looks like a valence issue in which most of us agree that policy and policymaking should be ‘evidence based’ (perhaps like ‘evidence based medicine’). Yet, valence issues only command broad agreement on vague proposals. By defining each term we highlight ambiguity and the need to make political choices to make sense of key terms:

  • Should you use restrictive criteria to determine what counts as ‘evidence’, and what counts as scientific evidence?
  • Which metaphor, ‘evidence based’ or ‘evidence informed’, describes how pragmatic you will be?
  • The unclear meaning of ‘policy’ prompts you to consider how far you’d go to pursue EBPM, from a one-off statement of intent by a key actor, to delivery by many actors, to the sense of continuous policymaking requiring us to be always engaged.
  • Policymaking is done by policymakers, but many are unelected and the division between policy maker/ influencer is often unclear. So, should you seek to influence policy by influencing influencers?
  2. Respond to ‘rational’ and ‘irrational’ thought.

‘Comprehensive rationality’ describes the absence of ambiguity and uncertainty when policymakers know what problem they want to solve and how to solve it, partly because they can gather and understand all information required to measure the problem and determine the effectiveness of solutions.

Instead, we talk of ‘bounded rationality’ and how policymakers deal with it. They employ two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, habits, and familiarity to make decisions quickly.

I say ‘irrational’ provocatively, to raise a key strategic question: do you criticise emotional policymaking (describing it as ‘policy based evidence’) and try somehow to minimise it, adapt pragmatically to it, or see ‘fast thinking’ more positively in terms of ‘fast and frugal heuristics’? Regardless, policymakers will think that their heuristics make sense to them, and it can be counterproductive to simply criticise their alleged irrationality.

  3. Think about how to engage in complex systems or policy environments.

‘Policy cycle’ describes the idea that there is a core group of policymakers at the ‘centre’, making policy from the ‘top down’, and pursuing their goals in a series of clearly defined and well-ordered stages, such as: agenda setting, policy formulation, legitimation, implementation, and evaluation. In this context, one might identify how to influence a singular point of central government decision.

However, a cycle model does not describe policymaking well. Instead, we tend to identify the role of less ordered and more unpredictable complex systems, or policy environments containing:

  • A wide range of actors (individuals and organisations) influencing policy at many levels of government. Scientists and practitioners are competing with many actors to present evidence in a particular way to secure a policymaker audience.
  • A proliferation of rules and norms maintained by different levels or types of government. Support for particular ‘evidence based’ solutions varies according to which organisation takes the lead and how it understands the problem.
  • Important relationships (‘networks’) between policymakers and powerful actors. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour particular sources of evidence and some participants over others, and there is a language – indicating what ways of thinking are in good ‘currency’ – that takes time to learn.
  • A tendency for certain ‘core beliefs’ or ‘paradigms’ to dominate discussion. Well-established beliefs provide the context for policymaking: new evidence on the effectiveness of a policy solution has to be accompanied by a shift of attention and successful persuasion.
  • Policy conditions and events that can reinforce stability or prompt policymaker attention to lurch at short notice. In some cases, social or economic ‘crises’ can prompt lurches of attention from one issue to another, and some forms of evidence can be used to encourage that shift, but major policy change is rare.

These factors suggest that an effective engagement strategy is not straightforward: our instinct may be to influence elected policymakers at the ‘centre’ making authoritative choices, but the ‘return on investment’ is not clear. So, you need to decide how and where to engage, but it takes time to know ‘where the action is’ and with whom to form coalitions.

  4. Recognise that EBPM is only one of many legitimate ‘good governance’ principles.

There are several principles of ‘good’ policymaking and only one is EBPM. Others relate to the value of pragmatism and consensus building, combining science advice with public values, improving policy delivery by generating ‘ownership’ of policy among key stakeholders, and sharing responsibility with elected local policymakers.

Our choices of which principles and which forms of evidence to privilege are inextricably linked. For example, some forms of evidence gathering seem to require uniform models and limited local or stakeholder discretion to modify policy delivery. The classic example is a programme whose value is established using randomised control trials (RCTs). Others begin with local discretion, seeking evidence from service user and local practitioner experience. This principle seems to rule out the use of RCTs. Of course, one can try to pursue both approaches and a compromise between them, but the outcome may not satisfy advocates of either approach or help produce the evidence that they favour.

  5. Decide how far you’ll go to achieve EBPM.

These insights should prompt us to see how far we are willing, and should, go to promote the use of certain forms of evidence in policymaking. For example, if policymakers and the public are emotional decision-makers, should we seek to manipulate their thought processes by using simple stories with heroes, villains, and clear but rather simplistic morals? If policymaking systems are so complex, should we devote huge amounts of resources to make sure we’re effective? Kathryn Oliver and I also explore the implications for proponents of scientific evidence, and there is a live debate on science advice to government on the extent to which scientists should be more than ‘honest brokers’.

Where we go from there is up to you

The value of policy theory to this topic is to help us reject simplistic models of EBPM and think through the implications of more sophisticated and complicated processes. It does not provide a blueprint for action (how could it?), but instead a series of questions that you should answer when you seek to use evidence to get what you want. They are political choices based on value judgements, not issues that can be resolved by producing more evidence.



Realistic ‘realist’ reviews: why do you need them and what might they look like?

This discussion is based on my impressions so far of realist reviews and the potential for policy studies to play a role in their effectiveness. The objectives section formed one part of a recent team bid for external funding (so, I acknowledge the influence of colleagues on this discussion, but not enough to blame them personally). We didn’t get the funding, but at least I got a lengthy blog post and a dozen hits out of it.

I like the idea of a ‘realistic’ review of evidence to inform policy, alongside a promising uptake in the use of ‘realist review’. The latter doesn’t mean realistic: it refers to a specific method or approach – realist evaluation, realist synthesis.

The agenda of the realist review already takes us along a useful path towards policy relevance, driven partly by the idea that many policy and practice ‘interventions’ are too complex to be subject to meaningful ‘systematic review’.

The latter’s aim – which we should be careful not to caricature – may be to identify something as close as possible to a general law: if you do X, the result will generally be Y, and you can be reasonably sure because the studies (such as randomised control trials) meet the ‘gold standard’ of research.

The former’s aim is to focus extensively on the context in which interventions take place: if you do X, the result will be Y under these conditions. So, for example, you identify the outcome that you want, the mechanism that causes it, and the context in which the mechanism causes the outcome. Maybe you’ll even include a few more studies, not meeting the ‘gold standard’, if they meet other criteria of high quality research (I declare that I am a qualitative researcher, so you can tell who I’m rooting for).

Realist reviews come increasingly with guide books and discussions on how to do them systematically. However, my impression is that when people do them, they find that there is an art to applying discretion to identify what exactly is going on. It is often difficult to identify or describe the mechanism fully (often because source reports are not clear on that point), say for sure it caused the outcome even in particular circumstances, and separate the mechanism from the context.

I italicised the last point because it is super-important. I think that it is often difficult to separate mechanism from context because (a) the context is often associated with a particular country’s political system and governing arrangements, and (b) it might be better to treat governing context as another mechanism in a notional chain of causality.

In other words, my impression is that realist reviews focus on the mechanism at the point of delivery; the last link in the chain in which the delivery of an intervention causes an outcome. It may be wise to also identify the governance mechanism that causes the final mechanism to work.
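To make the suggestion concrete, here is a minimal sketch (in Python) of how a review team might code a context-mechanism-outcome configuration with the governance arrangement treated as a link in the causal chain rather than folded into ‘context’. It is not a standard realist-review tool; all field names and the example entry are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Mechanism:
    """One link in a notional causal chain."""
    name: str         # e.g. 'governance' or 'delivery'
    description: str  # what causes what, in this link

@dataclass
class CMOConfiguration:
    """A context-mechanism-outcome record from a realist review.

    `mechanisms` is an ordered chain, so a governing arrangement can be
    coded as the mechanism that causes the final delivery mechanism to
    work, instead of being lumped into 'context'.
    """
    context: str  # e.g. the political system and governing arrangements
    mechanisms: list[Mechanism] = field(default_factory=list)
    outcome: str = ""

# Hypothetical entry: the governance mechanism precedes the
# point-of-delivery mechanism in the chain.
example = CMOConfiguration(
    context="Devolved health boards with pooled local budgets",
    mechanisms=[
        Mechanism("governance", "National agency funds and trains local delivery teams"),
        Mechanism("delivery", "Trained practitioners follow the intervention protocol"),
    ],
    outcome="Higher uptake of the intervention among target families",
)

for link in example.mechanisms:
    print(f"{link.name}: {link.description}")
```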

Why would you complicate an already complicated review?

I aim to complicate things then simplify them heroically at the end.

Here are five objectives that I maybe think we should pursue in an evidence review for policymakers (I can’t say for sure until we all agree on the principles of science advice):

  1. Focus on ways to turn evidence into feasible political action, identifying a clear set of policy conditions and mechanisms necessary to produce intended outcomes.
  2. Produce a manageable number of simple lessons and heuristics for policymakers, practitioners, and communities.
  3. Review a wider range of evidence sources than in traditional systematic reviews, to recognise the potential trade-offs between measures of high quality and high impact evidence.
  4. Identify a complex policymaking environment in which there is a need to connect the disparate evidence on each part of the ‘causal chain’.
  5. Recognise the need to understand individual countries and their political systems in depth, to know how the same evidence will be interpreted and used very differently by actors in different contexts.

Objective 1: evidence into action by addressing the politics of evidence-based policymaking

There is no shortage of scientific evidence of policy problems. Yet, we lack a way to use evidence to produce politically feasible action. The ‘politics of evidence-based policymaking’ produces scientists frustrated with the gap between their evidence and a proportionate policy response, and politicians frustrated that evidence is not available in a usable form when they pay attention to a problem and need to solve it quickly. The most common responses in key fields, such as environmental and health studies, do not solve this problem. The literature on ‘barriers’ between evidence and policy recommends initiatives such as: clearer scientific messages, knowledge brokerage and academic-practitioner workshops, timely engagement in politics, scientific training for politicians, and participation to combine evidence and community engagement.

This literature makes limited reference to policy theory and has two limitations. First, studies focus on reducing empirical uncertainty, not ‘framing’ issues to reduce ambiguity. Too many scientific publications go unread in the absence of a process of persuasion to influence policymaker demand for that information (particularly when more politically relevant and paywall-free evidence is available elsewhere). Second, few studies appreciate the multi-level nature of political systems or understand the strategies actors use to influence policy. This involves experience and cultural awareness to help learn: where key decisions are made, including in networks between policymakers and influential actors; the ‘rules of the game’ of networks; how to form coalitions with key actors; and, that these processes unfold over years or decades.

The solution is to produce knowledge that will be used by policymakers, community leaders, and ‘street level’ actors. It requires a (23%) shift in focus from the quality of scientific evidence to (a) who is involved in policymaking and the extent to which there is a ‘delivery chain’ from national to local, and (b) how actors demand, interpret, and use evidence to make decisions. For example, simple qualitative stories with a clear moral may be more effective than highly sophisticated decision-making models or quantitative evidence presented without enough translation.

Objective 2: produce simple lessons and heuristics

We know that the world is too complex to fully comprehend, yet people need to act despite uncertainty. They rely on ‘rational’ methods to gather evidence from sources they trust, and ‘irrational’ means to draw on gut feeling, emotion, and beliefs as short cuts to action (roughly, ‘system 2’ and ‘system 1’ thinking, respectively). Scientific evidence can help reduce some uncertainty, but not tell people how to behave. Scientific information strategies can be ineffective, by expecting audiences to appreciate the detail and scale of evidence, understand the methods used to gather it, and possess the skills to interpret and act on it. The unintended consequence is that key actors fall back on familiar heuristics and pay minimal attention to inaccessible scientific information. The solution is to tailor evidence reviews to audiences: examining their practices and ways of thinking; identifying the heuristics they use; and, describing simple lessons and new heuristics and practices.

Objective 3: produce a pragmatic review of the evidence

To review a wider range of evidence sources than in traditional systematic reviews is to recognise the trade-offs between measures of high quality (based on a hierarchy of methods and journal quality) and high impact (based on familiarity and availability). If scientists reject and refuse to analyse evidence that policymakers routinely take more seriously (such as the ‘grey’ literature), they have little influence on key parts of policy analysis. Instead, provide a framework that recognises complexity but produces research that is manageable at scale and translatable into key messages:

  • Context. Identify the role of factors described routinely by policy theories as the key parts of policy environments: the actors involved in multiple policymaking venues at many levels of government; the role of informal and formal rules of each venue; networks between policymakers and influential actors; socio-economic conditions; and, the ‘paradigms’ or ways of thinking that underpin the consideration of policy problems and solutions.
  • Mechanisms. Focus on the connection between three mechanisms: the cause of outcomes at the point of policy delivery (intervention); the cause of ‘community’ or individual ‘ownership’ of effective interventions; and, the governance arrangements that support high levels of community ownership and the effective delivery of the most effective interventions. These connections are not linear. For example, community ownership and effective interventions may develop more usefully from the ‘bottom up’, scientists may convince national but not local policymakers of the value of interventions (or vice versa), or political support for long term strategies may only be temporary or conditional on short term measures of success.
  • Outcomes. Identify key indicators of good policy outcomes in partnership with the people you need to make policy work. Work with those audiences to identify a small number of specific positive outcomes, and synthesise the best available evidence to explain which mechanisms produce those outcomes under the conditions associated with your region of study.

This narrow focus is crucial to the development of a research question, limiting analysis to the most relevant studies to produce a rigorous review in a challenging timeframe. Then, the idea from realist reviews is that you ‘test’ your hypotheses and clarify the theories that underpin this analysis. This should involve a test for political as well as technical feasibility: speak regularly with key actors to gauge the likelihood that the mechanisms you recommend will be acted upon, the extent to which the context of policy delivery is stable and predictable, and whether the mechanisms will work consistently under those conditions.
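As a toy illustration of that double test, the sketch below (Python) screens candidate mechanisms on both technical feasibility (strength of evidence) and political feasibility (as gauged from conversations with key actors). The mechanisms, scores, and thresholds are invented for the example.

```python
# Candidate mechanisms with two scores on a 0-1 scale:
# evidence strength (technical feasibility) and the likelihood of
# being acted upon (political feasibility). All values are invented.
candidates = [
    ("community ownership via local co-design", 0.6, 0.8),
    ("national mandate with strict fidelity",   0.9, 0.3),
    ("training cascade through local agencies", 0.7, 0.7),
]

EVIDENCE_FLOOR = 0.5   # minimum acceptable evidence strength
POLITICS_FLOOR = 0.5   # minimum acceptable political feasibility

# A mechanism survives only if it passes BOTH tests.
shortlist = [
    (name, evidence, politics)
    for name, evidence, politics in candidates
    if evidence >= EVIDENCE_FLOOR and politics >= POLITICS_FLOOR
]

for name, evidence, politics in shortlist:
    print(f"keep: {name} (evidence={evidence}, political feasibility={politics})")
```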

Objective 4: identify key links in the ‘causal chain’ via interdisciplinary study

We all talk about combining perspectives from multiple disciplines but I totally mean it, especially if it boosts the role of political scientists who can’t predict elections. For example, health or environmental scientists can identify the most effective interventions to produce good health or environmental outcomes, but not how to work with and influence key people. Policy scholars can identify how the policy process works and how to maximise the use of scientific evidence within it. Social science scholars can identify mechanisms to encourage community participation and the ownership of policies. Anthropologists can provide insights on the particular cultural practices and beliefs underpinning the ways in which people understand and act according to scientific evidence.

Perhaps more importantly, interdisciplinarity provides political cover: we got the best minds in many disciplines and locked them in a room until they produced an answer.

We need this cover for something I’ll call ‘informed extrapolation’ and justify with reference to pragmatism: if we do not provide well-informed analyses of the links between each mechanism, other less-informed actors will fill the gap without appreciating key aspects of causality. For example, if we identify a mechanism for the delivery of successful interventions – e.g. high levels of understanding and implementation of key procedures – there is still uncertainty: do these mechanisms develop organically through ‘bottom up’ collaboration or can they be introduced quickly from the ‘top’ to address an urgent issue? A simple heuristic for central governments could be to introduce training immediately or to resist the temptation for a quick fix.

Relatively informed analysis, to recommend one of those choices, may only be used if we can back it up with interdisciplinary weight and produce recommendations that are unequivocal (although, again, other approaches are available).

Objective 5: focus intensively on one region, and one key issue, not ‘one size fits all’

We need to understand individual countries or regions – their political systems, communities, and cultural practices – and specific issues in depth, to know how abstract mechanisms work in concrete contexts, and how the same evidence will be interpreted and used differently by actors in those contexts. We need to avoid politically insensitive approaches based on the assumption that a policy that works in countries like (say) the UK will work in countries that are not (say) the UK, and/ or that actors in each country will understand policy problems in the same way.

But why?

It all looks incredibly complicated, doesn’t it? There’s no time to do all that, is there? It will end up as a bit of a too-rushed jumble of high-and-low quality evidence and advice, won’t it?

My argument is that these problems are actually virtues because they provide more insight into how busy policymakers will gather and use evidence. Most policymakers will not know how to do a systematic review or understand why you are so attached to them. Maybe you’ll impress them enough to get them to trust your evidence, but have you put yourself into a position to know what they’ll do with it? Have you thought about the connection between the evidence you’ve gathered, what people need to do, who needs to do it, and who you need to speak to about getting them to do it? Maybe you don’t have to, if you want to be no more than a ‘neutral scientist’ or ‘honest broker’ – but you do if you want to give science advice to policymakers that policymakers can use.

 


The Politics of Evidence-based Policymaking in 2500 words

Here is a 2500 word draft of an entry to the Oxford Research Encyclopaedia (public administration and policy) on EBPM. It brings together some thoughts in previous posts and articles.

Evidence-based Policymaking (EBPM) has become one of many valence terms that seem difficult to oppose: who would not want policy to be evidence based? It appears to  be the most recent incarnation of a focus on ‘rational’ policymaking, in which we could ask the same question in a more classic way: who would not want policymaking to be based on reason and collecting all of the facts necessary to make good decisions?

Yet, as we know from classic discussions, there are three main issues with such an optimistic starting point. The first is definitional: valence terms only seem so appealing because they are vague. When we define key terms, and produce one definition at the expense of others, we see differences of approach and unresolved issues. The second is descriptive: ‘rational’ policymaking does not exist in the real world. Instead, we treat ‘comprehensive’ or ‘synoptic’ rationality as an ideal-type, to help us think about the consequences of ‘bounded rationality’ (Simon, 1976). Most contemporary policy theories have bounded rationality as a key starting point for explanation (Cairney and Heikkila, 2014). The third is prescriptive. Like EBPM, comprehensive rationality seems – initially – to be unequivocally good. Yet, when we identify its necessary conditions, or what we would have to do to secure this aim, we begin to question EBPM and comprehensive rationality as an ideal scenario.

What is ‘evidence-based policymaking?’ is a lot like ‘what is policy?’ but more so!

Trying to define EBPM is like magnifying the problem of defining policy. As the entries in this encyclopaedia suggest, it is difficult to say what policy is and measure how much it has changed. I use the working definition, ‘the sum total of government action, from signals of intent to the final outcomes’ (Cairney, 2012: 5) not to provide something definitive, but to raise important qualifications, including: there is a difference between what people say they will do, what they actually do, and the outcome; and, policymaking is also about the power not to do something.

So, the idea of a ‘sum total’ of policy sounds intuitively appealing, but masks the difficulty of identifying the many policy instruments that make up ‘policy’ (and the absence of others), including: the level of spending; the use of economic incentives/ penalties; regulations and laws; the use of voluntary agreements and codes of conduct; the provision of public services; education campaigns; funding for scientific studies or advocacy; organisational change; and, the levels of resources/ methods dedicated to policy implementation and evaluation (2012: 26). In that context, we are trying to capture a process in which actors make and deliver ‘policy’ continuously, not identify a set-piece event providing a single opportunity to use a piece of scientific evidence to prompt a policymaker response.

Similarly, for the sake of simplicity, we refer to ‘policymakers’ but in the knowledge that it leads to further qualifications and distinctions, such as: (1) between elected and unelected participants, since people such as civil servants also make important decisions; and (2) between people and organisations, with the latter used as a shorthand to refer to a group of people making decisions collectively and subject to rules of collective engagement (see ‘institutions’). There are blurry dividing lines between the people who make and influence policy, and decisions are made by a collection of people with formal responsibility and informal influence (see ‘networks’). Consequently, we need to make clear what we mean by ‘policymakers’ when we identify how they use evidence.

A reference to EBPM provides two further definitional problems (Cairney, 2016: 3-4). The first is to define evidence beyond the vague idea of an argument backed by information. Advocates of EBPM are often talking about scientific evidence which describes information produced in a particular way. Some describe ‘scientific’ broadly, to refer to information gathered systematically using recognised methods, while others refer to a specific hierarchy of methods. The latter has an important reference point – evidence based medicine (EBM) – in which the aim is to generate the best evidence of the best interventions and exhort clinicians to use it. At the top of the methodological hierarchy are randomized control trials (RCTs) to determine the evidence, and the systematic review of RCTs to demonstrate the replicated success of interventions in multiple contexts, published in the top scientific journals (Oliver et al, 2014a; 2014b).
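For a concrete rendering, the sketch below (Python) encodes one version of such a hierarchy and uses it to rank hypothetical studies. The top two tiers follow the passage above; the lower tiers and the study records are illustrative assumptions, not a definitive list.

```python
# One version of an EBM-style methodological hierarchy (top first).
# The lower tiers are assumptions added for illustration.
HIERARCHY = [
    "systematic review of RCTs",
    "randomised control trial",
    "cohort study",
    "qualitative study",
    "grey literature",
]
RANK = {method: i for i, method in enumerate(HIERARCHY)}

# Hypothetical study records.
studies = [
    {"title": "Ministry briefing on local uptake", "method": "grey literature"},
    {"title": "Multi-site trial of intervention X", "method": "randomised control trial"},
    {"title": "Pooled trials of intervention X", "method": "systematic review of RCTs"},
]

# Sort so the 'best' evidence, on this hierarchy, comes first.
for study in sorted(studies, key=lambda s: RANK[s["method"]]):
    print(f"tier {RANK[study['method']]}: {study['title']} ({study['method']})")
```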

This reference to EBM is crucial in two main ways. First, it highlights a basic difference in attitude between the scientists proposing a hierarchy and the policymakers using a wider range of sources from a far less exclusive list of publications: ‘The tools and programs of evidence-based medicine … are of little relevance to civil servants trying to incorporate evidence in policy advice’ (Lomas and Brown 2009: 906).  Instead, their focus is on finding as much information as possible in a short space of time – including from the ‘grey’ or unpublished/non-peer reviewed literature, and incorporating evidence on factors such as public opinion – to generate policy analysis and make policy quickly. Therefore, second, EBM provides an ideal that is difficult to match in politics, proposing: “that policymakers adhere to the same hierarchy of scientific evidence; that ‘the evidence’ has a direct effect on policy and practice; and that the scientific profession, which identifies problems, is in the best place to identify the most appropriate solutions, based on scientific and professionally driven criteria” (Cairney, 2016: 52; Stoker 2010: 53).

These differences are summed up in the metaphor ‘evidence-based’ which, for proponents of EBM suggests that scientific evidence comes first and acts as the primary reference point for a decision: how do we translate this evidence of a problem into a proportionate response, or how do we make sure that the evidence of an intervention’s success is reflected in policy? The more pragmatic phrase ‘evidence-informed’ sums up a more rounded view of scientific evidence, in which policymakers know that they have to take into account a wider range of factors (Nutley et al, 2007).

Overall, the phrases ‘evidence-based policy’ and ‘evidence-based policymaking’ are less clear than ‘policy’. This problem puts an onus on advocates of EBPM to state what they mean, and to clarify if they are referring to an ideal-type to aid description of the real world, or advocating a process that, to all intents and purposes, would be devoid of politics (see below). The latter tends to accompany often fruitless discussions about ‘policy based evidence’, which seems to describe a range of mistakes by policymakers – including ignoring evidence, using the wrong kinds, ‘cherry picking’ evidence to suit their agendas, and/ or producing a disproportionate response to evidence – without describing a realistic standard to which to hold them.

For example, Haskins and Margolis (2015) provide a pie chart of ‘factors that influence legislation’ in the US, to suggest that research contributes 1% to a final decision compared to, for example, ‘the public’ (16%), the ‘administration’ (11%), political parties (8%) and the budget (8%). Theirs is a ‘whimsical’ exercise to lampoon the lack of EBPM in government (compare with Prewitt et al’s 2012 account built more on social science studies), but it sums up a sense in some scientific circles about their frustrations with the inability of the policymaking world to keep up with science.

Indeed, there is an extensive literature in health science (Oliver, 2014a; 2014b), emulated largely in environmental studies (Cairney, 2016: 85; Cairney et al, 2016), which bemoans the ‘barriers’ between evidence and policy. Some identify problems with the supply of evidence, recommending the need to simplify reports and key messages. Others note the difficulties in providing timely evidence in a chaotic-looking process in which the demand for information is unpredictable and fleeting. A final main category relates to a sense of different ‘cultures’ in science and policymaking, which can be addressed in academic-practitioner workshops (to learn about each other’s perspectives) and more scientific training for policymakers. The latter recommendation is often based on practitioner experiences and a superficial analysis of policy studies (Oliver et al, 2014b; Embrett and Randall, 2014).

EBPM as a misleading description

Consequently, such analysis tends to introduce reference points that policy scholars would describe as ideal-types. Many accounts refer to the notion of a policy cycle, in which there is a core group of policymakers at the ‘centre’, making policy from the ‘top down’, breaking down their task into clearly defined and well-ordered stages (Cairney, 2016: 16-18). The hope may be that scientists can help policymakers make good decisions by getting them as close as possible to ‘comprehensive rationality’ in which they have the best information available to inform all options and consequences. In that context, policy studies provides two key insights (2016; Cairney et al, 2016).

  1. The role of multi-level policymaking environments, not cycles

Policymaking takes place in less ordered and predictable policy environments, exhibiting:

  • a wide range of actors (individuals and organisations) influencing policy in many levels and types of government
  • a proliferation of rules and norms followed in different venues
  • close relationships (‘networks’) between policymakers and powerful actors
  • a tendency for certain beliefs or ‘paradigms’ to dominate discussion
  • policy conditions and events that can prompt policymaker attention to lurch at short notice.

A focus on this bigger picture shifts our attention from the use of scientific evidence by an elite group of elected policymakers at the ‘top’ to its use by a wide range of influential actors in a multilevel policy process. It shows scientists that they are competing with many actors to present evidence in a particular way to secure a policymaker audience. Support for particular solutions varies according to which organisation takes the lead and how it understands the problem. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour particular sources of evidence and some participants over others, and there is a language – indicating what ways of thinking are in good ‘currency’ – that takes time to learn. Well-established beliefs provide the context for policymaking: new evidence on the effectiveness of a policy solution has to be accompanied by a shift of attention and successful persuasion. In some cases, social or economic ‘crises’ can prompt lurches of attention from one issue to another, and some forms of evidence can be used to encourage that shift – but major policy change is rare.

  2. Policymakers use two ‘shortcuts’ to deal with bounded rationality and make decisions

Policymakers deal with ‘bounded rationality’ by employing two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, beliefs, habits, and familiar reference points to make decisions quickly. Consequently, the focus of policy theories is on the links between evidence, persuasion, and framing.

Framing refers to the ways in which we understand, portray, and categorise issues. Problems are multi-faceted, but bounded rationality limits the attention of policymakers, and actors compete to highlight one image at the expense of others. The outcome of this process determines who is involved (for example, portraying an issue as technical limits involvement to experts), and responsible for policy, how much attention they pay, and what kind of solution they favour. Scientific evidence plays a part in this process, but we should not exaggerate the ability of scientists to win the day with evidence. Rather, policy theories signal the strategies that actors use to increase demand for their evidence:

  • to combine facts with emotional appeals, to prompt lurches of policymaker attention from one policy image to another (True, Jones, and Baumgartner 2007)
  • to tell simple stories which are easy to understand, help manipulate people’s biases, apportion praise and blame, and highlight the moral and political value of solutions (Jones, Shanahan, and McBeth 2014)
  • to interpret new evidence through the lens of the pre-existing beliefs of actors within coalitions, some of which dominate policy networks (Weible, Heikkila, and Sabatier 2012)
  • to produce a policy solution that is feasible and exploit a time when policymakers have the opportunity to adopt it (Kingdon 1984).

Further, the impact of a framing strategy may not be immediate, even if it appears to be successful. Scientific evidence may prompt a lurch of attention to a policy problem, prompting a shift of views in one venue or the new involvement of actors from other venues. However, it can take years to produce support for an ‘evidence-based’ policy solution, built on its technical and political feasibility (will it work as intended, and do policymakers have the motive and opportunity to select it?).

EBPM as a problematic prescription

A pragmatic solution to the policy process would involve: identifying the key venues in which the ‘action’ takes place; learning the ‘rules of the game’ within key networks and institutions; developing framing and persuasion techniques; forming coalitions with allies; and engaging for the long term (Cairney, 2016: 124; Weible et al, 2012: 9-15). The alternative is to seek reforms to make EBPM in practice more like the EBM ideal.

Yet, EBM is defendable because the actors involved agree to make primary reference to scientific evidence and be guided by what works (combined with their clinical expertise and judgement). In politics, there are other – and generally more defendable – principles of ‘good’ policymaking (Cairney, 2016: 125-6). They include the need to legitimise policy: to be accountable to the public in free and fair elections, consult far and wide to generate evidence from multiple perspectives, and negotiate policy across political parties and multiple venues with a legitimate role in policymaking. In that context, we may want scientific evidence to play a major role in policy and policymaking, but pause to reflect on how far we would go to secure a primary role for unelected experts and evidence that few can understand.

Conclusion: the inescapable and desirable politics of evidence-informed policymaking

Many contemporary discussions of policymaking begin with the naïve belief in the possibility and desirability of an evidence-based policy process free from the pathologies of politics. The buzz phrase for any complaint about politicians not living up to this ideal is ‘policy based evidence’: biased politicians decide first what they want to do, then cherry pick any evidence that backs up their case. Yet, without additional thought, they put in its place a technocratic process in which unelected experts are in charge, deciding on the best evidence of a problem and its best solution.

In other words, new discussions of EBPM raise old discussions of rationality that have occupied policy scholars for many decades. The difference since the days of Simon and Lindblom (1959) is that we now have the scientific technology and methods to gather information in ways beyond the dreams of our predecessors. Yet, such advances in technology and knowledge have only increased our ability to reduce, but not eradicate, uncertainty about the details of a problem. They do not remove ambiguity, which describes the ways in which people understand problems in the first place, then seek information to help them understand and solve those problems. Nor do they reduce the need to meet important principles in politics, such as to sell or justify policies to the public (to respond to democratic elections) and address the fact that there are many venues of policymaking at multiple levels (partly to uphold a principled commitment, in many political systems, to devolve or share power). Policy theories do not tell us what to do about these limits to EBPM, but they help us to separate pragmatism from often-misplaced idealism.

References

Cairney, Paul (2012) Understanding Public Policy (Basingstoke: Palgrave)

Cairney, Paul (2016) The Politics of Evidence-based Policy Making (Basingstoke: Palgrave)

Cairney, Paul and Heikkila, Tanya (2014) ‘A Comparison of Theories of the Policy Process’ in Sabatier, P. and Weible, C. (eds.) Theories of the Policy Process 3rd edition (Chicago: Westview Press)

Cairney, P., Oliver, K. and Wellstead, A. (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review, Early View, DOI: 10.1111/puar.12555

Embrett, M. and Randall, G. (2014) ‘Social determinants of health and health equity policy research: Exploring the use, misuse, and nonuse of policy analysis theory’, Social Science and Medicine, 108, 147-55

Haskins, Ron and Margolis, Greg (2015) Show Me the Evidence: Obama’s fight for rigor and results in social policy (Washington DC: Brookings Institution Press)

Kingdon, J. (1984) Agendas, Alternatives and Public Policies 1st ed. (New York, NY: Harper Collins)

Lindblom, C. (1959) ‘The Science of Muddling Through’, Public Administration Review, 19: 79–88

Lomas J. and Brown A. (2009) ‘Research and advice giving: a functional view of evidence-informed policy advice in a Canadian ministry of health’, Milbank Quarterly, 87, 4, 903–926

McBeth, M., Jones, M. and Shanahan, E. (2014) ‘The Narrative Policy Framework’ in Sabatier, P. and Weible, C. (eds.) Theories of the Policy Process 3rd edition (Chicago: Westview Press)

Nutley, S., Walter, I. and Davies, H. (2007) Using evidence: how research can inform public services (Bristol: The Policy Press)

Oliver, K., Innvar, S., Lorenc, T., Woodman, J. and Thomas, J. (2014a) ‘A systematic review of barriers to and facilitators of the use of evidence by policymakers’ BMC health services research, 14 (1), 2. http://www.biomedcentral.com/1472-6963/14/2

Oliver, K., Lorenc, T., & Innvær, S. (2014b) ‘New directions in evidence-based policy research: a critical analysis of the literature’, Health Research Policy and Systems, 12, 34 http://www.biomedcentral.com/content/pdf/1478-4505-12-34.pdf

Prewitt, K., Schwandt, T. and Straf, M. (eds.) (2012) Using Science as Evidence in Public Policy http://www.nap.edu/catalog.php?record_id=13460

Simon, H. (1976) Administrative Behavior, 3rd ed. (London: Macmillan)

Stoker, G. (2010) ‘Translating experiments into policy’, The ANNALS of the American Academy of Political and Social Science, 628, 1, 47-58

True, J. L., Jones, B. D. and Baumgartner, F. R. (2007) ‘Punctuated Equilibrium Theory’ in P. Sabatier (ed.) Theories of the Policy Process, 2nd ed (Cambridge, MA: Westview Press)

Weible, C., Heikkila, T., deLeon, P. and Sabatier, P. (2012) ‘Understanding and influencing the policy process’, Policy Sciences, 45, 1, 1–21

 


We are in danger of repeating the same mistakes if we bemoan low attention to ‘facts’

A key theme of some of the early analysis of Brexit is that many voters followed their feelings rather than paying attention to facts*.

For some people, this is just a part of life: to describe decision-making as ‘rational’ is to deny the inevitable use of heuristics, gut feelings, emotions, and deeply held beliefs.

For others, it is indicative of a worrying ‘post-truth politics’, or a new world in which campaigners play fast and loose with evidence and say anything to win, while experts are mistrusted and ignored or excluded from debates, and voters don’t get the facts they need to make informed decisions.

One solution, proposed largely by academics (many of whom are highly critical of the campaigns), is institutional: let’s investigate the abuse of facts during the referendum to help us produce new rules of engagement.

Another is more pragmatic: let’s work out how to maximise the effectiveness of experts and evidence in political debate. So far, we know more about what doesn’t work. For example:

  • Don’t simply supply people with more information when you think they are not paying enough attention to it. Instead, try to work out how they think, to examine how they are likely to demand and interpret information.
  • Don’t just bemoan the tendency of people to accept simple stories that reinforce their biases. Instead, try to work out how to produce evidence-based stories that can compete for attention with those of campaigners.
  • Don’t stop at providing simpler and more accessible information. People might be more likely to read a blog post than a book or lengthy report, but most people are likely to remain blissfully unaware of most academic blogs.

I’m honestly not sure how to tell good stories to capture the public imagination (beyond that time I put the word ‘shite’ in a title) but, for example, we have a lot to learn from traditional media (and from some of the most effective academics who write for them) and from scholars who study story-telling and discourse (although, ironically, discourse analysis is often one of the most jargon-filled areas in the Academy).

We have been here before (in policy studies)

This issue of agenda setting is a key feature in current discussions of (the alleged lack of) evidence-based policymaking. Many academics, in areas such as health and environmental policy, bemoan the inevitability of ‘policy based evidence’. Some express the naïve view that policymakers should think like scientists and/ or that evidence-based policymaking should be more like the idea of evidence-based medicine in which there is a hierarchy of evidence. Others try to work out how they can improve the supply of evidence or set up new institutions to get policymakers to pay more attention to facts.

Yet, a more pragmatic solution is to work out how and why policymakers demand information, and the policymaking context in which they operate. Only then can we produce evidence-based strategies based on how the world works rather than how we would like it to work.

See also:

The Politics of Evidence Based Policymaking: 3 messages

Evidence-based policymaking: lecture and Q&A

‘Evidence-based Policymaking’ and the Study of Public Policy

Paul Cairney (2016) The Politics of Evidence-based Policymaking (London: Palgrave Pivot) PDF

Paul Cairney, Kathryn Oliver, and Adam Wellstead (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review, Early View (forthcoming) DOI:10.1111/puar.12555 PDF

* Then, many people on twitter vented their negative feelings about other people expressing their feelings.


There is no blueprint for evidence-based policy, so what do you do?

In my speech to COPOLAD I began by stating that, although we talk about our hopes for evidence-based policy and policymaking (EBP and EBPM), we don’t really know what it is.

I also argued that EBPM is not like our image of evidence-based medicine (EBM), in which there is a clear idea of: (a) which methods and evidence count, and (b) the main aim, to replace bad interventions with good.

In other words, in EBPM there is no blueprint for action, either in the abstract or in specific cases of learning from good practice.

To me, this point is underappreciated in the study of EBPM: we identify the politics of EBPM, to highlight the pathologies of, and ‘irrational’ side to, policymaking, but we don’t appreciate the more humdrum limits to EBPM even when the political process is healthy and policymakers are fully committed to something more ‘rational’.

Examples from best practice

The examples from our next panel session* demonstrated these limitations to EBPM very well.

The panel contained four examples of impressive policy developments with the potential to outline good practice on the application of public health and harm reduction approaches to drugs policy (including the much-praised Portuguese model).

However, it quickly became apparent that no country-level experience translated into a blueprint for action, for some of the following reasons:

  • It is not always clear what problems policymakers have been trying to solve.
  • It is not always clear how their solutions, in this case, interact with all other relevant policy solutions in related fields.
  • It is difficult to demonstrate clear evidence of success, either before or after the introduction of policies. Instead, most policies are built on initial deductions from relevant evidence, followed by trial-and-error and some evaluations.

In other words, we note routinely the high-level political obstacles to policy emulation, but these examples demonstrate the problems that would still exist even if those initial obstacles were overcome.

A key solution is easier said than done: if providing lessons to others, describe your experience systematically, in a form that sets out the steps to take to turn your model into action (and in a form that can be compared with other experiences). To that end, providers of lessons might note the following (a sketch of such a template follows the list):

  • The problem they were trying to solve (and how they framed it to generate attention, support, and action, within their political systems)
  • The detailed nature of the solution they selected (and the conditions under which it became possible to select that intervention)
  • The evidence they used to guide their initial policies (and how they gathered it)
  • The evidence they collected to monitor the delivery of the intervention, evaluate its impact (was it successful?), and identify cause and effect (why was it successful?)
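A minimal sketch of such a template (in Python), with fields mirroring the four bullet points so that completed templates from different countries can be compared field by field. The field names and the partially filled example are hypothetical.

```python
import copy

# A structured 'lesson' record: one field per bullet point above.
lesson_template = {
    "problem": {
        "definition": "",  # the problem policymakers were trying to solve
        "framing": "",     # how it was framed to generate attention and support
    },
    "solution": {
        "detail": "",      # the detailed nature of the selected solution
        "conditions": "",  # conditions under which its selection became possible
    },
    "initial_evidence": {
        "sources": [],     # evidence used to guide initial policies
        "gathering": "",   # how that evidence was gathered
    },
    "evaluation": {
        "delivery_monitoring": [],  # evidence collected to monitor delivery
        "impact": "",               # was it successful?
        "causation": "",            # why was it successful?
    },
}

# A provider of lessons fills in a deep copy; a reader can then compare
# two countries' completed templates key by key.
example = copy.deepcopy(lesson_template)
example["problem"]["definition"] = "High rates of problematic drug use"
example["problem"]["framing"] = "A public health problem, not a criminal justice problem"
```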

Realistically this is when the process least resembles (the ideal of) EBM because few evaluations of success will be based on a randomised control trial or some equivalent (and other policymakers may not draw primarily on RCT evidence even when it exists).

Instead, as with much harm reduction and prevention policy, a lot of the justification for success will be based on a counterfactual (what would have happened if we did not intervene?), which is itself based on:

(a) the belief that our object of policy is a complex environment containing many ‘wicked problems’, in which the effects of one intervention cannot be separated easily from that of another (which makes it difficult, and perhaps even inappropriate, to rely on RCTs)

(b) an assessment of the unintended consequences of previous (generally more punitive) policies.
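To see how much work the counterfactual does, consider a deliberately naive sketch (Python): project the pre-policy trend forward and treat the gap between projection and observation as the estimated effect. All numbers are invented; the closing comment notes why the projection is heroic.

```python
# Annual harm indicator before the policy (invented numbers).
pre_trend = [100, 104, 108, 112]
# Observed values after the policy change (invented numbers).
observed_post = [110, 109, 107]

# Naive counterfactual: assume the pre-policy linear trend continued.
step = (pre_trend[-1] - pre_trend[0]) / (len(pre_trend) - 1)
counterfactual_post = [pre_trend[-1] + step * (i + 1) for i in range(len(observed_post))]

for year, (observed, projected) in enumerate(zip(observed_post, counterfactual_post), start=1):
    print(f"year +{year}: observed={observed}, projected without policy={projected:.1f}, "
          f"estimated effect={observed - projected:.1f}")

# In a complex environment with 'wicked problems', other policies and
# events also shift the trend, which is why the text treats this kind of
# counterfactual as a belief-laden assessment rather than a clean estimate.
```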

So, the first step to ‘evidence-based policymaking’ is to make a commitment to it. The second is to work out what it is. The third is to do it in a systematic way that allows others to learn from your experience.

The latter may be more political than it looks: few countries (or, at least, the people seeking re-election within them) will want to tell the rest of the world: we innovated and we don’t think it worked.

*I also discuss this problem of evidence-based best practice within single countries

 


The politics of implementing evidence-based policies

This post by me and Kathryn Oliver appeared in the Guardian political science blog on 27.4.16: If scientists want to influence policymaking, they need to understand it. It builds on this discussion of ‘evidence based best practice’ in Evidence and Policy. There is further reading at the end of the post.

Three things to remember when you are trying to close the ‘evidence-policy gap’

Last week, a major new report on The Science of Using Science: Researching the Use of Research Evidence in Decision-Making suggested that there is very limited evidence of ‘what works’ to turn scientific evidence into policy. There are many publications out there on how to influence policy, but few are proven to work.

This is because scientists think about how to produce the best possible evidence rather than how different policymakers use evidence differently in complex policymaking systems (what the report describes as the ‘capability, motivation, and opportunity’ to use evidence). For example, scientists identify, from their perspective, a cultural gap between them and policymakers. This story tells us that we need to overcome differences in the languages used to communicate findings, the timescales to produce recommendations, and the incentives to engage.

This scientist perspective tends to assume that there is one arena in which policymakers and scientists might engage. Yet, the action takes place in many venues at many levels involving many types of policymaker. So, if we view the process from many different perspectives we see new ways in which to understand the use of evidence.

Examples from the delivery of health and social care interventions show us why we need to understand policymaker perspectives. We identify three main issues to bear in mind.

First, we must choose what counts as ‘the evidence’. In some academic disciplines there is a strong belief that some kinds of evidence are better than others: the best evidence is gathered using randomised control trials and accumulated in systematic reviews. In others, these ideas have limited appeal or are rejected outright, in favour of (say) practitioner experience and service user-based feedback as the knowledge on which to base policies. Most importantly, policymakers may not care about these debates; they tend to beg, borrow, or steal information from readily available sources.

Second, we must choose the lengths to which we are prepared to go to ensure that scientific evidence is the primary influence on policy delivery. When we open up the ‘black box’ of policymaking we find a tendency of central governments to juggle many models of government – sometimes directing policy from the centre but often delegating delivery to public, third, and private sector bodies. Those bodies can retain some degree of autonomy during service delivery, often based on governance principles such as ‘localism’ and the need to include service users in the design of public services.

This presents a major dilemma for scientists because policy solutions based on RCTs are likely to come with conditions that limit local discretion. For example, a condition of the UK government’s license of the ‘family nurse partnership’ is that there is ‘fidelity’ to the model, to ensure the correct ‘dosage’ and that an RCT can establish its effect. It contrasts with approaches that focus on governance principles, such as ‘my home life’, in which evidence – as practitioner stories – may or may not be used by new audiences. Policymakers may not care about the profound differences underpinning these approaches, preferring to use a variety of models in different settings rather than use scientific principles to choose between them.

Third, scientists must recognise that these choices are not ours to make. We have our own ideas about the balance between maintaining evidential hierarchies and governance principles, but have no ability to impose these choices on policymakers.

This point has profound consequences for the ways in which we engage in strategies to create impact. A research design to combine scientific evidence and governance seems like a good idea that few pragmatic scientists would oppose. However, this decision does not come close to settling the matter because these compromises look very different when designed by scientists or policymakers.

Take for example the case of ‘improvement science’ in which local practitioners are trained to use evidence to experiment with local pilots and learn and adapt to their experiences. Improvement science-inspired approaches have become very common in health sciences, but in many examples the research agenda is set by research leads and it focuses on how to optimise delivery of evidence-based practice.

In contrast, models such as the Early Years Collaborative reverse this emphasis, using scholarship as one of many sources of information (based partly on scepticism about the practical value of RCTs) and focusing primarily on the assets of practitioners and service users.

Consequently, improvement science appears to offer pragmatic solutions to the gap between divergent approaches, but only because they mean different things to different people. Its adoption is only one step towards negotiating the trade-offs between RCT-driven and story-telling approaches.

These examples help explain why we know so little about how to influence policy. They take us beyond the bland statement – there is a gap between evidence and policy – trotted out whenever scientists try to maximise their own impact. The alternative is to try to understand the policy process, and the likely demand for and uptake of evidence, before working out how to produce evidence that would fit into the process. This different mind-set requires a far more sophisticated knowledge of the policy process than we see in most studies of the evidence-policy gap. Before trying to influence policymaking, we should try to understand it.

Further reading

The initial further reading uses this table to explore three ways in which policymakers, scientists, and other groups have tried to resolve the problems we discuss:

Table 1: Three ideal types of EBBP

  1. This academic journal article (in Evidence and Policy) highlights the dilemmas faced by policymakers when they have to make two choices at once, to decide: (1) what is the best evidence, and (2) how strongly they should insist that local policymakers use it. It uses the case study of the ‘Scottish Approach’ to show that it often seems to favour one approach (‘approach 3’) but actually maintains three approaches. What interests me is the extent to which each approach contradicts the others. We might then consider the cause: is it an explicit decision to ‘let a thousand flowers bloom’ or an unintended outcome of complex government?
  2. I explore some of the scientific  issues in more depth in posts which explore: the political significance of the family nurse partnership (as a symbol of the value of randomised control trials in government), and the assumptions we make about levels of control in the use of RCTs in policy.
  3. For local governments, I outline three ways to gather and use evidence of best practice (for example, on interventions to support prevention policy).
  4. For students and fans of policy theory, I show the links between the use of evidence and policy transfer.

You can also explore these links to discussions of EBPM, policy theory, and specific policy fields such as prevention:

  1. My academic articles on these topics
  2. The Politics of Evidence Based Policymaking
  3. Key policy theories and concepts in 1000 words
  4. Prevention policy

Filed under ESRC Scottish Centre for Constitutional Change, Evidence Based Policymaking (EBPM), public policy

When presenting evidence to policymakers, engage with the policy process that exists, not the process you wish existed

Paul Cairney is Professor of Politics and Public Policy, University of Stirling (p.a.cairney@stir.ac.uk). This post will appear in The Guardian’s Political Science blog. It is based on his book The Politics of Evidence Based Policymaking, launched by the Alliance for Useful Evidence and developed on his EBPM webpage.

‘Evidence-based policymaking’ is now central to the agenda of scientists: academics need to demonstrate that they are making an ‘impact’ on policy, and scientists want to close the ‘evidence-policy gap’. The live debate on energy policy is one of many examples in which scientists bemoan a tendency for policymakers to make ideological rather than ‘evidence-based’ decisions, and seek ways to change their minds.

Yet they will fail if they do not understand how the policy process works. Understanding it requires us to reject two romantic notions: (1) that policymakers will ever think like scientists; and (2) that there is a clearly identifiable point of decision at which scientists can contribute evidence to key policymakers and make a demonstrable impact.

To better understand how policymakers think, we need a full account of ‘bounded rationality’. This phrase partly describes the fact that policymakers can only gather limited information before they make decisions quickly. They will have made a choice before you have a chance to say ‘more research is needed’! To decide quickly, they use two shortcuts: ‘rational’ ways to gather the best evidence on solutions to meet their goals, and ‘irrational’ ways – including drawing on emotions and gut feeling – to identify problems even more quickly.

This insight shows us one potential flaw in academic strategies. The most common response to bounded rationality in scientific articles is to focus on the supply of evidence: develop a hierarchy of evidence which privileges the systematic review of randomised control trials, generate knowledge, and present it in a form that policymakers can understand. Instead, we need to pay more attention to the demand for evidence, which follows lurches of policymaker attention that are often driven by quick and emotional decisions. For example, there is no point in taking the time to make evidence-based solutions easier to understand if policymakers are not (or are no longer) interested. Successful advocates recognise the value of emotional appeals and simple stories in generating attention to a problem.

To identify when and how to contribute evidence, we need to understand the complicated environment in which policy takes place. There is no ‘policy cycle’ in which to inject scientific evidence at the point of decision. Rather, the policy process is messy and often unpredictable, and better described as a complex system in which, for example, the same injection of evidence can have no effect or a major effect. It contains:

  * many actors presenting evidence to influence policymakers in many levels and types of government;
  * networks which are often close-knit and difficult to access, because bureaucracies have operating procedures that favour particular sources of evidence and some participants over others; and
  * a language within policymaking institutions indicating which ways of thinking are in good ‘currency’ (such as ‘value for money’).

Social or economic ‘crises’ can prompt lurches of attention from one issue to another, or even prompt policymakers to change completely the ways in which they understand a policy problem. However, while lurches of attention are common, changes to well-established ways of thinking in government are rare, or take place only in the long term.

This insight shows us a second potential flaw in academic strategies: the idea that research ‘impact’ can be described as a set-piece event, separable from the policy process as a whole. Compare it with the kind of advice – develop a long-term strategy – that we would generate from policy studies: invest the time to find out (a) where the ‘action’ is, and (b) how you can boost your influence as part of a coalition of like-minded actors looking for opportunities to raise attention to problems and push your solutions.

Unfortunately, these insights mostly help us identify what not to do. Further, the alternatives may be difficult to accept (how many scientists would make manipulative or emotional appeals to generate attention to their research?) or to deliver (who has the time to conduct research and seek meaningful influence?). However, by engaging with the practical and ethical dilemmas that the policy process creates for advocates of scientific evidence, we can help produce strategies better suited to the complex real world than to a simple process that we wish existed.


Filed under Evidence Based Policymaking (EBPM), public policy

Policy Concepts in 1000 Words: the intersection between evidence and policy transfer

(podcast download)

We can generate new insights on policymaking by connecting the dots between many separate concepts. However, don’t underestimate the obstacles, or how hard these dot-connecting exercises are to follow. They may seem clear in your head, but describing them (and getting people to go along with your description) is another matter. You need to set out the links clearly and in a series of logical steps. I give one example – the links between evidence and policy transfer – which I have been struggling with for some time.

In this post, I combine three concepts – policy transfer, bounded rationality, and ‘evidence-based policymaking’ – to identify the major dilemmas faced by central government policymakers when they use evidence to identify a successful policy solution and consider how to import it and ‘scale it up’ within their jurisdiction. For example, do they use randomised control trials (RCTs) to establish the effectiveness of interventions and require uniform national delivery (to ensure the correct ‘dosage’), or tell stories of good practice and invite people to learn from them and adapt them to local circumstances? I use these examples to demonstrate that our judgement of what counts as good evidence influences our judgement on the mode of policy transfer.

Insights from each concept

From studies of policy transfer, we know that central governments (a) import policies from other countries and/or (b) encourage the spread (‘diffusion’) of successful policies which originated in regions within their country. But how do they use evidence to identify success and decide how to deliver programmes?

From studies of ‘evidence-based policymaking’ (EBPM), we know that providers of scientific evidence identify an ‘evidence-policy gap’, in which policymakers ignore the evidence of a problem and/or do not select the best evidence-based solution. But can policymakers simply identify the ‘best’ evidence and ‘roll out’ the ‘best’ evidence-based solutions?

From studies of bounded rationality and the policy cycle (compared with alternative theories, such as multiple streams analysis or the advocacy coalition framework), we know that it is unrealistic to think that a policymaker at the heart of government can simply identify then select a perfect solution, click their fingers, and see it carried out. This limitation is more pronounced when we identify multi-level governance, or the diffusion of policymaking power across many levels and types of government. Even if policymakers were not limited by bounded rationality, they would still face: (a) practical limits to their control of the policy process, and (b) a normative dilemma about how far they should seek to control subnational policymaking to ensure the delivery of policy solutions.

The evidence-based policy transfer dilemma

If we combine these insights, we can identify a major policy transfer dilemma for central government policymakers:

  1. Because they are subject to bounded rationality, they need to use shortcuts to identify (what they perceive to be) the best sources of evidence on the policy problem and its solution.
  2. At the same time, they need to determine if there is convincing evidence of success elsewhere, to allow them to: (a) import policy from another country, and/or (b) ‘scale up’ a solution that seems to be successful in one of their own regions.
  3. Then they need to decide how to ‘spread success’, either by (a) ensuring that the best policy is adopted by all regions within their jurisdiction, or by (b) accepting that their role in policy transfer is limited to identifying ‘best practice’ and encouraging subnational governments to adopt particular policies.

Note how closely connected these concerns are: our judgement of the ‘best evidence’ can produce a judgement on how to ‘scale up’ success.

Here are three ideal-type approaches to using evidence to transfer or ‘scale up’ successful interventions. In at least two cases, the choice of ‘best evidence’ seems inextricably linked to the choice of transfer strategy:

Table 1: Three ideal types of EBPM

With approach 1, you gather evidence of effectiveness with reference to a hierarchy of evidence, with systematic reviews and RCTs at the top (see pages 4, 15, 33). This has a knock-on effect for ‘scaling up’: you introduce the same model in each area, requiring ‘fidelity’ to the model to ensure you administer the correct ‘dosage’ and measure its effectiveness with RCTs.
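
As a purely illustrative aside: the evidential step in approach 1 boils down to comparing randomised groups. The sketch below (in Python, with invented numbers; nothing in it comes from the studies discussed here) shows how an RCT yields an effect estimate as the difference in mean outcomes between treatment and control arms, which is why approach 1 insists on ‘fidelity’ and the correct ‘dosage’: the comparison only travels if the same intervention is delivered everywhere.

```python
# A minimal, purely illustrative sketch of how an RCT produces an effect
# estimate. All numbers are invented for illustration; nothing here comes
# from the research discussed in this post.
import random
import statistics

random.seed(1)

# Hypothetical outcome scores. The treatment arm receives the intervention
# delivered with 'fidelity' (the correct 'dosage'); the control arm does not.
control = [random.gauss(50, 10) for _ in range(200)]
treatment = [random.gauss(53, 10) for _ in range(200)]  # built-in 'true' effect of +3

# The RCT effect estimate is the difference in mean outcomes between arms.
effect = statistics.mean(treatment) - statistics.mean(control)

# A rough standard error for that difference, to convey sampling uncertainty.
se = (statistics.variance(treatment) / len(treatment)
      + statistics.variance(control) / len(control)) ** 0.5

print(f"Estimated effect: {effect:.2f} (standard error {se:.2f})")
```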

With approach 2, you reject this hierarchy and place greater value on practitioner and service user testimony. You do not necessarily ‘scale up’. Instead, you identify good practice (or good governance principles) by telling stories based on your experience and inviting other people to learn from them.

With approach 3, you gather evidence of effectiveness from a mix of sources and methods. You seek to ‘scale up’ best practice through local experimentation and continuous data gathering (by practitioners trained in ‘improvement methods’).
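
Again purely as an illustration, approach 3’s ‘continuous data gathering’ often relies on simple tools such as run charts. The sketch below applies a commonly cited run-chart heuristic – six or more consecutive data points on one side of the baseline median suggest a non-random change – to invented local data; the rule is a standard improvement-methods convention, not something prescribed in this post.

```python
# A minimal, purely illustrative sketch of a run-chart 'shift' check, the
# kind of continuous data gathering used in 'improvement methods'. The data
# are invented; the six-point rule is a common run-chart convention.
import statistics

def longest_run_one_side(values, median):
    """Longest run of consecutive points strictly above or strictly below
    the median. Points exactly on the median are skipped, following common
    run-chart conventions."""
    longest = current = 0
    side = None
    for v in values:
        if v == median:
            continue
        above = v > median
        current = current + 1 if above == side else 1
        side = above
        longest = max(longest, current)
    return longest

# Hypothetical weekly counts of missed appointments, before and after a
# local change tested by practitioners.
baseline = [12, 14, 11, 13, 15, 12, 14, 13, 12, 14]
after = [11, 10, 9, 10, 8, 9, 9, 10]

median = statistics.median(baseline)
run = longest_run_one_side(after, median)
print(f"Baseline median: {median}; longest one-sided run after the change: {run}")
if run >= 6:
    print("Shift detected: a non-random change worth investigating locally.")
```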

The comparisons between approaches 1 and 2 (in particular) show us the strong link between a judgement on evidence and transfer. Approach 1 requires particular methods to gather evidence and high policy uniformity when you transfer solutions, while approach 2 places more faith in the knowledge and judgement of practitioners.

Therefore, our choice of what counts as EBPM can determine our policy transfer strategy. Or, a different transfer strategy may – if you adhere to an evidential hierarchy – preclude EBPM.

Further reading

I describe these issues, with concrete examples of each approach, here and in far more depth here:

Evidence-based best practice is more political than it looks: ‘National governments use evidence selectively to argue that a successful policy intervention in one local area should be emulated in others (‘evidence-based best practice’). However, the value of such evidence is always limited because there is: disagreement on the best way to gather evidence of policy success, uncertainty regarding the extent to which we can draw general conclusions from specific evidence, and local policymaker opposition to interventions not developed in local areas. How do governments respond to this dilemma? This article identifies the Scottish Government response: it supports three potentially contradictory ways to gather evidence and encourage emulation’.

Both articles relate to ‘prevention policy’, and the examples (so far) are from my research in Scotland, but in a future paper I’ll try to convince you that the issues are ‘universal’.

Filed under 1000 words, Evidence Based Policymaking (EBPM), Prevention policy, public policy