All going well, it will be out in November 2019. We are now at the proofing stage.
The context for this workshop is the idea that policy theories could be more helpful to policymakers/ practitioners if we could all communicate more effectively with each other. Academics draw general and relatively abstract conclusions from multiple cases. Practitioners draw very similar conclusions from rich descriptions of direct experience in a smaller number of cases. How can we bring together their insights and use a language that we all understand? Or, more ambitiously, how can we use policy theory-based insights to inform the early career development training that civil servants and researchers receive?
The first step is to translate policy theories into a non-technical language by trying to speak with an audience beyond our immediate peers (see for example Practical Lessons from Policy Theories).
However, translation is not enough. A second crucial step is to consider how policymakers and practitioners are likely to make sense of theoretical insights when they apply them to particular aims or responsibilities. For example:
I discuss these examples below because they are closest to my heart (especially example 1). Note throughout that I am presenting one interpretation of: (1) the most promising insights, and (2) their implications for practice. Other interpretations of the literature and its implications are available. They are just a bit harder to find.
Example 1: the policy cycle endures despite its descriptive inaccuracy
The policy cycle does not describe and explain the policy process well:
Policy theories provide more descriptive and explanatory usefulness. Their insights include:
However, the cycle metaphor endures because:
In that context, we may want to be pragmatic about our advice:
Further reading (blog posts):
Example 2: how to deal with a lack of ‘evidence based policymaking’
I used to read many papers on tobacco policy, with the same basic message: we have the evidence of tobacco harm, and evidence of which solutions work, but there is an evidence-policy gap caused by too-powerful tobacco companies, low political will, and pathological policymaking. These accounts are not informed by theories of policymaking.
I then read Oliver et al’s paper on the lack of policy theory in health/ environmental scholarship on the ‘barriers’ to the use of evidence in policy. Very few articles rely on policy concepts, and most of those that do rely on the policy cycle. This lack of policy theory is clear in their description of possible solutions – better communication, networking, timing, and more science literacy in government – which does not address the need to respond to policymaker psychology and a complex policymaking environment.
Since then, the highest demand to speak about the book has come from government/ public servant, NGO, and scientific audiences outside my discipline. The feedback is generally that: (a) the book’s description sums up their experience of engagement with the policy process, and (b) maybe it opens up discussion about how to engage more effectively.
But how exactly do we turn empirical descriptions of policymaking into practical advice?
For example, scientist/ researcher audiences want to know the answer to a question like: Why don’t policymakers listen to your evidence? and so I focus on three conversation starters:
We can then consider many possible responses in the sequel What can you do when policymakers ignore your evidence?
Example 3: how to encourage realistic evidence-informed policy transfer
This focus on EBPM is useful context for discussions of ‘policy learning’ and ‘policy transfer’, and it was the focus of my ANZSOG talk entitled (rather ambitiously) ‘teaching evidence-based policy to fly’.
I’ve taken a personal interest in this one because I’m part of a project – called IMAJINE – in which we have to combine academic theory and practical responses. We are trying to share policy solutions across Europe rather than explain why few people share them!
For me, the context is potentially overwhelming:
So, when we start to focus on sharing lessons, we will have three things to discover:
To be honest, when one of our external assessors asked me how well I thought I would do, we both smiled because the answer may be ‘not very’. In other words, the most practical lesson may be the hardest to take, although I find it comforting: the literature suggests that policymakers might ignore you for 20 years then suddenly become very (but briefly) interested in your work.
The slides are a bit wonky because I combined my old ppt to the Scottish Government with a new one for UNSW: Paul Cairney ANU Policy practical 22 October 2018
I wanted to compare how I describe things to (1) civil servants, (2) practitioners/ researchers, and (3) me, but who has the time/ desire to listen to 3 powerpoints in one go? If the answer is you, let me know and we’ll set up a Zoom call.
This post provides (a generous amount of) background for my ANZSOG talk Teaching evidence based policy to fly: transferring sound policies across the world.
The event’s description sums up key conclusions in the literature on policy learning and policy transfer:
In this post, I connect these conclusions to broader themes in policy studies, which suggest that:
As usual, I suggest that we focus less on how we think we’d like to do it, and more on how people actually do it.
Policy transfer describes the use of evidence about policy in one political system to help develop policy in another. Taken at face value, it sounds like a great idea: why would a government try to reinvent the wheel when another government has shown how to do it?
Therefore, wouldn’t it be nice if I turned up to the lecture, equipped with a ‘blueprint’ for ‘evidence based’ policy transfer, and declared how to do it in a series of realistic and straightforward steps? Unfortunately, there are three main obstacles:
Instead, policy learning is a political process – involving the exercise of power to determine what and how to learn – and it is difficult to separate policy transfer from the wider use of evidence and ideas in policy processes.
Let’s take each of these points in turn, before reflecting on their implications for any X-step guide:
3 reasons why ‘evidence based’ does not describe policymaking
Therefore, it is unclear how one government can, or should, generate evidence of another government’s policy success.
The generation of policy transfer lessons is a highly political process in which actors adapt to this need to prioritise information while competing with each other. They exercise power to: prioritise some information and downplay the rest, define the nature of the policy problem, and evaluate the success of another government’s solutions. There is a strong possibility that policymakers will import policy solutions without knowing if, and why, they were successful.
We should not treat ‘policy transfer’ as separate from the policy process in which policymakers and influencers engage. Rather, the evidence of international experience competes with many other sources of ideas and evidence within a complex policymaking system.
The literature on ‘policy learning’ tells a similar story
Studies of the use of evaluation evidence (perhaps to answer the question: was this policy successful?) have long described policymakers using the research process for many different purposes, from short-term problem-solving and long-term enlightenment, to putting off decisions or using evidence cynically to support an existing policy.
We should therefore reject the temptation to (a) equate ‘policy learning’ with a simplistic process that we might associate with teachers transmitting facts to children, or (b) assume that adults simply change their beliefs when faced with new evidence. Rather, Dunlop and Radaelli describe policy learning as a political process in the following ways:
1. It is collective and rule-bound
Individuals combine cognition and emotion to process information, within organisations whose rules influence their motive and ability to learn, and within wider systems in which many actors cooperate and compete to establish the rules of evidence gathering and analysis, or policymaking environments that constrain or facilitate their action.
2. ‘Evidence based’ is one of several types of policy learning
3. The process can be ‘dysfunctional’: driven by groupthink, limited analysis, and learning how to dominate policymaking, not improve policy.
Their analysis can produce relevant take-home points such as:
What does the literature on transfer tell us?
‘Policy transfer’ can describe a spectrum of activity:
In that context, some of the literature focuses on warning against unsuccessful policy transfer caused by factors such as:
However, other studies highlight some major qualifications:
The use of evidence to spread policy innovation involves profound political and governance choices
When encouraging policy diffusion within a political system, choices about (a) what counts as ‘good’ evidence of policy success are closely connected to choices about (b) what counts as good governance.
For example, consider these ideal-types or models in table 1:
In one scenario, we begin by relying primarily on RCT evidence (multiple international trials) and import a relatively fixed model, to ensure ‘fidelity’ to a proven intervention and allow us to measure its effect in a new context. This choice of good evidence limits the ability of subnational policymakers to adapt policy to local contexts.
In another scenario, we begin by relying primarily on governance principles, such as respect for local discretion and the incorporation of practitioner and user experience as important knowledge claims. This choice of governance model relates closely to a broader sense of what counts as good evidence, but also a more limited ability to evaluate policy success scientifically.
In other words, the political choice to privilege some forms of evidence is difficult to separate from another political choice to privilege the role of one form of government.
Telling a policy transfer story: 11 questions to encourage successful evidence based policy transfer
In that context, these steps to evidence-informed policy transfer serve more to encourage reflection than provide a blueprint for action. I accept that 11 is less catchy than 10.
Points 1-3 represent the classic and necessary questions from policy studies: (1) ‘what is policy?’ (2) ‘how much did policy change?’ and (3) why? Until we have a good answer, we do not know how to draw comparable lessons. Learning from another government’s policy choices is no substitute for learning from more meaningful policy change.
4. Was the project introduced in a country or region which is sufficiently comparable? Comparability can relate to the size and type of country, the nature of the problem, the aims of the borrowing/ lending government and their measures of success.
5. Was it introduced nationwide, or in a region which is sufficiently representative of the national experience (it is not an outlier)?
6. How do we account for the role of scale, and the different cultures and expectations in each policy field?
Points 4-6 inform initial background discussions of case study reports. We need to focus on comparability when describing the context in which the original policy developed. It is not enough to state that two political systems are different. We need to identify the relevance and implications of the differences, from another government’s definition of the problem to the logistics of their task.
7. Has the project been evaluated independently, subject to peer review and/ or using measures deemed acceptable to the government?
8. Has the evaluation been of a sufficient period in proportion to the expected outcomes?
9. Are we confident that this project has been evaluated the most favourably – i.e. that our search for relevant lessons has been systematic, based on recognisable criteria (rather than reputations)?
10. Are we identifying ‘Good practice’ based on positive experience, ‘Promising approaches’ based on positive but unsystematic findings, ‘Research–based’ or based on ‘sound theory informed by a growing body of empirical research’, or ‘Evidence–based’, when ‘the programme or practice has been rigorously evaluated and has consistently been shown to work’?
Points 7-10 raise issues about the relationships between (a) what evidence we should use to evaluate success or potential, and (b) how long we should wait to declare success.
11. What will be the relationship between evidence and governance?
Should we identify the same basic model and transfer it uniformly, tell a qualitative story about the model and invite people to adapt it, or focus pragmatically on an eclectic range of evidential sources and focus on the training of the actors who will implement policy?
Information technology has allowed us to gather a huge amount of policy-relevant information across the globe. However, it has not solved the limitations we face in defining policy problems clearly, gathering evidence on policy solutions systematically, and generating international lessons that we can use to inform domestic policy processes.
This rise in available evidence is not a substitute for policy analysis and political choice. These choices range from how to adjudicate between competing policy preferences, to how to define good evidence and good government. A lack of attention to these wider questions helps explain why – at least from some perspectives – policy transfer seems to fail.
In my speech to COPOLAD I began by stating that, although we talk about our hopes for evidence-based policy and policymaking (EBP and EBPM), we don’t really know what it is.
I also argued that EBPM is not like our image of evidence-based medicine (EBM), in which there is a clear idea of: (a) which methods/ evidence counts, and (b) the main aim, to replace bad interventions with good.
In other words, in EBPM there is no blueprint for action, either in the abstract or in specific cases of learning from good practice.
To me, this point is underappreciated in the study of EBPM: we identify the politics of EBPM to highlight the pathologies of, or ‘irrational’ side to, policymaking, but we don’t appreciate the more humdrum limits to EBPM even when the political process is healthy and policymakers are fully committed to something more ‘rational’.
Examples from best practice
The examples from our next panel session* demonstrated these limitations to EBPM very well.
The panel contained four examples of impressive policy developments with the potential to outline good practice on the application of public health and harm reduction approaches to drugs policy (including the much-praised Portuguese model).
However, it quickly became apparent that no country-level experience translated into a blueprint for action, for some of the following reasons:
In other words, we note routinely the high-level political obstacles to policy emulation, but these examples demonstrate the problems that would still exist even if those initial obstacles were overcome.
A key solution is easier said than done: if providing lessons to others, describe them systematically, in a form that sets out the steps required to turn the model into action (and in a form that we can compare with other experiences). To that end, providers of lessons might note:
Realistically this is when the process least resembles (the ideal of) EBM because few evaluations of success will be based on a randomised control trial or some equivalent (and other policymakers may not draw primarily on RCT evidence even when it exists).
Instead, as with much harm reduction and prevention policy, a lot of the justification for success will be based on a counterfactual (what would have happened if we did not intervene?), which is itself based on:
(a) the belief that our object of policy is a complex environment containing many ‘wicked problems’, in which the effects of one intervention cannot be separated easily from that of another (which makes it difficult, and perhaps even inappropriate, to rely on RCTs)
(b) an assessment of the unintended consequence of previous (generally more punitive) policies.
So, the first step to ‘evidence-based policymaking’ is to make a commitment to it. The second is to work out what it is. The third is to do it in a systematic way that allows others to learn from your experience.
The latter may be more political than it looks: few countries (or, at least, the people seeking re-election within them) will want to tell the rest of the world: we innovated and we don’t think it worked.
*I also discuss this problem of evidence-based best practice within single countries
Well, it’s really a set of messages, geared towards slightly different audiences, and summed up by this table:
Further reading (links):
We can generate new insights on policymaking by connecting the dots between many separate concepts. However, don’t underestimate the obstacles, or how hard these dot-connecting exercises are for others to follow. They may seem clear in your head, but describing them (and getting people to go along with your description) is another matter. You need to set out these links clearly and in a set of logical steps. I give one example – of the links between evidence and policy transfer – which I have been struggling with for some time.
In this post, I combine three concepts – policy transfer, bounded rationality, and ‘evidence-based policymaking’ – to identify the major dilemmas faced by central government policymakers when they use evidence to identify a successful policy solution and consider how to import it and ‘scale it up’ within their jurisdiction. For example, do they use randomised control trials (RCTs) to establish the effectiveness of interventions and require uniform national delivery (to ensure the correct ‘dosage’), or tell stories of good practice and invite people to learn and adapt to local circumstances? I use these examples to demonstrate that our judgement of good evidence influences our judgement on the mode of policy transfer.
Insights from each concept
From studies of policy transfer, we know that central governments (a) import policies from other countries and/ or (b) encourage the spread (‘diffusion’) of successful policies which originated in regions within their country: but how do they use evidence to identify success and decide how to deliver programs?
From studies of ‘evidence-based policymaking’ (EBPM), we know that providers of scientific evidence identify an ‘evidence-policy gap’ in which policymakers ignore the evidence of a problem and/ or do not select the best evidence-based solution: but can policymakers simply identify the ‘best’ evidence and ‘roll-out’ the ‘best’ evidence-based solutions?
From studies of bounded rationality and the policy cycle (compared with alternative theories, such as multiple streams analysis or the advocacy coalition framework), we know that it is unrealistic to think that a policymaker at the heart of government can simply identify then select a perfect solution, click their fingers, and see it carried out. This limitation is more pronounced when we identify multi-level governance, or the diffusion of policymaking power across many levels and types of government. Even if they were not limited by bounded rationality, they would still face: (a) practical limits to their control of the policy process, and (b) a normative dilemma about how far you should seek to control subnational policymaking to ensure the delivery of policy solutions.
The evidence-based policy transfer dilemma
If we combine these insights we can identify a major policy transfer dilemma for central government policymakers:
Note how closely connected these concerns are: our judgement of the ‘best evidence’ can produce a judgement on how to ‘scale up’ success.
Here are three ideal-type approaches to using evidence to transfer or ‘scale up’ successful interventions. In at least two cases, the choice of ‘best evidence’ seems linked inextricably to the choice of transfer strategy:
With approach 1, you gather evidence of effectiveness with reference to a hierarchy of evidence, with systematic reviews and RCTs at the top (see pages 4, 15, 33). This has a knock-on effect for ‘scaling up’: you introduce the same model in each area, requiring ‘fidelity’ to the model to ensure you administer the correct ‘dosage’ and measure its effectiveness with RCTs.
With approach 2, you reject this hierarchy and place greater value on practitioner and service user testimony. You do not necessarily ‘scale up’. Instead, you identify good practice (or good governance principles) by telling stories based on your experience and inviting other people to learn from them.
With approach 3, you gather evidence of effectiveness based on a mix of evidence. You seek to ‘scale up’ best practice through local experimentation and continuous data gathering (by practitioners trained in ‘improvement methods’).
The comparisons between approaches 1 and 2 (in particular) show us the strong link between a judgement on evidence and transfer. Approach 1 requires particular methods to gather evidence and high policy uniformity when you transfer solutions, while approach 2 places more faith in the knowledge and judgement of practitioners.
Therefore, our choice of what counts as EBPM can determine our policy transfer strategy. Or, a different transfer strategy may – if you adhere to an evidential hierarchy – preclude EBPM.
I describe these issues, with concrete examples of each approach here, and in far more depth here:
Evidence-based best practice is more political than it looks: ‘National governments use evidence selectively to argue that a successful policy intervention in one local area should be emulated in others (‘evidence-based best practice’). However, the value of such evidence is always limited because there is: disagreement on the best way to gather evidence of policy success, uncertainty regarding the extent to which we can draw general conclusions from specific evidence, and local policymaker opposition to interventions not developed in local areas. How do governments respond to this dilemma? This article identifies the Scottish Government response: it supports three potentially contradictory ways to gather evidence and encourage emulation’.
Both articles relate to ‘prevention policy’ and the examples (so far) are from my research in Scotland, but in a future paper I’ll try to convince you that the issues are ‘universal’.
‘Policy learning’ describes the use of knowledge to inform policy decisions. That knowledge can be based on information regarding the current problem, lessons from the past, or lessons from the experience of others. This is a political, not technical or objective, process (for example, see the ACF post). ‘Policy transfer’ describes the transfer of policy solutions or ideas from one place to another, such as one government importing a policy from another country (note related terms such as ‘lesson-drawing’, ‘policy diffusion’ and ‘policy convergence’ – transfer is a catch-all, umbrella, term). Although these terms can be very closely related (one would hope that a government learns from the experiences of another before transferring policy), they can also operate relatively independently. For example, a government may decide not to transfer policy after learning from the experience of another, or it may transfer (or ‘emulate’) without really understanding why the exporting country had a successful experience (see the post on bounded rationality). Here are some major examples:
It is a topic that lends itself well to practical advice; the ‘how to’ of policymaking. For example, Richard Rose’s ‘practical guide’ explores 10 steps:
The descriptive/ empirical side asks these sorts of questions:
From where are lessons drawn? In the US, the diffusion literature examines which states tend to innovate or emulate. Some countries are also known as innovators in certain fields – such as Sweden on the social democratic state, Germany on inflation control and the UK on privatization. The US (or its states) tends to be a major exporter of ideas. Some countries consistently learn from the same source (such as the UK from the US). Studies tend to highlight the reasons for borrowing from certain countries – for example, they share an ideology, common problems or policy conditions. ‘Globalization’ has also reduced practical barriers to learning between countries.
Who is involved? Apart from the usual suspects (elected officials, civil servants, interest groups), we can identify the role of federal governments (for states), international organizations (for countries), ‘policy entrepreneurs’ (who use their experience in one country to sell that policy to another – such as the Harvard Business School professor travelling the world selling ‘new public management’), international networks of experts (who feed up ideas to their national governments), multinational corporations (who encourage the ‘race to the bottom’, or the reduction of taxes and regulations in many countries), and other countries (such as the US).
Why transfer? Is transfer voluntary? The Dolowitz/ Marsh continuum sums up the idea that some forms of transfer are more voluntary than others. ‘Lesson-drawing’ is about learning from another country’s experience without much pressure (see the book to explain why I scribbled out some of the text!). At the other end is coercion. They place ‘conditionality’ near that end of the spectrum, since the idea is that countries desperate to borrow money from the International Monetary Fund feel they have no choice but to accept the IMF’s conditions – which usually involve reducing the role/ size of the state (although note the difference between agreeing to those conditions and meeting them). ‘Obligated transfer’ is further to the left because, for example, member states sign up to be influenced by EU institutions. Indirect coercion describes countries that feel they have to follow the lead of others, simply to ‘keep up’ or to respond to the ‘externalities’ or ‘spillovers’ of the policies of another country (these effects are often felt most by small countries which share a border with larger countries).
What is transferred? How much is transferred? Transfer can range from the decision to completely duplicate the substantive aims and institutions associated with a major policy change, taking decades to complete, to vague inspiration (or a very quick decision not to emulate and, instead, to learn ‘negative lessons’). It can also be a cover for something you planned to do anyway – ‘international experience’ is a great selling point.
What determines the likelihood and success of policy transfer? For an importing government to be successful, it should study the exporting country’s policy – and political system – enough to know what made it a success and if that success is transferable. Often, this is not done (governments may emulate without being particularly diligent) or it is not possible, since the policy may only work under particular circumstances (and we may not always know what those circumstances are). Much also depends on the implementation of policy, particularly when the transfer is encouraged by one organization and accepted reluctantly by another (such as when the EU, with limited enforcement powers, puts pressure on recalcitrant member states).
These questions are best asked alongside the general questions we explore in policymaking studies, including:
I explore these issues (and Rose’s advice) in a paper examining what Japan can learn from the UK’s experience of regionalism. It includes a discussion (summarised from Keating et al) of the extent to which policy converges in a devolved UK and how much of that we can attribute to transfer and/ or learning: