All going well, it will be out in November 2019. We are now at the proofing stage.
This post provides (a generous amount of) background for my ANZSOG talk, ‘Teaching evidence based policy to fly: transferring sound policies across the world’.
The event’s description sums up key conclusions in the literature on policy learning and policy transfer:
In this post, I connect these conclusions to broader themes in policy studies, which suggest that:
As usual, I suggest that we focus less on how we think we’d like to do it, and more on how people actually do it.
Policy transfer describes the use of evidence about policy in one political system to help develop policy in another. Taken at face value, it sounds like a great idea: why would a government try to reinvent the wheel when another government has shown how to do it?
Therefore, wouldn’t it be nice if I turned up to the lecture, equipped with a ‘blueprint’ for ‘evidence based’ policy transfer, and declared how to do it in a series of realistic and straightforward steps? Unfortunately, there are three main obstacles:
Instead, policy learning is a political process – involving the exercise of power to determine what and how to learn – and it is difficult to separate policy transfer from the wider use of evidence and ideas in policy processes.
Let’s take each of these points in turn, before reflecting on their implications for any X-step guide:
3 reasons why ‘evidence based’ does not describe policymaking
Therefore, it is unclear how one government can, or should, generate evidence of another government’s policy success.
The generation of policy transfer lessons is a highly political process in which actors adapt to this need to prioritise information while competing with each other. They exercise power to: prioritise some information and downplay the rest, define the nature of the policy problem, and evaluate the success of another government’s solutions. There is a strong possibility that policymakers will import policy solutions without knowing if, and why, they were successful.
We should not treat ‘policy transfer’ as separate from the policy process in which policymakers and influencers engage. Rather, the evidence of international experience competes with many other sources of ideas and evidence within a complex policymaking system.
The literature on ‘policy learning’ tells a similar story
Studies of the use of evaluation evidence (perhaps to answer the question: was this policy successful?) have long described policymakers using the research process for many different purposes, from short-term problem-solving and long-term enlightenment, to putting off decisions or using evidence cynically to support an existing policy.
We should therefore reject the temptation to (a) equate ‘policy learning’ with a simplistic process that we might associate with teachers transmitting facts to children, or (b) assume that adults simply change their beliefs when faced with new evidence. Rather, Dunlop and Radaelli describe policy learning as a political process in the following ways:
1. It is collective and rule-bound
Individuals combine cognition and emotion to process information. They do so in organisations, whose rules influence their motive and ability to learn, and in wider systems, in which many actors cooperate and compete to establish the rules of evidence gathering and analysis, and in policymaking environments that constrain or facilitate their action.
2. ‘Evidence based’ is one of several types of policy learning
3. The process can be ‘dysfunctional’: driven by groupthink, limited analysis, and learning how to dominate policymaking, not improve policy.
Their analysis can produce relevant take-home points such as:
What does the literature on transfer tell us?
‘Policy transfer’ can describe a spectrum of activity:
In that context, some of the literature focuses on warning against unsuccessful policy transfer caused by factors such as:
However, other studies highlight some major qualifications:
The use of evidence to spread policy innovation requires a combination of profound political and governance choices
When encouraging policy diffusion within a political system, choices about (a) what counts as ‘good’ evidence of policy success are closely connected to choices about (b) what counts as good governance.
For example, consider these ideal-types or models in table 1:
In one scenario, we begin by relying primarily on RCT evidence (multiple international trials) and import a relatively fixed model, to ensure ‘fidelity’ to a proven intervention and allow us to measure its effect in a new context. This choice of good evidence limits the ability of subnational policymakers to adapt policy to local contexts.
In another scenario, we begin by relying primarily on governance principles, such as respecting local discretion and incorporating practitioner and user experience as important knowledge claims. This choice of governance model relates closely to a broader sense of what counts as good evidence, but also to a more limited ability to evaluate policy success scientifically.
In other words, the political choice to privilege some forms of evidence is difficult to separate from another political choice to privilege the role of one form of government.
Telling a policy transfer story: 11 questions to encourage successful evidence based policy transfer
In that context, these steps to evidence-informed policy transfer serve more to encourage reflection than to provide a blueprint for action. I accept that 11 is less catchy than 10.
Points 1-3 represent the classic and necessary questions from policy studies: (1) ‘what is policy?’, (2) ‘how much did policy change?’, and (3) ‘why?’. Until we have a good answer, we do not know how to draw comparable lessons. Learning from another government’s policy choices is no substitute for learning from more meaningful policy change.
4. Was the project introduced in a country or region which is sufficiently comparable? Comparability can relate to the size and type of country, the nature of the problem, the aims of the borrowing/lending government, and their measures of success.
5. Was it introduced nationwide, or in a region which is sufficiently representative of the national experience (it is not an outlier)?
6. How do we account for the role of scale, and the different cultures and expectations in each policy field?
Points 4-6 inform initial background discussions of case study reports. We need to focus on comparability when describing the context in which the original policy developed. It is not enough to state that two political systems are different. We need to identify the relevance and implications of the differences, from another government’s definition of the problem to the logistics of their task.
7. Has the project been evaluated independently, subject to peer review and/or using measures deemed acceptable to the government?
8. Has the evaluation been of a sufficient period in proportion to the expected outcomes?
9. Are we confident that this project has been evaluated the most favourably – i.e. that our search for relevant lessons has been systematic, based on recognisable criteria (rather than reputations)?
10. Are we identifying ‘Good practice’ based on positive experience, ‘Promising approaches’ based on positive but unsystematic findings, ‘Research-based’ practice based on ‘sound theory informed by a growing body of empirical research’, or ‘Evidence-based’ practice, when ‘the programme or practice has been rigorously evaluated and has consistently been shown to work’?
Points 7-10 raise issues about the relationships between (a) what evidence we should use to evaluate success or potential, and (b) how long we should wait to declare success.
11. What will be the relationship between evidence and governance?
Should we identify the same basic model and transfer it uniformly, tell a qualitative story about the model and invite people to adapt it, or draw pragmatically on an eclectic range of evidential sources and focus on the training of the actors who will implement policy?
Information technology has allowed us to gather a huge amount of policy-relevant information across the globe. However, it has not solved the limitations we face in defining policy problems clearly, gathering evidence on policy solutions systematically, and generating international lessons that we can use to inform domestic policy processes.
This rise in available evidence is not a substitute for policy analysis and political choice. These choices range from how to adjudicate between competing policy preferences, to how to define good evidence and good government. A lack of attention to these wider questions helps explain why – at least from some perspectives – policy transfer seems to fail.
This is a guest post by Claire A. Dunlop and Claudio M. Radaelli, discussing how to use insights from the policy learning literature to think about how to learn effectively, or to adapt to processes of ‘learning’ in policymaking that are more about politics than education. The full paper has been submitted to the Policy and Politics series Practical Lessons from Policy Theories.
We often hear that university researchers are ‘all brains but no common sense’. There is often some truth to this stereotype. The literature on policy learning is an archetypal example of being high in IQ but low on street smarts. Researchers have generated a huge number of ‘policy learning’ taxonomies, concepts and methods without showing what learning can offer policy-makers, citizens and societies.
This is odd because there is a substantial demand and need for practical insights on how to learn. Issues include economic growth, the control of corruption, and improvement in schools and health. Learning organisations range from ‘street level bureaucracies’ to international regulators like the European Union and the World Trade Organization.
To help develop a more practical agenda, we distil three major lessons from the policy learning literature.
There is usually no clear incentive for political actors to learn how to improve public policy. Learning is often the by-product of bargaining, the effort to secure compliance with laws and rules, social participation, or problem-solving under radical uncertainty. This means that in politics we should not assume that politicians, bureaucrats, civil society organizations, and experts interact to improve public policy. Consensus, participation, formal procedures, and social certification are more important.
Therefore, we have to learn how to design incentives so that the by-product of learning is actually generated. Otherwise, few actors will play the game of the policy-making process with learning as their first goal. Learning is all around us, but it appears in different forms, depending on whether the context is (a) bargaining, (b) compliance, (c) participation or (d) problem-solving under conditions of high uncertainty.
(a) Bargaining requires repeated interaction, low barriers to contract and mechanisms of preference aggregation.
(b) Without trust in institutions, compliance is stymied.
(c) Participation needs its own deliberative spaces and a type of participant willing to go beyond the ‘dialogue of the deaf’. Without these two triggers, participation is chaotic, highly conflictual and inefficient.
(d) Expertise is key to problem-solving, but governments should design their advisory committees and special commissions of inquiry by recruiting a broad range of experts. The risk of excluding the next Galileo Galilei in a Ptolemaic committee is always there.
At the same time, there are specific hindrances:
(a) Bargaining stops when the winners are always the same (if you are thinking of Germany and Greece in the European Union you are spot-on).
(b) Hierarchy does not produce efficient compliance unless those at the top know exactly the solution to enforce.
(c) Incommensurable beliefs spoil participatory policy processes. If so, it’s better to switch to open democratic conflict, by counting votes in elections and referenda for example.
(d) Scientific scepticism and low policy capacity mar the work of experts in governmental bodies.
These triggers and hindrances have important lessons for design, perhaps prompting authorities (governments, regulators, public bodies) to switch from one context to another. For example, one can re-design the work of expert committees by including producer and consumer organizations, or by allowing bargaining on the implementation of budgetary rules.
We may get this precious by-product and avoid hindrances and traps, but still… learn the wrong lessons.
Latin America and Africa offer too many examples of diligent pupils who did exactly what they were supposed to do, but in the end implemented the wrong policies. Perfect compliance does not provide breathing space for a policy, and impairs the quality of innovation. We have to balance lay and professional knowledge. Bargaining does not allow us to learn about radical innovations; in some cases only a new participant can really change the nature of the game being played by the usual suspects.
So, whether the problem is learning how to fight organized crime and corruption, or to re-launch growth in Europe and development in Africa, the design of the policy process is crucial. For social actors, our analysis shows when and how they should try to change the nature of the game, or lobby for a re-design of the process. This lesson is often forgotten because social actors fight for a given policy objective, not for the parameters that define who does what and how in the policy process.