Tag Archives: policy learning

Understanding Public Policy 2nd edition

All going well, it will be out in November 2019. We are now at the proofing stage.

I have included below the summaries of the chapters (and each chapter should also have its own entry (or multiple entries) in the 1000 Words and 500 Words series).

[Images: 2nd edition cover and chapter summaries, chapters 1–13]


Filed under 1000 words, 500 words, agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, public policy

Policy in 500 Words: The advocacy coalition framework

Here is the ACF story.

People engage in politics to turn their beliefs into policy. They form advocacy coalitions with people who share their beliefs, and compete with other coalitions. The action takes place within a subsystem devoted to a policy issue, and a wider policymaking process that provides constraints and opportunities to coalitions.

The policy process contains multiple actors and levels of government. It displays a mixture of intensely politicized disputes and routine activity. There is much uncertainty about the nature and severity of policy problems. The full effects of policy may be unclear for over a decade. The ACF sums it up in the following diagram:

[Image: ACF flow diagram]

Policy actors use their beliefs to understand, and seek influence in, this world. Beliefs about how to interpret the cause of and solution to policy problems, and the role of government in solving them, act as a glue to bind actors together within coalitions.

If the policy issue is technical and humdrum, there may be room for routine cooperation. If the issue is highly charged, then people romanticise their own cause and demonise their opponents.

The outcome is often long-term policymaking stability and policy continuity because the ‘core’ beliefs of coalitions are unlikely to shift and one coalition may dominate the subsystem for long periods.

There are two main sources of change.

  1. Coalitions engage in policy learning to remain competitive and adapt to new information about policy. This process often produces minor change because coalitions learn on their own terms. They learn how to retain their coalition’s strategic advantage and use the information they deem most relevant.
  2. ‘Shocks’ affect the positions of coalitions within subsystems. Shocks are the combination of events and coalition responses. External shocks are prompted by events including the election of a new government with different ideas, or the effect of socioeconomic change. Internal shocks are prompted by policy failure. Both may prompt major change as members of one coalition question their beliefs in the light of new evidence. Or, another coalition may adapt more readily to its new policy environment and exploit events to gain competitive advantage.

The ACF began as the study of US policymaking, focusing largely on environmental issues. It has changed markedly to reflect the widening of ACF scholarship to new policy areas, political systems, and methods.

For example, the flow diagram’s reference to the political system’s long-term coalition opportunity structures is largely a response to insights from comparative international studies:

  • A focus on the ‘degree of consensus needed for major policy change’ reflects applications in Europe that highlighted the importance of proportional electoral systems.
  • A focus on the ‘openness of the political system’ partly reflects applications to countries without free and fair elections, and/or systems that do not allow people to come together easily as coalitions to promote policy change.

As such, like all theories in this series, the ACF discusses elements that it would treat as (a) universally applicable, such as the use of beliefs to address bounded rationality, and (b) context-specific, such as the motive and opportunity of specific people to organize collectively to translate their beliefs into policy.

See also:

The 500 and 1000 Words series

Why Advocacy Coalitions Matter and How to Think about Them

Three lessons from a comparison of fracking policy in the UK and Switzerland

Bonus material

Scottish Independence and the Devil Shift

Image source: Weible, Heikkila, Ingold, and Fischer (2016: 6)

Filed under 500 words, public policy

Teaching evidence based policy to fly: how to deal with the politics of policy learning and transfer

This post provides (a generous amount of) background for my ANZSOG talk Teaching evidence based policy to fly: transferring sound policies across the world.

The event’s description sums up key conclusions in the literature on policy learning and policy transfer:

  1. technology and ‘entrepreneurs’ help ideas spread internationally, and domestic policymakers can use them to be more informed about global policy innovation, but
  2. there can be major unintended consequences to importing ideas, such as the adoption of policy solutions with poorly-evidenced success, or a broader sense of failed transfer caused by factors such as a poor fit between the aims of the exporter/importer.

In this post, I connect these conclusions to broader themes in policy studies, which suggest that:

  1. policy learning and policy transfer are political processes, not ‘rational’ or technical searches for information
  2. the use of evidence to spread policy innovation requires two interconnected choices: what counts as good evidence, and what role central governments should play.
  3. the following ‘11 question guide’ to evidence based policy transfer serves more as a way to reflect than as a blueprint for action.

As usual, I suggest that we focus less on how we think we’d like to do it, and more on how people actually do it.


Policy transfer describes the use of evidence about policy in one political system to help develop policy in another. Taken at face value, it sounds like a great idea: why would a government try to reinvent the wheel when another government has shown how to do it?

Therefore, wouldn’t it be nice if I turned up to the lecture, equipped with a ‘blueprint’ for ‘evidence based’ policy transfer, and declared how to do it in a series of realistic and straightforward steps? Unfortunately, there are three main obstacles:

  1. ‘Evidence based’ is a highly misleading description of the use of information in policy.
  2. To transfer a policy blueprint completely, in this manner, would require all places and contexts to be the same, and for the policy process to be technocratic and apolitical.
  3. There are general academic guides on how to learn lessons from others systematically – such as Richard Rose’s ‘practical guide’  – but most academic work on learning and transfer does not suggest that policymakers follow this kind of advice.

[Image: Richard Rose’s 10 lessons for policy transfer]

Instead, policy learning is a political process – involving the exercise of power to determine what and how to learn – and it is difficult to separate policy transfer from the wider use of evidence and ideas in policy processes.

Let’s take each of these points in turn, before reflecting on their implications for any X-step guide:

3 reasons why ‘evidence based’ does not describe policymaking

In a series of ANZSOG talks on ‘evidence based policymaking’ (EBPM), I describe three main factors, all of which are broadly relevant to transfer:

  1. There are many forms of policy-relevant evidence and few policymakers adhere to a strict ‘hierarchy’ of knowledge.

Therefore, it is unclear how one government can, or should, generate evidence of another government’s policy success.

  2. Policymakers must find ways to ignore most evidence – such as by combining ‘rational’ and ‘irrational’ cognitive shortcuts – to be able to act quickly.

The generation of policy transfer lessons is a highly political process in which actors adapt to this need to prioritise information while competing with each other. They exercise power to: prioritise some information and downplay the rest, define the nature of the policy problem, and evaluate the success of another government’s solutions. There is a strong possibility that policymakers will import policy solutions without knowing if, and why, they were successful.

  3. They do not control the policy process in which they engage.

We should not treat ‘policy transfer’ as separate from the policy process in which policymakers and influencers engage. Rather, the evidence of international experience competes with many other sources of ideas and evidence within a complex policymaking system.

The literature on ‘policy learning’ tells a similar story

Studies of the use of evaluation evidence (perhaps to answer the question: was this policy successful?) have long described policymakers using the research process for many different purposes, from short term problem-solving and long-term enlightenment, to putting off decisions or using evidence cynically to support an existing policy.

We should therefore reject the temptation to (a) equate ‘policy learning’ with a simplistic process that we might associate with teachers transmitting facts to children, or (b) assume that adults simply change their beliefs when faced with new evidence. Rather, Dunlop and Radaelli describe policy learning as a political process in the following ways:

1. It is collective and rule-bound

Individuals combine cognition and emotion to process information, within organisations whose rules influence their motive and ability to learn, and within wider systems in which many actors cooperate and compete to establish the rules of evidence gathering and analysis, or in policymaking environments that constrain or facilitate their action.

2. ‘Evidence based’ is one of several types of policy learning

  • Epistemic. Learning occurs primarily via scientific experts transmitting knowledge to policymakers.
  • Reflection. Open dialogue to incorporate diverse forms of knowledge and encourage cooperation.
  • Bargaining. Actors learn how to cooperate and compete effectively.
  • Hierarchy. Actors with authority learn how to impose their aims; others learn the limits to their discretion.

3. The process can be ‘dysfunctional’: driven by groupthink, limited analysis, and learning how to dominate policymaking, not improve policy.

Their analysis can produce relevant take-home points such as:

  • Experts will be ineffective if they assume that policy learning is epistemic. The assumption will leave them ill-prepared to deal with bargaining.
  • There is more than one legitimate way to learn, such as via deliberative processes that incorporate more perspectives and forms of knowledge.

What does the literature on transfer tell us?

‘Policy transfer’ can describe a spectrum of activity:

  • driven voluntarily, by a desire to learn from the story of another government’s policy success. In such cases, importers use shortcuts to learning, such as by restricting their search to systems with which they have something in common (such as geography or ideology), learning via intermediaries such as ‘entrepreneurs’, or limiting their searches for evidence of success.
  • driven by various forms of pressure, including encouragement by central (or supranational) governments, international norms or agreements, ‘spillover’ effects causing one system to respond to innovation by another, or demands by businesses to minimise the cost of doing business.

In that context, some of the literature focuses on warning against unsuccessful policy transfer caused by factors such as:

  • Failing to generate or use enough evidence on what made the initial policy successful
  • Failing to adapt that policy to local circumstances
  • Failing to back policy change with sufficient resources

However, other studies highlight some major qualifications:

  • If the process is about using ideas about one system to inform another, our attention may shift from ‘transfer’ to ‘translation’ or ‘transformation’, and the idea of ‘successful transfer’ makes less sense
  • Transfer success is not the same as implementation success, which depends on a wider range of factors
  • Nor is it the same as ‘policy success’, which can be assessed by a mix of questions to reflect political reality: did it make the government more re-electable, was the process of change relatively manageable, and did it produce intended outcomes?

The use of evidence to spread policy innovation requires a combination of profound political and governance choices

When encouraging policy diffusion within a political system, choices about (a) what counts as ‘good’ evidence of policy success connect strongly to choices about (b) what counts as good governance.

For example, consider these ideal-types or models in table 1:

[Table 1: Three ideal types of EBPM]

In one scenario, we begin by relying primarily on evidence from randomised controlled trials (RCTs) – multiple international trials – and import a relatively fixed model, to ensure ‘fidelity’ to a proven intervention and allow us to measure its effect in a new context. This choice of good evidence limits the ability of subnational policymakers to adapt policy to local contexts.

In another scenario, we begin by relying primarily on governance principles, such as respecting local discretion and incorporating practitioner and user experience as important knowledge claims. The choice of governance model relates closely to a broader sense of what counts as good evidence, but also a more limited ability to evaluate policy success scientifically.

In other words, the political choice to privilege some forms of evidence is difficult to separate from another political choice to privilege the role of one form of government.

Telling a policy transfer story: 11 questions to encourage successful evidence based policy transfer  

In that context, these steps to evidence-informed policy transfer serve more to encourage reflection than provide a blueprint for action. I accept that 11 is less catchy than 10.

  1. What problem did policymakers say they were trying to solve, and why?
  2. What solution(s) did they produce?
  3. Why?

Points 1-3 represent the classic and necessary questions from policy studies: (1) ‘what is policy?’ (2)  ‘how much did policy change?’ and (3) why? Until we have a good answer, we do not know how to draw comparable lessons. Learning from another government’s policy choices is no substitute for learning from more meaningful policy change.

4. Was the project introduced in a country or region which is sufficiently comparable? Comparability can relate to the size and type of country, the nature of the problem, the aims of the borrowing/ lending government and their measures of success.

5. Was it introduced nationwide, or in a region which is sufficiently representative of the national experience (it is not an outlier)?

6. How do we account for the role of scale, and the different cultures and expectations in each policy field?

Points 4-6 inform initial background discussions of case study reports. We need to focus on comparability when describing the context in which the original policy developed. It is not enough to state that two political systems are different. We need to identify the relevance and implications of the differences, from another government’s definition of the problem to the logistics of their task.

7. Has the project been evaluated independently, subject to peer review and/ or using measures deemed acceptable to the government?

8. Has the evaluation been of a sufficient period in proportion to the expected outcomes?

9. Are we confident that this project has performed the most favourably in evaluation – i.e. that our search for relevant lessons has been systematic, based on recognisable criteria (rather than reputations)?

10. Are we identifying ‘Good practice’ based on positive experience, ‘Promising approaches’ based on positive but unsystematic findings, ‘Research–based’ or based on ‘sound theory informed by a growing body of empirical research’, or ‘Evidence–based’, when ‘the programme or practice has been rigorously evaluated and has consistently been shown to work’?

Points 7-10 raise issues about the relationships between (a) what evidence we should use to evaluate success or potential, and (b) how long we should wait to declare success.

11. What will be the relationship between evidence and governance?

Should we identify the same basic model and transfer it uniformly, tell a qualitative story about the model and invite people to adapt it, or focus pragmatically on an eclectic range of evidential sources and focus on the training of the actors who will implement policy?

In conclusion

Information technology has allowed us to gather a huge amount of policy-relevant information across the globe. However, it has not solved the limitations we face in defining policy problems clearly, gathering evidence on policy solutions systematically, and generating international lessons that we can use to inform domestic policy processes.

This rise in available evidence is not a substitute for policy analysis and political choice. These choices range from how to adjudicate between competing policy preferences, to how to define good evidence and good government. A lack of attention to these wider questions helps explain why – at least from some perspectives – policy transfer seems to fail.

Paul Cairney Auckland Policy Transfer 12.10.18

Filed under Evidence Based Policymaking (EBPM), Policy learning and transfer

Epistemic versus bargaining-driven policy learning

There is an excellent article by Professor Claire Dunlop called “The irony of epistemic learning: epistemic communities, policy learning and the case of Europe’s hormones saga” (Open Access). It uses the language of ‘policy learning’ rather than ‘evidence based policymaking’, but these descriptions are closely related. I describe it below, in the form I’ll use in the 2nd ed of Understanding Public Policy (it will be Box 12.2).

Dunlop (2017c) uses a case study – EU policy on the supply of growth hormones to cattle – to describe the ‘irony of epistemic learning’. It occurs in two initial steps.

First, a period of epistemic learning allowed scientists to teach policymakers the key facts on a newly emerging policy issue. The scientists, trusted to assess risk, engaged in the usual processes associated with scientific work: gathering evidence to reduce uncertainty, but always expressing the need for continuous research to address inevitable uncertainty in some cases. The ‘Lamming’ committee of experts commissioned and analysed scientific evidence comprehensively before reporting (a) that the use of ‘naturally occurring’ hormones in livestock was low risk for human consumers if administered according to regulations and guidance, but (b) that it wanted more time to analyse the carcinogenic effects of two ‘synthetic compounds’ (2017c: 224).

Second, a period of bargaining changed the context. EU officials (in DG Agriculture) responded to European Parliament concerns, fuelled by campaigning from consumer groups, which focused on uncertainty and worst-case scenarios. Officials suspended the committee’s deliberations before it was due to report and banned the use of growth hormones in the EU (and the importation of relevant meat).

The irony is two-fold.

First, it results from the combination of processes: scientists, operating in epistemic mode, described low risk but some uncertainty; and policymakers, operating in bargaining mode, used this sense of uncertainty to reject scientific advice.

Second, scientists were there to help policymakers learn about the evidence, but were themselves unable to learn about how to communicate and form wider networks within a political system characterised by periods of bargaining-driven policy learning.



Filed under Evidence Based Policymaking (EBPM), Policy learning and transfer

Policy concepts in 1000 words: Institutional memory

Guest post by Jack Corbett, Dennis Grube, Heather Lovell and Rodney Scott

Democratic governance is defined by the regular rotation of elected leaders. Amidst the churn, the civil service is expected to act as the repository of received wisdom about past policies, including assessments of what works and what doesn’t. The claim is that to avoid repeating the same mistakes we need to know what happened last time and what were the effects. Institutional memory is thus central to the pragmatic task of governing.

What is institutional memory? And, how is it different to policy learning?

Despite increasing recognition of the role that memory can or should play in the policy process, the concept has defied easy scholarly definition.

In the classic account, institutional memory is the sum total of files, procedures and knowledge held by an organisation. Christopher Pollitt, who has pioneered the study of institutional memory, refers to the accumulated knowledge and experience of staff; technical systems, including electronic databases and various kinds of paper records; the management system; and the norms and values of the organizational culture. In this view, which is based on the key principles of the new institutionalism, memory is essentially an archive.

The problem with this definition is that it is hard to distinguish the concept from policy learning (see also here). If policy learning is in part about increasing knowledge about policy, including correcting for past mistakes, then we could perhaps conceive of a continuum from learning to memory with an inflection point where one starts and the other stops. But, this is easier to imagine than it is to measure empirically. It also doesn’t acknowledge the forms memories take and the ways memories are contested, suppressed and actively forgotten.

In our recent contribution to this debate (see here and here) we define memories as ‘representations of the past’ that actors draw on to narrate what has been learned when developing and implementing policy. When these narratives are embedded in processes they become ‘institutionalised’. It is this emphasis on embedded narratives that distinguishes institutional memory from policy learning. Institutional memory may facilitate policy learning, but equally some memories may prohibit genuine adaptation and innovation. As a result, while there is an obvious affinity between the two concepts, it is imperative that they remain distinct avenues of inquiry. Policy learning has unequivocally positive connotations that are echoed in some conceptualisations of institutional memory (i.e. Pollitt). But, equally, memory (at least in a ‘static’ form) can be said to provide administrative agents with an advantage over political principals (think of the satirical Sir Humphrey of Yes Minister fame). The table below seeks to distinguish between these two conceptualisations of institutional memory:

Key debates: Is institutional memory declining?

The scholar who has done the most to advance our understanding of institutional memory in government is Christopher Pollitt. His main contention is that institutional memory has declined over recent decades due to:

  • the high rotation of staff in the civil service
  • changes in IT systems which prevent proper archiving
  • regular organisational restructuring
  • rewarding management skills above all others, and
  • adopting new management ‘fads’ that favour constant change as they become popular.

This combination of factors has proven to be a perfect recipe for the loss of institutional memory within organisations. The result is a contempt for the past that leads to repeated policy failure.

We came to a different view. Our argument is that one of the key reasons why institutional memory is said to have declined is that it has been conceptualised in a ‘static’ manner more in keeping with an older way of doing government. This practice has assumed that knowledge on a given topic is held centrally (by government departments) and can be made explicit for the purpose of archiving. But, if government doesn’t actually work this way (see relevant posts on networks here) then we shouldn’t expect it to remember this way either. Instead of static repositories of summative documents holding a singular ‘objective’ memory, we propose a more ‘dynamic’ people-centred conceptualisation that sees institutional memory as a composite of intersubjective memories open to change. This draws to the fore the role of actors as crucial interpreters of memory, combining the documentary record with their own perspectives to create a story about the past. In this view, institutional memory has not declined, it is simply being captured in a fundamentally different way.

[Table: static versus dynamic conceptualisations of institutional memory (Corbett et al.)]

Key debates: How can an institution improve how it remembers?

How an institution might improve its memory is intrinsically linked to how memory is defined and whether or not it is actually in decline. If we follow Pollitt’s view that memory is about the archive of accumulated knowledge that is being ignored or deliberately dismantled by managerialism then the answer involves returning to an older way of doing government that placed a higher value on experience. By putting a higher value on the past as a resource institutions would reduce staff turnover, stop regular restructures and changes in IT systems, etc. For those of us who work in an institution where restructuring and IT changes are the norm, this solution has obvious attractions. But, would it actually improve memory? Or would it simply make it easier to preserve the status quo (a process that involves actively forgetting disruptive but generative innovations)?

Our definition, relying as it does on a more dynamic conceptualisation of memory, is sceptical about the need to improve practices of remembering. But, if an institution did want to remember better, we would favour increasing the opportunity for actors within an institution to reflect on and narrate the past. One example of this might be a ‘Wikipedia’ model of memory in which the story of a policy, its successes and failures, is constructed by those involved, highlighting points of consensus and conjecture.

Additional reading:

Corbett, J., Grube, D., Lovell, H. and Scott, R. 2018. “Singular memory or institutional memories? Toward a dynamic approach”. Governance: 1–19. https://doi.org/10.1111/gove.12340

Pollitt, C. 2009. “Bureaucracies Remember, Post-Bureaucratic Organizations Forget?” Public Administration 87 (2): 198–218.

Pollitt, C. 2000. “Institutional Amnesia: A Paradox of the ‘Information Age’?” Prometheus 18 (1): 5–16.

Filed under 1000 words, public policy, Uncategorized

Three ways to encourage policy learning

[Images: Claire A. Dunlop (left) and Claudio M. Radaelli (right)]

This is a guest post by Claire A. Dunlop (left) and Claudio M. Radaelli (right), discussing how to use insights from the policy learning literature to think about how to learn effectively, or how to adapt to processes of ‘learning’ in policymaking that are more about politics than education. The full paper has been submitted to the Policy and Politics series Practical Lessons from Policy Theories.

We often hear that university researchers are ‘all brains but no common sense’. There is often some truth to this stereotype. The literature on policy learning is an archetypal example: high in IQ but low on street smarts. Researchers have generated a huge number of ‘policy learning’ taxonomies, concepts and methods without showing what learning can offer policymakers, citizens and societies.

This is odd because there is a substantive demand and need for practical insights on how to learn. Issues include economic growth, the control of corruption, and improvement in schools and health. Learning organisations range from ‘street level bureaucracies’ to international regulators like the European Union and the World Trade Organization.

To help develop a more practical agenda, we distil three major lessons from the policy learning literature.

1. Learning is often the by-product of politics, not the primary goal of policymakers

There is usually no clear incentive for political actors to learn how to improve public policy. Learning is often the by-product of bargaining, the effort to secure compliance with laws and rules, social participation, or problem-solving when there is radical uncertainty. This means that in politics we should not assume that politicians, bureaucrats, civil society organizations and experts interact to improve public policy. Consensus, participation, formal procedures, and social certification are more important.

Therefore, we have to learn how to design incentives so that the by-product of learning is actually generated. Otherwise, few actors will play the game of the policy-making process with learning as their first goal. Learning is all around us, but it appears in different forms, depending on whether the context is (a) bargaining, (b) compliance, (c) participation or (d) problem-solving under conditions of high uncertainty.

2. Each mode of learning has its triggers or hindrances

(a) Bargaining requires repeated interaction, low barriers to contract and mechanisms of preference aggregation.

(b) Compliance is stymied without trust in institutions.

(c) Participation needs its own deliberative spaces and a type of participant willing to go beyond the ‘dialogue of the deaf’. Without these two triggers, participation is chaotic, highly conflictual and inefficient.

(d) Expertise is key to problem-solving, but governments should design their advisory committees and special commissions of inquiry by recruiting a broad range of experts. The risk of excluding the next Galileo Galilei in a Ptolemaic committee is always there.

At the same time, there are specific hindrances:

(a) Bargaining stops when the winners are always the same (if you are thinking of Germany and Greece in the European Union you are spot-on).

(b) Hierarchy does not produce efficient compliance unless those at the top know exactly the solution to enforce.

(c) Incommensurable beliefs spoil participatory policy processes. If so, it’s better to switch to open democratic conflict, by counting votes in elections and referenda for example.

(d) Scientific scepticism and low policy capacity mar the work of experts in governmental bodies.

These triggers and hindrances have important lessons for design, perhaps prompting authorities (governments, regulators, public bodies) to switch from one context to another. For example, one can re-design the work of expert committees by including producer and consumer organizations, or by allowing bargaining on the implementation of budgetary rules.

3. Beware the limitations of learning

We may get this precious by-product and avoid hindrances and traps, but still… learn the wrong lessons.

Latin America and Africa offer too many examples of diligent pupils who did exactly what they were supposed to do, but in the end implemented the wrong policies. Perfect compliance does not provide breathing space for a policy and impairs the quality of innovation. We have to balance lay and professional knowledge. Bargaining does not allow us to learn about radical innovations; in some cases only a new participant can really change the nature of the game being played by the usual suspects.

So, whether the problem is learning how to fight organized crime and corruption, or to re-launch growth in Europe and development in Africa, the design of the policy process is crucial. For social actors, our analysis shows when and how they should try to change the nature of the game, or lobby for a re-design of the process. This lesson is often forgotten because social actors fight for a given policy objective, not for the parameters that define who does what and how in the policy process.


Filed under Evidence Based Policymaking (EBPM), public policy

There is no blueprint for evidence-based policy, so what do you do?

In my speech to COPOLAD I began by stating that, although we talk about our hopes for evidence-based policy and policymaking (EBP and EBPM), we don’t really know what it is.

I also argued that EBPM is not like our image of evidence-based medicine (EBM), in which there is a clear idea of (a) which methods and evidence count, and (b) the main aim: to replace bad interventions with good ones.

In other words, in EBPM there is no blueprint for action, either in the abstract or in specific cases of learning from good practice.

To me, this point is underappreciated in the study of EBPM: we identify the politics of EBPM to highlight the pathologies, or 'irrational' side, of policymaking, but we do not appreciate the more humdrum limits to EBPM that persist even when the political process is healthy and policymakers are fully committed to something more 'rational'.

Examples from best practice

The examples from our next panel session* demonstrated these limitations to EBPM very well.

The panel contained four examples of impressive policy developments with the potential to outline good practice on the application of public health and harm reduction approaches to drugs policy (including the much-praised Portuguese model).

However, it quickly became apparent that no country-level experience translated into a blueprint for action, for some of the following reasons:

  • It is not always clear what problems policymakers have been trying to solve.
  • It is not always clear how their solutions, in this case, interact with all other relevant policy solutions in related fields.
  • It is difficult to demonstrate clear evidence of success, either before or after the introduction of policies. Instead, most policies are built on initial deductions from relevant evidence, followed by trial-and-error and some evaluations.

In other words, we note routinely the high-level political obstacles to policy emulation, but these examples demonstrate the problems that would still exist even if those initial obstacles were overcome.

A key solution is easier said than done: if providing lessons to others, describe them systematically, in a form that sets out the steps needed to turn the model into action (and that allows comparison with other experiences). To that end, providers of lessons might note:

  • The problem they were trying to solve (and how they framed it to generate attention, support, and action, within their political systems)
  • The detailed nature of the solution they selected (and the conditions under which it became possible to select that intervention)
  • The evidence they used to guide their initial policies (and how they gathered it)
  • The evidence they collected to monitor the delivery of the intervention, evaluate its impact (was it successful?), and identify cause and effect (why was it successful?)

Realistically, this is where the process least resembles (the ideal of) EBM, because few evaluations of success will be based on a randomised control trial or some equivalent (and other policymakers may not draw primarily on RCT evidence even when it exists).

Instead, as with much harm reduction and prevention policy, a lot of the justification for success will be based on a counterfactual (what would have happened if we did not intervene?), which is itself based on:

(a) the belief that our object of policy is a complex environment containing many ‘wicked problems’, in which the effects of one intervention cannot be separated easily from that of another (which makes it difficult, and perhaps even inappropriate, to rely on RCTs)

(b) an assessment of the unintended consequences of previous (generally more punitive) policies.

So, the first step to ‘evidence-based policymaking’ is to make a commitment to it. The second is to work out what it is. The third is to do it in a systematic way that allows others to learn from your experience.

The latter may be more political than it looks: few countries (or, at least, the people seeking re-election within them) will want to tell the rest of the world: we innovated and we don’t think it worked.

*I also discuss this problem of evidence-based best practice within single countries
1 Comment

Filed under Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, tobacco policy