
Policy Analysis in 750 Words: Two approaches to policy learning and transfer

This post forms one part of the Policy Analysis in 750 words series. It draws on work for an in-progress book on learning to reduce inequalities. Some of the text will seem familiar if you have read other posts. Think of it as an adventure game in which the beginning is the same but you don’t know the end.

Policy learning is the use of new information to update policy-relevant knowledge. Policy transfer involves the use of knowledge about policy and policymaking in one government to inform policy and policymaking in another.

These processes may seem to relate primarily to research and expertise, but they require many kinds of political choices (explored in this series). They take place in complex policymaking systems over which no single government has full knowledge or control.

Therefore, while the agency of policy analysts and policymakers still matters, they engage with a policymaking context that constrains or facilitates their action.

Two approaches to policy learning: agency and context-driven stories

Policy analysis textbooks focus on learning and transfer as an agent-driven process with well-established guidance (often with five main steps). They form part of a functionalist analysis where analysts identify the steps required to turn comparative analysis into policy solutions, or part of a toolkit to manage stages of the policy process.

Agency is less central to policy process research, which describes learning and transfer as contingent on context. Key factors include:

Analysts compete to define problems and determine the manner and sources of learning, in a multi-centric environment where different contexts will constrain and facilitate action in different ways. For example, varying structural factors – such as socioeconomic conditions – influence the feasibility of proposed policy change, and each centre’s institutions provide different rules for gathering, interpreting, and using evidence.

The result is a mixture of processes in which:

  1.  Learning from experts is one of many possibilities. For example, Dunlop and Radaelli also describe ‘reflexive learning’, ‘learning through bargaining’, and ‘learning in the shadow of hierarchy’.
  2.  Transfer takes many forms.

How should analysts respond?

Think of two different ways to respond to this description of the policy process with this lovely blue summary of concepts. One is your agency-centred strategic response. The other is me telling you why it won’t be straightforward.

[Image: the policy process (see 5 images)]

There are many policy makers and influencers spread across many policymaking ‘centres’

  1. Find out where the action is and tailor your analysis to different audiences.
  2. There is no straightforward way to influence policymaking if multiple venues contribute to policy change and you don’t know who does what.

Each centre has its own ‘institutions’

  1. Learn the rules of evidence gathering in each centre: who takes the lead, how do they understand the problem, and how do they use evidence?
  2. There is no straightforward way to foster policy learning between political systems if each is unaware of the others’ unwritten rules. Researchers could try to learn those rules to facilitate mutual learning, but with no guarantee of success.

Each centre has its own networks

  1. Form alliances with policymakers and influencers in each relevant venue.
  2. The pervasiveness of policy communities complicates policy learning because the boundary between formal power and informal influence is not clear.

Well-established ‘ideas’ tend to dominate discussion

  1. Learn which ideas are in good currency. Tailor your advice to your audience’s beliefs.
  2. The dominance of different ideas precludes many forms of policy learning or transfer. A popular solution in one context may be unthinkable in another.

Many policy conditions (historic-geographic, technological, social and economic factors) command the attention of policymakers and are out of their control. Routine events and non-routine crises prompt policymaker attention to lurch unpredictably.

  1. Learn from studies of leadership in complex systems or the policy entrepreneurs who find the right time to exploit events and windows of opportunity to propose solutions.
  2. The policy conditions may be so different in each system that policy learning is limited and transfer would be inappropriate. Events can prompt policymakers to pay disproportionately low or high attention to lessons from elsewhere, and this attention relates weakly to evidence from analysts.

Feel free to choose one or both forms of advice. One is useful for people who see analysts and researchers as essential to major policy change. The other is useful if it serves as a source of cautionary tales rather than fatalistic responses.

See also:

Policy Concepts in 1000 Words: Policy Transfer and Learning

Teaching evidence based policy to fly: how to deal with the politics of policy learning and transfer

Policy Concepts in 1000 Words: the intersection between evidence and policy transfer

Policy learning to reduce inequalities: a practical framework

Three ways to encourage policy learning

Epistemic versus bargaining-driven policy learning

The ‘evidence-based policymaking’ page explores these issues in more depth


Policy Analysis in 750 Words: How to deal with ambiguity

This post forms one part of the Policy Analysis in 750 words series. It draws on this 500 Words post, then my interpretation of co-authored work with Drs Emily St Denny and John Boswell (which I would be delighted to share if it gets published). It trails off at the end.

In policy studies, ambiguity describes the ability to entertain more than one interpretation of a policy problem. There are many ways to frame issues as problems. However, only some frames receive high policymaker attention, and policy change relates strongly to that attention. Resolving ambiguity in your favour is the prize.

Policy studies focus on different aspects of this dynamic, including:

  1. The exercise of power, such as the narrator’s power to tell stories and the audience’s power to engage with or ignore them.
  2. Policy learning, in which people collaborate (and compete) to assign concrete meaning to abstract aims.
  3. A complex process in which many policymakers and influencers are cooperating/ competing to define problems in many policymaking centres.

They suggest that resolving ambiguity affects policy in different ways.

The latter descriptions, reflecting multi-centric policymaking, seem particularly relevant to major contemporary policy problems – such as global public health and climate crises – in which cooperation across (and outside of) many levels and types of government is essential.

Resolving ambiguity in policy analysis texts

This context helps us to interpret common (Step 1) advice in policy analysis textbooks: define a policy problem for your client, using your skills of research and persuasion but tailoring your advice to your client’s interests and beliefs. Yet, gone are the mythical days of elite analysts communicating to a single core executive in charge of formulating and implementing all policy instruments. Many analysts engage with many centres producing (or co-producing) many instruments. Resolving ambiguity in one centre does not guarantee the delivery of your aims across many.

Two ways to resolve ambiguity in policy analysis

Classic debates would highlight two different responses:

  • ‘Top down’ accounts see this issue through the lens of a single central government, examining how to reassert central control by minimising implementation gaps. Policy analysis may focus on (a) defining the policy problem, and (b) ensuring the implementation of its solution.

  • ‘Bottom up’ accounts identify the inevitability (and legitimacy) of policy influence in multiple centres. Policy analysis may focus on how to define the problem in cooperation with other centres, or to set a strategic direction and encourage other centres to make sense of it in their context.

This terminology went out of fashion, but note the existence of each tendency in two ideal-type approaches to contemporary policy problems:

1. Centralised and formalised approaches.

Seek clarity and order to address urgent policy problems. Define the policy problem clearly, translate that definition into strategies for each centre, and develop a common set of effective ‘tools’ to ensure cooperation and delivery.

Policy analysis may focus on technical aspects, such as how to create a fine-detail blueprint for action, backed by performance management and accountability measures that tie actors to specific commitments.

The tagline may be: ambiguity is a problem to be solved, to direct policy actors towards a common goal.

2. Decentralised, informal, collaborative approaches.

Seek collaboration to make sense of, and address, problems. Reject a single definition of the problem, encourage actors in each centre (or in concert) to deliberate to make sense of problems together, and co-create the rules to guide a continuous process of collective behaviour.

Policy analysis may focus on how to contribute to a collaborative process of sense-making and rule-making.

The tagline may be: ambiguity presents an opportunity to energise policy actors, to harness the potential for innovation arising from deliberation.

Pick one approach and stick with it?

Describing these approaches in such binary terms makes the situation – and choice between approaches – look relatively straightforward. However, note the following issues:

  • Many policy sectors (and intersectoral agendas) are characterised by intense disagreement on which choice to make. These disagreements intersect with others (such as when people seek not only transformative policy change to solve global problems, but also equitable processes and outcomes).
  • Some sectors seem to involve actors seeking the best of both worlds (centralise and localise, formalise and deliberate) without recognising the trade-offs and dilemmas that arise.
  • I have described these options as choices, but did not establish whether anyone is in a position to make or contribute to that choice.

In that context, resolving ambiguity in your favour may still be the prize, but where would you even begin?

Further reading

Well, that was an unsatisfying end to the post, eh? Maybe I’ll write a better one when some things are published. In the meantime, some of these papers and posts explore some of these issues.


The COVID-19 exams fiasco across the UK: why did policymaking go so wrong?

This post first appeared on the LSE British Politics and Policy blog, and it summarises our new article: Sean Kippin and Paul Cairney (2021) ‘The COVID-19 exams fiasco across the UK: four nations and two windows of opportunity’, British Politics, PDF Annex. The focus on inequalities of attainment is part of the IMAJINE project on spatial justice and territorial inequalities.

In the summer of 2020, after cancelling exams, the UK and devolved governments sought teacher estimates on students’ grades, but supported an algorithm to standardise the results. When the results produced a public outcry over unfair consequences, they initially defended their decision but reverted quickly to teacher assessment. These experiences, argue Sean Kippin and Paul Cairney, highlight the confluence of events and choices in which an imperfect and rejected policy solution became a ‘lifeline’ for four beleaguered governments. 

In 2020, the UK and devolved governments performed a ‘U-turn’ on their COVID-19 school exams replacement policies. The experience was embarrassing for education ministers and damaging to students. There are significant differences between (and often within) the four nations in the structure, timing, and weight of the different examinations, and the relationships between them. However, in general, the A-level (England, Northern Ireland, Wales) and Higher/ Advanced Higher (Scotland) examinations have similar policy implications, dictating entry to further and higher education and influencing employment opportunities. The Priestley review, commissioned by the Scottish Government after its U-turn, described replacing these exams as an ‘impossible task’.

Initially, each government defined the new policy problem in relation to the need to ‘credibly’ replicate the purpose of exams, to allow students to progress to tertiary education or employment. All four quickly announced their intentions to allocate grades to students in some form, rather than replace the assessments with, for example, remote examinations. However, mindful of the long-term credibility of the examinations system and of ensuring fairness, each government opted to maintain the qualifications and seek a similar distribution of grades to previous years. A key consideration was that UK universities accept large numbers of students from across the UK.

One potential solution open to policymakers was to rely solely on teacher grading, in the form of centre assessed grades (CAGs). CAGs are ‘based on a range of evidence including mock exams, non-exam assessment, homework assignments and any other record of student performance over the course of study’. Potential problems included the risk of high variation and discrepancies between different centres, the potential overload of the higher education system, and the tendency for teacher predicted grades to reward already privileged students and punish disabled, non-white, and economically deprived children.

A second option was to take CAGs as a starting point, then use an algorithm to produce ‘standardisation’. This was potentially attractive to each government because it allowed students to complete secondary education and progress to the next level in similar ways to previous (and future) cohorts. Further, an emphasis on the technical nature of this standardisation – with qualifications agencies taking the lead in designing the process by which grades would be allocated, and opting not to share the details of the algorithm – was a key part of its (temporary) viability. Each government then made similar claims when defining the problem and selecting the solution. Yet this approach reduced both the debate on the unequal impact of this process on students, and the chance for other experts to examine if the algorithm would produce the desired effect. Policymakers in all four governments assured students that the grading would be accurate and fair, with teacher discretion playing a large role in the calculation of grades.

To these governments, it appeared at first that they had found a fair and efficient (or at least defensible) way to allocate grades, and public opinion did not respond negatively to its announcement. However, these appearances proved to be profoundly deceptive and vanished on the day each set of exam results was published. The Scottish national mood shifted so intensely that, after a few days, pursuing standardisation no longer seemed politically feasible. The intense criticism centred on the unequal level of reductions of grades after standardisation, rather than the unequal overall rise in grade performance after teacher assessment and standardisation (which advantaged poorer students).

Despite some recognition that similar problems were afoot elsewhere, this shift of problem definition did not happen in the rest of the UK until (a) their published exam results highlighted similar problems regarding the effect of previous school performance on standardised results, and (b) the Scottish Government had already changed course. Upon the release of grades outside Scotland, it became clear that downgrades were also concentrated in more deprived areas. For instance, in Wales, 42% of students saw their A-Level results lowered from their Centre Assessed Grades, with the figure close to a third for Northern Ireland.

Each government thus faced similar choices: defend the original system by challenging the emerging consensus around its apparent unfairness; modify the system by changing the appeals process; or abandon it altogether and revert to solely teacher assessed grades. Ultimately, the three remaining governments followed the same path. Initially, they opted to defend their original policy choice. However, by 17 August, the UK, Welsh, and Northern Irish education secretaries had announced (separately) that examination grades would be based solely on CAGs – unless the standardisation process had generated a higher grade (students would receive whichever was highest).

Scotland’s initial experience was instructive to the rest of the UK, and its example provided the UK government with a blueprint to follow (eventually). It began with a new policy choice – reverting to teacher assessed grades – sold as fairer to victims of the standardisation process. Once this precedent had been set, following Scotland’s course became difficult for policymakers at the UK level to resist, particularly when faced with a similar backlash. The UK government’s decision in turn influenced the Welsh and Northern Irish governments.

In short, we can see that the particular ordering of choices created a cascading effect across the four governments, which initially produced one policy solution before triggering a U-turn. This focus on order and timing should not be lost during the inevitable inquiries and reports on the examinations systems. The take-home message is not to ignore the policy process when evaluating the long-term effect of these policies. A focus on why the standardisation processes went wrong is welcome, but we should also focus on why the policymaking process malfunctioned, to produce a wildly inconsistent approach to the same policy choice in such a short space of time. Examining both aspects of this fiasco will be crucial to the grading process in 2021, given that governments will be seeking an alternative to exams for a second year.

__________________________

Note: the above draws on the authors’ published work in British Politics.


What have we learned so far from the UK government’s COVID-19 policy?

This post first appeared on LSE British Politics and Policy (27.11.20) and is based on this article in British Politics.

Paul Cairney assesses government policy in the first half of 2020. He identifies the intense criticism of its response so far, and encourages more systematic assessments grounded in policy research.

In March 2020, COVID-19 prompted policy change in the UK at a speed and scale only seen during wartime. According to the UK government, policy was informed heavily by science advice. Prime Minister Boris Johnson argued that, ‘At all stages, we have been guided by the science, and we will do the right thing at the right time’. Further, key scientific advisers such as Sir Patrick Vallance emphasised the need to gather evidence continuously to model the epidemic and identify key points at which to intervene, to reduce the size of the peak of population illness initially, then manage the spread of the virus over the longer term.

Both ministers and advisors emphasised the need for individual behavioural change, supplemented by government action, in a liberal democracy in which direct imposition is unusual and unsustainable. However, for its critics, the government experience has quickly become an exemplar of policy failure.

Initial criticisms include that ministers did not take COVID-19 seriously enough in relation to existing evidence, when its devastating effect was apparent in China in January and Italy from February; act as quickly as other countries to test for infection to limit its spread; or introduce swift-enough measures to close schools, businesses, and major social events. Subsequent criticisms highlight problems in securing personal protective equipment (PPE), testing capacity, and an effective test-trace-and-isolate system. Some suggest that the UK government was responding to the ‘wrong pandemic’, assuming that COVID-19 could be treated like influenza. Others blame ministers for not pursuing an elimination strategy to minimise its spread until a vaccine could be developed. Some criticise their over-reliance on models which underestimated the R (rate of transmission) and ‘doubling time’ of cases and contributed to a 2-week delay of lockdown. Many describe these problems and delays as the contributors to the UK’s internationally high number of excess deaths.

How can we hold ministers to account in a meaningful way?

I argue that these debates are often fruitless and too narrow because they do not involve systematic policy analysis, take into account what policymakers can actually do, or widen debate to consider whose lives matter to policymakers. Drawing on three policy analysis perspectives, I explore the questions that we should ask to hold ministers to account in a way that encourages meaningful learning from early experience.

These questions include:

Was the government’s definition of the problem appropriate?
Much analysis of UK government competence relates to specific deficiencies in preparation (such as shortages of PPE), immediate action (such as discharging people from hospitals to care homes without testing them for COVID-19), and implementation (such as an imperfect test-trace-and-isolate system). The broader issue relates to its focus on intervening in late March to protect healthcare capacity during a peak of infection, rather than taking a quicker and more precautionary approach. This judgment relates largely to its definition of the policy problem, which underpins every subsequent policy intervention.

Did the government select the right policy mix at the right time? Who benefits most from its choices?

Most debates focus on the ‘lock down or not?’ question without exploring fully the unequal impact of any action. The government initially relied on exhortation, based on voluntarism and an appeal to social responsibility. Initial policy inaction had unequal consequences for social groups, including people with underlying health conditions, black and ethnic minority populations more susceptible to mortality at work or discrimination by public services, care home residents, disabled people unable to receive services, non-UK citizens obliged to pay more to live and work while less able to access public funds, and populations (such as prisoners and drug users) that receive minimal public sympathy. Then, in March, its ‘stay at home’ requirement initiated a major new policy with different unequal impacts in relation to the income, employment, and wellbeing of different groups. These inequalities are lost in more general discussions of impacts on the whole population.

Did the UK government make the right choices on the trade-offs between values, and what impacts could the government have reasonably predicted?

Initially, the most high-profile value judgment related to freedom from state coercion to reduce infection versus freedom from the harm of infection caused by others. Then, values underpinned choices on the equitable distribution of measures to mitigate the economic and wellbeing consequences of lockdown. A tendency for the UK government to project centralised and ‘guided by the science’ policymaking has undermined public deliberation on these trade-offs between policies. Such deliberation will be crucial to ongoing debates on the trade-offs associated with national and regional lockdowns.

Did the UK government combine good policy with good policymaking?

A problem like COVID-19 requires trial-and-error policymaking on a scale that seems incomparable to previous experiences. It requires further reflection on how to foster transparent and adaptive policymaking and widespread public ownership of unprecedented policy measures, in a political system characterised by (a) accountability focused incorrectly on strong central government control and (b) adversarial politics that is not conducive to consensus seeking and cooperation.

These additional perspectives and questions show that too-narrow questions – such as whether the UK government was ‘following the science’ – do not help us understand the longer-term development and wider consequences of UK COVID-19 policy. Indeed, such a narrow focus on science marginalises wider discussions of values and the populations that are most disadvantaged by government policy.


Policy learning to reduce inequalities: a practical framework

This post first appeared on LSE BPP on 16.11.2020 and it describes the authors’ published work in Territory, Politics, Governance (for IMAJINE)

While policymakers often want to learn how other governments have responded to certain policy problems, policy learning is characterized by contestation. Policymakers compete to define the problem, set the parameters for learning, and determine which governments should take the lead. Emily St Denny, Paul Cairney, and Sean Kippin discuss a framework that would encourage policy learning in multilevel systems.

Governments face similar policy problems and there is great potential for mutual learning and policy transfer. Yet, most policy research highlights the political obstacles to learning and the weak link between research and transfer. One solution may be to combine academic insights from policy research with practical insights from people with experience of learning in political environments. In that context, our role is to work with policy actors to produce pragmatic strategies to encourage realistic research-informed learning.

Pragmatic policy learning

Producing concepts, research questions, and methods that are interesting to both academics and practitioners is challenging. It requires balancing different approaches to gathering and considering ‘evidence’ when seeking to solve a policy problem. Practitioners need to gather evidence quickly, focusing on ‘what works’ or positive experiences from a small number of relevant countries. Policy scholars may seek more comprehensive research and warn against simple solutions. Further, they may do so without offering a feasible alternative to their audience.

To bridge these differences and facilitate policy learning, we encourage a pragmatic approach to policy learning that requires:

  • Seeing policy learning through the eyes of participants, to understand how they define and seek to solve this problem;
  • Incorporating insights from policy research to construct a feasible approach;
  • Reflecting on this experience to inform research.

Our aim is not ‘evidence-based policymaking’. Rather, it is to incorporate the fact that researchers and evidence form only one small component of a policymaking system characterized by complexity. Additionally, policy actors enjoy less control over these systems than we might like to admit. Learning is therefore best understood as a contested process in which actors combine evidence and beliefs to define policy problems, identify technically and politically feasible solutions, and negotiate who should be responsible for their adoption and delivery in multilevel policymaking systems. Taking seriously the contested, context-specific, and political nature of policymaking is crucial for producing effective advice from which to learn.

Policy learning to reduce inequalities

We apply these insights as part of the EU Horizon 2020 project Integrative Mechanisms for Addressing Spatial Justice and Territorial Inequalities in Europe (IMAJINE). Its overall aim is to research how national and territorial governments across the European Union pursue ‘spatial justice’ and try to reduce inequalities.

Our role is to facilitate policy learning and consider the transfer of policy solutions from successful experiences. Yet, we are confronted by the usual challenges. They include the need to: identify appropriate exemplars from which to draw lessons; help policy practitioners control for differences in context; and translate between academic and practitioner communities.

Additionally, we work on an issue – inequality – which is notoriously ambiguous and contested. It involves not only scientific information about the lives and experiences of people, but also political disagreement about the legitimate role of the state in intervening in people’s lives or redistributing resources. Developing a policy learning framework that can generate practically useful insights for policy actors is difficult, but key to ensuring policy effectiveness and coherence.

Drawing on work we carried out for the Scottish Government’s National Advisory Council on Women and Girls, on gender mainstreaming as an approach to reducing inequalities, we apply the IMAJINE framework to support policy learning. This framework guides academic–practitioner analysis in four steps:

Step 1: Define the nature of policy learning in political systems.

Preparing for learning requires taking into account the interaction between:

  • Politics, in which actors contest the nature of problems and the feasibility of solutions;
  • Bounded rationality, which requires actors to use organizational and cognitive shortcuts to gather and use evidence;
  • ‘Multi-centric’ policymaking systems, which limit a single central government’s control over choices and outcomes.

These dynamics play out in different ways in each territory, which means that the importers and exporters of lessons operate in different contexts and address inequalities in different ways. Therefore, we must ask how the importers and exporters of lessons define the problem, decide which policies are feasible, establish which level of government should be responsible for policy, and identify criteria to evaluate policy success.

Step 2: Map policymaking responsibilities for the selection of policy instruments.

The Council of Europe defines gender mainstreaming as ‘the (re)organisation, improvement, development and evaluation of policy processes, so that a gender equality perspective is incorporated in all policies at all levels and at all stages’.

Such definitions help explain why mainstreaming approaches often appear to be incoherent. To map the sheer weight of possible measures, and the spread of responsibility across many levels of government (such as local, Scottish, UK and EU), is to identify a potentially overwhelming scale of policymaking ambition. Further, governments tend to address this potential by breaking policymaking into manageable sectors. Each sector has its own rules and logics, producing coherent policymaking in each ‘silo’ but a sense of incoherence overall, particularly if the overarching aim is a low priority in government. Mapping these dynamics and responsibilities is necessary to ensure lessons learned can be effectively applied in similarly complex domestic systems.

Step 3: Learn from experience.

Policy actors want to draw lessons from the most relevant exemplars. Often, they will have implicit or explicit ideas about which countries they would like to learn more from. Negotiating which cases to explore – so that the selection reflects both policy actors’ interests and the need to generate appropriate and useful lessons – is vital.

In the case of mainstreaming, we focused on three exemplar approaches, selected by members of our audience according to perceived levels of ambition: maximal (Sweden), medial (Canada) and minimal (the UK, which controls aspects of Scottish policy). These cases were also justified with reference to the academic literature which often uses these countries as exemplars of different approaches to policy design and implementation.

Step 4: Deliberate and reflect.

Work directly with policy participants to reflect on the implications for policy in their context. Research has many important insights on the challenges to and limitations of policy learning in complex systems. In particular, it suggests that learning cannot be comprehensive and does not lead to the importation of a well-defined package of measures. Bringing these sorts of insights to bear on policy actors’ practical discussions of how lessons can be drawn and applied from elsewhere is necessary, though ultimately insufficient. In our experience so far, step 4 is the biggest obstacle to our impact.


Understanding Public Policy 2nd edition

All going well, it will be out in November 2019. We are now at the proofing stage.

I have included below the summaries of the chapters (and each chapter should also have its own entry (or multiple entries) in the 1000 Words and 500 Words series).

[Images: 2nd edition cover and summaries of chapters 1–13]

Policy in 500 Words: The advocacy coalition framework

Here is the ACF story.

People engage in politics to turn their beliefs into policy. They form advocacy coalitions with people who share their beliefs, and compete with other coalitions. The action takes place within a subsystem devoted to a policy issue, and a wider policymaking process that provides constraints and opportunities to coalitions.

The policy process contains multiple actors and levels of government. It displays a mixture of intensely politicized disputes and routine activity. There is much uncertainty about the nature and severity of policy problems. The full effects of policy may be unclear for over a decade. The ACF sums it up in the following diagram:

[Image: the ACF flow diagram]

Policy actors use their beliefs to understand, and seek influence in, this world. Beliefs about how to interpret the cause of and solution to policy problems, and the role of government in solving them, act as a glue to bind actors together within coalitions.

If the policy issue is technical and humdrum, there may be room for routine cooperation. If the issue is highly charged, then people romanticise their own cause and demonise their opponents.

The outcome is often long-term policymaking stability and policy continuity because the ‘core’ beliefs of coalitions are unlikely to shift and one coalition may dominate the subsystem for long periods.

There are two main sources of change.

  1. Coalitions engage in policy learning to remain competitive and adapt to new information about policy. This process often produces minor change because coalitions learn on their own terms. They learn how to retain their coalition’s strategic advantage and use the information they deem most relevant.
  2. ‘Shocks’ affect the positions of coalitions within subsystems. Shocks are the combination of events and coalition responses. External shocks are prompted by events including the election of a new government with different ideas, or the effect of socioeconomic change. Internal shocks are prompted by policy failure. Both may prompt major change as members of one coalition question their beliefs in the light of new evidence. Or, another coalition may adapt more readily to its new policy environment and exploit events to gain competitive advantage.

The ACF began as the study of US policymaking, focusing largely on environmental issues. It has changed markedly to reflect the widening of ACF scholarship to new policy areas, political systems, and methods.

For example, the flow diagram’s reference to the political system’s long-term coalition opportunity structures is largely a response to insights from comparative international studies:

  • A focus on the ‘degree of consensus needed for major policy change’ reflects applications in Europe that highlighted the importance of proportional electoral systems.
  • A focus on the ‘openness of the political system’ partly reflects applications to countries without free and fair elections, and/ or systems that do not allow people to come together easily as coalitions to promote policy change.

As such, like all theories in this series, the ACF discusses elements that it would treat as (a) universally applicable, such as the use of beliefs to address bounded rationality, and (b) context-specific, such as the motive and opportunity of specific people to organize collectively to translate their beliefs into policy.

See also:

The 500 and 1000 Words series

Why Advocacy Coalitions Matter and How to Think about Them

Three lessons from a comparison of fracking policy in the UK and Switzerland

Bonus material

Scottish Independence and the Devil Shift

Image source: Weible, Heikkila, Ingold, and Fischer (2016: 6)


Teaching evidence based policy to fly: how to deal with the politics of policy learning and transfer

This post provides (a generous amount of) background for my ANZSOG talk Teaching evidence based policy to fly: transferring sound policies across the world.

The event’s description sums up key conclusions in the literature on policy learning and policy transfer:

  1. technology and ‘entrepreneurs’ help ideas spread internationally, and domestic policymakers can use them to be more informed about global policy innovation, but
  2. there can be major unintended consequences to importing ideas, such as the adoption of policy solutions with poorly-evidenced success, or a broader sense of failed transportation caused by factors such as a poor fit between the aims of the exporter and the importer.

In this post, I connect these conclusions to broader themes in policy studies, which suggest that:

  1. policy learning and policy transfer are political processes, not ‘rational’ or technical searches for information;
  2. the use of evidence to spread policy innovation requires two interconnected choices: what counts as good evidence, and what role central governments should play; and
  3. the following ’11 question guide’ to evidence based policy transfer serves more as a way to encourage reflection than as a blueprint for action.

As usual, I suggest that we focus less on how we think we’d like to do it, and more on how people actually do it.

[Image: advert for the ANZSOG Auckland policy transfer lecture]

Policy transfer describes the use of evidence about policy in one political system to help develop policy in another. Taken at face value, it sounds like a great idea: why would a government try to reinvent the wheel when another government has shown how to do it?

Therefore, wouldn’t it be nice if I turned up to the lecture, equipped with a ‘blueprint’ for ‘evidence based’ policy transfer, and declared how to do it in a series of realistic and straightforward steps? Unfortunately, there are three main obstacles:

  1. ‘Evidence based’ is a highly misleading description of the use of information in policy.
  2. To transfer a policy blueprint completely, in this manner, would require all places and contexts to be the same, and for the policy process to be technocratic and apolitical.
  3. There are general academic guides on how to learn lessons from others systematically – such as Richard Rose’s ‘practical guide’ – but most academic work on learning and transfer does not suggest that policymakers follow this kind of advice.

[Image: Richard Rose’s 10 lessons]

Instead, policy learning is a political process – involving the exercise of power to determine what and how to learn – and it is difficult to separate policy transfer from the wider use of evidence and ideas in policy processes.

Let’s take each of these points in turn, before reflecting on their implications for any X-step guide:

3 reasons why ‘evidence based’ does not describe policymaking

In a series of ANZSOG talks on ‘evidence based policymaking’ (EBPM), I describe three main factors, all of which are broadly relevant to transfer:

  1. There are many forms of policy-relevant evidence and few policymakers adhere to a strict ‘hierarchy’ of knowledge.

Therefore, it is unclear how one government can, or should, generate evidence of another government’s policy success.

  2. Policymakers must find ways to ignore most evidence – such as by combining ‘rational’ and ‘irrational’ cognitive shortcuts – to be able to act quickly.

The generation of policy transfer lessons is a highly political process in which actors adapt to this need to prioritise information while competing with each other. They exercise power to: prioritise some information and downplay the rest, define the nature of the policy problem, and evaluate the success of another government’s solutions. There is a strong possibility that policymakers will import policy solutions without knowing if, and why, they were successful.

  3. They do not control the policy process in which they engage.

We should not treat ‘policy transfer’ as separate from the policy process in which policymakers and influencers engage. Rather, the evidence of international experience competes with many other sources of ideas and evidence within a complex policymaking system.

The literature on ‘policy learning’ tells a similar story

Studies of the use of evaluation evidence (perhaps to answer the question: was this policy successful?) have long described policymakers using the research process for many different purposes, from short term problem-solving and long-term enlightenment, to putting off decisions or using evidence cynically to support an existing policy.

We should therefore reject the temptation to (a) equate ‘policy learning’ with a simplistic process that we might associate with teachers transmitting facts to children, or (b) assume that adults simply change their beliefs when faced with new evidence. Rather, Dunlop and Radaelli describe policy learning as a political process in the following ways:

1. It is collective and rule-bound

Individuals combine cognition and emotion to process information, in organisations with rules that influence their motive and ability to learn, and in wider systems, in which many actors cooperate and compete to establish the rules of evidence gathering and analysis, and in which policymaking environments constrain or facilitate their action.

2. ‘Evidence based’ is one of several types of policy learning

  • Epistemic. Primarily by scientific experts transmitting knowledge to policymakers.
  • Reflection. Open dialogue to incorporate diverse forms of knowledge and encourage cooperation.
  • Bargaining. Actors learn how to cooperate and compete effectively.
  • Hierarchy. Actors with authority learn how to impose their aims; others learn the limits to their discretion.

3. The process can be ‘dysfunctional’: driven by groupthink, limited analysis, and learning how to dominate policymaking, not improve policy.

Their analysis can produce relevant take-home points such as:

  • Experts will be ineffective if they assume that policy learning is epistemic. The assumption will leave them ill-prepared to deal with bargaining.
  • There is more than one legitimate way to learn, such as via deliberative processes that incorporate more perspectives and forms of knowledge.

What does the literature on transfer tell us?

‘Policy transfer’ can describe a spectrum of activity:

  • driven voluntarily, by a desire to learn from the story of another government’s policy success. In such cases, importers use shortcuts to learning, such as by restricting their search to systems with which they have something in common (such as geography or ideology), learning via intermediaries such as ‘entrepreneurs’, or limiting their searches for evidence of success.
  • driven by various forms of pressure, including encouragement by central (or supranational) governments, international norms or agreements, ‘spillover’ effects causing one system to respond to innovation by another, or demands by businesses to minimise the cost of doing business.

In that context, some of the literature focuses on warning against unsuccessful policy transfer caused by factors such as:

  • Failing to generate or use enough evidence on what made the initial policy successful
  • Failing to adapt that policy to local circumstances
  • Failing to back policy change with sufficient resources

However, other studies highlight some major qualifications:

  • If the process is about using ideas about one system to inform another, our attention may shift from ‘transfer’ to ‘translation’ or ‘transformation’, and the idea of ‘successful transfer’ makes less sense
  • Transfer success is not the same as implementation success, which depends on a wider range of factors
  • Nor is it the same as ‘policy success’, which can be assessed by a mix of questions to reflect political reality: did it make the government more re-electable, was the process of change relatively manageable, and did it produce intended outcomes?

The use of evidence to spread policy innovation requires a combination of profound political and governance choices

When encouraging policy diffusion within a political system, choices about (a) what counts as ‘good’ evidence of policy success have a major connection to (b) what counts as good governance.

For example, consider these ideal-types or models in table 1:

[Table 1: three ideal types of EBBP]

In one scenario, we begin by relying primarily on randomised controlled trial (RCT) evidence (multiple international trials) and import a relatively fixed model, to ensure ‘fidelity’ to a proven intervention and allow us to measure its effect in a new context. This choice of good evidence limits the ability of subnational policymakers to adapt policy to local contexts.

In another scenario, we begin by relying primarily on governance principles, such as to respect local discretion as well as to incorporate practitioner and user experience as important knowledge claims. This choice of governance model relates closely to a broader sense of what counts as good evidence, but also a more limited ability to evaluate policy success scientifically.

In other words, the political choice to privilege some forms of evidence is difficult to separate from another political choice to privilege the role of one form of government.

Telling a policy transfer story: 11 questions to encourage successful evidence based policy transfer  

In that context, these steps to evidence-informed policy transfer serve more to encourage reflection than provide a blueprint for action. I accept that 11 is less catchy than 10.

  1. What problem did policymakers say they were trying to solve, and why?
  2. What solution(s) did they produce?
  3. Why?

Points 1-3 represent the classic and necessary questions from policy studies: (1) ‘what is policy?’ (2) ‘how much did policy change?’ and (3) why? Until we have a good answer, we do not know how to draw comparable lessons. Learning from another government’s policy choices is no substitute for learning from more meaningful policy change.

4. Was the project introduced in a country or region which is sufficiently comparable? Comparability can relate to the size and type of country, the nature of the problem, the aims of the borrowing/ lending government and their measures of success.

5. Was it introduced nationwide, or in a region which is sufficiently representative of the national experience (it is not an outlier)?

6. How do we account for the role of scale, and the different cultures and expectations in each policy field?

Points 4-6 inform initial background discussions of case study reports. We need to focus on comparability when describing the context in which the original policy developed. It is not enough to state that two political systems are different. We need to identify the relevance and implications of the differences, from another government’s definition of the problem to the logistics of their task.

7. Has the project been evaluated independently, subject to peer review and/ or using measures deemed acceptable to the government?

8. Has the evaluation been of a sufficient period in proportion to the expected outcomes?

9. Are we confident that this project has been evaluated the most favourably – i.e. that our search for relevant lessons has been systematic, based on recognisable criteria (rather than reputations)?

10. Are we identifying ‘Good practice’ based on positive experience, ‘Promising approaches’ based on positive but unsystematic findings, ‘Research-based’ approaches based on ‘sound theory informed by a growing body of empirical research’, or ‘Evidence-based’ approaches, when ‘the programme or practice has been rigorously evaluated and has consistently been shown to work’?

Points 7-10 raise issues about the relationships between (a) what evidence we should use to evaluate success or potential, and (b) how long we should wait to declare success.

11. What will be the relationship between evidence and governance?

Should we identify the same basic model and transfer it uniformly, tell a qualitative story about the model and invite people to adapt it, or focus pragmatically on an eclectic range of evidential sources and on the training of the actors who will implement policy?

In conclusion

Information technology has allowed us to gather a huge amount of policy-relevant information across the globe. However, it has not solved the limitations we face in defining policy problems clearly, gathering evidence on policy solutions systematically, and generating international lessons that we can use to inform domestic policy processes.

This rise in available evidence is not a substitute for policy analysis and political choice. These choices range from how to adjudicate between competing policy preferences, to how to define good evidence and good government. A lack of attention to these wider questions helps explain why – at least from some perspectives – policy transfer seems to fail.

Paul Cairney Auckland Policy Transfer 12.10.18

Epistemic versus bargaining-driven policy learning

There is an excellent article by Professor Claire Dunlop called “The irony of epistemic learning: epistemic communities, policy learning and the case of Europe’s hormones saga” (Open Access). It uses the language of ‘policy learning’ rather than ‘evidence based policymaking’, but these descriptions are closely related. I describe it below, in the form I’ll use in the 2nd ed of Understanding Public Policy (it will be Box 12.2).

Dunlop (2017c) uses a case study – EU policy on the supply of growth hormones to cattle – to describe the ‘irony of epistemic learning’. It occurs in two initial steps.

First, a period of epistemic learning allowed scientists to teach policymakers the key facts on a newly emerging policy issue. The scientists, trusted to assess risk, engaged in the usual processes associated with scientific work: gathering evidence to reduce uncertainty, but always expressing the need for continuous research to address inevitable uncertainty in some cases. The ‘Lamming’ committee of experts commissioned and analysed scientific evidence comprehensively before reporting (a) that the use of ‘naturally occurring’ hormones in livestock was low risk for human consumers if administered according to regulations and guidance, but (b) that it wanted more time to analyse the carcinogenic effects of two ‘synthetic compounds’ (2017c: 224).

Second, a period of bargaining changed the context. EU officials (in DG Agriculture) responded to European Parliament concerns, fuelled by campaigning from consumer groups, which focused on uncertainty and worst-case scenarios. Officials suspended the committee’s deliberations before it was due to report and banned the use of growth hormones in the EU (and the importation of relevant meat).

The irony is two-fold.

First, it results from the combination of processes: scientists, operating in epistemic mode, described low risk but some uncertainty; and policymakers, operating in bargaining mode, used this sense of uncertainty to reject scientific advice.

Second, scientists were there to help policymakers learn about the evidence, but were themselves unable to learn about how to communicate and form wider networks within a political system characterised by periods of bargaining-driven policy learning.

[Image: Dunlop (2017c)]


Policy concepts in 1000 words: Institutional memory

Guest post by Jack Corbett, Dennis Grube, Heather Lovell and Rodney Scott

Democratic governance is defined by the regular rotation of elected leaders. Amidst the churn, the civil service is expected to act as the repository of received wisdom about past policies, including assessments of what works and what doesn’t. The claim is that to avoid repeating the same mistakes we need to know what happened last time and what were the effects. Institutional memory is thus central to the pragmatic task of governing.

What is institutional memory? And, how is it different to policy learning?

Despite increasing recognition of the role that memory can or should play in the policy process, the concept has defied easy scholarly definition.

In the classic account, institutional memory is the sum total of files, procedures and knowledge held by an organisation. Christopher Pollitt, who has pioneered the study of institutional memory, refers to the accumulated knowledge and experience of staff; technical systems, including electronic databases and various kinds of paper records; the management system; and the norms and values of the organizational culture. In this view, which is based on the key principles of the new institutionalism, memory is essentially an archive.

The problem with this definition is that it is hard to distinguish the concept from policy learning (see also here). If policy learning is in part about increasing knowledge about policy, including correcting for past mistakes, then we could perhaps conceive of a continuum from learning to memory with an inflection point where one starts and the other stops. But, this is easier to imagine than it is to measure empirically. It also doesn’t acknowledge the forms memories take and the ways memories are contested, suppressed and actively forgotten.

In our recent contribution to this debate (see here and here) we define memories as ‘representations of the past’ that actors draw on to narrate what has been learned when developing and implementing policy. When these narratives are embedded in processes they become ‘institutionalised’. It is this emphasis on embedded narratives that distinguishes institutional memory from policy learning. Institutional memory may facilitate policy learning but equally some memories may prohibit genuine adaptation and innovation. As a result, while there is an obvious affinity between the two concepts it is imperative that they remain distinct avenues of inquiry. Policy learning has unequivocally positive connotations that are echoed in some conceptualisations of institutional memory (i.e. Pollitt). But, equally, memory (at least in a ‘static’ form) can be said to provide administrative agents with an advantage over political principals (think of the satirical Sir Humphrey of Yes Minister fame). The table below seeks to distinguish between these two conceptualisations of institutional memory.

Key debates: Is institutional memory declining?

The scholar who has done the most to advance our understanding of institutional memory in government is Christopher Pollitt. His main contention is that institutional memory has declined over recent decades due to: the high rotation of staff in the civil service; changes in IT systems which prevent proper archiving; regular organisational restructuring; rewarding management skills above all others; and adopting new management ‘fads’ that favour constant change as they become popular. This combination of factors has proven to be a perfect recipe for the loss of institutional memory within organisations. The result is a contempt for the past that leads to repeated policy failure.

We came to a different view. Our argument is that one of the key reasons why institutional memory is said to have declined is that it has been conceptualised in a ‘static’ manner more in keeping with an older way of doing government. This practice has assumed that knowledge on a given topic is held centrally (by government departments) and can be made explicit for the purpose of archiving. But, if government doesn’t actually work this way (see relevant posts on networks here) then we shouldn’t expect it to remember this way either. Instead of static repositories of summative documents holding a singular ‘objective’ memory, we propose a more ‘dynamic’ people-centred conceptualisation that sees institutional memory as a composite of intersubjective memories open to change. This draws to the fore the role of actors as crucial interpreters of memory, combining the documentary record with their own perspectives to create a story about the past. In this view, institutional memory has not declined, it is simply being captured in a fundamentally different way.

[Table: Corbett et al.’s ‘static’ and ‘dynamic’ conceptualisations of institutional memory]

Key debates: How can an institution improve how it remembers?

How an institution might improve its memory is intrinsically linked to how memory is defined and whether or not it is actually in decline. If we follow Pollitt’s view that memory is an archive of accumulated knowledge being ignored or deliberately dismantled by managerialism, then the answer involves returning to an older way of doing government that placed a higher value on experience. Treating the past as a resource, institutions would reduce staff turnover, stop regular restructures and changes to IT systems, and so on. For those of us who work in an institution where restructuring and IT changes are the norm, this solution has obvious attractions. But would it actually improve memory? Or would it simply make it easier to preserve the status quo (a process that involves actively forgetting disruptive but generative innovations)?

Our definition, relying as it does on a more dynamic conceptualisation of memory, is sceptical about the need to improve practices of remembering. But if an institution did want to remember better, we would favour increasing the opportunities for actors within the institution to reflect on and narrate the past. One example is a ‘Wikipedia’ model of memory, in which the story of a policy, its successes and failures, is constructed by those involved, highlighting points of consensus and conjecture.

Additional reading:

Corbett, J., D. Grube, H. Lovell, and R. Scott. 2018. “Singular Memory or Institutional Memories? Toward a Dynamic Approach.” Governance: 1–19. https://doi.org/10.1111/gove.12340

Pollitt, C. 2009. “Bureaucracies Remember, Post-Bureaucratic Organizations Forget?” Public Administration 87 (2): 198–218.

Pollitt, C. 2000. “Institutional Amnesia: A Paradox of the ‘Information Age’?” Prometheus 18 (1): 5-16.


1 Comment

Filed under 1000 words, public policy, Uncategorized

Three ways to encourage policy learning

[Photos: Claire A. Dunlop (left) and Claudio M. Radaelli (right)]

This is a guest post by Claire A. Dunlop (left) and Claudio M. Radaelli (right), discussing how to use insights from the policy learning literature to learn effectively, and to adapt to processes of ‘learning’ in policymaking that are more about politics than education. The full paper has been submitted to the Policy and Politics series Practical Lessons from Policy Theories.

We often hear that university researchers are ‘all brains but no common sense’. There is some truth to this stereotype. The literature on policy learning is an archetypal example: high in IQ but low on street smarts. Researchers have generated a huge number of ‘policy learning’ taxonomies, concepts and methods without showing what learning can offer policymakers, citizens and societies.

This is odd, because there is substantial demand and need for practical insights on how to learn. The issues include economic growth, the control of corruption, and improvement in schools and health. Learning organisations range from ‘street-level bureaucracies’ to international regulators like the European Union and the World Trade Organization.

To help develop a more practical agenda, we distil three major lessons from the policy learning literature.

1. Learning is often the by-product of politics, not the primary goal of policymakers

There is usually no clear incentive for political actors to learn how to improve public policy. Learning is often the by-product of bargaining, of efforts to secure compliance with laws and rules, of social participation, or of problem-solving under radical uncertainty. This means that, in politics, we should not assume that politicians, bureaucrats, civil society organisations and experts interact to improve public policy. Consensus, participation, formal procedures, and social certification are more important.

Therefore, we have to learn how to design incentives so that the by-product of learning is actually generated. Otherwise, few actors will play the game of the policy-making process with learning as their first goal. Learning is all around us, but it appears in different forms, depending on whether the context is (a) bargaining, (b) compliance, (c) participation or (d) problem-solving under conditions of high uncertainty.

2. Each mode of learning has its triggers or hindrances

(a) Bargaining requires repeated interaction, low barriers to contracting, and mechanisms of preference aggregation.

(b) Compliance is stymied without trust in institutions.

(c) Participation needs its own deliberative spaces and a type of participant willing to go beyond the ‘dialogue of the deaf’. Without these two triggers, participation is chaotic, highly conflictual and inefficient.

(d) Expertise is key to problem-solving, but governments should design their advisory committees and special commissions of inquiry by recruiting a broad range of experts. The risk of excluding the next Galileo Galilei from a Ptolemaic committee is always there.

At the same time, there are specific hindrances:

(a) Bargaining stops when the winners are always the same (if you are thinking of Germany and Greece in the European Union, you are spot on).

(b) Hierarchy does not produce efficient compliance unless those at the top know exactly which solution to enforce.

(c) Incommensurable beliefs spoil participatory policy processes. When they do, it is better to switch to open democratic conflict, by counting votes in elections and referenda, for example.

(d) Scientific scepticism and low policy capacity mar the work of experts in governmental bodies.

These triggers and hindrances carry important lessons for design, perhaps prompting authorities (governments, regulators, public bodies) to switch from one context to another. For example, one can redesign the work of expert committees by including producers’ and consumers’ organisations, or by allowing bargaining over the implementation of budgetary rules.
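Purely as an illustration (and not a structure taken from Dunlop and Radaelli’s paper), the four contexts and their triggers and hindrances can be laid out as a simple lookup structure; the names and wording below are my own restatement of the points above:

```python
# Illustrative summary of the four learning contexts described above.
# This restates the text; it is not from the original paper, and all
# names and wording are assumptions made for the example.
LEARNING_CONTEXTS = {
    "bargaining": {
        "triggers": ["repeated interaction", "low barriers to contracting",
                     "mechanisms of preference aggregation"],
        "hindrance": "the winners are always the same",
    },
    "compliance": {
        "triggers": ["trust in institutions"],
        "hindrance": "hierarchy without knowledge of the right solution at the top",
    },
    "participation": {
        "triggers": ["deliberative spaces",
                     "participants willing to go beyond the 'dialogue of the deaf'"],
        "hindrance": "incommensurable beliefs",
    },
    "problem-solving": {
        "triggers": ["a broad range of experts on advisory bodies"],
        "hindrance": "scientific scepticism and low policy capacity",
    },
}

def hindrance(context: str) -> str:
    """Look up the hindrance to watch for in a given learning context."""
    return LEARNING_CONTEXTS[context]["hindrance"]

print(hindrance("participation"))  # -> incommensurable beliefs
```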

3. Beware the limitations of learning

We may get this precious by-product and avoid hindrances and traps, but still… learn the wrong lessons.

Latin America and Africa offer too many examples of diligent pupils who did exactly what they were supposed to do but, in the end, implemented the wrong policies. Perfect compliance does not give a policy breathing space, and it impairs the quality of innovation. We have to balance lay and professional knowledge. Bargaining does not allow us to learn about radical innovations; in some cases, only a new participant can really change the nature of the game being played by the usual suspects.

So, whether the problem is learning how to fight organized crime and corruption, or to re-launch growth in Europe and development in Africa, the design of the policy process is crucial. For social actors, our analysis shows when and how they should try to change the nature of the game, or lobby for a re-design of the process. This lesson is often forgotten because social actors fight for a given policy objective, not for the parameters that define who does what and how in the policy process.

12 Comments

Filed under Evidence Based Policymaking (EBPM), public policy

There is no blueprint for evidence-based policy, so what do you do?

In my speech to COPOLAD I began by stating that, although we talk about our hopes for evidence-based policy and policymaking (EBP and EBPM), we don’t really know what they are.

I also argued that EBPM is not like our image of evidence-based medicine (EBM), in which there is a clear idea of (a) which methods and evidence count, and (b) the main aim: to replace bad interventions with good.

In other words, in EBPM there is no blueprint for action, either in the abstract or in specific cases of learning from good practice.

To me, this point is underappreciated in the study of EBPM: we identify the politics of EBPM to highlight the pathologies or ‘irrational’ side of policymaking, but we do not appreciate the more humdrum limits to EBPM even when the political process is healthy and policymakers are fully committed to something more ‘rational’.

Examples from best practice

The examples from our next panel session* demonstrated these limitations to EBPM very well.

The panel contained four examples of impressive policy developments with the potential to outline good practice on the application of public health and harm reduction approaches to drugs policy (including the much-praised Portuguese model).

However, it quickly became apparent that no country-level experience translated into a blueprint for action, for some of the following reasons:

  • It is not always clear what problems policymakers have been trying to solve.
  • It is not always clear how their solutions, in this case, interact with all other relevant policy solutions in related fields.
  • It is difficult to demonstrate clear evidence of success, either before or after the introduction of policies. Instead, most policies are built on initial deductions from relevant evidence, followed by trial-and-error and some evaluations.

In other words, we note routinely the high-level political obstacles to policy emulation, but these examples demonstrate the problems that would still exist even if those initial obstacles were overcome.

A key solution is easier said than done: if providing lessons to others, describe your experience systematically, in a form that sets out the steps required to turn your model into action (and in a form that can be compared with other experiences). To that end, providers of lessons might note the following (a sketch of such a template follows the list):

  • The problem they were trying to solve (and how they framed it to generate attention, support, and action, within their political systems)
  • The detailed nature of the solution they selected (and the conditions under which it became possible to select that intervention)
  • The evidence they used to guide their initial policies (and how they gathered it)
  • The evidence they collected to monitor the delivery of the intervention, evaluate its impact (was it successful?), and identify cause and effect (why was it successful?)
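To make that systematic, comparable description concrete, here is a minimal sketch of what such a lesson template could look like as a structured record. This is my illustration, not an established schema; every field name and the worked example are assumptions:

```python
# A minimal sketch of a structured 'policy lesson' record, so that one
# country's experience can be described systematically and compared with
# others. All field names are illustrative assumptions, not a standard.
from dataclasses import dataclass
from typing import List

@dataclass
class PolicyLesson:
    problem: str                    # the problem policymakers tried to solve
    framing: str                    # how it was framed to generate attention and support
    solution: str                   # the detailed nature of the chosen intervention
    enabling_conditions: List[str]  # conditions under which selection became possible
    initial_evidence: List[str]     # evidence used to guide the initial policy
    monitoring_evidence: List[str]  # evidence collected to monitor delivery
    impact_assessment: str          # was it successful?
    causal_explanation: str         # why was it successful (or not)?

# Hypothetical entry for a harm-reduction policy (invented for illustration).
lesson = PolicyLesson(
    problem="Rising drug-related deaths",
    framing="A public health crisis rather than a criminal justice issue",
    solution="Decriminalisation combined with treatment referral panels",
    enabling_conditions=["cross-party support", "sustained media attention"],
    initial_evidence=["epidemiological surveys", "pilot programme evaluations"],
    monitoring_evidence=["annual mortality statistics", "treatment uptake rates"],
    impact_assessment="Reduction in overdose deaths relative to trend",
    causal_explanation="Counterfactual reasoning; no RCT available",
)
print(lesson.problem)
```

The point of a shared structure of this kind is not the technology but the discipline: it forces providers of lessons to record the same categories of information, which is what makes experiences comparable.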

Realistically, this is when the process least resembles (the ideal of) EBM, because few evaluations of success will be based on a randomised controlled trial or some equivalent (and other policymakers may not draw primarily on RCT evidence even when it exists).

Instead, as with much harm reduction and prevention policy, a lot of the justification for success rests on a counterfactual (what would have happened if we had not intervened?), which is itself based on:

(a) the belief that our object of policy is a complex environment containing many ‘wicked problems’, in which the effects of one intervention cannot easily be separated from those of another (which makes it difficult, and perhaps even inappropriate, to rely on RCTs), and

(b) an assessment of the unintended consequences of previous (generally more punitive) policies.

So, the first step to ‘evidence-based policymaking’ is to make a commitment to it. The second is to work out what it is. The third is to do it in a systematic way that allows others to learn from your experience.

The latter may be more political than it looks: few countries (or, at least, the people seeking re-election within them) will want to tell the rest of the world ‘we innovated and we don’t think it worked’.

*I also discuss this problem of evidence-based best practice within single countries


1 Comment

Filed under Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, tobacco policy