Prevention is better than cure, so why aren’t we doing more of it?

This post provides a generous amount of background for my ANZSOG talk Prevention is better than cure, so why aren’t we doing more of it? If you read all of it, it’s a long read. If not, it’s a short read before the long read. Here is the talk’s description:

‘Does this sound familiar? A new government comes into office, promising to shift the balance in social and health policy from expensive remedial, high dependency care to prevention and early intervention. They commit to better policy-making; they say they will join up policy and program delivery, devolving responsibility to the local level and focusing on long term outcomes rather than short term widgets; and that they will ensure policy is evidence-based.  And then it all gets too hard, and the cycle begins again, leaving some exhausted and disillusioned practitioners in its wake. Why does this happen repeatedly, across different countries and with governments of different persuasions, even with the best will in the world?’ 

  • You’ll see from the question that I am not suggesting that all prevention or early intervention policies fail. Rather, I use policy theories to provide a general explanation for a major gap between the (realistic) expectations expressed in prevention strategies and the actual outcomes. We can then talk about how to close that gap.
  • You’ll also see the phrase ‘even with the best will in the world’, which I think is key to this talk. No-one needs me to rehearse the usually-vague and often-stated ways to explain failed prevention policies, including the ‘wickedness’ of policy problems, or the ‘pathology’ of public policy. Rather, I show that such policies may ‘fail’ even when there is wide and sincere cross-party agreement about the need to shift from reactive to more prevention policy design. I also suggest that the general explanation for failure – low ‘political will’ – is often damaging to the chances for future success.
  • Let’s start by defining prevention policy and policymaking.

When engaged in ‘prevention’, governments seek to:

  1. Reform policy.

Prevention policy is really a collection of policies designed to intervene as early as possible in people’s lives to improve their wellbeing and reduce inequalities and/or demand for acute services. The aim is to move from reactive to preventive public services, intervening earlier in people’s lives to address a wide range of longstanding problems – including crime and anti-social behaviour, ill health and unhealthy behaviour, low educational attainment, unemployment and low employability – before they become too severe.

  2. Reform policymaking.

Preventive policymaking describes the ways in which governments reform their practices to support prevention policy, including a commitment to:

  • ‘join up’ government departments and services to solve ‘wicked problems’ that transcend one area
  • give more responsibility for service design to local public bodies, stakeholders, ‘communities’ and service users
  • produce long term aims for outcomes, and
  • reduce short term performance targets in favour of long term outcomes agreements.
  3. Ensure that policy is ‘evidence based’.

Three general reasons why ‘prevention’ policies never seem to succeed.

  1. Policymakers don’t know what prevention means.

They express a commitment to prevention before defining it fully. When they start to make sense of prevention, they find out how difficult it is to pursue, and how many controversial choices it involves (see also uncertainty versus ambiguity).

  2. They engage in a policymaking system that is too complex to control.

They try to share responsibility with many actors and coordinate action to direct policy outcomes, without the ability to design those relationships and control policy outcomes.

Yet, they also need to demonstrate to the electorate that they are in control, and find out how difficult it is to localise and centralise policy.

  3. They are unable and unwilling to produce ‘evidence based policymaking’.

Policymakers seek cognitive shortcuts (and their organisational equivalents) to gather enough information to make ‘good enough’ decisions. When they seek evidence on prevention, they find that it is patchy, inconclusive, often counter to their beliefs, and not a ‘magic bullet’ to help justify choices.

Throughout this process, their commitment to prevention policy can be sincere but unfulfilled. They do not articulate fully what prevention means or appreciate the scale of their task. When they try to deliver prevention strategies, they face several problems that, on their own, would seem daunting. Many of the problems they seek to ‘prevent’ are ‘wicked’, or difficult to define and seemingly impossible to solve, such as poverty, unemployment, low quality housing and homelessness, crime, and health and education inequalities. They face stark choices on how far they should go to shift the balance between state and market, redistribute wealth and income, distribute public resources, and intervene in people’s lives to change their behaviour and ways of thinking. Their focus on the long term faces major competition from more salient short-term policy issues that prompt them to maintain ‘reactive’ public services. Their often-sincere desire to ‘localise’ policymaking often gives way to national electoral politics, in which central governments face pressure to make policy from the ‘top’ and be decisive. Their pursuit of ‘evidence based’ policymaking often reveals a lack of evidence about which policy interventions work and the extent to which they can be ‘scaled up’ successfully.

These problems will not be overcome if policymakers and influencers misdiagnose them:

  • If policy influencers make the simplistic assumption that this problem is caused by low political will, they will provide bad advice.
  • If new policymakers truly think that the problem was the low commitment and competence of their predecessors, they will begin with the same high hopes about the impact they can make, only to become disenchanted when they see the difference between their abstract aims and real world outcomes.
  • Poor explanation of limited success contributes to the high potential for (a) an initial period of enthusiasm and activity to be replaced by (b) disenchantment and inactivity, and (c) this cycle to be repeated without resolution.

Let’s add more detail to these general explanations:

1. What makes prevention so difficult to define?

When viewed as a simple slogan, ‘prevention’ seems like an intuitively appealing aim. It can generate cross-party consensus, bringing together groups on the ‘left’, seeking to reduce inequalities, and on the ‘right’, seeking to reduce economic inactivity and the cost of services.

Such consensus is superficial and illusory. When making a detailed strategy, prevention is open to many interpretations by many policymakers. Imagine the many types of prevention policy and policymaking that we could produce:

  1. What problem are we trying to solve?

Prevention policymaking represents a heroic solution to several crises: major inequalities, underfunded public services, and dysfunctional government.

  2. On what measures should we focus?

On which inequalities should we focus primarily? Wealth, occupation, income, race, ethnicity, gender, sexuality, disability, mental health.

On which measures of inequality? Economic, health, healthy behaviour, education attainment, wellbeing, punishment.

  3. On what solution should we focus?

To reduce poverty and socioeconomic inequalities, improve national quality of life, reduce public service costs, or increase value for money?

  4. Which ‘tools’ or policy instruments should we use?

Redistributive policies to address ‘structural’ causes of poverty and inequality?

Or, individual-focused policies to (a) boost the mental ‘resilience’ of public service users, (b) oblige people to change their behaviour, or (c) exhort them to do so.

  5. How do we intervene as early as possible in people’s lives?

Primary prevention. Focus on the whole population to stop a problem occurring by investing early and/or modifying the social or physical environment. Akin to whole-population immunisations.

Secondary prevention. Focus on at-risk groups to identify a problem at a very early stage to minimise harm.

Tertiary prevention. Focus on affected groups to stop a problem getting worse.

  6. How do we pursue ‘evidence based policymaking’? Three ideal-types:

Using randomised control trials and systematic review to identify the best interventions?

Storytelling to share best governance practice?

‘Improvement’ methods to experiment on a small scale and share best practice?

  7. How does evidence gathering connect to long-term policymaking?

Does a national strategy drive long-term outcomes?

Does central government produce agreements with or targets for local authorities?

  8. Is preventive policymaking a philosophy or a profound reform process?

How serious are national governments – about localism, service user-driven public services, and joined up or holistic policymaking – when their elected policymakers are held to account for outcomes?

  9. What is the nature of state intervention?

It may be punitive or supportive. See: How would Lisa Simpson and Monty Burns make progressive social policy?

2. Making ‘hard choices’: what problems arise when politics meets policymaking?

When policymakers move from idiom and broad philosophy towards specific policies and practices, they find a range of obstacles, including:

The scale of the task becomes overwhelming, and not suited to electoral cycles.

Developing policy and reforming policymaking takes time, and the effect may take a generation to see.

There is competition for policymaking resources such as attention and money.

Prevention is general, long-term, and low salience. It competes with salient short-term problems that politicians feel compelled to solve first.

Prevention is akin to capital investment with no guarantee of a return. Reductions in funding for ‘fire-fighting’ or ‘frontline’ services, to pay for prevention initiatives, are hard to sell. Governments invest in small steps, and investment is vulnerable when money is needed quickly to fund public service crises.

The benefits are difficult to measure and see.

Short-term impacts are hard to measure, long-term impacts are hard to attribute to a single intervention, and prevention does not necessarily save money (or provide ‘cashable’ savings).

Reactive policies have a more visible impact, such as reducing hospital waiting times or increasing the number of teachers or police officers.

Problems are ‘wicked’.

Getting to the ‘root causes’ of problems is not straightforward; policymakers often have no clear sense of the cause of problems or effect of solutions. Few aspects of prevention in social policy resemble disease prevention, in which we know the cause of many diseases, how to screen for them, and how to prevent them in a population.

Performance management is not conducive to prevention.

Performance management systems encourage public sector managers to focus on their services’ short-term and measurable targets over shared aims with public service partners or the wellbeing of their local populations.

Performance management is about setting priorities when governments have too many aims to fulfil. When central governments encourage local governing bodies to form long-term partnerships to address inequalities and meet short-term targets, the latter come first.

Governments face major ethical dilemmas.

Political choices co-exist with normative judgements concerning the role of the state and personal responsibility, often undermining cross-party agreement.

One aspect of prevention may undermine the other.

A cynical view of prevention initiatives is that they represent a quick political fix rather than a meaningful long-term solution:

  • Central governments describe prevention as the solution to public sector costs while also delegating policymaking responsibility to, and reducing the budgets of, local public bodies.
  • Then, public bodies prioritise their most pressing statutory responsibilities.

Someone must be held to account.

If everybody is involved in making and shaping policy, it becomes unclear who can be held to account over the results. This outcome is inconsistent with Westminster-style democratic accountability in which we know who is responsible and therefore who to praise or blame.

3. ‘The evidence’ is not a ‘magic bullet’

In a series of other talks, I identify the reasons why ‘evidence based policymaking’ (EBPM) does not describe the policy process well.

Elsewhere, I also suggest that it is more difficult for evidence to ‘win the day’ in the broad area of prevention policy compared to the more specific field of tobacco control.

Generally speaking, a good simple rule about EBPM is that there is never a ‘magic bullet’ to take the place of judgement. Politics is about making choices which benefit some while others lose out. You can use evidence to help clarify those choices, but not produce a ‘technical’ solution.

A further rule with ‘wicked’ problems is that the evidence is not good enough even to generate clarity about the cause of the problem. Or, you simply find out things you don’t want to hear.

Early intervention in ‘families policies’ seems to be a good candidate for the latter, for three main reasons:

1. Very few interventions live up to the highest evidence standards

There are two main types of relevant ‘evidence based’ interventions in this field.

The first are ‘family intervention projects’ (FIPs). They generally focus on low-income, often lone-parent, families at risk of eviction linked to factors such as antisocial behaviour, and provide two forms of intervention:

  • intensive 24/7 support, including after school clubs for children and parenting skills classes, and treatment for addiction or depression in some cases, in dedicated core accommodation with strict rules on access and behaviour
  • an outreach model of support and training.

The evidence of success comes from evaluation plus a counterfactual: this intervention is expensive, but we think that it would have cost far more money and heartache if we had not intervened. There is generally no randomised control trial (RCT) to establish the cause of improved outcomes, or demonstrate that those outcomes would not have happened without this intervention.

The second are projects imported from other countries (primarily the US and Australia) based on their reputation for success built on RCT evidence. There is more quantitative evidence of success, but it is still difficult to know if the project can be transferred effectively and if its success can be replicated in another country with very different political drivers, problems, and services.

2. The evidence on ‘scaling up’ for primary prevention is relatively weak

Kenneth Dodge (2009) sums up a general problem:

  • there are few examples of taking effective specialist projects ‘to scale’
  • there are major issues around ‘fidelity’ to the original project when you scale up (including the need to oversee a major expansion in well-trained practitioners)
  • it is difficult to predict the effect, on a new and different population, of a programme which showed promise when applied to one population.

3. The evidence on secondary early intervention is also weak

This point about different populations with different motivations is demonstrated in a more recent study (published in 2014) by Stephen Scott et al. of two Incredible Years interventions – to address ‘oppositional defiant disorder symptoms and antisocial personality character traits’ in children aged 3-7 (for a wider discussion of such programmes see the Early Intervention Foundation’s Foundations for life: what works to support parent child interaction in the early years?).

They highlight a classic dilemma in early intervention: the evidence of effectiveness is only clear when children have been clinically referred (‘indicated approach’), but unclear when children have been identified as high risk using socioeconomic predictors (‘selective approach’):

An indicated approach is simpler to administer, as there are fewer children with severe problems, they are easier to identify, and their parents are usually prepared to engage in treatment; however, the problems may already be too entrenched to treat. In contrast, a selective approach targets milder cases, but because problems are less established, whole populations have to be screened and fewer cases will go on to develop serious problems.

For our purposes, this may represent the most inconvenient form of evidence on early intervention: you can intervene early on the back of very limited evidence of likely success, or have a far higher likelihood of success when you intervene later, when you are running out of time to call it ‘early intervention’.

Conclusion: vague consensus is no substitute for political choice

Governments begin with the sense that they have found the solution to many problems, only to find that they have to make and defend highly ‘political’ choices.

For example, see the UK government’s ‘imaginative’ use of evidence to make families policy. In a nutshell, it chose to play fast and loose with evidence, and demonise 117,000 families, to provide political cover for a redistribution of resources to family intervention projects.

We can, with good reason, object to this style of politics. However, we would also have to produce a feasible alternative.

For example, the Scottish Government has taken a different approach (perhaps closer to what one might often expect in New Zealand), but it still needs to produce and defend a story about its choices, and it faces almost the same constraints as the UK. Its self-described ‘decisive shift’ to prevention was not a decisive shift to prevention.

Overall, prevention is no different from any other policy area, except that it has proven to be much more complicated and difficult to sustain than most others. Prevention is part of an excellent idiom but not a magic bullet for policy problems.

Further reading:

Prevention

See also

What do you do when 20% of the population causes 80% of its problems? Possibly nothing.

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

 

 


Teaching evidence based policy to fly: how to deal with the politics of policy learning and transfer

This post provides (a generous amount of) background for my ANZSOG talk Teaching evidence based policy to fly: transferring sound policies across the world.

The event’s description sums up key conclusions in the literature on policy learning and policy transfer:

  1. technology and ‘entrepreneurs’ help ideas spread internationally, and domestic policymakers can use them to be more informed about global policy innovation, but
  2. there can be major unintended consequences to importing ideas, such as the adoption of policy solutions with poorly-evidenced success, or a broader sense of failed transfer caused by factors such as a poor fit between the aims of the exporter/importer.

In this post, I connect these conclusions to broader themes in policy studies, which suggest that:

  1. policy learning and policy transfer are political processes, not ‘rational’ or technical searches for information
  2. the use of evidence to spread policy innovation requires two interconnected choices: what counts as good evidence, and what role central governments should play
  3. the following ’11 question guide’ to evidence based policy transfer serves more as a way to reflect than as a blueprint for action.

As usual, I suggest that we focus less on how we think we’d like to do it, and more on how people actually do it.


Policy transfer describes the use of evidence about policy in one political system to help develop policy in another. Taken at face value, it sounds like a great idea: why would a government try to reinvent the wheel when another government has shown how to do it?

Therefore, wouldn’t it be nice if I turned up to the lecture, equipped with a ‘blueprint’ for ‘evidence based’ policy transfer, and declared how to do it in a series of realistic and straightforward steps? Unfortunately, there are three main obstacles:

  1. ‘Evidence based’ is a highly misleading description of the use of information in policy.
  2. To transfer a policy blueprint completely, in this manner, would require all places and contexts to be the same, and for the policy process to be technocratic and apolitical.
  3. There are general academic guides on how to learn lessons from others systematically – such as Richard Rose’s ‘practical guide’  – but most academic work on learning and transfer does not suggest that policymakers follow this kind of advice.

[Image: Richard Rose’s 10 lessons for lesson-drawing]

Instead, policy learning is a political process – involving the exercise of power to determine what and how to learn – and it is difficult to separate policy transfer from the wider use of evidence and ideas in policy processes.

Let’s take each of these points in turn, before reflecting on their implications for any X-step guide:

3 reasons why ‘evidence based’ does not describe policymaking

In a series of ANZSOG talks on ‘evidence based policymaking’ (EBPM), I describe three main factors, all of which are broadly relevant to transfer:

  1. There are many forms of policy-relevant evidence and few policymakers adhere to a strict ‘hierarchy’ of knowledge.

Therefore, it is unclear how one government can, or should, generate evidence of another government’s policy success.

  2. Policymakers must find ways to ignore most evidence – such as by combining ‘rational’ and ‘irrational’ cognitive shortcuts – to be able to act quickly.

The generation of policy transfer lessons is a highly political process in which actors adapt to this need to prioritise information while competing with each other. They exercise power to: prioritise some information and downplay the rest, define the nature of the policy problem, and evaluate the success of another government’s solutions. There is a strong possibility that policymakers will import policy solutions without knowing if, and why, they were successful.

  3. They do not control the policy process in which they engage.

We should not treat ‘policy transfer’ as separate from the policy process in which policymakers and influencers engage. Rather, the evidence of international experience competes with many other sources of ideas and evidence within a complex policymaking system.

The literature on ‘policy learning’ tells a similar story

Studies of the use of evaluation evidence (perhaps to answer the question: was this policy successful?) have long described policymakers using the research process for many different purposes, from short-term problem-solving and long-term enlightenment, to putting off decisions or using evidence cynically to support an existing policy.

We should therefore reject the temptation to (a) equate ‘policy learning’ with a simplistic process that we might associate with teachers transmitting facts to children, or (b) assume that adults simply change their beliefs when faced with new evidence. Rather, Dunlop and Radaelli describe policy learning as a political process in the following ways:

1. It is collective and rule-bound

Individuals combine cognition and emotion to process information, in organisations with rules that influence their motive and ability to learn, and in wider systems, in which many actors cooperate and compete to establish the rules of evidence gathering and analysis, or policymaking environments that constrain or facilitate their action.

2. ‘Evidence based’ is one of several types of policy learning

  • Epistemic. Primarily by scientific experts transmitting knowledge to policymakers.
  • Reflection. Open dialogue to incorporate diverse forms of knowledge and encourage cooperation.
  • Bargaining. Actors learn how to cooperate and compete effectively.
  • Hierarchy. Actors with authority learn how to impose their aims; others learn the limits to their discretion.

3. The process can be ‘dysfunctional’: driven by groupthink, limited analysis, and learning how to dominate policymaking, not improve policy.

Their analysis can produce relevant take-home points such as:

  • Experts will be ineffective if they assume that policy learning is epistemic. The assumption will leave them ill-prepared to deal with bargaining.
  • There is more than one legitimate way to learn, such as via deliberative processes that incorporate more perspectives and forms of knowledge.

What does the literature on transfer tell us?

‘Policy transfer’ can describe a spectrum of activity:

  • driven voluntarily, by a desire to learn from the story of another government’s policy success. In such cases, importers use shortcuts to learning, such as by restricting their search to systems with which they have something in common (such as geography or ideology), learning via intermediaries such as ‘entrepreneurs’, or limiting their searches for evidence of success.
  • driven by various forms of pressure, including encouragement by central (or supranational) governments, international norms or agreements, ‘spillover’ effects causing one system to respond to innovation by another, or demands by businesses to minimise the cost of doing business.

In that context, some of the literature focuses on warning against unsuccessful policy transfer caused by factors such as:

  • Failing to generate or use enough evidence on what made the initial policy successful
  • Failing to adapt that policy to local circumstances
  • Failing to back policy change with sufficient resources

However, other studies highlight some major qualifications:

  • If the process is about using ideas about one system to inform another, our attention may shift from ‘transfer’ to ‘translation’ or ‘transformation’, and the idea of ‘successful transfer’ makes less sense
  • Transfer success is not the same as implementation success, which depends on a wider range of factors
  • Nor is it the same as ‘policy success’, which can be assessed by a mix of questions to reflect political reality: did it make the government more re-electable, was the process of change relatively manageable, and did it produce intended outcomes?

The use of evidence to spread policy innovation requires a combination of profound political and governance choices

When encouraging policy diffusion within a political system, choices about (a) what counts as ‘good’ evidence of policy success connect strongly to choices about (b) what counts as good governance.

For example, consider these ideal-types or models in table 1:

Table 1: 3 ideal types of EBBP

In one scenario, we begin by relying primarily on RCT evidence (multiple international trials) and import a relatively fixed model, to ensure ‘fidelity’ to a proven intervention and allow us to measure its effect in a new context. This choice of good evidence limits the ability of subnational policymakers to adapt policy to local contexts.

In another scenario, we begin by relying primarily on governance principles, such as respecting local discretion and incorporating practitioner and user experience as important knowledge claims. This choice of governance model relates closely to a broader sense of what counts as good evidence, but also a more limited ability to evaluate policy success scientifically.

In other words, the political choice to privilege some forms of evidence is difficult to separate from another political choice to privilege the role of one form of government.

Telling a policy transfer story: 11 questions to encourage successful evidence based policy transfer  

In that context, these steps to evidence-informed policy transfer serve more to encourage reflection than provide a blueprint for action. I accept that 11 is less catchy than 10.

  1. What problem did policymakers say they were trying to solve, and why?
  2. What solution(s) did they produce?
  3. Why?

Points 1-3 represent the classic and necessary questions from policy studies: (1) ‘what is policy?’, (2) ‘how much did policy change?’, and (3) ‘why?’. Until we have a good answer, we do not know how to draw comparable lessons. Learning from another government’s policy choices is no substitute for learning from more meaningful policy change.

4. Was the project introduced in a country or region which is sufficiently comparable? Comparability can relate to the size and type of country, the nature of the problem, the aims of the borrowing/ lending government and their measures of success.

5. Was it introduced nationwide, or in a region which is sufficiently representative of the national experience (it is not an outlier)?

6. How do we account for the role of scale, and the different cultures and expectations in each policy field?

Points 4-6 inform initial background discussions of case study reports. We need to focus on comparability when describing the context in which the original policy developed. It is not enough to state that two political systems are different. We need to identify the relevance and implications of the differences, from another government’s definition of the problem to the logistics of their task.

7. Has the project been evaluated independently, subject to peer review and/ or using measures deemed acceptable to the government?

8. Has the evaluation been of a sufficient period in proportion to the expected outcomes?

9. Are we confident that this project has been evaluated the most favourably – i.e. that our search for relevant lessons has been systematic, based on recognisable criteria (rather than reputations)?

10. Are we identifying ‘Good practice’ based on positive experience, ‘Promising approaches’ based on positive but unsystematic findings, ‘Research–based’ or based on ‘sound theory informed by a growing body of empirical research’, or ‘Evidence–based’, when ‘the programme or practice has been rigorously evaluated and has consistently been shown to work’?

Points 7-10 raise issues about the relationships between (a) what evidence we should use to evaluate success or potential, and (b) how long we should wait to declare success.

11. What will be the relationship between evidence and governance?

Should we identify the same basic model and transfer it uniformly, tell a qualitative story about the model and invite people to adapt it, or focus pragmatically on an eclectic range of evidential sources and focus on the training of the actors who will implement policy?

In conclusion

Information technology has allowed us to gather a huge amount of policy-relevant information across the globe. However, it has not solved the limitations we face in defining policy problems clearly, gathering evidence on policy solutions systematically, and generating international lessons that we can use to inform domestic policy processes.

This rise in available evidence is not a substitute for policy analysis and political choice. These choices range from how to adjudicate between competing policy preferences, to how to define good evidence and good government. A lack of attention to these wider questions helps explain why – at least from some perspectives – policy transfer seems to fail.

Paul Cairney Auckland Policy Transfer 12.10.18

 

 


The Politics of Evidence-Based Policymaking: ANZSOG talks

This post introduces a series of related talks on ‘the politics of evidence-based policymaking’ (EBPM) that I’m giving as part of a larger series of talks during this ANZSOG-funded/organised trip.

The EBPM talks begin with a discussion of the same three points: what counts as evidence, why we must ignore most of it (and how), and the policy process in which policymakers use some of it. However, the framing of these points, and the ways in which we discuss the implications, varies markedly by audience. So, in this post, I provide a short discussion of the three points, then show how the audience matters (referring to the city as a shorthand for each talk).

The overall take-home points are highly practical, in the same way that critical thinking has many practical applications (in other words, I’m not offering a map, toolbox, or blueprint):

  • If you begin with (a) the question ‘why don’t policymakers use my evidence?’ I like to think you will end with (b) the question ‘why did I ever think they would?’.
  • If you begin by taking the latter as (a) a criticism of politics and policymakers, I hope you will end by taking it as (b) a statement of the inevitability of the trade-offs that must accompany political choice.
  • We may address these issues by improving the supply and use of evidence. However, it is more important to maintain the legitimacy of the politicians and political systems in which policymakers choose to ignore evidence. Technocracy is no substitute for democracy.

3 ways to describe the use of evidence in policymaking

  1. Discussions of the use of evidence in policy often begin as a valence issue: who wouldn’t want to use good evidence when making policy?

However, it only remains a valence issue when we refuse to define evidence and justify what counts as good evidence. After that, you soon see the political choices emerge. A reference to evidence is often a shorthand for scientific research evidence, and good often refers to specific research methods (such as randomised control trials). Or, you find people arguing very strongly in the almost-opposite direction, criticising this shorthand as exclusionary and questioning the ability of scientists to justify claims to superior knowledge. Somewhere in the middle, we find that a focus on evidence is a good way to think about the many forms of information or knowledge on which we might base decisions, including: a wider range of research methods and analyses, knowledge from experience, and data relating to the local context with which policy would interact.

So, what begins as a valence issue becomes a gateway to many discussions about how to understand profound political choices regarding: how we make knowledge claims, how to ‘co-produce’ knowledge via dialogue among many groups, and the relationship between choices about evidence and governance.

  2. It is impossible to pay attention to all policy-relevant evidence.

There is far more information about the world than we are able to process. A focus on evidence gaps often gives way to the recognition that we need to find effective ways to ignore most evidence.

There are many ways to describe how individuals combine cognition and emotion to limit their attention enough to make choices, and policy studies (to all intents and purposes) describe equivalent processes – known, for example, as ‘institutions’ or rules – in organisations and systems.

One shortcut between information and choice is to set aims and priorities; to focus evidence gathering on a small number of problems or one way to define a problem, and identify the most reliable or trustworthy sources of evidence (often via evidence ‘synthesis’). Another is to make decisions quickly by relying on emotion, gut instinct, habit, and existing knowledge or familiarity with evidence.

Either way, agenda setting and problem definition are political processes that address uncertainty and ambiguity. We gather evidence to reduce uncertainty, but first we must reduce ambiguity by exercising power to define the problem we seek to solve.

  3. It is impossible to control the policy process in which people use evidence.

Policy textbooks (well, my textbook at least!) provide a contrast between:

  • The model of a ‘policy cycle’ that sums up straightforward policymaking, through a series of stages, over which policymakers have clear control. At each stage, you know where evidence fits in: to help define the problem, generate solutions, and evaluate the results to set the agenda for the next cycle.
  • A more complex ‘policy process’, or policymaking environment, of which policymakers have limited knowledge and even less control. In this environment, it is difficult to know with whom to engage, the rules of engagement, or the likely impact of evidence.

Overall, policy theories have much to offer people with an interest in evidence-use in policy, but primarily as a way to (a) manage expectations, and therefore (b) produce more realistic strategies and less dispiriting conclusions. It is useful to frame our aim as analysing the role of evidence within a policy process that (a) we don’t quite understand, rather than (b) the one we would like to exist.

The events themselves

Below, you will find a short discussion of the variations of audience and topic. I’ll update and reflect on this discussion (in a revised version of this post) after taking part in the events.

Social science and policy studies: knowledge claims, bounded rationality, and policy theory

For Auckland and Wellington A, I’m aiming for an audience containing a high proportion of people with a background in social science and policy studies. I describe the discussion as ‘meta’ because I am talking about how I talk about EBPM to other audiences, then inviting discussion on key parts of that talk, such as how to conceptualise the policy process and present conceptual insights to people who have no intention of deep dives into policy theory.

I often use the phrase ‘I’ve read it, so you don’t have to’ partly as a joke, but also to stress the importance of disciplinary synthesis when we engage in interdisciplinary (and inter-professional) discussion. If so, it is important to discuss how to produce such ‘synthetic’ accounts.

I tend to describe key components of a policymaking environment quickly: many policy makers and influencers spread across many levels and types of government, institutions, networks, socioeconomic factors and events, and ideas. However, each of these terms represents a shorthand to describe a large and diverse literature. For example, I can describe an ‘institution’ in a few sentences, but the study of institutions contains a variety of approaches.

Background post: I know my audience, but does my other audience know I know my audience?

Academic-practitioner discussions: improving the use of research evidence in policy

For Wellington B and Melbourne, the audience is an academic-practitioner mix. We discuss ways in which we can encourage the greater use of research evidence in policy, perhaps via closer collaboration between suppliers and users.

Discussions with scientists: why do policymakers ignore my evidence?

Sydney UNSW focuses more on researchers in scientific fields (often not in social science).  I frame the question in a way that often seems central to scientific researcher interest: why do policymakers seem to ignore my evidence, and what can I do about it?

Then, I tend to push back on the idea that the fault lies with politics and policymakers, to encourage researchers to think more about the policy process and how to engage effectively in it. If I’m trying to be annoying, I’ll suggest to a scientific audience that they see themselves as ‘rational’ and politicians as ‘irrational’. However, the more substantive discussion involves comparing (a) ‘how to make an impact’ advice drawn from the personal accounts of experienced individuals, giving advice to individuals, and (b) the sort of advice you might draw from policy theories which focus more on systems.

Background post: What can you do when policymakers ignore your evidence?

Early career researchers: the need to build ‘impact’ into career development

Canberra UNSW is more focused on early career researchers. I think this is the most difficult talk because I don’t rely on the same joke about my role: to turn up at the end of research projects to explain why they failed to have a non-academic impact.  Instead, my aim is to encourage intelligent discussion about situating the ‘how to’ advice for individual researchers into a wider discussion of policymaking systems.

Similarly, Brisbane A and B are about how to engage with practitioners, and communicate well to non-academic audiences, when most of your work and training is about something else entirely (such as learning about research methods and how to engage with the technical language of research).

Background posts:

What can you do when policymakers ignore your evidence? Tips from the ‘how to’ literature from the science community

What can you do when policymakers ignore your evidence? Encourage ‘knowledge management for policy’


Talks and blogs: ANZSOG trip

 

 

I’m taking a trip to New Zealand and Australia as a guest of ANZSOG.

Here is a list of dates and draft titles for each talk. If available, there is a link in the name of the city to the advert for the talk.

My plan is to insert a link in each title to a blog post on the talk (or, in some cases like UNSW, a recording).

There is also a selection of tweets to prove that I’m not making the trip up.

  • Auckland, 11 Oct: Why don’t policymakers listen to my evidence? (powerpoint)
  • Auckland, 12 Oct: Teaching evidence-based policy to fly: transferring sound policies across the world (recorded by ANZSOG)
  • Auckland, 12 Oct: Working with complexity – evidence based practice and practice based evidence (informal roundtable discussion of this kind of thing)
  • Wellington, 15 Oct: The politics of evidence-based policy: the expectations of academics and policymakers and ways to meet in the middle (discussion-only, but see this kind of post or presentation on the ‘culture gap’)
  • Wellington, 15 Oct: Prevention is better than cure: so why aren’t we doing more of it? (ppt)
  • Wellington, 16 Oct: The politics of evidence-based policy (meta, workshop)
  • Wellington, 17 Oct: Blogging and reaching out (see discussion of previous event and Q. Should PhD students blog? A. Yes)
  • Melbourne, 18 Oct: The politics of evidence-based policy making (roundtable, DPC)
  • Sydney UNSW, 19 Oct: Why don’t policymakers listen to your evidence? (presentation then roundtable on impact)
  • Canberra, 22 Oct: Taking lessons from policy theory into practice (presentation)
  • Canberra, 23 Oct: Why might policymakers listen to your evidence? (PhD workshop)
  • Brisbane, 24 Oct: How scholars can engage with practitioners / Communicating or translating research for wider audiences
  • Brisbane, 25 Oct: The new policy sciences (article with Chris Weible)

More praise inflation to come.


Epistemic versus bargaining-driven policy learning

There is an excellent article by Professor Claire Dunlop called “The irony of epistemic learning: epistemic communities, policy learning and the case of Europe’s hormones saga” (Open Access). It uses the language of ‘policy learning’ rather than ‘evidence based policymaking’, but these descriptions are closely related. I describe it below, in the form I’ll use in the 2nd ed of Understanding Public Policy (it will be Box 12.2).

Dunlop (2017c) uses a case study – EU policy on the supply of growth hormones to cattle – to describe the ‘irony of epistemic learning’. It occurs in two initial steps.

First, a period of epistemic learning allowed scientists to teach policymakers the key facts on a newly emerging policy issue. The scientists, trusted to assess risk, engaged in the usual processes associated with scientific work: gathering evidence to reduce uncertainty, but always expressing the need to produce continuous research to address inevitable uncertainty in some cases. The ‘Lamming’ committee of experts commissioned and analysed scientific evidence comprehensively before reporting (a) that the use of ‘naturally occurring’ hormones in livestock was low risk for human consumers if administered according to regulations and guidance, but (b) it wanted more time to analyse the carcinogenic effects of two ‘synthetic compounds’ (2017c: 224).

Second, a period of bargaining changed the context. EU officials (in DG Agriculture) responded to European Parliament concerns, fuelled by campaigning from consumer groups, which focused on uncertainty and worst-case scenarios. Officials suspended the committee’s deliberations before it was due to report and banned the use of growth hormones in the EU (and the importation of relevant meat).

The irony is two-fold.

First, it results from the combination of processes: scientists, operating in epistemic mode, described low risk but some uncertainty; and policymakers, operating in bargaining mode, used this sense of uncertainty to reject scientific advice.

Second, scientists were there to help policymakers learn about the evidence, but were themselves unable to learn about how to communicate and form wider networks within a political system characterised by periods of bargaining-driven policy learning.



Managing expectations about the use of evidence in policy

Notes for the #transformURE event hosted by Nuffield, 25th September 2018

I like to think that I can talk with authority on two topics that, much like a bottle of Pepsi and a pack of Mentos, you should generally keep separate:

  1. When talking at events on the use of evidence in policy, I say that you need to understand the nature of policy and policymaking to understand the role of evidence in it.
  2. When talking with students, we begin with the classic questions ‘what is policy?’ and ‘what is the policy process?’, and I declare that we don’t know the answer. We define policy to show the problems with all definitions of policy, and we discuss many models and theories that only capture one part of the process. There is no ‘general theory’ of policymaking.

The problem, when you put together those statements, is that you need to understand the role of evidence within a policy process that we don’t really understand.

It’s an OK conclusion if you just want to declare that the world is complicated, but not if you seek ways to change it or operate more effectively within it.

Put less gloomily:

  • We have ways to understand key parts of the policy process. They are not ready-made to help us understand evidence use, but we can use them intelligently.
  • Most policy theories exist to explain policy dynamics, not to help us adapt effectively to them, but we can derive general lessons with often-profound implications.

Put even less gloomily, it is not too difficult to extract/ synthesise key insights from policy theories, explain their relevance, and use them to inform discussions about how to promote your preferred form of evidence use.

The only remaining problem is that, although the resultant advice looks quite straightforward, it is far easier said than done. The proposed actions are more akin to the Labours of Hercules than [PAC: insert reference to something easier].

They include:

  1. Find out where the ‘action’ is, so that you can find the right audience for your evidence. Why? There are many policymakers and influencers spread across many levels and types of government.
  2. Learn and follow the ‘rules of the game’. Why? Each policymaking venue has its own rules of engagement and evidence gathering, and the rules are often informal and unwritten.
  3. Gain access to ‘policy networks’. Why? Most policy is processed at a low level of government, beyond the public spotlight, between relatively small groups of policymakers and influencers. They build up trust as they work together, learning who is reliable and authoritative, and converging on how to use evidence to understand the nature and solution to policy problems.
  4. Learn the language. Why? Each venue has its own language to reflect dominant ideas, beliefs, or ways to understand a policy problem. In some arenas, there is a strong respect for a ‘hierarchy’ of evidence. In others, the key reference point may be value for money. In some cases, the language reflects the closing-off of some policy solutions (such as redistributing resources from one activity to another).
  5. Exploit windows of opportunity. Why? Events, and changes in socioeconomic conditions, often prompt shifts of attention to policy issues. ‘Policy entrepreneurs’ lie in wait for the right time to exploit a shift in the motive and opportunity of a policymaker to pay attention to and try to solve a problem.

So far so good, until you consider the effort it would take to achieve any of these things: you may need to devote the best part of your career to these tasks with no guarantee of success.

Put more positively, it is better to be equipped with these insights, and to appreciate the limits to our actions, than to think we can use top tips to achieve ‘research impact’ in a more straightforward way.

Kathryn Oliver and I describe these ‘how to’ tips in this post and, in a forthcoming article in Political Studies Review, use a wider focus on policymaking environments to produce a more realistic sense of what individual researchers – and research-producing organisations – could achieve.

There is some sensible-enough advice out there for individuals – produce good evidence, communicate it well, form relationships with policymakers, be available, and so on – but I would exercise caution when it begins to recommend being ‘entrepreneurial’. The opportunities to be entrepreneurial are not shared equally, most entrepreneurs fail, and we can likely better explain their success with reference to their environment than their skill.



Emotion and reason in politics: the rational/ irrational distinction

In ‘How to communicate effectively with policymakers’, Richard Kwiatkowski and I use the distinction between ‘rational’ and ‘irrational’ cognitive shortcuts ‘provocatively’. I sort of wish we had been more direct, because I have come to realise that:

  1. My attempts to communicate with sarcasm and facial gestures may only ever appeal to a niche audience, and
  2. even if you use the scare quotes – around a word like ‘irrational’ – to denote the word’s questionable use, it’s not always clear what I’m questioning, because
  3. you need to know the story behind someone’s discussion to know what they are questioning.*

So, here are some of the reference points I’m using when I tell a story about ‘irrationality’:

1. I’m often invited to be the type of guest speaker who challenges the audience; the audience is usually scientific, and the topic is usually evidence based policymaking.

So, when I say ‘irrational’, I’m speaking to (some) scientists who think of themselves as rational and policymakers as irrational, and use this problematic distinction to complain about policy-based evidence, post-truth politics, and perhaps even the irrationality of voters for Brexit. Action based on this way of thinking would be counterproductive. In that context, I use the word ‘irrational’ as a way into some more nuanced discussions including:

  • all humans combine cognition and emotion to make choices; and,
  • emotions are one of many sources of ‘fast and frugal heuristics’ that help us make some decisions very quickly and often very well.

In other words, it is silly to complain that some people are irrational, when we are all making choices this way, and such decision-making is often a good thing.

2. This focus on scientific rationality is part of a wider discussion of what counts as good evidence or valuable knowledge. Examples include:

  • Policy debates on the value of bringing together many people with different knowledge claims – such as through user and practitioner experience – to ‘co-produce’ evidence.
  • Wider debates on the ‘decolonization of knowledge’ in which narrow ‘Western’ scientific principles help exclude the voices of many populations by undermining their claims to knowledge.

3. A focus on rationality versus irrationality is still used to maintain sexist and racist caricatures or stereotypes, and therefore to dismiss people based on a misrepresentation of their behaviour.

I thought that, by now, we’d be done with dismissing women as emotional or hysterical, but apparently not. Indeed, as some recent racist and sexist coverage of Serena Williams demonstrates, the idea that black women are not rational is still tolerated in mainstream discussion.

4. Part of the reason that we can only conclude that people combine cognition and emotion, without being able to separate their effects in a satisfying way, is that the distinction is problematic.

It is difficult to demonstrate empirically. It is also difficult to assign some behaviours to one camp or the other, such as when we consider moral reasoning based on values and logic.

To sum up, I’ve been using the rational/irrational distinction explicitly to make a simple point that is relevant to the study of politics and policymaking:

  • All people use cognitive shortcuts to help them ignore almost all information about the world, to help them make decisions efficiently.
  • If you don’t understand and act on this simple insight, you’ll waste your time by trying to argue someone into submission or giving them a 500-page irrelevant report when they are looking for one page written in a way that makes sense to them.

Most of the rest has been mostly implicit, and communicated non-verbally, which is great when you want to keep a presentation brief and light, but not if you want to acknowledge nuance and more serious issues.


*which is why I’m increasingly interested in Riker’s idea of heresthetics, in which the starting point of a story is crucial. We can come to very different conclusions about a problem and its solution by choosing different starting points, to accentuate one aspect of a problem and downplay another, even when our beliefs and preferences remain basically the same.

 

 
