Tag Archives: prevention policy

Prevention is better than cure, so why aren’t we doing more of it?

This post provides a generous amount of background for my ANZSOG talk Prevention is better than cure, so why aren’t we doing more of it? If you read all of it, it’s a long read. If not, it’s a short read before the long read. Here is the talk’s description:

‘Does this sound familiar? A new government comes into office, promising to shift the balance in social and health policy from expensive remedial, high dependency care to prevention and early intervention. They commit to better policy-making; they say they will join up policy and program delivery, devolving responsibility to the local level and focusing on long term outcomes rather than short term widgets; and that they will ensure policy is evidence-based.  And then it all gets too hard, and the cycle begins again, leaving some exhausted and disillusioned practitioners in its wake. Why does this happen repeatedly, across different countries and with governments of different persuasions, even with the best will in the world?’ 

  • You’ll see from the question that I am not suggesting that all prevention or early intervention policies fail. Rather, I use policy theories to provide a general explanation for a major gap between the (realistic) expectations expressed in prevention strategies and the actual outcomes. We can then talk about how to close that gap.
  • You’ll also see the phrase ‘even with the best will in the world’, which I think is key to this talk. No-one needs me to rehearse the usually-vague and often-stated ways to explain failed prevention policies, including the ‘wickedness’ of policy problems, or the ‘pathology’ of public policy. Rather, I show that such policies may ‘fail’ even when there is wide and sincere cross-party agreement about the need to shift from reactive to more prevention policy design. I also suggest that the general explanation for failure – low ‘political will’ – is often damaging to the chances for future success.
  • Let’s start by defining prevention policy and policymaking.

When engaged in ‘prevention’, governments seek to:

  1. Reform policy.

Prevention policy is really a collection of policies designed to intervene as early as possible in people’s lives to improve their wellbeing and reduce inequalities and/or demand for acute services. The aim is to move from reactive to preventive public services, intervening earlier in people’s lives to address a wide range of longstanding problems – including crime and anti-social behaviour, ill health and unhealthy behaviour, low educational attainment, unemployment and low employability – before they become too severe.

  2. Reform policymaking.

Preventive policymaking describes the ways in which governments reform their practices to support prevention policy, including a commitment to:

  • ‘join up’ government departments and services to solve ‘wicked problems’ that transcend one area
  • give more responsibility for service design to local public bodies, stakeholders, ‘communities’ and service users
  • produce long term aims for outcomes, and
  • reduce short term performance targets in favour of long term outcomes agreements.

  3. Ensure that policy is ‘evidence based’.

Three general reasons why ‘prevention’ policies never seem to succeed.

  1. Policymakers don’t know what prevention means.

They express a commitment to prevention before defining it fully. When they start to make sense of prevention, they find out how difficult it is to pursue, and how many controversial choices it involves (see also uncertainty versus ambiguity).

  2. They engage in a policymaking system that is too complex to control.

They try to share responsibility with many actors and coordinate action to direct policy outcomes, without the ability to design those relationships and control policy outcomes.

Yet, they also need to demonstrate to the electorate that they are in control, and find out how difficult it is to localise and centralise policy.

  3. They are unable and unwilling to produce ‘evidence based policymaking’.

Policymakers seek cognitive shortcuts (and their organisational equivalents) to gather enough information to make ‘good enough’ decisions. When they seek evidence on prevention, they find that it is patchy, inconclusive, often counter to their beliefs, and not a ‘magic bullet’ to help justify choices.

Throughout this process, their commitment to prevention policy can be sincere but unfulfilled. They do not articulate fully what prevention means or appreciate the scale of their task. When they try to deliver prevention strategies, they face several problems that, on their own, would seem daunting. Many of the problems they seek to ‘prevent’ are ‘wicked’, or difficult to define and seemingly impossible to solve, such as poverty, unemployment, low quality housing and homelessness, crime, and health and education inequalities. They face stark choices on how far they should go to shift the balance between state and market, redistribute wealth and income, distribute public resources, and intervene in people’s lives to change their behaviour and ways of thinking. Their focus on the long term faces major competition from more salient short-term policy issues that prompt them to maintain ‘reactive’ public services. Their often-sincere desire to ‘localise’ policymaking often gives way to national electoral politics, in which central governments face pressure to make policy from the ‘top’ and be decisive. Their pursuit of ‘evidence based’ policymaking often reveals a lack of evidence about which policy interventions work and the extent to which they can be ‘scaled up’ successfully.

These problems will not be overcome if policymakers and influencers misdiagnose them:

  • If policy influencers make the simplistic assumption that this problem is caused by low political will, they will provide bad advice.
  • If new policymakers truly think that the problem was the low commitment and competence of their predecessors, they will begin with the same high hopes about the impact they can make, only to become disenchanted when they see the difference between their abstract aims and real world outcomes.
  • Poor explanation of limited success contributes to the high potential for (a) an initial period of enthusiasm and activity, replaced by (b) disenchantment and inactivity, and (c) the repetition of this cycle without resolution.

Let’s add more detail to these general explanations:

1. What makes prevention so difficult to define?

When viewed as a simple slogan, ‘prevention’ seems like an intuitively appealing aim. It can generate cross-party consensus, bringing together groups on the ‘left’, seeking to reduce inequalities, and on the ‘right’, seeking to reduce economic inactivity and the cost of services.

Such consensus is superficial and illusory. When making a detailed strategy, prevention is open to many interpretations by many policymakers. Imagine the many types of prevention policy and policymaking that we could produce:

  1. What problem are we trying to solve?

Prevention policymaking represents a heroic solution to several crises: major inequalities, underfunded public services, and dysfunctional government.

  2. On what measures should we focus?

On which inequalities should we focus primarily? Wealth, occupation, income, race, ethnicity, gender, sexuality, disability, mental health.

On which measures of inequality? Economic, health, healthy behaviour, education attainment, wellbeing, punishment.

  3. On what solution should we focus?

To reduce poverty and socioeconomic inequalities, improve national quality of life, reduce public service costs, or increase value for money?

  4. Which ‘tools’ or policy instruments should we use?

Redistributive policies to address ‘structural’ causes of poverty and inequality?

Or, individual-focused policies to: (a) boost the mental ‘resilience’ of public service users, (b) oblige, or (c) exhort people to change behaviour.

  5. How do we intervene as early as possible in people’s lives?

Primary prevention. Focus on the whole population to stop a problem occurring by investing early and/or modifying the social or physical environment. Akin to whole-population immunizations.

Secondary prevention. Focus on at-risk groups to identify a problem at a very early stage to minimise harm.

Tertiary prevention. Focus on affected groups to stop a problem getting worse.

  6. How do we pursue ‘evidence based policymaking’? Three ideal types:

Using randomised control trials and systematic review to identify the best interventions?

Storytelling to share best governance practice?

‘Improvement’ methods to experiment on a small scale and share best practice?

  7. How does evidence gathering connect to long-term policymaking?

Does a national strategy drive long-term outcomes?

Does central government produce agreements with or targets for local authorities?

  8. Is preventive policymaking a philosophy or a profound reform process?

How serious are national governments – about localism, service user-driven public services, and joined up or holistic policymaking – when their elected policymakers are held to account for outcomes?

  9. What is the nature of state intervention?

It may be punitive or supportive. See: How would Lisa Simpson and Monty Burns make progressive social policy?

2. Making ‘hard choices’: what problems arise when politics meets policymaking?

When policymakers move from idiom and broad philosophy towards specific policies and practices, they find a range of obstacles, including:

The scale of the task becomes overwhelming, and not suited to electoral cycles.

Developing policy and reforming policymaking takes time, and the effect may take a generation to see.

There is competition for policymaking resources such as attention and money.

Prevention is general, long-term, and low salience. It competes with salient short-term problems that politicians feel compelled to solve first.

Prevention is akin to capital investment with no guarantee of a return. Reductions in funding for ‘fire-fighting’ or ‘frontline’ services, to pay for prevention initiatives, are hard to sell. Governments invest in small steps, and investment is vulnerable when money is needed quickly to fund public service crises.

The benefits are difficult to measure and see.

Short-term impacts are hard to measure, long-term impacts are hard to attribute to a single intervention, and prevention does not necessarily save money (or provide ‘cashable’ savings).

Reactive policies have a more visible impact, such as reducing hospital waiting times or increasing the number of teachers or police officers.

Problems are ‘wicked’.

Getting to the ‘root causes’ of problems is not straightforward; policymakers often have no clear sense of the cause of problems or effect of solutions. Few aspects of prevention in social policy resemble disease prevention, in which we know the cause of many diseases, how to screen for them, and how to prevent them in a population.

Performance management is not conducive to prevention.

Performance management systems encourage public sector managers to focus on their services’ short-term and measurable targets over shared aims with public service partners or the wellbeing of their local populations.

Performance management is about setting priorities when governments have too many aims to fulfil. When central governments encourage local governing bodies to form long-term partnerships to address inequalities and meet short-term targets, the latter come first.

Governments face major ethical dilemmas.

Political choices co-exist with normative judgements concerning the role of the state and personal responsibility, often undermining cross-party agreement.

One aspect of prevention may undermine the other.

A cynical view of prevention initiatives is that they represent a quick political fix rather than a meaningful long-term solution:

  • Central governments describe prevention as the solution to public sector costs while also delegating policymaking responsibility to, and reducing the budgets of, local public bodies.
  • Then, public bodies prioritise their most pressing statutory responsibilities.

Someone must be held to account.

If everybody is involved in making and shaping policy, it becomes unclear who can be held to account over the results. This outcome is inconsistent with Westminster-style democratic accountability in which we know who is responsible and therefore who to praise or blame.

3. ‘The evidence’ is not a ‘magic bullet’

In a series of other talks, I identify the reasons why ‘evidence based policymaking’ (EBPM) does not describe the policy process well.

Elsewhere, I also suggest that it is more difficult for evidence to ‘win the day’ in the broad area of prevention policy compared to the more specific field of tobacco control.

Generally speaking, a good simple rule about EBPM is that there is never a ‘magic bullet’ to take the place of judgement. Politics is about making choices which benefit some while others lose out. You can use evidence to help clarify those choices, but not produce a ‘technical’ solution.

A further rule with ‘wicked’ problems is that the evidence is not good enough even to generate clarity about the cause of the problem. Or, you simply find out things you don’t want to hear.

Early intervention in ‘families policies’ seems to be a good candidate for the latter, for three main reasons:

  1. Very few interventions live up to the highest evidence standards

There are two main types of relevant ‘evidence based’ interventions in this field.

The first type is the ‘family intervention project’ (FIP). FIPs generally focus on low-income, often lone-parent, families at risk of eviction linked to factors such as antisocial behaviour, and provide two forms of intervention:

  • intensive 24/7 support, including after school clubs for children and parenting skills classes, and treatment for addiction or depression in some cases, in dedicated core accommodation with strict rules on access and behaviour
  • an outreach model of support and training.

The evidence of success comes from evaluation plus a counterfactual: this intervention is expensive, but we think that it would have cost far more money and heartache if we had not intervened. There is generally no randomised control trial (RCT) to establish the cause of improved outcomes, or demonstrate that those outcomes would not have happened without this intervention.
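
To make that logic concrete, here is a minimal sketch in Python of an ‘evaluation plus counterfactual’ calculation. All of the figures are invented for illustration and are not drawn from any real FIP evaluation; the point is simply that the claimed saving depends entirely on an assumed cost of not intervening, which the evaluation never observes (an RCT would replace that assumption with an observed control group).

```python
# Hypothetical figures only - not taken from any real FIP evaluation.
# The claimed 'saving' rests on an assumed counterfactual cost that is never observed.

intervention_cost = 15_000     # assumed cost of intensive support per family, per year
observed_cost_after = 10_000   # assumed cost of services the family still uses afterwards

# What we *believe* would have been spent (eviction, care proceedings, policing)
# had we not intervened. Without a control group, this figure is an assumption.
for assumed_cost_without in (40_000, 30_000, 25_000):
    net_saving = assumed_cost_without - (intervention_cost + observed_cost_after)
    print(f"Assumed counterfactual £{assumed_cost_without:,} -> "
          f"claimed net saving £{net_saving:,}")
```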

The second type is the project imported from other countries (primarily the US and Australia) on the basis of a reputation for success built on RCT evidence. There is more quantitative evidence of success, but it is still difficult to know if such a project can be transferred effectively and if its success can be replicated in another country with very different political drivers, problems, and services.

2. The evidence on ‘scaling up’ for primary prevention is relatively weak

Kenneth Dodge (2009) sums up a general problem:

  • there are few examples of taking effective specialist projects ‘to scale’
  • there are major issues around ‘fidelity’ to the original project when you scale up (including the need to oversee a major expansion in well-trained practitioners)
  • it is difficult to predict whether a programme that showed promise in one population will have the same effect on a new and different population.

3. The evidence on secondary early intervention is also weak

This point about different populations with different motivations is demonstrated in a more recent (published 2014) study by Stephen Scott et al of two Incredible Years interventions – to address ‘oppositional defiant disorder symptoms and antisocial personality character traits’ in children aged 3-7 (for a wider discussion of such programmes see the Early Intervention Foundation’s Foundations for life: what works to support parent child interaction in the early years?).

They highlight a classic dilemma in early intervention: the evidence of effectiveness is only clear when children have been clinically referred (‘indicated approach’), but unclear when children have been identified as high risk using socioeconomic predictors (‘selective approach’):

An indicated approach is simpler to administer, as there are fewer children with severe problems, they are easier to identify, and their parents are usually prepared to engage in treatment; however, the problems may already be too entrenched to treat. In contrast, a selective approach targets milder cases, but because problems are less established, whole populations have to be screened and fewer cases will go on to develop serious problems.

For our purposes, this may represent the most inconvenient form of evidence on early intervention: you can intervene early on the back of very limited evidence of likely success, or have a far higher likelihood of success when you intervene later, when you are running out of time to call it ‘early intervention’.
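
The same dilemma can be expressed as simple screening arithmetic. The sketch below (in Python) uses entirely hypothetical prevalence and accuracy figures – not numbers from Scott et al or any real screening tool – to show why a ‘selective’ approach has to screen whole populations, and why most of the children it flags will never develop serious problems, while an ‘indicated’ approach works with a smaller, already-referred group.

```python
# Hypothetical numbers only - not from Scott et al. (2014) or any real screening tool.
# Illustrates the base-rate arithmetic behind 'indicated' vs 'selective' approaches.

def screen(population, prevalence, sensitivity, specificity):
    """Return (children flagged, share of flagged who would develop serious problems)."""
    future_cases = population * prevalence
    non_cases = population - future_cases
    true_positives = future_cases * sensitivity        # correctly flagged
    false_positives = non_cases * (1 - specificity)    # flagged but would have been fine
    flagged = true_positives + false_positives
    return flagged, true_positives / flagged

# Selective: screen a whole population using socioeconomic predictors; serious,
# persistent problems are rare, so most flagged children are false positives.
flagged, precision = screen(population=100_000, prevalence=0.05,
                            sensitivity=0.7, specificity=0.8)
print(f"Selective: {flagged:,.0f} children flagged; "
      f"{precision:.0%} would develop serious problems")

# Indicated: only clinically referred children, where problems are already established.
flagged, precision = screen(population=2_000, prevalence=0.6,
                            sensitivity=0.9, specificity=0.8)
print(f"Indicated: {flagged:,.0f} children flagged; "
      f"{precision:.0%} would develop serious problems")
```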

Conclusion: vague consensus is no substitute for political choice

Governments begin with the sense that they have found the solution to many problems, only to find that they have to make and defend highly ‘political’ choices.

For example, see the UK government’s ‘imaginative’ use of evidence to make families policy. In a nutshell, it chose to play fast and loose with evidence, and demonise 117,000 families, to provide political cover for a redistribution of resources to family intervention projects.

We can, with good reason, object to this style of politics. However, we would also have to produce a feasible alternative.

For example, the Scottish Government has taken a different approach (perhaps closer to what one might often expect in New Zealand), but it still needs to produce and defend a story about its choices, and it faces almost the same constraints as the UK. Its self-described ‘decisive shift’ to prevention was not a decisive shift to prevention.

Overall, prevention is no different from any other policy area, except that it has proven to be much more complicated and difficult to sustain than most others. Prevention is part of an excellent idiom but not a magic bullet for policy problems.

Further reading:

Prevention

See also

What do you do when 20% of the population causes 80% of its problems? Possibly nothing.

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

 

 


Filed under Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy

Here’s why there is always an expectations gap in prevention policy

Prevention is the most important social policy agenda of our time. Many governments make a sincere commitment to it, backed up by new policy strategies and resources. Yet, they also make limited progress before giving up or changing tack. Then, a new government arrives, producing the same cycle of enthusiasm and despair. This fundamental agenda never seems to get off the ground. We aim to explain this ‘prevention puzzle’, or the continuous gap between policymaker expectations and actual outcomes.

What is prevention policy and policymaking?

When engaged in ‘prevention’, governments seek to:

  1. Reform policy. To move from reactive to preventive public services, intervening earlier in people’s lives to ward off social problems and their costs when they seem avoidable.
  2. Reform policymaking. To (a) ‘join up’ government departments and services to solve ‘wicked problems’ that transcend one area, (b) give more responsibility for service design to local public bodies, stakeholders, ‘communities’ and service users, and (c) produce long term aims for outcomes, and reduce short term performance targets.
  3. Ensure that policy is ‘evidence based’.

Three reasons why they never seem to succeed

We use well-established policy theories/ studies to explain the prevention puzzle.

  1. They don’t know what prevention means. They express a commitment to something before defining it. When they start to make sense of it, they find out how difficult it is to pursue, and how many controversial choices it involves.
  2. They engage in a policy process that is too complex to control. They try to share responsibility with many actors and coordinate action to direct policy outcomes, without the ability to design those relationships and control policy outcomes. Yet, they need to demonstrate to the electorate that they are in control. When they make sense of policymaking, they find out how difficult it is to localise and centralise.
  3. They are unable and unwilling to produce ‘evidence based policymaking’. Policymakers seek ‘rational’ and ‘irrational’ shortcuts to gather enough information to make ‘good enough’ decisions. When they seek evidence on preventing problems before they arise, they find that it is patchy, inconclusive, often counter to their beliefs, and unable to provide a ‘magic bullet’ to help make and justify choices.

Who knows what happens when they address these problems at the same time?

We draw on empirical and comparative UK and devolved government analysis to show in detail how policymaking differs according to the (a) type of government, (b) issue, and (c) era in which they operate.

Although it is reasonable to expect policymaking to be very different in, for example, the UK versus Scottish, or Labour versus Conservative governments, and in eras of boom versus austerity, a key part of our research is to show that the same basic ‘prevention puzzle’ exists at all times. You can’t simply solve it with a change of venue or government.

Our UK book will be out in 2018, with new draft chapters appearing here soon. Our longer term agenda – via IMAJINE – is to examine how policymakers try to reduce territorial inequalities across Europe partly by pursuing prevention and reforming public services.

 


Filed under Evidence Based Policymaking (EBPM), Prevention policy, public policy, UK politics and policy

‘Co-producing’ comparative policy research: how far should we go to secure policy impact?

See also our project website IMAJINE.

Two recent articles explore the role of academics in the ‘co-production’ of policy and/or knowledge.

Both papers suggest (I think) that academic engagement in the ‘real world’ is highly valuable, and that we should not pretend that we can remain aloof from politics when producing new knowledge (research production is political even if it is not overtly party political). They also suggest that it is fraught with difficulty and, perhaps, an often-thankless task with no guarantee of professional or policy payoffs (intrinsic motivation still trumps extrinsic motivation).

So, what should we do?

I plan to experiment a little bit while conducting some new research over the next 4 years. For example, I am part of a new project called IMAJINE, and plan to speak with policymakers, from the start to the end, about what they want from the research and how they’ll use it. My working assumption is that it will help boost the academic value and policy relevance of the research.

I have mocked up a paper abstract to describe this kind of work:

In this paper, we use policy theory to explain why the ‘co-production’ of comparative research with policymakers makes it more policy relevant: it allows researchers to frame their policy analysis with reference to the ways in which policymakers frame policy problems; and, it helps them identify which policymaking venues matter, and the rules of engagement within them.  In other words, theoretically-informed researchers can, to some extent, emulate the strategies of interest groups when they work out ‘where the action is’ and how to adapt to policy agendas to maximise their influence. Successful groups identify their audience and work out what it wants, rather than present their own fixed views to anyone who will listen.

Yet, when described so provocatively, our argument raises several practical and ethical dilemmas about the role of academic research. In abstract discussions, they include questions such as: should you engage this much with politics and policymakers, or maintain a critical distance; and, if you engage, should you simply reflect or seek to influence the policy agenda? In practice, such binary choices are artificial, prompting us to explore how to manage our engagement in politics and reflect on our potential influence.

We explore these issues with reference to a new Horizon 2020 funded project IMAJINE, which includes a work package – led by Cairney – on the use of evidence and learning from the many ways in which EU, national, and regional policymakers have tried to reduce territorial inequalities.

So, in the paper, we (my future research partner and I) would:

  • Outline the payoffs to this engage-early approach. Early engagement will inform the research questions you ask, how you ask them, and how you ‘frame’ the results. It should also help produce more academic publications (which is still the key consideration for many academics), partly because this early approach will help us speak with some authority about policy and policymaking in many countries.
  • Describe the complications of engaging with different policymakers in many ‘venues’ in different countries: you would expect very different questions to arise, and perhaps struggle to manage competing audience demands.
  • Raise practical questions about the research audience, including: should we interview key advocacy groups and private sources of funding for applied research, as well as policymakers, when refining questions? I ask this question partly because it can be more effective to communicate evidence via policy influencers rather than try to engage directly with policymakers.
  • Raise ethical questions, including: what if policymaker interviewees want the ‘wrong’ questions answered? What if they are only interested in policy solutions that we think are misguided, either because the evidence-base is limited (and yet they seek a magic bullet) or their aims are based primarily on ideology (an allegedly typical dilemma regards left-wing academics providing research for right-wing governments)?

Overall, you can see the potential problems: you ‘enter’ the political arena to find that it is highly political! You find that policymakers are mostly interested in (what you believe are) ineffective or inappropriate solutions and/ or they think about the problem in ways that make you, say, uncomfortable. So, should you engage in a critical way, risking exclusion from the ‘coproduction’ of policy, or in a pragmatic way, to ‘coproduce’ knowledge and maximise the chances of its impact in government?

The case study of territorial inequalities is a key source of such dilemmas …

…partly because it is difficult to tell how policymakers define and want to solve such policy problems. When defining ‘territorial inequalities’, they can refer broadly to geographical spread, such as within the EU Member States, or even within regions of states. They can focus on economic inequalities, inequalities linked strongly to gender, race or ethnicity, mental health, disability, and/ or inequalities spread across generations. They can focus on indicators of inequalities in areas such as health and education outcomes, housing tenure and quality, transport, and engagement with social work and criminal justice. While policymakers might want to address all such issues, they also prioritise the problems they want to solve and the policy instruments they are prepared to use.

When considering solutions, they can choose from three basic categories:

  1. Tax and spending to redistribute income and wealth, perhaps treating economic inequalities as the source of most others (such as health and education inequalities).
  2. The provision of public services to help mitigate the effects of economic and other inequalities (such as free healthcare and education, and public transport in urban and rural areas).
  3. The adoption of ‘prevention’ strategies to engage as early as possible in people’s lives, on the assumption that key inequalities are well-established by the time children are three years old.

Based on my previous work with Emily St Denny, I’d expect that many governments express a high commitment to reduce inequalities – and it is often sincere – but without wanting to use tax/ spending as the primary means, and faced with limited evidence on the effectiveness of public services and prevention. Or, many will prefer to identify ‘evidence-based’ solutions for individuals rather than to address ‘structural’ factors such as gender, ethnicity, and class. This is when the production and use of evidence becomes overtly ‘political’, because at the heart of many of these discussions is the extent to which individuals or their environments are to blame for unequal outcomes, and whether richer regions should compensate poorer regions.

‘The evidence’ will not ‘win the day’ in such debates. Rather, the choice will be between, for example: (a) pragmatism, to frame evidence to contribute to well-established beliefs, about policy problems and solutions, held by the dominant actors in each political system; and, (b) critical distance, to produce what you feel to be the best evidence generated in the right way, and challenge policymakers to explain why they won’t use it. I suspect that (a) is more effective, but (b) better reflects what most academics thought they were signing up to.

For more on IMAJINE, see New EU study looks at gap between rich and poor and The theory and practice of evidence-based policy transfer: can we learn how to reduce territorial inequalities?

For more on evidence/ policy dilemmas, see Kathryn Oliver and I have just published an article on the relationship between evidence and policy

 


Filed under Evidence Based Policymaking (EBPM), IMAJINE, public policy

Why doesn’t evidence win the day in policy and policymaking?

Politics has a profound influence on the use of evidence in policy, but we need to look ‘beyond the headlines’ for a sense of perspective on its impact.

It is tempting for scientists to identify the pathological effect of politics on policymaking, particularly after high profile events such as the ‘Brexit’ vote in the UK and the election of Donald Trump as US President. We have allegedly entered an era of ‘post-truth politics’ in which ideology and emotion trump evidence and expertise (a story told many times at events like this), particularly when issues are salient.

Yet, most policy is processed out of this public spotlight, because the flip side of high attention to one issue is minimal attention to most others. Science has a crucial role in this more humdrum, day-to-day business of policymaking, which is far more important than it is visible. Indeed, this lack of public visibility can help many actors secure a privileged position in the policy process (and further exclude citizens).

In some cases, experts are consulted routinely. There is often a ‘logic’ of consultation with the ‘usual suspects’, including the actors most able to provide evidence-informed advice. In others, scientific evidence is often so taken for granted that it is part of the language in which policymakers identify problems and solutions.

In that context, we need better explanations of an ‘evidence-policy’ gap than the pathologies of politics and egregious biases of politicians.

To understand this process, and the apparent contradiction between excluded and privileged experts, consider the role of evidence in politics and policymaking from three different perspectives.

The perspective of scientists involved primarily in the supply of evidence

Scientists produce high quality evidence, only for politicians to ignore it or, even worse, distort its message to support their ideologically-driven policies. If they expect ‘evidence-based policymaking’ they soon become disenchanted and conclude that ‘policy-based evidence’ is more likely. This perspective has long been expressed in scientific journals and commentaries, but has taken on new significance following ‘Brexit’ and Trump.

The perspective of elected politicians

Elected politicians are involved primarily in managing government and maximising public and organisational support for policies. So, scientific evidence is one piece of a large puzzle. They may begin with a manifesto for government and, if elected, feel an obligation to carry it out. Evidence may play a part in that process but the search for evidence on policy solutions is not necessarily prompted by evidence of policy problems.

Further, ‘evidence based policy’ is one of many governance principles that politicians feel the need to juggle. For example, in Westminster systems, ministers may try to delegate policymaking to foster ‘localism’ and/ or pragmatic policymaking, but also intervene to appear to be in control of policy, to foster a sense of accountability built on an electoral imperative. The likely mix of delegation and intervention seems almost impossible to predict, and this dynamic has a knock-on effect for evidence-informed policy. In some cases, central governments roll out the same basic policy intervention and limit local discretion; in others, they identify broad outcomes and invite other bodies to gather evidence on how best to meet them. These differences in approach can have profound consequences for the models of evidence-informed policy available to us (see the example of Scottish policymaking).

Political science and policy studies provide a third perspective

Policy theories help us identify the relationship between evidence and policy by showing that a modern focus on ‘evidence-based policymaking’ (EBPM) is one of many versions of the same fairy tale – about ‘rational’ policymaking – that have developed in the post-war period. We talk about ‘bounded rationality’ to identify key ways in which policymakers or organisations could not achieve ‘comprehensive rationality’:

  1. They cannot separate values and facts.
  2. They have multiple, often unclear, objectives which are difficult to rank in any meaningful way.
  3. They have to use major shortcuts to gather a limited amount of information in a limited time.
  4. They can’t make policy from the ‘top down’ in a cycle of ordered and linear stages.

Limits to ‘rational’ policymaking: two shortcuts to make decisions

We can sum up the first three bullet points with one statement: policymakers have to try to evaluate and solve many problems without the ability to understand what they are, how they feel about them as a whole, and what effect their actions will have.

To do so, they use two shortcuts: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, habits, and the familiar to make decisions quickly.

Consequently, the focus of policy theories is on the links between evidence, persuasion, and framing issues to produce or reinforce a dominant way to define policy problems. Successful actors combine evidence and emotional appeals or simple stories to capture policymaker attention, and/ or help policymakers interpret information through the lens of their strongly-held beliefs.

Scientific evidence plays its part, but scientists often make the mistake of trying to bombard policymakers with evidence when they should be trying to (a) understand how policymakers understand problems, so that they can anticipate their demand for evidence, and (b) frame their evidence according to the cognitive biases of their audience.

Policymaking in ‘complex systems’ or multi-level policymaking environments

Policymaking takes place in a less ordered, less hierarchical, and less predictable environment than the image of the policy cycle suggests. Such environments are made up of:

  1. a wide range of actors (individuals and organisations) influencing policy at many levels of government
  2. a proliferation of rules and norms followed by different levels or types of government
  3. close relationships (‘networks’) between policymakers and powerful actors
  4. a tendency for certain beliefs or ‘paradigms’ to dominate discussion
  5. shifting policy conditions and events that can prompt policymaker attention to lurch at short notice.

These five properties – plus a ‘model of the individual’ built on a discussion of ‘bounded rationality’ – make up the building blocks of policy theories (many of which I summarise in 1000 Word posts). I say this partly to aid interdisciplinary conversation: of course, each theory has its own literature and jargon, and it is difficult to compare and combine their insights, but if you are trained in a different discipline it’s unfair to ask you to devote years of your life to studying policy theory to end up at this point.

To show that policy theories have a lot to offer, I have been trying to distil their collective insights into a handy guide – using this same basic format – that you can apply to a variety of different situations, from explaining painfully slow policy change in some areas but dramatic change in others, to highlighting ways in which you can respond effectively.

We can use this approach to help answer many kinds of questions. With my Southampton gig in mind, let’s use some examples from public health and prevention.

Why doesn’t evidence win the day in tobacco policy?

My colleagues and I try to explain why it takes so long for the evidence on smoking and health to have a proportionate impact on policy. Usually, at the back of my mind, is a public health professional audience trying to work out why policymakers don’t act quickly or effectively enough when presented with unequivocal scientific evidence. More recently, they wonder why there is such uneven implementation of a global agreement – the WHO Framework Convention on Tobacco Control – that almost every country in the world has signed.

We identify three conditions under which evidence will ‘win the day’:

  1. Actors are able to use scientific evidence to persuade policymakers to pay attention to, and shift their understanding of, policy problems. In leading countries, it took decades to command attention to the health effects of smoking, reframe tobacco primarily as a public health epidemic (not an economic good), and generate support for the most effective evidence-based solutions.
  2. The policy environment becomes conducive to policy change. A new and dominant frame helps give health departments (often in multiple venues) a greater role; health departments foster networks with public health and medical groups at the expense of the tobacco industry; and, they emphasise the socioeconomic conditions – reductions in smoking prevalence, opposition to tobacco control, and economic benefits to tobacco – supportive of tobacco control.
  3. Actors exploit ‘windows of opportunity’ successfully. A supportive frame and policy environment maximises the chances of high attention to a public health epidemic and provides the motive and opportunity of policymakers to select relatively restrictive policy instruments.

So, scientific evidence is a necessary but insufficient condition for major policy change. Key actors do not simply respond to new evidence: they use it as a resource to further their aims, to frame policy problems in ways that will generate policymaker attention, and underpin technically and politically feasible solutions that policymakers will have the motive and opportunity to select. This remains true even when the evidence seems unequivocal and when countries have signed up to an international agreement which commits them to major policy change. Such commitments can only be fulfilled over the long term, when actors help change the policy environment in which these decisions are made and implemented. So far, this change has not occurred in most countries (or, in other aspects of public health in the UK, such as alcohol policy).

Why doesn’t evidence win the day in prevention and early intervention policy?

UK and devolved governments draw on health and economic evidence to make a strong and highly visible commitment to preventive policymaking, in which the aim is to intervene earlier in people’s lives to improve wellbeing and reduce socioeconomic inequalities and/ or public sector costs. This agenda has existed in one form or another for decades without the same signs of progress we now associate with areas like tobacco control. Indeed, the comparison is instructive, since prevention policy rarely meets the three conditions outlined above:

  1. Prevention is a highly ambiguous term and many actors make sense of it in many different ways. There is no equivalent to a major shift in problem definition for prevention policy as a whole, and little agreement on how to determine the most effective or cost-effective solutions.
  2. A supportive policy environment is far harder to identify. Prevention policy cross-cuts many policymaking venues at many levels of government, with little evidence of ‘ownership’ by key venues. Consequently, there are many overlapping rules on how and from whom to seek evidence. Networks are diffuse and hard to manage. There is no dominant way of thinking across government (although the Treasury’s ‘value for money’ focus is key currency across departments). There are many socioeconomic indicators of policy problems but little agreement on how to measure or which measures to privilege (particularly when predicting future outcomes).
  3. The ‘window of opportunity’ was to adopt a vague solution to an ambiguous policy problem, providing a limited sense of policy direction. There have been several ‘windows’ for more specific initiatives, but their links to an overarching policy agenda are unclear.

These limitations help explain slow progress in key areas. The absence of an unequivocal frame, backed strongly by key actors, leaves policy change vulnerable to successful opposition, especially in areas where early intervention has major implications for redistribution (taking from existing services to invest in others) and personal freedom (encouraging or obliging behavioural change). The vagueness and long term nature of policy aims – to solve problems that often seem intractable – makes them uncompetitive, and often undermined by more specific short term aims with a measurable pay-off (as when, for example, funding for public health loses out to funding to shore up hospital management). It is too easy to reframe existing policy solutions as preventive if the definition of prevention remains slippery, and too difficult to demonstrate the population-wide success of measures generally applied to high risk groups.

What happens when attitudes to two key principles – evidence based policy and localism – play out at the same time?

A lot of discussion of the politics of EBPM assumes that there is something akin to a scientific consensus on which policymakers do not act proportionately. Yet, in many areas – such as social policy and social work – there is great disagreement on how to generate and evaluate the best evidence. Broadly speaking, a hierarchy of evidence built on ‘evidence based medicine’ – which has randomised control trials and their systematic review at the top, and practitioner knowledge and service user feedback at the bottom – may be completely subverted by other academics and practitioners. This disagreement helps produce a spectrum of ways in which we might roll-out evidence based interventions, from an RCT-driven roll-out of the same basic intervention to a storytelling driven pursuit of tailored responses built primarily on governance principles (such as to co-produce policy with users).

At the same time, governments may be wrestling with their own governance principles, including EBPM but also regarding the most appropriate balance between centralism and localism.

If you put both concerns together, you have a variety of possible outcomes (and a temptation to ‘let a thousand flowers bloom’) and a set of competing options (outlined in table 1), all under the banner of ‘evidence based’ policymaking.

[Table 1: Three ideal types of EBBP]

What happens when a small amount of evidence goes a very long way?

So, even if you imagine a perfectly sincere policymaker committed to EBPM, you’d still not be quite sure what they took it to mean in practice. If you assume this commitment is a bit less sincere, and you add in the need to act quickly to use the available evidence and satisfy your electoral audience, you get all sorts of responses based in some part on a reference to evidence.

One fascinating case is the UK Government’s ‘troubled families’ programme, which combined bits and pieces of evidence with ideology and a Westminster-style accountability imperative, to produce:

  • The argument that the London riots were caused by family breakdown and bad parenting.
  • The use of proxy measures to identify the most troubled families
  • The use of superficial performance management to justify notionally extra expenditure for local authorities
  • The use of evidence in a problematic way, from exaggerating the success of existing ‘family intervention projects’ to sensationalising neuroscientific images related to brain development in deprived children …

In other words, some governments feel the need to dress up their evidence-informed policies in a language appropriate to Westminster politics. Unless we understand this language, and the incentives for elected policymakers to use it, we will fail to understand how to act effectively to influence those policymakers.

What can you do to maximise the use of evidence?

When you ask the generic question you can generate a set of transferable strategies to engage in policymaking:

[Images: ‘How to be heard’ and ‘EBPM: 5 things to do’]

Yet, as these case studies of public health and social policy suggest, the question lacks sufficient meaning when applied to real world settings. Would you expect the advice that I give to (primarily) natural scientists (primarily in the US) to be identical to advice for social scientists in specific fields (in, say, the UK)?

No, you’d expect me to end with a call for more research! See for example this special issue in which many scholars from many disciplines suggest insights on how to maximise the use of evidence in policy.


Filed under Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, tobacco, tobacco policy

What do you do when 20% of the population causes 80% of its problems? Possibly nothing.

Avshalom Caspi and colleagues have used the 45-year ‘Dunedin’ study in New Zealand to identify the ‘large economic burden’ associated with ‘a small segment of the population’. They don’t quite achieve the 20%-causes-80% mark, but suggest that 22% of the population account disproportionately for the problems that most policymakers would like to solve, including unhealthy, economically inactive, and criminal behaviour. Most importantly, they discuss some success in predicting such outcomes from a 45-minute diagnostic test of 3 year olds.
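
To show what a claim like this involves, here is a short, purely illustrative Python sketch: it simulates a skewed distribution of costs (made-up data, not the Dunedin cohort) and computes the share of the total accounted for by the most costly fifth of the population.

```python
# Illustration only: simulated costs, not data from the Dunedin study.
# One way to illustrate a 'small segment accounts for a large economic burden'
# claim: rank individuals by their cost to public services and take the
# cumulative share of the costliest fifth.
import random

random.seed(1)
costs = sorted((random.lognormvariate(0, 1.5) for _ in range(10_000)), reverse=True)

top_fifth = costs[: len(costs) // 5]
share = sum(top_fifth) / sum(costs)
print(f"Most costly 20% of individuals account for {share:.0%} of total simulated cost")
```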

Of course, any such publication will prompt major debates about how we report, interpret, and deal with such information, and these debates tend to get away from the original authors as soon as they publish and others report (follow the tweet thread).

This is true even though the authors have gone to unusual lengths to show the many ways in which you could interpret their figures. Theirs is a politically aware report, using some of the language of elected politicians but challenging simple responses. You can see this in their discussion which has a lengthy list of points about the study’s limitations.

The ambiguity dilemma: more evidence does not produce more agreement

‘The most costly adults in our cohort started the race of life from a starting block somewhere behind the rest, and while carrying a heavy handicap in brain health’.

The first limitation is that evidence does not help us adjudicate between competing attempts to define the problem. For some, it reinforces the idea of an ‘underclass’ or small collection of problem/ troubled families that should be blamed for society’s ills (it’s the fault of families and individuals). For others, it reinforces the idea that socio-economic inequalities harm the life chances of people as soon as they are born (it is out of the control of individuals).

The intervention dilemma: we know more about the problem than its solution

The second limitation is that this study tells us a lot about a problem but not its solution. Perhaps there is some common ground on the need to act, and to invest in similar interventions, but:

  1. The evidence on the effectiveness of solutions is not as strong or systematic as this new evidence on the problem.
  2. There are major dilemmas involved in ‘scaling up’ such solutions and transferring them from one area to another.
  3. The overall ‘tone’ of debate still matters to policy delivery, to determine for example if any intervention should be punitive and compulsory (you will cause the problem, so you have to engage with the solution) or supportive and voluntary (you face disadvantages, so we’ll try to help you if you let us).

The moral dilemma: we may only pay attention to the problem if there is a feasible solution

Prevention and early intervention policy agendas often seem to fail because the issues they raise seem too difficult to solve. Governments make the commitment to ‘prevention’ in the abstract but ‘do not know what it means or appreciate the scale of their task’.

A classic policymaker heuristic described by Kingdon is that policymakers only pay attention to problems they think they can solve. So, they might initially show enthusiasm, only to lose interest when problems seem intractable or there is high opposition to specific solutions.

This may be true of most policies, but prevention and early intervention also seem to magnify the big moral question that can stop policy in its tracks: to what extent is it appropriate to intervene in people’s lives to change their behaviour?

Some may vocally oppose interventions based on their concern about the controlling nature of the state, particularly when it intervenes to prevent (say, criminal) behaviour that will not necessarily occur. It may be easier to make the case for intervening to help children, but difficult to look like you are not second guessing their parents.

Others may quietly oppose interventions based on an unresolved economic question: does it really save money to intervene early? Put bluntly, a key ‘economic burden’ relates to population longevity; the ‘20%’ may cause economic problems in their working years but die far earlier than the 80%. Put less bluntly by the authors:

‘This is an important question because the health-care burden of developed societies concentrates in older age groups. To the extent that factors such as smoking, excess weight and health problems during midlife foretell health-care burden and social dependency, findings here should extend to later life (keeping in mind that midlife smoking, weight problems and health problems also forecast premature mortality)’.
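
To see why that caveat matters, here is a deliberately crude Python sketch of lifetime-cost arithmetic. Every figure is invented (it is not based on the Dunedin data or any costing study); it simply shows that a group that is costlier in working age is not necessarily much costlier over a lifetime if it also dies earlier, and that the comparison flips easily under different assumptions.

```python
# Invented figures only - a toy comparison, not a costing of the Dunedin cohort.
def lifetime_cost(annual_midlife_cost, annual_old_age_cost, retire_at, dies_at):
    """Total public cost from age 18 to death, split into working-age and old-age phases."""
    working_years = retire_at - 18
    old_age_years = max(dies_at - retire_at, 0)
    return annual_midlife_cost * working_years + annual_old_age_cost * old_age_years

# A high-need group: costly in working age, but dies earlier.
high_need = lifetime_cost(annual_midlife_cost=12_000, annual_old_age_cost=20_000,
                          retire_at=65, dies_at=68)
# The rest of the population: cheap in working age, but a long (and costly) old age.
rest = lifetime_cost(annual_midlife_cost=3_000, annual_old_age_cost=20_000,
                     retire_at=65, dies_at=88)

print(f"High-need group, per person: £{high_need:,}")   # 624,000 under these assumptions
print(f"Rest of population, per person: £{rest:,}")     # 601,000 under these assumptions

# The gap is small and flips easily with different assumptions about old-age
# costs and life expectancy - which is why 'prevention saves money' is contested.
```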

So, policymakers find initially that ‘early intervention’ is a valence issue only in the abstract – who wouldn’t want to intervene as early as possible in a child’s life to protect them or improve their life chances? – but not when they try to deliver concrete policies.

The evidence-based policymaking dilemma

Overall, we are left with the sense that even the best available evidence of a problem may not help us solve it. Choosing to do nothing may be just as ‘evidence based’ as choosing a solution with minimal effects. Choosing to do something requires us to use far more limited evidence of solution effectiveness and to act in the face of high uncertainty. Add into the mix that prevention policy does not seem to be particularly popular and you might wonder why any policymaker would want to do anything with the best evidence of a profound societal problem.

 


Filed under Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy

The theory and practice of evidence-based policy transfer: can we learn how to reduce territorial inequalities?

I am now part of a large EU-funded Horizon2020 project called IMAJINE (Integrative Mechanisms for Addressing Spatial Justice and Territorial Inequalities in Europe), which begins in January 2017. It is led by Professor Michael Woods at Aberystwyth University and has a dozen partners across the EU. I’ll be leading one work package in partnership with Professor Michael Keating.

The aim in our ‘work package’ is deceptively simple: generate evidence to identify how EU countries try to reduce territorial inequalities, see who is the most successful, and recommend the transfer of that success to other countries.

Life is not that simple, though, is it?! If it were, we’d know for sure what ‘territorial inequalities’ are, what causes them, what governments are willing to do to reduce them, and if they’ll succeed if they really try.

Instead, here are some of the problems you encounter along the way, including an inability to identify:

  • What policies are designed explicitly to reduce inequalities. Instead, we piece together many intentions, actions, instruments, and outputs, in many levels and types of government, and call it ‘policy’.
  • The link between ‘policy’ and policy outcomes, because many factors interact to produce those outcomes.
  • Success. Even if we could solve the methodological problems, to separate cause and effect, we face a political problem about choosing measures to evaluate and report success.
  • Good ways to transfer successful policies. A policy is not like a #gbbo cake, in which you can produce a great product and give out the recipe. In that scenario, you can assume that we all have the same aims (we all want cake, and of course chocolate is the best), starting point (basically the same shops and kitchens), and language to describe the task (use loads of sugar and cocoa). In policy, governments describe and seek to solve similar-looking problems in very different ways and, if they look elsewhere for lessons, those insights have to be relevant to their context (and the evidence-gathering process has to fit their idea of good governance). They also ‘transfer’ some policies while maintaining their own, and a key finding from our previous work is that governments simultaneously pursue policies to reduce inequalities and undermine their inequality-reducing policies.

So, academics like me tend to spend their time highlighting problems, explaining why such processes are not ‘evidence-based’, and identifying all the things that will go wrong from your perspective if you think policymaking and policy transfer can ever be straightforward.

Yet, policymakers do not have this luxury to identify problems, find them interesting, then go home. Instead, they have to make decisions in the face of ambiguity (what problem are they trying to solve?), uncertainty (evidence will help, but always be limited), and limited time.

So, academics like me are now focused increasingly on trying to help address the problems we raise. On the plus side, it prompts us to speak with policymakers from start to finish, to try to understand what evidence they’re interested in and how they’ll use it. On the less positive side (at least if you are a purist about research), it might prompt all sorts of compromises about how to combine research and policy advice if you want policymakers to use your evidence (on, for example, the line between science and advice, and the blurry boundaries between evidence and advice). If you are interested, please let me know, or follow the IMAJINE category on this site (and #IMAJINE).

See also:

New EU study looks at gap between rich and poor

New research project examines regional inequalities in Europe

Understanding the transfer of policy failure: bricolage, experimentalism and translation by Diane Stone

 


Filed under Evidence Based Policymaking (EBPM), IMAJINE, public policy

We need better descriptions than ‘evidence-based policy’ and ‘policy-based evidence’: the case of UK government ‘troubled families’ policy

Here is the dilemma for ‘evidence-based’ ‘troubled families’ policy: there are many indicators of ‘policy based evidence’ but few (if any) feasible and ‘evidence based’ alternatives.

Viewed from the outside, the ‘troubled families’ (TF) programme looks like a cynical attempt to produce a quick fix to the London riots, stigmatise vulnerable populations, and hoodwink the public into thinking that the central government is controlling local outcomes and generating success.

Viewed from the inside, it is a pragmatic policy solution, informed by promising evidence which needs to be sold in the right way. For the UK government there may seem to be little alternative to this policy, given the available evidence, the need to do something for the long term and to account for itself in a Westminster system in the short term.

So, in this draft paper, I outline this disconnect between interpretations of ‘evidence based policy’ and ‘policy based evidence’ to help provide some clarity on the pragmatic use of evidence in politics:

cairney-offshoot-troubled-families-ebpm-5-9-16

See also:

Governments think it’s OK to use bad evidence to make good policy: the case of the UK Government’s ‘troubled families’

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

In each of these posts, I note that it is difficult to know how, for example, social policy scholars should respond to these issues – but that policy studies help us identify a choice between strategies. In general, pragmatic strategies to influence the use of evidence in policy include: framing issues to catch the attention or manipulate policymaker biases, identifying where the ‘action’ is in multi-level policymaking systems, and forming coalitions with like-minded and well-connected actors. In other words, to influence rather than just comment on policy, we need to understand how policymakers would respond to external evaluation. So, a greater understanding of the routine motives of policymakers can help produce more effective criticism of their problematic use of evidence. In social policy, there is an acute dilemma about the choice between engagement, to influence and be influenced by policymakers, and detachment to ensure critical distance. If choosing the latter, we need to think harder about how criticism of ‘policy-based evidence’ makes a difference.


Filed under agenda setting, Evidence Based Policymaking (EBPM), Prevention policy, public policy, UK politics and policy