Tag Archives: prevention policy

Here’s why there is always an expectations gap in prevention policy

Prevention is the most important social policy agenda of our time. Many governments make a sincere commitment to it, backed up by new policy strategies and resources. Yet they make only limited progress before giving up or changing tack. Then, a new government arrives, producing the same cycle of enthusiasm and despair. This fundamental agenda never seems to get off the ground. We aim to explain this ‘prevention puzzle’, or the continuous gap between policymaker expectations and actual outcomes.

What is prevention policy and policymaking?

When engaged in ‘prevention’, governments seek to:

  1. Reform policy. To move from reactive to preventive public services, intervening earlier in people’s lives to ward off social problems and their costs when they seem avoidable.
  2. Reform policymaking. To (a) ‘join up’ government departments and services to solve ‘wicked problems’ that transcend one area, (b) give more responsibility for service design to local public bodies, stakeholders, ‘communities’ and service users, and (c) produce long-term aims for outcomes, and reduce short-term performance targets.
  3. Ensure that policy is ‘evidence based’.

Three reasons why they never seem to succeed

We use well-established policy theories and studies to explain the prevention puzzle.

  1. They don’t know what prevention means. They express a commitment to something before defining it. When they start to make sense of it, they find out how difficult it is to pursue, and how many controversial choices it involves.
  2. They engage in a policy process that is too complex to control. They try to share responsibility with many actors and coordinate action to direct policy outcomes, without the ability to design those relationships and control policy outcomes. Yet, they need to demonstrate to the electorate that they are in control. When they make sense of policymaking, they find out how difficult it is to localise and centralise.
  3. They are unable and unwilling to produce ‘evidence based policymaking’. Policymakers seek ‘rational’ and ‘irrational’ shortcuts to gather enough information to make ‘good enough’ decisions. When they seek evidence on preventing problems before they arise, they find that it is patchy, inconclusive, often counter to their beliefs, and unable to provide a ‘magic bullet’ to help make and justify choices.

Who knows what happens when they address these problems at the same time?

We draw on empirical and comparative UK and devolved government analysis to show in detail how policymaking differs according to the (a) type of government, (b) issue, and (c) era in which they operate.

Although it is reasonable to expect policymaking to be very different in, for example, the UK versus Scottish, or Labour versus Conservative governments, and in eras of boom versus austerity, a key part of our research is to show that the same basic ‘prevention puzzle’ exists at all times. You can’t simply solve it with a change of venue or government.

Our UK book will be out in 2018, with new draft chapters appearing here soon. Our longer term agenda – via IMAJINE – is to examine how policymakers try to reduce territorial inequalities across Europe partly by pursuing prevention and reforming public services.



Filed under Evidence Based Policymaking (EBPM), Prevention policy, public policy, UK politics and policy

‘Co-producing’ comparative policy research: how far should we go to secure policy impact?

Two recent articles explore the role of academics in the ‘co-production’ of policy and/or knowledge.

Both papers suggest (I think) that academic engagement in the ‘real world’ is highly valuable, and that we should not pretend that we can remain aloof from politics when producing new knowledge (research production is political even if it is not overtly party political). They also suggest that it is fraught with difficulty and, perhaps, an often-thankless task with no guarantee of professional or policy payoffs (intrinsic motivation still trumps extrinsic motivation).

So, what should we do?

I plan to experiment a little bit while conducting some new research over the next 4 years. For example, I am part of a new project called IMAJINE, and plan to speak with policymakers, from the start to the end, about what they want from the research and how they’ll use it. My working assumption is that it will help boost the academic value and policy relevance of the research.

I have mocked up a paper abstract to describe this kind of work:

In this paper, we use policy theory to explain why the ‘co-production’ of comparative research with policymakers makes it more policy relevant: it allows researchers to frame their policy analysis with reference to the ways in which policymakers frame policy problems; and, it helps them identify which policymaking venues matter, and the rules of engagement within them.  In other words, theoretically-informed researchers can, to some extent, emulate the strategies of interest groups when they work out ‘where the action is’ and how to adapt to policy agendas to maximise their influence. Successful groups identify their audience and work out what it wants, rather than present their own fixed views to anyone who will listen.

Yet, when described so provocatively, our argument raises several practical and ethical dilemmas about the role of academic research. In abstract discussions, they include questions such as: should you engage this much with politics and policymakers, or maintain a critical distance; and, if you engage, should you simply reflect or seek to influence the policy agenda? In practice, such binary choices are artificial, prompting us to explore how to manage our engagement in politics and reflect on our potential influence.

We explore these issues with reference to a new Horizon 2020 funded project IMAJINE, which includes a work package – led by Cairney – on the use of evidence and learning from the many ways in which EU, national, and regional policymakers have tried to reduce territorial inequalities.

So, in the paper, we (my future research partner and I) would:

  • Outline the payoffs to this engage-early approach. Early engagement will inform the research questions you ask, how you ask them, and how you ‘frame’ the results. It should also help produce more academic publications (which is still the key consideration for many academics), partly because this early approach will help us speak with some authority about policy and policymaking in many countries.
  • Describe the complications of engaging with different policymakers in many ‘venues’ in different countries: you would expect very different questions to arise, and perhaps struggle to manage competing audience demands.
  • Raise practical questions about the research audience, including: should we interview key advocacy groups and private sources of funding for applied research, as well as policymakers, when refining questions? I ask this question partly because it can be more effective to communicate evidence via policy influencers rather than try to engage directly with policymakers.
  • Raise ethical questions, including: what if policymaker interviewees want the ‘wrong’ questions answered? What if they are only interested in policy solutions that we think are misguided, either because the evidence-base is limited (and yet they seek a magic bullet) or their aims are based primarily on ideology (an allegedly typical dilemma regards left-wing academics providing research for right-wing governments)?

Overall, you can see the potential problems: you ‘enter’ the political arena to find that it is highly political! You find that policymakers are mostly interested in (what you believe are) ineffective or inappropriate solutions, and/or that they think about the problem in ways that make you uncomfortable. So, should you engage in a critical way, risking exclusion from the ‘co-production’ of policy, or in a pragmatic way, to ‘co-produce’ knowledge and maximise your chances of impact in government?

The case study of territorial inequalities is a key source of such dilemmas …

…partly because it is difficult to tell how policymakers define and want to solve such policy problems. When defining ‘territorial inequalities’, they can refer broadly to geographical spread, such as across EU Member States, or across regions within states. They can focus on economic inequalities; inequalities linked strongly to gender, race or ethnicity, mental health, or disability; and/or inequalities spread across generations. They can focus on indicators of inequalities in areas such as health and education outcomes, housing tenure and quality, transport, and engagement with social work and criminal justice. While policymakers might want to address all such issues, they also prioritise the problems they want to solve and the policy instruments they are prepared to use.

When considering solutions, they can choose from three basic categories:

  1. Tax and spending to redistribute income and wealth, perhaps treating economic inequalities as the source of most others (such as health and education inequalities).
  2. The provision of public services to help mitigate the effects of economic and other inequalities (such as free healthcare and education, and public transport in urban and rural areas).
  3. The adoption of ‘prevention’ strategies to engage as early as possible in people’s lives, on the assumption that key inequalities are well-established by the time children are three years old.

Based on my previous work with Emily St Denny, I’d expect that many governments express a high commitment to reduce inequalities – and it is often sincere – but without wanting to use tax/ spending as the primary means, and faced with limited evidence on the effectiveness of public services and prevention. Or, many will prefer to identify ‘evidence-based’ solutions for individuals rather than to address ‘structural’ factors such as gender, ethnicity, and class. This is when the production and use of evidence becomes overtly ‘political’, because at the heart of many of these discussions is the extent to which individuals or their environments are to blame for unequal outcomes, and whether richer regions should compensate poorer regions.

‘The evidence’ will not ‘win the day’ in such debates. Rather, the choice will be between, for example: (a) pragmatism, to frame evidence to contribute to well-established beliefs, about policy problems and solutions, held by the dominant actors in each political system; and, (b) critical distance, to produce what you feel to be the best evidence generated in the right way, and challenge policymakers to explain why they won’t use it. I suspect that (a) is more effective, but (b) better reflects what most academics thought they were signing up to.

For more on IMAJINE, see New EU study looks at gap between rich and poor and The theory and practice of evidence-based policy transfer: can we learn how to reduce territorial inequalities?

For more on evidence/ policy dilemmas, see Kathryn Oliver and I have just published an article on the relationship between evidence and policy



Filed under Evidence Based Policymaking (EBPM), IMAJINE, public policy

Why doesn’t evidence win the day in policy and policymaking?

Politics has a profound influence on the use of evidence in policy, but we need to look ‘beyond the headlines’ for a sense of perspective on its impact.

It is tempting for scientists to identify the pathological effect of politics on policymaking, particularly after high profile events such as the ‘Brexit’ vote in the UK and the election of Donald Trump as US President. We have allegedly entered an era of ‘post-truth politics’ in which ideology and emotion trump evidence and expertise (a story told many times at events like this), particularly when issues are salient.

Yet, most policy is processed out of this public spotlight, because the flip side of high attention to one issue is minimal attention to most others. Science has a crucial role in this more humdrum day-to-day business of policymaking, which is far more important than it is visible. Indeed, this lack of public visibility can help many actors secure a privileged position in the policy process (and further exclude citizens).

In some cases, experts are consulted routinely. There is often a ‘logic’ of consultation with the ‘usual suspects’, including the actors most able to provide evidence-informed advice. In others, scientific evidence is often so taken for granted that it is part of the language in which policymakers identify problems and solutions.

In that context, we need better explanations of an ‘evidence-policy’ gap than the pathologies of politics and egregious biases of politicians.

To understand this process, and the apparent contradiction between excluded and privileged experts, consider the role of evidence in politics and policymaking from three different perspectives.

The perspective of scientists involved primarily in the supply of evidence

Scientists produce high quality evidence, only for politicians to ignore it or, even worse, distort its message to support their ideologically-driven policies. If they expect ‘evidence-based policymaking’ they soon become disenchanted and conclude that ‘policy-based evidence’ is more likely. This perspective has long been expressed in scientific journals and commentaries, but has taken on new significance following ‘Brexit’ and Trump.

The perspective of elected politicians

Elected politicians are involved primarily in managing government and maximising public and organisational support for policies. So, scientific evidence is one piece of a large puzzle. They may begin with a manifesto for government and, if elected, feel an obligation to carry it out. Evidence may play a part in that process but the search for evidence on policy solutions is not necessarily prompted by evidence of policy problems.

Further, ‘evidence based policy’ is one of many governance principles that politicians feel the need to juggle. For example, in Westminster systems, ministers may try to delegate policymaking to foster ‘localism’ and/ or pragmatic policymaking, but also intervene to appear to be in control of policy, to foster a sense of accountability built on an electoral imperative. The likely mix of delegation and intervention seems almost impossible to predict, and this dynamic has a knock-on effect for evidence-informed policy. In some cases, central governments roll out the same basic policy intervention and limit local discretion; in others, they identify broad outcomes and invite other bodies to gather evidence on how best to meet them. These differences in approach can have profound consequences for the models of evidence-informed policy available to us (see the example of Scottish policymaking).

Political science and policy studies provide a third perspective

Policy theories help us identify the relationship between evidence and policy by showing that a modern focus on ‘evidence-based policymaking’ (EBPM) is one of many versions of the same fairy tale – about ‘rational’ policymaking – that have developed in the post-war period. We talk about ‘bounded rationality’ to identify key ways in which policymakers or organisations could not achieve ‘comprehensive rationality’:

  1. They cannot separate values and facts.
  2. They have multiple, often unclear, objectives which are difficult to rank in any meaningful way.
  3. They have to use major shortcuts to gather a limited amount of information in a limited time.
  4. They can’t make policy from the ‘top down’ in a cycle of ordered and linear stages.

Limits to ‘rational’ policymaking: two shortcuts to make decisions

We can sum up the first three bullet points with one statement: policymakers have to try to evaluate and solve many problems without the ability to understand what they are, how they feel about them as a whole, and what effect their actions will have.

To do so, they use two shortcuts: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, habits, and the familiar to make decisions quickly.

Consequently, the focus of policy theories is on the links between evidence, persuasion, and framing issues to produce or reinforce a dominant way to define policy problems. Successful actors combine evidence and emotional appeals or simple stories to capture policymaker attention, and/ or help policymakers interpret information through the lens of their strongly-held beliefs.

Scientific evidence plays its part, but scientists often make the mistake of trying to bombard policymakers with evidence when they should be trying to (a) understand how policymakers understand problems, so that they can anticipate their demand for evidence, and (b) frame their evidence according to the cognitive biases of their audience.

Policymaking in ‘complex systems’ or multi-level policymaking environments

Policymaking takes place in less ordered, less hierarchical, and less predictable environments than suggested by the image of the policy cycle. Such environments are made up of:

  1. a wide range of actors (individuals and organisations) influencing policy at many levels of government
  2. a proliferation of rules and norms followed by different levels or types of government
  3. close relationships (‘networks’) between policymakers and powerful actors
  4. a tendency for certain beliefs or ‘paradigms’ to dominate discussion
  5. shifting policy conditions and events that can prompt policymaker attention to lurch at short notice.

These five properties – plus a ‘model of the individual’ built on a discussion of ‘bounded rationality’ – make up the building blocks of policy theories (many of which I summarise in 1000 Word posts). I say this partly to aid interdisciplinary conversation: of course, each theory has its own literature and jargon, and it is difficult to compare and combine their insights, but if you are trained in a different discipline it’s unfair to ask you to devote years of your life to studying policy theory to end up at this point.

To show that policy theories have a lot to offer, I have been trying to distil their collective insights into a handy guide – using this same basic format – that you can apply to a variety of different situations, from explaining painfully slow policy change in some areas but dramatic change in others, to highlighting ways in which you can respond effectively.

We can use this approach to help answer many kinds of questions. With my Southampton gig in mind, let’s use some examples from public health and prevention.

Why doesn’t evidence win the day in tobacco policy?

My colleagues and I try to explain why it takes so long for the evidence on smoking and health to have a proportionate impact on policy. Usually, at the back of my mind, is a public health professional audience trying to work out why policymakers don’t act quickly or effectively enough when presented with unequivocal scientific evidence. More recently, they wonder why there is such uneven implementation of a global agreement – the WHO Framework Convention on Tobacco Control – that almost every country in the world has signed.

We identify three conditions under which evidence will ‘win the day’:

  1. Actors are able to use scientific evidence to persuade policymakers to pay attention to, and shift their understanding of, policy problems. In leading countries, it took decades to command attention to the health effects of smoking, reframe tobacco primarily as a public health epidemic (not an economic good), and generate support for the most effective evidence-based solutions.
  2. The policy environment becomes conducive to policy change. A new and dominant frame helps give health departments (often in multiple venues) a greater role; health departments foster networks with public health and medical groups at the expense of the tobacco industry; and they emphasise socioeconomic conditions supportive of tobacco control: reductions in smoking prevalence, in opposition to tobacco control, and in the economic value of tobacco.
  3. Actors exploit ‘windows of opportunity’ successfully. A supportive frame and policy environment maximises the chances of high attention to a public health epidemic and provides the motive and opportunity for policymakers to select relatively restrictive policy instruments.

So, scientific evidence is a necessary but insufficient condition for major policy change. Key actors do not simply respond to new evidence: they use it as a resource to further their aims, to frame policy problems in ways that will generate policymaker attention, and underpin technically and politically feasible solutions that policymakers will have the motive and opportunity to select. This remains true even when the evidence seems unequivocal and when countries have signed up to an international agreement which commits them to major policy change. Such commitments can only be fulfilled over the long term, when actors help change the policy environment in which these decisions are made and implemented. So far, this change has not occurred in most countries (or, in other aspects of public health in the UK, such as alcohol policy).

Why doesn’t evidence win the day in prevention and early intervention policy?

UK and devolved governments draw on health and economic evidence to make a strong and highly visible commitment to preventive policymaking, in which the aim is to intervene earlier in people’s lives to improve wellbeing and reduce socioeconomic inequalities and/ or public sector costs. This agenda has existed in one form or another for decades without the same signs of progress we now associate with areas like tobacco control. Indeed, the comparison is instructive, since prevention policy rarely meets the three conditions outlined above:

  1. Prevention is a highly ambiguous term and many actors make sense of it in many different ways. There is no equivalent to a major shift in problem definition for prevention policy as a whole, and little agreement on how to determine the most effective or cost-effective solutions.
  2. A supportive policy environment is far harder to identify. Prevention policy cross-cuts many policymaking venues at many levels of government, with little evidence of ‘ownership’ by key venues. Consequently, there are many overlapping rules on how and from whom to seek evidence. Networks are diffuse and hard to manage. There is no dominant way of thinking across government (although the Treasury’s ‘value for money’ focus is key currency across departments). There are many socioeconomic indicators of policy problems, but little agreement on how to measure them or which measures to privilege (particularly when predicting future outcomes).
  3. The ‘window of opportunity’ was to adopt a vague solution to an ambiguous policy problem, providing a limited sense of policy direction. There have been several ‘windows’ for more specific initiatives, but their links to an overarching policy agenda are unclear.

These limitations help explain slow progress in key areas. The absence of an unequivocal frame, backed strongly by key actors, leaves policy change vulnerable to successful opposition, especially in areas where early intervention has major implications for redistribution (taking from existing services to invest in others) and personal freedom (encouraging or obliging behavioural change). The vagueness and long term nature of policy aims – to solve problems that often seem intractable – makes them uncompetitive, and often undermined by more specific short term aims with a measurable pay-off (as when, for example, funding for public health loses out to funding to shore up hospital management). It is too easy to reframe existing policy solutions as preventive if the definition of prevention remains slippery, and too difficult to demonstrate the population-wide success of measures generally applied to high risk groups.

What happens when attitudes to two key principles – evidence based policy and localism – play out at the same time?

A lot of discussion of the politics of EBPM assumes that there is something akin to a scientific consensus on which policymakers do not act proportionately. Yet, in many areas – such as social policy and social work – there is great disagreement on how to generate and evaluate the best evidence. Broadly speaking, a hierarchy of evidence built on ‘evidence based medicine’ – which has randomised control trials and their systematic review at the top, and practitioner knowledge and service user feedback at the bottom – may be completely subverted by other academics and practitioners. This disagreement helps produce a spectrum of ways in which we might roll out evidence-based interventions, from an RCT-driven roll-out of the same basic intervention to a storytelling-driven pursuit of tailored responses built primarily on governance principles (such as to co-produce policy with users).

At the same time, governments may be wrestling with their own governance principles, including EBPM but also regarding the most appropriate balance between centralism and localism.

If you put both concerns together, you have a variety of possible outcomes (and a temptation to ‘let a thousand flowers bloom’) and a set of competing options (outlined in Table 1), all under the banner of ‘evidence based’ policymaking.

Table 1: Three ideal types of EBBP

What happens when a small amount of evidence goes a very long way?

So, even if you imagine a perfectly sincere policymaker committed to EBPM, you’d still not be quite sure what they took it to mean in practice. If you assume this commitment is a bit less sincere, and you add in the need to act quickly to use the available evidence and satisfy your electoral audience, you get all sorts of responses based in some part on a reference to evidence.

One fascinating case is the UK Government’s ‘troubled families’ programme, which combined bits and pieces of evidence with ideology and a Westminster-style accountability imperative, to produce:

  • The argument that the London riots were caused by family breakdown and bad parenting.
  • The use of proxy measures to identify the most troubled families.
  • The use of superficial performance management to justify notionally extra expenditure for local authorities.
  • The use of evidence in a problematic way, from exaggerating the success of existing ‘family intervention projects’ to sensationalising neuroscientific images related to brain development in deprived children.


In other words, some governments feel the need to dress up their evidence-informed policies in a language appropriate to Westminster politics. Unless we understand this language, and the incentives for elected policymakers to use it, we will fail to understand how to act effectively to influence those policymakers.

What can you do to maximise the use of evidence?

When you ask the generic question, you can generate a set of transferable strategies to engage in policymaking.



Yet, as these case studies of public health and social policy suggest, the question lacks sufficient meaning when applied to real world settings. Would you expect the advice that I give to (primarily) natural scientists (primarily in the US) to be identical to advice for social scientists in specific fields (in, say, the UK)?

No, you’d expect me to end with a call for more research! See for example this special issue in which many scholars from many disciplines suggest insights on how to maximise the use of evidence in policy.



Filed under Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, tobacco, tobacco policy

What do you do when 20% of the population causes 80% of its problems? Possibly nothing.


Avshalom Caspi and colleagues have used the 45-year ‘Dunedin’ study in New Zealand to identify the ‘large economic burden’ associated with ‘a small segment of the population’. They don’t quite achieve the 20%-causes-80% mark, but suggest that 22% of the population account disproportionately for the problems that most policymakers would like to solve, including unhealthy, economically inactive, and criminal behaviour. Most importantly, they discuss some success in predicting such outcomes from a 45-minute diagnostic test of 3-year-olds.

Of course, any such publication will prompt major debates about how we report, interpret, and deal with such information, and these debates tend to get away from the original authors as soon as they publish and others report their findings.

This is true even though the authors have gone to unusual lengths to show the many ways in which you could interpret their figures. Theirs is a politically aware report, using some of the language of elected politicians but challenging simple responses. You can see this in their discussion which has a lengthy list of points about the study’s limitations.

The ambiguity dilemma: more evidence does not produce more agreement

‘The most costly adults in our cohort started the race of life from a starting block somewhere behind the rest, and while carrying a heavy handicap in brain health’.

The first limitation is that evidence does not help us adjudicate between competing attempts to define the problem. For some, it reinforces the idea of an ‘underclass’ or small collection of problem/ troubled families that should be blamed for society’s ills (it’s the fault of families and individuals). For others, it reinforces the idea that socio-economic inequalities harm the life chances of people as soon as they are born (it is out of the control of individuals).

The intervention dilemma: we know more about the problem than its solution

The second limitation is that this study tells us a lot about a problem but not its solution. Perhaps there is some common ground on the need to act, and to invest in similar interventions, but:

  1. The evidence on the effectiveness of solutions is not as strong or systematic as this new evidence on the problem.
  2. There are major dilemmas involved in ‘scaling up’ such solutions and transferring them from one area to another.
  3. The overall ‘tone’ of debate still matters to policy delivery, to determine for example if any intervention should be punitive and compulsory (you will cause the problem, so you have to engage with the solution) or supportive and voluntary (you face disadvantages, so we’ll try to help you if you let us).

The moral dilemma: we may only pay attention to the problem if there is a feasible solution

Prevention and early intervention policy agendas often seem to fail because the issues they raise seem too difficult to solve. Governments make the commitment to ‘prevention’ in the abstract but ‘do not know what it means or appreciate the scale of their task’.

A classic policymaker heuristic described by Kingdon is that policymakers only pay attention to problems they think they can solve. So, they might initially show enthusiasm, only to lose interest when problems seem intractable or there is high opposition to specific solutions.

This may be true of most policies, but prevention and early intervention also seem to magnify the big moral question that can stop policy in its tracks: to what extent is it appropriate to intervene in people’s lives to change their behaviour?

Some may vocally oppose interventions based on their concern about the controlling nature of the state, particularly when it intervenes to prevent (say, criminal) behaviour that will not necessarily occur. It may be easier to make the case for intervening to help children, but difficult to look like you are not second guessing their parents.

Others may quietly oppose interventions based on an unresolved economic question: does it really save money to intervene early? Put bluntly, a key ‘economic burden’ relates to population longevity; the ‘20%’ may cause economic problems in their working years but die far earlier than the 80%. Put less bluntly by the authors:

‘This is an important question because the health-care burden of developed societies concentrates in older age groups. To the extent that factors such as smoking, excess weight and health problems during midlife foretell health-care burden and social dependency, findings here should extend to later life (keeping in mind that midlife smoking, weight problems and health problems also forecast premature mortality)’.

So, policymakers find initially that ‘early intervention’ is a valence issue only in the abstract – who wouldn’t want to intervene as early as possible in a child’s life to protect them or improve their life chances? – but not when they try to deliver concrete policies.

The evidence-based policymaking dilemma

Overall, we are left with the sense that even the best available evidence of a problem may not help us solve it. Choosing to do nothing may be just as ‘evidence based’ as choosing a solution with minimal effects. Choosing to do something requires us to use far more limited evidence of solution effectiveness and to act in the face of high uncertainty. Add into the mix that prevention policy does not seem to be particularly popular and you might wonder why any policymaker would want to do anything with the best evidence of a profound societal problem.



Filed under Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy

The theory and practice of evidence-based policy transfer: can we learn how to reduce territorial inequalities?

I am now part of a large EU-funded Horizon2020 project called IMAJINE (Integrative Mechanisms for Addressing Spatial Justice and Territorial Inequalities in Europe), which begins in January 2017. It is led by Professor Michael Woods at Aberystwyth University and has a dozen partners across the EU. I’ll be leading one work package in partnership with Professor Michael Keating.


The aim in our ‘work package’ is deceptively simple: generate evidence to identify how EU countries try to reduce territorial inequalities, see who is the most successful, and recommend the transfer of that success to other countries.

Life is not that simple, though, is it?! If it were, we’d know for sure what ‘territorial inequalities’ are, what causes them, what governments are willing to do to reduce them, and if they’ll succeed if they really try.

Instead, here are some of the problems you encounter along the way, including an inability to identify:

  • What policies are designed explicitly to reduce inequalities. Instead, we piece together many intentions, actions, instruments, and outputs, in many levels and types of government, and call it ‘policy’.
  • The link between ‘policy’ and policy outcomes, because many factors interact to produce those outcomes.
  • Success. Even if we could solve the methodological problems, to separate cause and effect, we face a political problem about choosing measures to evaluate and report success.
  • Good ways to transfer successful policies. A policy is not like a #gbbo cake, in which you can produce a great product and give out the recipe. In that scenario, you can assume that we all have the same aims (we all want cake, and of course chocolate is the best), starting point (basically the same shops and kitchens), and language to describe the task (use loads of sugar and cocoa). In policy, governments describe and seek to solve similar-looking problems in very different ways and, if they look elsewhere for lessons, those insights have to be relevant to their context (and the evidence-gathering process has to fit their idea of good governance). They also ‘transfer’ some policies while maintaining their own, and a key finding from our previous work is that governments simultaneously pursue policies to reduce inequalities and undermine their inequality-reducing policies.

So, academics like me tend to spend their time highlighting problems, explaining why such processes are not ‘evidence-based’, and identifying all the things that will go wrong from your perspective if you think policymaking and policy transfer can ever be straightforward.

Yet, policymakers do not have this luxury to identify problems, find them interesting, then go home. Instead, they have to make decisions in the face of ambiguity (what problem are they trying to solve?), uncertainty (evidence will help, but will always be limited), and limited time.

So, academics like me are now focused increasingly on trying to help address the problems we raise. On the plus side, it prompts us to speak with policymakers from start to finish, to try to understand what evidence they’re interested in and how they’ll use it. On the less positive side (at least if you are a purist about research), it might prompt all sorts of compromises about how to combine research and policy advice if you want policymakers to use your evidence (on, for example, the line between science and advice, and the blurry boundaries between evidence and advice). If you are interested, please let me know, or follow the IMAJINE category on this site (and #IMAJINE).

See also:

New EU study looks at gap between rich and poor

New research project examines regional inequalities in Europe

Understanding the transfer of policy failure: bricolage, experimentalism and translation by Diane Stone



Filed under Evidence Based Policymaking (EBPM), IMAJINE, public policy

We need better descriptions than ‘evidence-based policy’ and ‘policy-based evidence’: the case of UK government ‘troubled families’ policy

Here is the dilemma for ‘evidence-based’ ‘troubled families’ policy: there are many indicators of ‘policy based evidence’ but few (if any) feasible and ‘evidence based’ alternatives.

Viewed from the outside, ‘troubled families’ (TF) policy looks like a cynical attempt to produce a quick fix to the London riots, stigmatise vulnerable populations, and hoodwink the public into thinking that the central government is controlling local outcomes and generating success.

Viewed from the inside, it is a pragmatic policy solution, informed by promising evidence which needs to be sold in the right way. For the UK government there may seem to be little alternative to this policy, given the available evidence, the need to do something for the long term and to account for itself in a Westminster system in the short term.

So, in this draft paper, I outline this disconnect between interpretations of ‘evidence based policy’ and ‘policy based evidence’ to help provide some clarity on the pragmatic use of evidence in politics:


See also:

Governments think it’s OK to use bad evidence to make good policy: the case of the UK Government’s ‘troubled families’

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

In each of these posts, I note that it is difficult to know how, for example, social policy scholars should respond to these issues – but that policy studies help us identify a choice between strategies. In general, pragmatic strategies to influence the use of evidence in policy include: framing issues to catch the attention or manipulate policymaker biases, identifying where the ‘action’ is in multi-level policymaking systems, and forming coalitions with like-minded and well-connected actors. In other words, to influence rather than just comment on policy, we need to understand how policymakers would respond to external evaluation. So, a greater understanding of the routine motives of policymakers can help produce more effective criticism of their problematic use of evidence. In social policy, there is an acute dilemma about the choice between engagement, to influence and be influenced by policymakers, and detachment, to ensure critical distance. If we choose the latter, we need to think harder about how criticism of ‘policy-based evidence’ makes a difference.


Filed under agenda setting, Evidence Based Policymaking (EBPM), Prevention policy, public policy, UK politics and policy

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

Imagine this as your ‘early intervention’ policy choice: (a) a universal and non-stigmatising programme for all parents/ children, with minimal evidence of effectiveness, high cost, and potential public opposition about the state intervening in family life; or (b) a targeted, stigmatising programme for a small number, with more evidence, less cost, but the sense that you are not really intervening ‘early’ (instead, you are waiting for problems to arise before you intervene). What would you do, and how would you sell your choice to the public?

I ask this question because ‘early intervention’ seems to the classic valence issue with a twist. Most people seem to want it in the abstract: isn’t it best to intervene as early as possible in a child’s life to protect them or improve their life chances?

However, profound problems or controversies arise when governments try to pursue it. There are many more choices than I presented, but the same basic trade-offs arise in each case. So, at the start, it looks like you have lucked onto a policy that almost everyone loves. At the end, you realise that you can’t win. There is no such thing as a valence issue at the point of policy choice and delivery.

To expand on these dilemmas in more depth, I compare cases of Scottish and UK Government ‘families policies’. In previous posts, I portrayed their differences – at least in the field of prevention and early intervention policies – as more difficult to pin down than you might think. Often, they either say the same things but ‘operationalise’ them in very different ways, or describe very different problems then select very similar solutions.

This basic description sums up very similar waves of key ‘families policies’ since devolution: an initial focus on social inclusion, then anti-social behaviour, followed by a contemporary focus on ‘whole family’ approaches and early intervention. I will show how they often go their own ways, but note the same basic context for choice, and similar choices, which help qualify that picture.

Early intervention & prevention policies are valence issues …

A valence (or ‘motherhood and apple pie’) issue is one in which you can generate huge support because the aim seems, to most people, to be obviously good. Broad aims include ‘freedom’ and ‘democracy’. In the UK, specific aims include a national health service free at the point of use. We often focus on valence issues to highlight the importance of a political party’s or leader’s image of governing competence: it is not so much what we want (when the main parties support very similar things), but who we trust to get it.

Early intervention seems to fit the bill: who would want to intervene late, or too late, in someone’s life when you could intervene early, to boost their life chances at as early a stage as possible? All we have to do is work out how to do it well, with reference to some good evidence. Yet, as I discuss below, things get complicated as soon as we consider the types of early intervention available, generally described as a spectrum from primary (stop a problem occurring and focus on the whole population – like a virus inoculation) to secondary (address a problem at an early stage, using proxy indicators to identify high-risk groups), and tertiary (stop a problem getting worse in already affected groups).

Similarly, look at how Emily St Denny and I describe prevention policy. Would many people object to the basic principles?

“In the name of prevention, the UK and Scottish Governments propose to radically change policy and policymaking across the whole of government. Their deceptively simple definition of ‘prevention policy’ is: a major shift in resources, from the delivery of reactive public services to solve acute problems, to the prevention of those problems before they occur. The results they promise are transformative, to address three crises in politics simultaneously: a major reduction in socioeconomic equalities by focusing on their ‘root causes’; a solution to unsustainable public spending which is pushing public services to breaking point; and, new forms of localised policymaking, built on community and service user engagement, to restore trust in politics”.

… but the evidence on their effectiveness is inconvenient …

A good simple rule about ‘evidence-based policymaking’ is that there is never a ‘magic bullet’ to tell you what to do or take the place of judgement. Politics is about making choices which benefit some people while others lose out. You can use evidence to help clarify those choices, but not produce a ‘technical’ solution. A further rule with ‘wicked’ problems is that the evidence is not good enough even to generate clarity about the cause of the problem. Or, you simply find out things you don’t want to hear.

Early intervention seems to be a good candidate for the latter, for three main reasons:

  1. Very few interventions live up to high evidence standards

There are two main types of relevant ‘evidence based’ interventions in this field. The first are ‘family intervention projects’ (FIPs). They generally focus on low income, often lone parent, families at risk of eviction linked to factors such as antisocial behaviour, and provide two forms of intervention: intensive 24/7 support, including after school clubs for children and parenting skills classes, and treatment for addiction or depression in some cases, in dedicated core accommodation with strict rules on access and behaviour; and an outreach model of support and training. The evidence of success comes from evaluation and a counterfactual: this intervention is expensive, but we think that it would have cost far more money and heartache if we had not intervened to prevent (for example) family homelessness. There is generally no randomised control trial (RCT) to establish the cause of improved outcomes, or demonstrate that those outcomes would not have happened without an intervention of this sort.

The second are projects imported from other countries (primarily the US and Australia) based on their reputation for success. This reputation has been generated according to evidential rules associated with ‘evidence based medicine’ (EBM), in which there is relatively strong adherence to a hierarchy of evidence, with RCTs and their systematic review at the top, and the belief that there should be ‘fidelity’ to programmes to make sure that the ‘dosage’ of the intervention is delivered properly and its effect measured. Key examples include the Family Nurse Partnership (although its first UK RCT evaluation was not promising), Triple P (although James Coyne has his doubts!), and Incredible Years (but note the importance of ‘indicated’ versus ‘selective’ programmes, below). In this approach, there may be more quantitative evidence of success, but it is still difficult to know if the project can be transferred effectively and if its success can be replicated in another country with very different political drivers, problems, and levels of existing services. We know that some interventions are associated with positive outcomes, but we struggle to establish definitively that they caused them (solely, separate from their context).

  2. The evidence on ‘scaling up’ for primary prevention is relatively weak

Kenneth Dodge (2009) sums up a general problem with primary prevention in this field. It is difficult to see much evidence of success because: there are few examples of taking effective specialist projects ‘to scale’; there are major issues around ‘fidelity’ to the original project when you scale up (including the need to oversee a major expansion in well-trained practitioners); and, it is difficult to predict the effect of a programme, which showed promise when applied to one population, to a new and different population.

  3. The evidence on secondary early intervention is also weak

This point about different populations with different motivations is demonstrated in a more recent (published 2014) study by Stephen Scott et al of two Incredible Years interventions – to address ‘oppositional defiant disorder symptoms and antisocial personality character traits’ in children aged 3-7 (for a wider discussion of such programmes see the Early Intervention Foundation’s Foundations for life: what works to support parent child interaction in the early years?).

They highlight a classic dilemma in early intervention: the evidence of effectiveness is only clear when children have been clinically referred (‘indicated approach’), but unclear when children have been identified as high risk using socioeconomic predictors (‘selective approach’):

An indicated approach is simpler to administer, as there are fewer children with severe problems, they are easier to identify, and their parents are usually prepared to engage in treatment; however, the problems may already be too entrenched to treat. In contrast, a selective approach targets milder cases, but because problems are less established, whole populations have to be screened and fewer cases will go on to develop serious problems.

For our purposes, this may represent the most inconvenient form of evidence on early intervention: you can intervene early on the back of very limited evidence of likely success, or have a far higher likelihood of success when you intervene later, when you are running out of time to call it ‘early intervention’.

… so governments have to make and defend highly ‘political’ choices …

I think this is key context in which we can try to understand the often-different choices by the UK and Scottish Governments. Faced with the same broad aim, to intervene early to prevent poor outcomes, the same uncertainty and lack of evidence that their interventions will produce the desired effect, and the same need to DO SOMETHING rather than wait for the evidence that may never arise, what do they do?

Both governments often did remarkably similar things before they did different things

From the late 1990s, both governments placed primary emphasis initially on a positive social inclusion agenda, followed by a relatively negative focus on anti-social behaviour (ASB), before a renewed focus on the social determinants of inequalities and the use of early intervention to prevent poor outcomes.

Both governments link families policies strongly to parenting skills, reinforcing the idea that parents are primarily responsible for the life chances of their children.

Both governments talk about getting away from deficit models of intervention (the Scottish Government in particular focuses on the ‘assets’ of individuals, families, and communities) but use deficit-model proxies to identify families in need of support, including: lone parenthood, debt problems, ill health (including disability and depression), and at least one member subject to domestic abuse or intergenerational violence, as well as professional judgements on the ‘chaotic’ or ‘dysfunctional’ nature of family life and of the likelihood of ‘family breakdown’ when, for example, a child is taken into care.

So, when we consider their headline-grabbing differences, note this common set of problems and drivers, and similar responses.

… and selling their early intervention choices is remarkably difficult …

Although our starting point was valence politics, prevention and early intervention policies are incredibly hard to get off the ground. As Emily St Denny and I describe elsewhere, when policymakers ‘make a sincere commitment to prevention, they do not know what it means or appreciate the scale of their task. They soon find a set of policymaking constraints that will always be present. When they ‘operationalise’ prevention, they face several fundamental problems, including: the identification of ‘wicked’ problems which are difficult to define and seem impossible to solve; inescapable choices on how far they should go to redistribute income, distribute public resources, and intervene in people’s lives; major competition from more salient policy aims which prompt them to maintain existing public services; and, a democratic system which limits their ability to reform the ways in which they make policy. These problems may never be overcome. More importantly, policymakers soon think that their task is impossible. Therefore, there is high potential for an initial period of enthusiasm and activity to be replaced by disenchantment and inactivity, and for this cycle to be repeated without resolution’.

These constraints refer to the broad idea of prevention policy, while specific policies can involve different drivers and constraints. With general prevention policy, it is difficult to know what government policy is and how you measure its success. ‘Prevention’ is vague, plus governments encourage local discretion to adapt the evidence of ‘what works’ to local circumstances.

Governments don’t get away with this regarding specific policies. Instead, Westminster politics is built on a simple idea of accountability in which you know who is in charge and therefore who to blame. UK central governments have to maintain some semblance of control because they know that people will try to hold them to account in elections and general debate. This ‘top down’ perspective has an enduring effect, particularly in the UK government, but also in the Scottish Government.

… so the UK Government goes for it and faces the consequences ….

‘Troubled Families’ in England: the massive expansion of secondary prevention?

So, although prevention policy is vague, individual programmes such as ‘troubled families’ contain enough detail to generate intense debate on central government policy and performance. They also contain elements which emphasise high central direction, including sustained ministerial commitment, a determination to demonstrate early success to justify a further rollout of policy, and performance management geared towards specific measurable short-term outcomes – even if the broader aim is to encourage local discretion and successful long-term outcomes.

In the absence of unequivocally supportive evidence (which may never appear), the UK government relied on a crisis (the London riots in 2011) to sell the policy, and on ridiculous processes of estimating the size of the problem and measuring performance to sell the success of its solution. In this system, ministers perceive the need to display strength, show certainty that they have correctly diagnosed a problem and its solution, and claim success using the ‘currency’ of Westminster politics – and to do these things far more quickly than the people gathering evidence of more substantive success. There is a lot of criticism of the programme in terms of its lack, or cynical use, of evidence, but little of it considers policy from an elected government’s perspective.

…while the Scottish Government is more careful, but faces unintended consequences

This particular UK Government response has no parallel in Scotland. The UK Government is far more likely than its Scottish counterpart to link families policies to a moral agenda in response to crisis, and there is no Scottish Government equivalent to ‘payment by results’ and massive programme expansion. Instead, the Scottish Government has continued more modest roll-outs in partnership with local public bodies. Indeed, if we ‘zoom in’ to this one example, at this point in time, the comparison confirms the idea of a ‘Scottish Approach’ to policy and policymaking.

Yet, the Scottish Government has not solved the problems I describe in this post: it has not found an alternative ‘evidence based’ way to ‘scale up’ early intervention significantly and move from secondary/ tertiary forms of prevention to the more universal/ primary initiatives that you might associate intuitively with prevention policy.

Instead, its different experiences have highlighted different issues. For example, its key vehicle for early intervention and prevention is the ‘collaborative’ approach, such as in the Early Years Collaborative. Possibly, it represents the opposite of the UK’s attempt to centralise and performance-manage-the-hell-out-of the direction of major expansion.

Table 1: Three ideal types of EBBP

Certainly, with this approach, your main aim is not to generate evidence of the success of interventions – at least not in the way we associate with ‘evidence based medicine’, randomised control trials, and the star ratings developed by the Early Intervention Foundation. Rather, the aim is to train local practitioners to use existing evidence and adapt it to local circumstances, experimenting as you go, and gathering/using data on progress in ways not associated with, for example, the family nurse partnership.

So, in terms of the discussion so far, perhaps its main advantage is that a government does not have to sell its political choices (it is more of a delivery system than a specific intervention) or back them up with evidence of success elsewhere. In the absence of much public, media, or political party attention, maybe it’s a nice pragmatic political solution built more on governance principles than specific evidence.

Yet, despite our fixation with the constitution, some policy issues do occasionally get discussed. For our purposes, the most relevant is the ‘named person’ scheme because it looks like a way to ‘scale up’ an initiative to support a universal or primary prevention approach and avoid stigmatising some groups by offering a service to everyone (in this respect, it is the antithesis to ‘troubled families’). In this case, all children in Scotland (and their parents or guardians) get access to a senior member of a public service, and that person acts as a way to ‘join up’ a public sector response to a child’s problems.

Interestingly, this universal approach has its own problems. ‘Troubled families’ sets up a distinction between troubled/ untroubled to limit its proposed intervention in family life. Its problem is the potential to stigmatise and demoralise ‘troubled’ families. ‘Named person’ shows the potential for greater outcry when governments try not to identify and stigmatise specific families. The scheme is largely a response to the continuous suggestion – made after high profile cases of child abuse or neglect – that children can suffer when no agency takes overall responsibility for their care, but it has been opposed as an excessive infringement on normal family life and data protection, successfully enough to delay its implementation.

The punchline to early intervention as a valence issue

Problems arise almost instantly when you try to turn a valence issue into something concrete. A vague and widely-supported policy, to intervene early to prevent bad outcomes, becomes a set of policy choices based on how governments frame the balance between ideology, stigma, and the evidence of the impact and cost-effectiveness of key interventions (which is often very limited).

Their experiences are not always directly comparable, but the UK and Scottish Governments have helped show us the pitfalls of concrete approaches to prevention and early intervention. They help us show that your basic policy choices include: (a) targeted programmes which increase stigma; (b) ‘indicated’ approaches which don’t always look like early intervention; (c) ‘selective’ approaches which seem to be less effective despite intervening at an earlier stage; (d) universal programmes which might cross a notional line between the state and the family; and (e) approaches which focus primarily on local experimentation with uncertain outcomes.

None of these approaches provide a solution to the early intervention dilemmas that all governments face, and there is no easy way to choose between approaches. We can make these choices more informed and systematic, by highlighting how all of the pieces of the jigsaw fit together, and somehow comparing their intended and unintended consequences. However, this process does not replace political judgement – and quite right too – because there is no such thing as a valence issue at the point of policy choice and delivery.






Filed under agenda setting, Evidence Based Policymaking (EBPM), Prevention policy, public policy, Scottish politics, UK politics and policy