Tag Archives: agenda setting

A 5-step strategy to make evidence count

Let’s imagine a heroic researcher, producing the best evidence and fearlessly ‘speaking truth to power’. Then, let’s place this person in four scenarios, each of which combines a discussion of evidence, policy, and politics in different ways.

  1. Imagine your hero presents to HM Treasury an evidence-based report concluding that a unitary UK state would be far more efficient than a union state guaranteeing Scottish devolution. The evidence is top quality and the reasoning is sound, but the research question is ridiculous. The result of political deliberation and electoral choice suggests that your hero is asking a research question that does not deserve to be funded in the current political climate. Your hero is a clown.
  2. Imagine your hero presents to the Department of Health a report based on a systematic review of multiple randomised controlled trials. It recommends that you roll out an almost-identical early years or public health intervention across the whole country. We need high ‘fidelity’ to the model to ensure the correct ‘dosage’ and to measure its effect scientifically. The evidence is of the highest quality, but the research question is not quite right. The government has decided to devolve this responsibility to local public bodies and/or encourage the co-production of public service design by local public bodies, communities, and service users. So, to focus narrowly on fidelity would be to ignore political choices (perhaps backed by different evidence) about how best to govern. If you don’t know the politics involved, you will ask the wrong questions or provide evidence with unclear relevance. Your hero is either a fool, naïve to the dynamics of governance, or a villain willing to ignore governance principles.
  3. Imagine two fundamentally different – but equally heroic – professions with their own ideas about evidence. One favours a hierarchy of evidence in which systematic reviews of RCTs sit at the top, and service user and practitioner feedback sits near the bottom. The other rejects this hierarchy completely, identifying the unique, complex relationship between practitioner and service user, which requires high discretion to make choices in situations that will differ each time. Trying to resolve a debate between them with reference to ‘the evidence’ makes no sense. This is a conflict between two heroes with opposing beliefs and preferences that can only be resolved through compromise or political choice. This is, oh I don’t know, Batman v Superman, saved by Wonder Woman.
  4. Imagine you want the evidence on hydraulic fracturing for shale oil and gas. We know that ‘the evidence’ follows the question: how much can we extract? How much revenue will it produce? Is it safe, from an engineering point of view? Is it safe, from a public health point of view? What will be its impact on climate change? What proportion of the public supports it? What proportion of the electorate supports it? Who will win and lose from the decision? It would be naïve to think that there is some kind of neutral way to produce an evidence-based analysis of such issues. The commissioning and integration of evidence has to be political. To pretend otherwise is a political strategy. Your hero may be another person’s villain.

Now, let’s use these scenarios to produce a 5-step way to ‘make evidence count’.

Step 1. Respect the positive role of politics

A narrow focus on making the supply of evidence count, via ‘evidence-based policymaking’, will always be dispiriting because it ignores politics or treats political choice as an inconvenience. If we:

  • begin with a focus on why we need political systems to make authoritative choices between conflicting preferences, and take governance principles seriously, we can
  • identify the demand for evidence in that context, then be more strategic and pragmatic about making evidence count, and
  • be less dispirited about the outcome.

In other words, think about the positive and necessary role of democratic politics before bemoaning post-truth politics and policy-based-evidence-making.

Step 2. Reject simple models of evidence-based policymaking

Policy is not made in a cycle containing a linear series of separate stages, so we won’t ‘make evidence count’ by using that image to inform our practices.


You might not want to give up the cycle image because it presents a simple account of how you should make policy. It suggests that we elect policymakers then: identify their aims, identify policies to achieve those aims, select a policy measure, ensure that the selection is legitimised by the population or its legislature, identify the necessary resources, implement, and then evaluate. Or, policymakers aided by expert policy analysts make and legitimise choices, skilful public servants carry them out, and policy analysts assess the results using evidence.

One compromise is to keep the cycle, then show how messy it is in practice.

However, there comes a point when there is too much mess, and the image no longer helps you explain (a) to the public what you are doing, or (b) to providers of evidence how they should engage in political systems. By this point, simple messages from more complicated policy theories may be more useful.

Or, we may no longer want a cycle to symbolise a single source of policymaking authority. In a multi-level system, with many ‘centres’ possessing their own sources of legitimate authority, a single and simple policy cycle seems too artificial to be useful.

Step 3. Tell a simple story about your evidence

People are ‘cognitive misers’ seeking ‘rational’ and ‘irrational’ shortcuts to gather information for action, so you won’t get far if you bombard them with too much evidence. Policymakers already have too much evidence and they seek ways to reduce their cognitive load, relying on: (a) trusted sources of concise evidence relevant to their aims, and (b) their own experience, gut instinct, beliefs, and emotions.

The implication of both shortcuts is that we need to tell simple and persuasive stories about the substance and implications of the evidence we present. To say that ‘the evidence does not speak for itself’ may seem trite, but I’ve met too many people who assume naively that it will somehow ‘win the day’. In contrast, civil servants know that the evidence-informed advice they give to ministers needs to relate to the story that government ministers tell to the public.


Step 4.  Tailor your story to many audiences

In a complex or multi-level environment, one story to one audience (such as a minister) is not enough. If there are many key sources of policymaking authority – including public bodies with high autonomy, organisations and practitioners with the discretion to deliver services, and service users involved in designing services – there are many stories being told about what we should be doing and why. We may convince one audience and alienate (or fail to inspire) another with the same story.

Step 5. Clarify and address key dilemmas with political choice, not evidence

Let me give you one example of the dilemmas that must arise when you combine evidence and politics to produce policy: how do you produce a model of ‘evidence-based best practice’ which combines evidence and governance principles in a consistent way? Here are three ideal-type models which answer the question in very different ways.

Table 1: Three ideal types of ‘evidence-based best practice’ (EBBP)

The table helps us think through the tensions between models, built on very different principles of good evidence and governance.

In practice, you may want to combine different elements, perhaps while arguing that the loss of consistency is lower than the gain from flexibility. Or, the dynamics of political systems limit such choice or prompt ad hoc and inconsistent choices.

I built a lot of this analysis on the experiences of the Scottish Government, which juggles all three models, including a key focus on improvement method in its Early Years Collaborative.

However, Kathryn Oliver and I show that the UK government faces the same basic dilemma and addresses it in similar ways.

The example freshest in my mind is Sure Start. Its rationale was built on RCT evidence and systematic review. However, its roll-out was built more on local flexibility and service design than insistence on fidelity to a model. More recently, the Troubled Families programme initially set the policy agenda and criteria for inclusion, but increasingly invites local public bodies to select the most appropriate interventions, aided by the Early Intervention Foundation which reviews the evidence but does not insist on one-best-way. Emily St Denny and I explore these issues further in our forthcoming book on prevention policy, an exemplar case study of a field in which it is difficult to know how to ‘make evidence count’.

If you prefer a 3-step take-home message:

  1. I think we use phrases like ‘impact’ and ‘make evidence count’ to reflect a vague and general worry about a decline in respect for evidence and experts. Certainly, when I go to large conferences of scientists, they usually tell a story about ‘post-truth’ politics.
  2. Usually, these stories do not acknowledge the difference between two different explanations for an evidence-policy gap: (a) pathological policymaking and corrupt politicians, versus (b) complex policymaking and politicians having to make choices despite uncertainty.
  3. To produce evidence with ‘impact’, and know how to ‘make evidence count’, we need to understand the policy process and the demand for evidence within it.

*Background. This is a post for my talk at the Government Economic Service and Government Social Research Service Annual Training Conference (15th September 2017). This year’s theme is ‘Impact and Future-Proofing: Making Evidence Count’. My brief is to discuss evidence use in the Scottish Government, but it faces the same basic question as the UK Government: how do you combine principles of evidence quality and governance principles? In other words, if you were in a position to design (a) an evidence-gathering system and (b) a political system, you’d soon find major points of tension between them. Resolving those tensions involves political choice, not more evidence. Of course, you are not in a position to design both systems, so the more complicated question is: how do you satisfy principles of evidence and governance in a complex policy process, often driven by policymaker psychology, over which you have little control? Here are 7 different ‘answers’.

Powerpoint Paul Cairney @ GES GSRS 2017


Filed under Evidence Based Policymaking (EBPM), public policy, Scottish politics, UK politics and policy

Telling Stories that Shape Public Policy

This is a guest post by Michael D. Jones (left) and Deserai Anderson Crow (right), discussing how to use insights from the Narrative Policy Framework to think about how to tell effective stories to achieve policy goals. The full paper has been submitted to the series for Policy and Politics called Practical Lessons from Policy Theories.

Imagine. You are an ecologist. You recently discovered that a chemical that is discharged from a local manufacturing plant is threatening a bird that locals love to watch every spring. Now, imagine that you desperately want your research to be relevant and make a difference to help save these birds. All of your training gives you depth of expertise that few others possess. Your training also gives you the ability to communicate and navigate things such as probabilities, uncertainty, and p-values with ease.

But as NPR’s Robert Krulwich argues, focusing on this very specialized training when you communicate policy problems could lead you in the wrong direction. While being true to the science and best practices of your training, one must also be able to tell a compelling story.  Perhaps combine your scientific findings with the story about the little old ladies who feed the birds in their backyards on spring mornings, emphasizing the beauty and majesty of these avian creatures, their role in the community, and how the toxic chemicals are not just a threat to the birds, but are also a threat to the community’s understanding of itself and its sense of place.  The latest social science is showing that if you tell a good story, your policy communications are likely to be more effective.

Why focus on stories?

The world is complex. We are bombarded with information as we move through our lives and we seek patterns within that information to simplify complexity and reduce ambiguity, so that we can make sense of the world and act within it.

The primary means by which human beings render complexity understandable and reduce ambiguity is through the telling of stories. We “fit” the world around us and the myriad of objects and people therein, into story patterns. We are by nature storytelling creatures. And if it is true of us as individuals, then we can also safely assume that storytelling matters for public policy where complexity and ambiguity abound.

Based on our (hopefully) forthcoming article (which has a heavy debt to Jones and Peterson, 2017 and Catherine Smith’s popular textbook), here we offer some abridged advice synthesizing some of the most current social science findings about how best to engage in public policy storytelling. We break it down into five easy steps and offer a short discussion of likely intervention points within the policy process.

The 5 Steps of Good Policy Narrating

  1. Tell a Story: Remember, facts never speak for themselves. If you are presenting best practices, relaying scientific information, or detailing cost/benefit analyses, you are telling or contributing to a story.  Engage your storytelling deliberately.
  2. Set the Stage: Policy narratives have a setting, and in this setting you will find specific evidence, geography, legal parameters, and other policy-consequential items and information. Think of these setting items as props. Not all stages can hold every relevant prop. Be true to science; be true to your craft, but set your stage with props that maximize the potency of your story, which always includes making your setting amenable to your audience.
  3. Establish the Plot: In public policy, plots usually define the problem (and policies do not exist without at least a potential problem). Define your problem. Doing so determines the causes, which establishes blame.
  4. Cast the Characters:  Having established a plot and defined your problem, the roles you will need your characters to play become apparent. Determine who the victim is (who is harmed by the problem), who is responsible (the villain) and who can bring relief (the hero). Cast characters your audience will appreciate in their roles.
  5. Clearly Specify the Moral: Postmodern films might get away without having a point.  Policy narratives usually do not. Let your audience know what the solution is.

Public Policy Intervention Points

There are crucial points in the policy process where actors can use narratives to achieve their goals. We call these “intervention points” and all intervention points should be viewed as opportunities to tell a good policy story, although each will have its own constraints.

These intervention points include the most formal types of policy communication such as crafting of legislation or regulation, expert testimony or statements, and evaluation of policies. They also include less formal communications through the media and by citizens to government.

Each of these interventions can frequently be dry and jargon-laden, but it’s important to remember that by employing effective narratives within any of them, you are much more likely to see your policy goals met.

When considering how to construct your story within one or more of the various intervention points, we urge you to first consider several aspects of your role as a narrator.

  1. Who are you and what are your goals? Are you an outsider trying to effect change to solve a problem, or push an agency to do something it might not be inclined to do? Are you an insider trying to evaluate and improve policy making and implementation? Understanding your role and your goals is essential to both selecting an appropriate intervention point and optimizing your narrative therein.
  2. Carefully consider your audience. Who are they and what is their posture towards your overall goal? Understanding your audience’s values and beliefs is essential for avoiding invoking defensiveness.
  3. Consider the intervention point itself – what is the best way to reach your audience? What are the rules for the type of communication you plan to use? For example, media communications can be done with lengthy press releases, interviews with the press, or in the confines of a simple tweet. All of these methods have both formal and informal constraints that will determine what you can and can’t do.

Without deliberate consideration of your role, audience, the intervention point, and how your narrative links all of these pieces together, you are relying on chance to tell a compelling policy story.

On the other hand, thoughtful and purposeful storytelling that remains true to you, your values, your craft, and your best understanding of the facts, can allow you to be both the ecologist and the bird lover.

 


Filed under public policy, Storytelling

Three ways to communicate more effectively with policymakers

By Paul Cairney and Richard Kwiatkowski

Use psychological insights to inform communication strategies

Policymakers cannot pay attention to all of the things for which they are responsible, or understand all of the information they use to make decisions. Like all people, they face limits on the amount of information they can process (Baddeley, 2003; Cowan, 2001, 2010; Miller, 1956; Rock, 2008).

They must use short cuts to gather enough information to make decisions quickly: the ‘rational’, by pursuing clear goals and prioritizing certain kinds of information, and the ‘irrational’, by drawing on emotions, gut feelings, values, beliefs, habits, schemata, scripts, and what is familiar, to make decisions quickly. Unlike most people, they face unusually strong pressures on their cognition and emotion.

Policymakers need to gather information quickly and effectively, often in highly charged political atmospheres, so they develop heuristics to allow them to make what they believe to be good choices. Perhaps their solutions seem to be driven more by their values and emotions than a ‘rational’ analysis of the evidence, often because we hold them to a standard that no human can reach.

If so, and if they have high confidence in their heuristics, they will dismiss criticism from researchers as biased and naïve. Under those circumstances, we suggest that restating the need for ‘rational’ and ‘evidence-based policymaking’ is futile, naively ‘speaking truth to power’ counterproductive, and declaring ‘policy based evidence’ defeatist.

We use psychological insights to recommend a shift in strategy for advocates of the greater use of evidence in policy. The simple recommendation, to adapt to policymakers’ ‘fast thinking’ (Kahneman, 2011) rather than bombard them with evidence in the hope that they will get round to ‘slow thinking’, is already becoming established in evidence-policy studies. However, we provide a more sophisticated understanding of policymaker psychology, to help understand how people think and make decisions as individuals and as part of collective processes. It allows us to (a) combine many relevant psychological principles with policy studies to (b) provide several recommendations for actors seeking to maximise the impact of their evidence.

To ‘show our work’, we first summarise insights from policy studies already drawing on psychology to explain policy process dynamics, and identify key aspects of the psychology literature which show promising areas for future development.

Then, we emphasise the benefit of pragmatic strategies, to develop ways to respond positively to ‘irrational’ policymaking while recognising that the biases we ascribe to policymakers are present in ourselves and our own groups. Instead of bemoaning the irrationality of policymakers, let’s marvel at the heuristics they develop to make quick decisions despite uncertainty. Then, let’s think about how to respond effectively. Instead of identifying only the biases in our competitors, and masking academic examples of group-think, let’s reject our own imagined standards of high-information-led action. This more self-aware and humble approach will help us work more successfully with other actors.

On that basis, we provide three recommendations for actors trying to engage skilfully in the policy process:

  1. Tailor framing strategies to policymaker bias. If people are cognitive misers, minimise the cognitive burden of your presentation. If policymakers combine cognitive and emotive processes, combine facts with emotional appeals. If policymakers make quick choices based on their values and simple moral judgements, tell simple stories with a hero and moral. If policymakers reflect a ‘group emotion’, based on their membership of a coalition with firmly-held beliefs, frame new evidence to be consistent with those beliefs.
  2. Identify ‘windows of opportunity’ to influence individuals and processes. ‘Timing’ can refer to the right time to influence an individual, depending on their current way of thinking, or to act while political conditions are aligned.
  3. Adapt to real-world ‘dysfunctional’ organisations rather than waiting for an orderly process to appear. Form relationships in networks, coalitions, or organisations first, then supply challenging information second. To challenge without establishing trust may be counterproductive.

These tips are designed to produce effective, not manipulative, communicators. They help foster the clearer communication of important policy-relevant evidence, rather than imply that we should bend evidence to manipulate or trick politicians. We argue that it is pragmatic to work on the assumption that people’s beliefs are honestly held, and policymakers believe that their role is to serve a cause greater than themselves. To persuade them to change course requires showing simple respect and seeking ways to secure their trust, rather than simply ‘speaking truth to power’. Effective engagement requires skilful communication and good judgement as much as good evidence.


This is the introduction to our revised and resubmitted paper to the special issue of Palgrave Communications The politics of evidence-based policymaking: how can we maximise the use of evidence in policy? Please get in touch if you are interested in submitting a paper to the series.

Full paper: Cairney Kwiatkowski Palgrave Comms resubmission CLEAN 14.7.17


Filed under agenda setting, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy

How do we get governments to make better decisions?

This is a guest post by Chris Koski (left) and Sam Workman (right), discussing how to use insights from punctuated equilibrium theory to reform government policy making. The full paper has been submitted to the series for Policy and Politics called Practical Lessons from Policy Theories.


Many people assume that the main problem faced by governments is an information deficit. However, the opposite is true. A surfeit of information exists, and institutions have a hard time managing it. At the same time, all the information that exists may be insufficient to define problems well. Institutions also need to develop a capacity to seek out better quality information.

Institutions – from the national government, to state legislatures, to city councils – try to solve the information processing dilemma by delegating authority to smaller subgroups. Delegation increases the information processing capacity of governments by involving more actors to attend to narrower issues.

The delegation of authority is ultimately a delegation of attention. It solves the ‘flow’ problem, but also introduces new ‘filters’. The preferences, interests, and modes of information search of these subgroups all influence the process. Even narrowly focused smaller organizations face limitations in their capacity to search, and are subject to similar forces as the governments which created them – filters for the deluge of information and capacity limitations for information seeking.

Organizational design predisposes institutions to filter information for ideas that support status quo problem definitions – that is, definitions that existed at the time of delegation – and to seek out information based on these status quo understandings.  As a result, despite a desire to expand attention and information processing to adapt to changes in problem characteristics, most institutions look for information that supports their identity.  Institutional problem definitions stay the same even as the problems change.

Governments eventually face trade-offs between the gains made from delegating decision-making to smaller subgroups and the losses associated with coordinating the information generated by those subgroups.

Governments get stuck in the same ruts as when the delegation process started: a status quo bias that doesn’t adjust with changing problem conditions. There is a sense among citizens and academics that governments make bad decisions in part because they respond to the problems of today with the policies of 10 years ago. Government solutions look like hammers in search of nails when they ought to look more like contractors or even urban planners.

Governments should not respond simply by centralizing

When institutions become stultified in their problem definitions, policymakers and citizens often misdiagnose the problem as entirely a coordination problem.  The logic here is that a small group of actors have captured policymaking and are using such capture for their own gain.  This understanding may be true, or may not, but it leads to the “centralization as savior” fallacy.  The idea here is that organizations with broader latitude will be better able to receive a wider variety of information from a broader range of sources.

There are two problems with this strategy. First, centralization might guarantee an outcome, but at the expense of an honest problem search and, likely, at the expense of what we might call policy stability. Second, centralization may offer the opportunity for a broader array of information to bear on policy decisions, but, in practice, will rely on even narrower information filters given the number of issues to which the newly centralized policymaking forum must attend.

More delegation produces fragmentation

The alternative, more delegation, has significant coordination challenges as we find bottlenecks of attention when multiple subsystems bear on decision-points.  Also, simply delegating authority can predispose subsystems to a particular solution, which we want to avoid.

We’d propose: Adaptive governance

  • Design institutions not just to attend to problems, but to be specifically information seeking. For example, NEPA requires that all US federal decision-making regarding the environment undergo some kind of environmental assessment – this can be as simple as saying “the environment will not be harmed” or as complex as an environmental impact statement. At the same time, we’d suggest greater coordination of institutional actions – enhance communication across delegated units, but also build better feedback mechanisms to overarching institutions.
  • Institutions need to listen to the signals that their delegated units give them. When delegated institutions come to similar conclusions regarding similar problems, these are key signals to broader policymaking bodies.  Listening to signals from multiple delegated units allows for expertise to shine.  At the same time, disharmony across delegated units on the same problems is a good indicator of disharmony in information search.  Sometimes institutions respond to this disharmony by attempting to reduce participation in the policy process or cast outliers as simply outliers.  We think this is a bad idea as it exaggerates the acceptability of the status quo.
  • We propose ‘issue bundling’, which allows issues to be less tied up by monolithic problem definitions. Policymaking institutions ought to formally direct delegated institutions to look at the same problem relying upon different expertise. Examples here are climate change or critical infrastructure protection. Creating institutions to deal with these issues is a challenge, given the wide range of information necessary to address each. Institutions can solve the attention problems that emerge from multiple sources by creating specific channels of information. This allows multiple subsystems – e.g. Agriculture, Transportation, or Environmental Protection – to assist institutional decision-making by sorting issue-specific information, e.g. on climate change.

Our solutions address fundamental problems of information processing – the sorting and seeking of information – that are inherent to humans and human-created organizations. However, while governments may be predisposed to prioritize decisions over information, we are optimistic that our recommendations can facilitate better informed policy in the future.


Filed under agenda setting, public policy

Practical Lessons from Policy Theories

  1. Three habits of successful policy entrepreneurs
  2. How do we get governments to make better decisions?
  3. Why advocacy coalitions matter and how to think about them
  4. How can governments better collaborate to address complex problems?
  5. Telling stories that shape public policy
  6. How to navigate complex policy designs
  7. Three ways to encourage policy learning
  8. How to design ‘maps’ for policymakers relying on their ‘internal compass’

Policy influence is impossible to find if you don’t know where to look. Policy theories can help you look in the right places, but they take time to understand.

It’s not realistic to expect people with their own day jobs – such as scientists producing policy-relevant knowledge in other fields – to take the time to use the insights it takes my colleagues a full-time career to appreciate.

So, we need a way to explain those insights in a way that people can pick up and use when they engage in the policy process for the first time. That’s why Chris Weible and I asked a group of policy theory experts to describe the ‘state of the art’ in their field and the practical lessons that they offer.

None of these abstract theories provide a ‘blueprint’ for action (they were designed primarily to examine the policy process scientifically). Instead, they offer one simple insight: you’ll save a lot of energy if you engage with the policy process that exists, not the one you want to see.

Then, they describe variations on the same themes, including:

  1. There are profound limits to the power of individual policymakers: they can only process so much information, have to ignore almost all issues, and therefore tend to share policymaking with many other actors.
  2. You can increase your chances of success if you work with that insight: identify the right policymakers, the ‘venues’ in which they operate, and the ‘rules of the game’ in each venue; build networks and form coalitions to engage in those venues; shape agendas by framing problems and telling good stories, design politically feasible solutions, and learn how to exploit ‘windows of opportunity’ for their selection.

Our next presentation is at the ECPR in Oslo.

 


Filed under agenda setting, Evidence Based Policymaking (EBPM), public policy

We need better descriptions than ‘evidence-based policy’ and ‘policy-based evidence’: the case of UK government ‘troubled families’ policy

Here is the dilemma for ‘evidence-based’ ‘troubled families’ policy: there are many indicators of ‘policy based evidence’ but few (if any) feasible and ‘evidence based’ alternatives.

Viewed from the outside, the ‘troubled families’ (TF) programme looks like a cynical attempt to produce a quick fix to the London riots, stigmatise vulnerable populations, and hoodwink the public into thinking that the central government is controlling local outcomes and generating success.

Viewed from the inside, it is a pragmatic policy solution, informed by promising evidence which needs to be sold in the right way. For the UK government there may seem to be little alternative to this policy, given the available evidence, the need to do something for the long term and to account for itself in a Westminster system in the short term.

So, in this draft paper, I outline this disconnect between interpretations of ‘evidence based policy’ and ‘policy based evidence’ to help provide some clarity on the pragmatic use of evidence in politics:

cairney-offshoot-troubled-families-ebpm-5-9-16

See also:

Governments think it’s OK to use bad evidence to make good policy: the case of the UK Government’s ‘troubled families’

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

In each of these posts, I note that it is difficult to know how, for example, social policy scholars should respond to these issues – but that policy studies help us identify a choice between strategies. In general, pragmatic strategies to influence the use of evidence in policy include: framing issues to catch the attention of policymakers or exploit their biases, identifying where the ‘action’ is in multi-level policymaking systems, and forming coalitions with like-minded and well-connected actors. In other words, to influence rather than just comment on policy, we need to understand how policymakers would respond to external evaluation. So, a greater understanding of the routine motives of policymakers can help produce more effective criticism of their problematic use of evidence. In social policy, there is an acute dilemma about the choice between engagement, to influence and be influenced by policymakers, and detachment, to ensure critical distance. If we choose the latter, we need to think harder about how criticism of PBE makes a difference.


Filed under agenda setting, Evidence Based Policymaking (EBPM), Prevention policy, public policy, UK politics and policy

Governments think it’s OK to use bad evidence to make good policy: the case of the UK Government’s ‘troubled families’

The UK Government’s ‘troubled families’ policy appears to be a classic top-down, evidence-free, and quick emotional reaction to crisis. It developed after riots in England (primarily in London) in August 2011. Within one week, and before announcing an inquiry into them, then Prime Minister David Cameron made a speech linking behaviour directly to ‘thugs’ and immorality – ‘people showing indifference to right and wrong…people with a twisted moral code…people with a complete absence of self-restraint’ – before identifying a breakdown in family life as a major factor (Cameron, 2011a).

Although the development of parenting programmes was already government policy, Cameron used the riots to raise parenting to the top of the agenda:

We are working on ways to help improve parenting – well now I want that work accelerated, expanded and implemented as quickly as possible. This has got to be right at the top of our priority list. And we need more urgent action, too, on the families that some people call ‘problem’, others call ‘troubled’. The ones that everyone in their neighbourhood knows and often avoids …Now that the riots have happened I will make sure that we clear away the red tape and the bureaucratic wrangling, and put rocket boosters under this programme …with a clear ambition that within the lifetime of this Parliament we will turn around the lives of the 120,000 most troubled families in the country.

Cameron reinforced this agenda in December 2011 by stressing the need for individuals and families to take moral responsibility for their actions, and for the state to intervene earlier in their lives to reduce public spending in the long term:

Officialdom might call them ‘families with multiple disadvantages’. Some in the press might call them ‘neighbours from hell’. Whatever you call them, we’ve known for years that a relatively small number of families are the source of a large proportion of the problems in society. Drug addiction. Alcohol abuse. Crime. A culture of disruption and irresponsibility that cascades through generations. We’ve always known that these families cost an extraordinary amount of money…but now we’ve come up with the actual figures. Last year the state spent an estimated £9 billion on just 120,000 families…that is around £75,000 per family.

The policy – primarily of expanding the provision of ‘family intervention’ approaches – is often described as a ‘classic case of policy based evidence’: policymakers cherry pick or tell tall tales about evidence to justify action. It is a great case study for two reasons:

  1. Within this one programme are many different kinds of evidence-use which attract the ire of academic commentators, from an obviously dodgy estimate and performance management system to a more-sincere-but-still-criticised use of evaluations and neuroscience.
  2. It is easy to criticise the UK government’s actions but more difficult to say – when viewing the policy problem from its perspective – what the government should do instead.

In other words, it is useful to note that the UK government is not winning awards for ‘evidence-based policymaking’ (EBPM) in this area, but less useful to deny the politics of EBPM and hold it up to a standard that no government can meet.

The UK Government’s problematic use of evidence

Take your pick from the following ways in which the UK Government has been criticised for its use of evidence to make and defend ‘troubled families’ policy.

Its identification of the most troubled families: cherry picking or inventing evidence

At the heart of the programme is the assertion that we know who the ‘troubled families’ are, what causes their behaviour, and how to stop it. Yet, much of the programme is built on value judgements about feckless parents, tipping the balance from support to sanctions, and unsubstantiated anecdotes about key aspects such as the tendency of ‘worklessness’ or ‘welfare dependency’ to pass from one generation to another.

The UK government’s target of almost 120,000 families was based speculatively on previous Cabinet Office estimates in 2006 that about ‘2% of families in England experience multiple and complex difficulties’. This estimate was based on limited survey data and modelling to identify families who met five of seven criteria relating to unemployment, poor housing, parental education, the mental health of the mother, the chronic illness or disability of either parent, an income below 60% of the median, and an inability to buy certain items of food or clothing.

It then gave locally specific estimates to each local authority and asked them to find that number of families, identifying households with: (1) at least one under-18-year-old who has committed an offence in the last year, or is subject to an ASBO; and/or (2) a child who has been excluded from school permanently, or suspended in three consecutive terms, in a Pupil Referral Unit, off the school roll, or with over 15% unauthorised absences over three consecutive terms; and (3) an adult on out-of-work benefits.

If the household met all three criteria, they would automatically be included. Otherwise, local authorities had the discretion to identify further troubled families meeting two of the criteria and other indicators of concerns about ‘high costs’ of late intervention such as, ‘a child who is on a Child Protection Plan’, ‘Families subject to frequent police call-outs or arrests’, and ‘Families with health problems’ linked to mental health, addiction, chronic conditions, domestic abuse, and teenage pregnancy.

Its measure of success: ‘turning around’ troubled families

The UK government declared almost-complete success without convincing evidence. Success ‘in the last 6 months’ to identify a ‘turned around family’ is measured in two main ways: (1) the child no longer having three exclusions in a row, a reduction in the child offending rate of 33% or ASB rate of 60%, and/or the adult entering a relevant ‘progress to work’ programme; or (2) at least one adult moving from out-of-work benefits to continuous employment. Success was self-declared by local authorities, and both parties had a high incentive to declare it: local authorities received payments of £4,000 per family, and the UK government received a temporary way to declare progress without long-term evidence.

The declaration is in stark contrast to an allegedly suppressed report to the government which stated that the programme had ‘no discernible effect on unemployment, truancy or criminality’. This lack of impact was partly confirmed by FOI requests by The Guardian – demonstrating that at least 8000 families received no intervention, but showed improvement anyway – and analysis by Levitas and Crossley which suggests that local authorities could only identify families by departing from the DCLG’s initial criteria.

Its investment in programmes with limited evidence of success

The UK government’s massive expansion of ‘family intervention projects’, and related initiatives, is based on limited evidence of success from a small sample of people from a small number of pilots. The ‘evidence for the effectiveness of family intervention projects is weak’ and a government-commissioned systematic review suggests that there are no good quality evaluations to demonstrate (well) the effectiveness or value-for-money of key processes such as coordinated service provision. The impact of other interventions, previously with good reputations, has been unclear, such as the Family Nurse Partnership imported from the US which so far has produced ‘no additional short-term benefit’. Overall, Crossley and Lambert suggest that “the weight of evidence surrounding ‘family intervention’ and similar approaches, over the longue durée, actually suggests that the approach doesn’t work”. There is also no evidence to support its heroic claim that spending £10,000 per family will save £65,000.

Its faith in sketchy neuroscientific evidence on the benefits of early intervention

The government is driven partly by a belief in the benefits of early intervention in the lives of children (from 0-3, or even before birth), which is based partly on the ‘now or never’ argument found in key reviews by Munro and Allen (one and two).


Policymakers take liberties with neuroscientific evidence to emphasise the profound effect of stress on early brain development (measured, for example, by levels of cortisol found in hair samples). These accounts underpinning the urgency of early intervention are received far more critically in fields such as social science, neuroscience, and psychology. For example, Wastell and White find no good quality scientific evidence behind the comparison of child brain development reproduced in Allen’s reports.

Now let’s try to interpret and explain these points partly from a government perspective

Westminster politics necessitates this presentation of ‘prevention’ policies

If you strip away the rhetoric, the troubled families programme is a classic attempt at early intervention to prevent poor outcomes. In this general field, it is difficult to know what government policy is – what it stands for and how you measure its success. ‘Prevention’ is a vague aim, and governments combine a commitment to meaningful local discretion with the sense that local actors should be guided by the evidence of ‘what works’ and its applicability to local circumstances.

This approach is not tolerated in Westminster politics, built on the simple idea of accountability in which you know who is in charge and therefore to blame! UK central governments have to maintain some semblance of control because they know that people will try to hold them to account in elections and general debate. This ‘top down’ perspective has an enduring effect. Although prevention policy is vague, individual programmes such as ‘troubled families’ contain enough detail to generate intense debate on central government policy and performance. They also contain elements which emphasise high central direction – sustained ministerial commitment, a determination to demonstrate early success to justify a further rollout of policy, and performance management geared towards specific measurable outcomes – even if the broader aim is to encourage local discretion.

This context helps explain why governments appear to exploit crises to sell existing policies, and pursue ridiculous processes of estimation and performance measurement. They need to display strength, show certainty that they have correctly diagnosed a problem and its solution, and claim success using the ‘currency’ of Westminster politics – and they have to do these things very quickly.

Consequently, for example, they will not worry about some academics complaining about policy based evidence – they are more concerned about their media and public reception and the ability of the opposition to exploit their failures – and few people in politics have the time (that many academics take for granted) to wait for research. This is the lens through which we should view all discussions of the use of evidence in politics and policy.

Unequivocal evidence is impossible to produce and we can’t wait forever

The argument for evidence-based-policy rather than policy-based-evidence suggests that we know what the evidence is. Yet, in this field in particular, there is potential for major disagreement about the ‘bar’ we set for evidence.

Table 1: Three ideal types of EBPM

For some, it relates to a hierarchy of evidence in which randomised control trials (RCTs) and their systematic review are at the top: the aim is to demonstrate that an intervention’s effect was positive, and more positive than another intervention or non-intervention. This requires experiments: to compare the effects of interventions in controlled settings, in ways that are directly comparable with other experiments.

As table 1 suggests, some other academics do not adhere to – and some reject – this hierarchy. This context highlights three major issues for policymakers:

  1. In general, when they seek evidence, they find this debate about how to gather and analyse it (and the implications for policy delivery).
  2. When seeking evidence on interventions, they find some academics using the hierarchy to argue that the ‘evidence for the effectiveness of family intervention projects is weak’. This adherence to a hierarchy to determine research value also doomed a government-commissioned systematic review to failure: the review applied a hierarchy of evidence to its analysis of reports by authors who did not adhere to the same model. The latter tend to be more pragmatic in their research design (and often more positive about their findings), and their government audience rarely adheres to an evidential standard built on a hierarchy. Unless someone gives ground, some researchers will never be satisfied with the available evidence, and elected policymakers are unlikely to listen to them.
  3. The evidence generated from RCTs is often disappointing. The so-far-discouraging experience of the Family Nurse Partnership has a particularly symbolic impact, and policymakers can easily pick up a general sense of uncertainty about the best policies in which to invest.

So, if your main viewpoint is academic, you can easily conclude that the available evidence does not yet justify massive expansion in the troubled families programme (perhaps you might prefer the Scottish approach of smaller scale piloting, or for the government to abandon certain interventions altogether).

However, if you are a UK government policymaker feeling the need to act – and knowing that you always have to make decisions despite uncertainty – you may also feel that there will never be enough evidence on which to draw. Given the problems outlined above, you may as well act now rather than wait for years for little to change.

The ends justify the means

Policymakers may feel that the ends of such policies – investment in early intervention by shifting funds from late intervention – may justify the means, which can include a ridiculous oversimplification of evidence. It may seem almost impossible for governments to find other ways to secure the shift, given the multiple factors which undermine its progress.

Governments sometimes hint at this approach when simplifying key figures – effectively to argue that late intervention costs £9bn while early intervention will only cost £448m – to reinforce policy change: ‘the critical point for the Government was not necessarily the precise figure, but whether a sufficiently compelling case for a new approach was made’.

Similarly the vivid comparison of healthy versus neglected brains provides shocking reference points to justify early intervention. Their rhetorical value far outweighs their evidential value. As in all EBPM, the choice for policymakers is to play the game, to generate some influence in not-ideal circumstances, or hope that science and reason will save the day (and the latter tends to be based on hope rather than evidence). So, the UK appeared to follow the US’ example in which neuroscience ‘was chosen as the scientific vehicle for the public relations campaign to promote early childhood programs more for rhetorical, than scientific reasons’, partly because a focus on, for example, permanent damage to brain circuitry is less abstract than a focus on behaviour.

Overall, policymakers seem willing to build their case on major simplifications and partial truths to secure what they believe to be a worthy programme (although it would be interesting to find out which policymakers actually believe the things they say). If so, pointing out their mistakes or alleging lies can often have a minimal impact (or worse, if policymakers ‘double down’ in the face of criticism).

Implications for academics, practitioners, and ‘policy based evidence’

I have been writing on ‘troubled families’ while encouraging academics and practitioners to describe pragmatic strategies to increase the use of evidence in policy.


Our starting point is relevant to this discussion – since it asks what we should do if policymakers don’t think like academics:

  • They worry more about Westminster politics – their media and public reception and the ability of the opposition party to exploit their failures – than what academics think of their actions.
  • They do not follow the same rules of evidence generation and analysis.
  • They do not have the luxury of uncertainty and time.

Generally, this is a useful lens through which we should view discussions of the realistic use of evidence in politics and policy. Without being pragmatic – recognising that policymakers will never think like scientists, and always face different pressures – we might simply declare ‘policy based evidence’ in all cases. Although a commitment to pragmatism does not solve these problems, at least it prompts us to be more specific about categories of PBE, the criteria we use to identify it, whether our colleagues share a commitment to those criteria, what we can reasonably expect of policymakers, and how we might respond.

In disciplines like social policy we might identify a further issue, linked to:

  1. A tradition of providing critical accounts of government policy to help hold elected policymakers to account. If so, the primary aim may be to publicise key flaws without engaging directly with policymakers to help fix them – and perhaps even to criticise other scholars for doing so – because effective criticism requires critical distance.
  2. A tendency of many other social policy scholars to engage directly in evaluations of government policy, with the potential to influence and be influenced by policymakers.

It is a dynamic that highlights well the difficulty of separating empirical and normative evaluations when critics point to the inappropriate nature of the programmes as they interrogate the evidence for their effectiveness. This difficulty is often more hidden in other fields, but it is always a factor.

For example, Parr noted in 2009 that ‘despite ostensibly favourable evidence … it has been argued that the apparent benign-welfarism of family and parenting-based antisocial behaviour interventions hide a growing punitive authoritarianism’. The latter’s most extreme telling is by Garrett in 2007, who compares residential FIPs (‘sin bins’) to post-war Dutch programmes resembling Nazi social engineering and criticises social policy scholars for giving them favourable evaluations – an argument criticised in turn by Nixon and Bennister et al.

For present purposes, note Nixon’s identification of ‘an unusual case of policy being directly informed by independent research’, referring to the possible impact of favourable evaluations of FIPs on the UK Government’s move away from (a) an intense focus on anti-social behaviour and sanctions towards (b) greater support. While it would be a stretch to suggest that academics can set government agendas, they can at least enhance their impact by framing their analysis in a way that secures policymaker interest. If academics seek influence, rather than critical distance, they may need to get their hands dirty: seeking to understand policymakers to find alternative policies that still give them what they want.


Filed under Prevention policy, public policy, UK politics and policy