Tag Archives: troubled families

The UK government’s imaginative use of evidence to make policy

This post describes a new article published in British Politics (Open Access). Please find:

(1) A super-exciting video/audio powerpoint I use for a talk based on the article

(2) The audio alone (link)

(3) The powerpoint to download, so that the weblinks work (link) or the ppsx/ presentation file in case you are having a party (link)

(4) A written/ tweeted discussion of the main points

In retrospect, I think the title was too subtle and clever-clever. I wanted to convey two meanings: 'imaginative' as a euphemism for ridiculous (and often cynical) uses of evidence, and 'imaginative' in the sense that a government has to be imaginative with evidence. The latter itself has two meanings: imaginative (1) in the presentation and framing of an evidence-informed agenda, and (2) when facing pressure to go beyond the evidence and envisage policy outcomes.

So, I describe two cases in which the government's use of evidence seems cynical:

  1. Declaring complete success in turning around the lives of ‘troubled families’
  2. Exploiting vivid neuroscientific images to support ‘early intervention’

Then I describe more difficult cases in which supportive evidence is not clear:

  1. Family intervention project evaluations are of limited value and only tentatively positive
  2. Successful projects like FNP and Incredible Years have limited applicability or ‘scalability’

As scientists, we can shrug our shoulders about the uncertainty, but elected policymakers in government have to do something. So what do they do?

At this point in the article it will look like I have become an apologist for David Cameron's government. Instead, I'm trying to demonstrate the value of comparing sympathetic/ unsympathetic interpretations and to highlight the policy problem from a policymaker's perspective:

[Image: discussion section extract from Cairney (2018), British Politics]

I suggest that they use evidence in a mix of ways to: describe an urgent problem, present an image of success and governing competence, and provide cover for more evidence-informed long term action.

The result is the appearance of top-down 'muscular' government and 'a tendency for policy to change as it is implemented, such as when mediated by local authority choices and social workers maintaining a commitment to their professional values when delivering policy'.

I conclude by arguing that ‘evidence-based policy’ and ‘policy-based evidence’ are political slogans with minimal academic value. The binary divide between EBP/ PBE distracts us from more useful categories which show us the trade-offs policymakers have to make when faced with the need to act despite uncertainty.

[Image: Table 1 from Cairney (2018), British Politics]

As such, it forms part of a far wider body of work …

In both cases, the common theme is that, although (1) the world of top-down central government gets most attention, (2) central governments don’t even know what problem they are trying to solve, far less (3) how to control policymaking and outcomes.

In that wider context, it is worth comparing this talk with the one I gave at the IDS (which, I reckon, is a good primer for – or prequel to – the UK talk):

See also:

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

Why doesn’t evidence win the day in policy and policymaking?

(found by searching for early intervention)

See also:

Here’s why there is always an expectations gap in prevention policy

Social investment, prevention and early intervention: a ‘window of opportunity’ for new ideas?

(found by searching for prevention)

Powerpoint for guest lecture: Paul Cairney UK Government Evidence Policy


Filed under Evidence Based Policymaking (EBPM), POLU9UK, Prevention policy, UK politics and policy

Why doesn’t evidence win the day in policy and policymaking?

Politics has a profound influence on the use of evidence in policy, but we need to look 'beyond the headlines' for a sense of perspective on its impact.

It is tempting for scientists to identify the pathological effect of politics on policymaking, particularly after high profile events such as the 'Brexit' vote in the UK and the election of Donald Trump as US President. We have allegedly entered an era of 'post-truth politics' in which ideology and emotion trump evidence and expertise (a story told many times at events like this), particularly when issues are salient.

Yet, most policy is processed out of this public spotlight, because the flip side of high attention to one issue is minimal attention to most others. Science has a crucial role in this more humdrum day-to-day business of policymaking, which is far more important than it is visible. Indeed, this lack of public visibility can help many actors secure a privileged position in the policy process (and further exclude citizens).

In some cases, experts are consulted routinely. There is often a ‘logic’ of consultation with the ‘usual suspects’, including the actors most able to provide evidence-informed advice. In others, scientific evidence is often so taken for granted that it is part of the language in which policymakers identify problems and solutions.

In that context, we need better explanations of an ‘evidence-policy’ gap than the pathologies of politics and egregious biases of politicians.

To understand this process, and the apparent contradiction between excluded and privileged experts, consider the role of evidence in politics and policymaking from three different perspectives.

The perspective of scientists involved primarily in the supply of evidence

Scientists produce high quality evidence, only for politicians to ignore it or, even worse, distort its message to support their ideologically-driven policies. If they expect 'evidence-based policymaking', they soon become disenchanted and conclude that 'policy-based evidence' is more likely. This perspective has long been expressed in scientific journals and commentaries, but has taken on new significance following 'Brexit' and Trump.

The perspective of elected politicians

Elected politicians are involved primarily in managing government and maximising public and organisational support for policies. So, scientific evidence is one piece of a large puzzle. They may begin with a manifesto for government and, if elected, feel an obligation to carry it out. Evidence may play a part in that process but the search for evidence on policy solutions is not necessarily prompted by evidence of policy problems.

Further, 'evidence based policy' is one of many governance principles that politicians feel the need to juggle. For example, in Westminster systems, ministers may try to delegate policymaking to foster 'localism' and/ or pragmatic policymaking, but also intervene to appear to be in control of policy, to foster a sense of accountability built on an electoral imperative. The likely mix of delegation and intervention seems almost impossible to predict, and this dynamic has a knock-on effect for evidence-informed policy. In some cases, central governments roll out the same basic policy intervention and limit local discretion; in others, they identify broad outcomes and invite other bodies to gather evidence on how best to meet them. These differences in approach can have profound consequences for the models of evidence-informed policy available to us (see the example of Scottish policymaking).

Political science and policy studies provide a third perspective

Policy theories help us identify the relationship between evidence and policy by showing that a modern focus on ‘evidence-based policymaking’ (EBPM) is one of many versions of the same fairy tale – about ‘rational’ policymaking – that have developed in the post-war period. We talk about ‘bounded rationality’ to identify key ways in which policymakers or organisations could not achieve ‘comprehensive rationality’:

  1. They cannot separate values and facts.
  2. They have multiple, often unclear, objectives which are difficult to rank in any meaningful way.
  3. They have to use major shortcuts to gather a limited amount of information in a limited time.
  4. They can’t make policy from the ‘top down’ in a cycle of ordered and linear stages.

Limits to ‘rational’ policymaking: two shortcuts to make decisions

We can sum up the first three bullet points with one statement: policymakers have to try to evaluate and solve many problems without the ability to understand what they are, how they feel about them as a whole, and what effect their actions will have.

To do so, they use two shortcuts: 'rational', by pursuing clear goals and prioritising certain kinds and sources of information, and 'irrational', by drawing on emotions, gut feelings, deeply held beliefs, habits, and the familiar to make decisions quickly.

Consequently, the focus of policy theories is on the links between evidence, persuasion, and framing issues to produce or reinforce a dominant way to define policy problems. Successful actors combine evidence and emotional appeals or simple stories to capture policymaker attention, and/ or help policymakers interpret information through the lens of their strongly-held beliefs.

Scientific evidence plays its part, but scientists often make the mistake of trying to bombard policymakers with evidence when they should be trying to (a) understand how policymakers understand problems, so that they can anticipate their demand for evidence, and (b) frame their evidence according to the cognitive biases of their audience.

Policymaking in ‘complex systems’ or multi-level policymaking environments

Policymaking takes place in a less ordered, less hierarchical, and less predictable environment than suggested by the image of the policy cycle. Such environments are made up of:

  1. a wide range of actors (individuals and organisations) influencing policy at many levels of government
  2. a proliferation of rules and norms followed by different levels or types of government
  3. close relationships (‘networks’) between policymakers and powerful actors
  4. a tendency for certain beliefs or ‘paradigms’ to dominate discussion
  5. shifting policy conditions and events that can prompt policymaker attention to lurch at short notice.

These five properties – plus a 'model of the individual' built on a discussion of 'bounded rationality' – make up the building blocks of policy theories (many of which I summarise in 1000 Word posts). I say this partly to aid interdisciplinary conversation: of course, each theory has its own literature and jargon, and it is difficult to compare and combine their insights, but if you are trained in a different discipline it's unfair to ask you to devote years of your life to studying policy theory to end up at this point.

To show that policy theories have a lot to offer, I have been trying to distil their collective insights into a handy guide – using this same basic format – that you can apply to a variety of different situations, from explaining painfully slow policy change in some areas but dramatic change in others, to highlighting ways in which you can respond effectively.

We can use this approach to help answer many kinds of questions. With my Southampton gig in mind, let’s use some examples from public health and prevention.

Why doesn’t evidence win the day in tobacco policy?

My colleagues and I try to explain why it takes so long for the evidence on smoking and health to have a proportionate impact on policy. Usually, at the back of my mind, is a public health professional audience trying to work out why policymakers don’t act quickly or effectively enough when presented with unequivocal scientific evidence. More recently, they wonder why there is such uneven implementation of a global agreement – the WHO Framework Convention on Tobacco Control – that almost every country in the world has signed.

We identify three conditions under which evidence will ‘win the day’:

  1. Actors are able to use scientific evidence to persuade policymakers to pay attention to, and shift their understanding of, policy problems. In leading countries, it took decades to command attention to the health effects of smoking, reframe tobacco primarily as a public health epidemic (not an economic good), and generate support for the most effective evidence-based solutions.
  2. The policy environment becomes conducive to policy change. A new and dominant frame helps give health departments (often in multiple venues) a greater role; health departments foster networks with public health and medical groups at the expense of the tobacco industry; and, they emphasise the socioeconomic conditions supportive of tobacco control – reductions in smoking prevalence, in opposition to tobacco control, and in the economic benefits of tobacco.
  3. Actors exploit 'windows of opportunity' successfully. A supportive frame and policy environment maximises the chances of high attention to a public health epidemic and provides the motive and opportunity for policymakers to select relatively restrictive policy instruments.

So, scientific evidence is a necessary but insufficient condition for major policy change. Key actors do not simply respond to new evidence: they use it as a resource to further their aims, to frame policy problems in ways that will generate policymaker attention, and underpin technically and politically feasible solutions that policymakers will have the motive and opportunity to select. This remains true even when the evidence seems unequivocal and when countries have signed up to an international agreement which commits them to major policy change. Such commitments can only be fulfilled over the long term, when actors help change the policy environment in which these decisions are made and implemented. So far, this change has not occurred in most countries (or, in other aspects of public health in the UK, such as alcohol policy).

Why doesn’t evidence win the day in prevention and early intervention policy?

UK and devolved governments draw on health and economic evidence to make a strong and highly visible commitment to preventive policymaking, in which the aim is to intervene earlier in people’s lives to improve wellbeing and reduce socioeconomic inequalities and/ or public sector costs. This agenda has existed in one form or another for decades without the same signs of progress we now associate with areas like tobacco control. Indeed, the comparison is instructive, since prevention policy rarely meets the three conditions outlined above:

  1. Prevention is a highly ambiguous term and many actors make sense of it in many different ways. There is no equivalent to a major shift in problem definition for prevention policy as a whole, and little agreement on how to determine the most effective or cost-effective solutions.
  2. A supportive policy environment is far harder to identify. Prevention policy cross-cuts many policymaking venues at many levels of government, with little evidence of 'ownership' by key venues. Consequently, there are many overlapping rules on how and from whom to seek evidence. Networks are diffuse and hard to manage. There is no dominant way of thinking across government (although the Treasury's 'value for money' focus is key currency across departments). There are many socioeconomic indicators of policy problems but little agreement on how to measure them or which measures to privilege (particularly when predicting future outcomes).
  3. The ‘window of opportunity’ was to adopt a vague solution to an ambiguous policy problem, providing a limited sense of policy direction. There have been several ‘windows’ for more specific initiatives, but their links to an overarching policy agenda are unclear.

These limitations help explain slow progress in key areas. The absence of an unequivocal frame, backed strongly by key actors, leaves policy change vulnerable to successful opposition, especially in areas where early intervention has major implications for redistribution (taking from existing services to invest in others) and personal freedom (encouraging or obliging behavioural change). The vagueness and long term nature of policy aims – to solve problems that often seem intractable – makes them uncompetitive, and often undermined by more specific short term aims with a measurable pay-off (as when, for example, funding for public health loses out to funding to shore up hospital management). It is too easy to reframe existing policy solutions as preventive if the definition of prevention remains slippery, and too difficult to demonstrate the population-wide success of measures generally applied to high risk groups.

What happens when attitudes to two key principles – evidence based policy and localism – play out at the same time?

A lot of discussion of the politics of EBPM assumes that there is something akin to a scientific consensus on which policymakers do not act proportionately. Yet, in many areas – such as social policy and social work – there is great disagreement on how to generate and evaluate the best evidence. Broadly speaking, a hierarchy of evidence built on ‘evidence based medicine’ – which has randomised control trials and their systematic review at the top, and practitioner knowledge and service user feedback at the bottom – may be completely subverted by other academics and practitioners. This disagreement helps produce a spectrum of ways in which we might roll-out evidence based interventions, from an RCT-driven roll-out of the same basic intervention to a storytelling driven pursuit of tailored responses built primarily on governance principles (such as to co-produce policy with users).

At the same time, governments may be wrestling with their own governance principles, including EBPM but also regarding the most appropriate balance between centralism and localism.

If you put both concerns together, you have a variety of possible outcomes (and a temptation to ‘let a thousand flowers bloom’) and a set of competing options (outlined in table 1), all under the banner of ‘evidence based’ policymaking.

[Image: Table 1, three ideal types of EBBP]

What happens when a small amount of evidence goes a very long way?

So, even if you imagine a perfectly sincere policymaker committed to EBPM, you’d still not be quite sure what they took it to mean in practice. If you assume this commitment is a bit less sincere, and you add in the need to act quickly to use the available evidence and satisfy your electoral audience, you get all sorts of responses based in some part on a reference to evidence.

One fascinating case is the UK Government's 'troubled families' programme, which combined bits and pieces of evidence with ideology and a Westminster-style accountability imperative, to produce:

  • The argument that the London riots were caused by family breakdown and bad parenting.
  • The use of proxy measures to identify the most troubled families
  • The use of superficial performance management to justify notionally extra expenditure for local authorities
  • The use of evidence in a problematic way, from exaggerating the success of existing ‘family intervention projects’ to sensationalising neuroscientific images related to brain development in deprived children …

[Image: 'normal brain' scan comparison]


In other words, some governments feel the need to dress up their evidence-informed policies in a language appropriate to Westminster politics. Unless we understand this language, and the incentives for elected policymakers to use it, we will fail to understand how to act effectively to influence those policymakers.

What can you do to maximise the use of evidence?

When you ask the generic question you can generate a set of transferable strategies to engage in policymaking:

[Image: how to be heard]

[Image: EBPM – 5 things to do]

Yet, as these case studies of public health and social policy suggest, the question lacks sufficient meaning when applied to real world settings. Would you expect the advice that I give to (primarily) natural scientists (primarily in the US) to be identical to advice for social scientists in specific fields (in, say, the UK)?

No, you’d expect me to end with a call for more research! See for example this special issue in which many scholars from many disciplines suggest insights on how to maximise the use of evidence in policy.

[Image: Palgrave C special issue]


Filed under Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, tobacco, tobacco policy

What do you do when 20% of the population causes 80% of its problems? Possibly nothing.

[Image: Caspi et al. abstract]

Avshalom Caspi and colleagues have used the 45-year ‘Dunedin’ study in New Zealand to identify the ‘large economic burden’ associated with ‘a small segment of the population’. They don’t quite achieve the 20%-causes-80% mark, but suggest that 22% of the population account disproportionately for the problems that most policymakers would like to solve, including unhealthy, economically inactive, and criminal behaviour. Most importantly, they discuss some success in predicting such outcomes from a 45-minute diagnostic test of 3 year olds.

Of course, any such publication will prompt major debates about how we report, interpret, and deal with such information, and these debates tend to get away from the original authors as soon as they publish and others report (follow the tweet thread):

This is true even though the authors have gone to unusual lengths to show the many ways in which you could interpret their figures. Theirs is a politically aware report, using some of the language of elected politicians but challenging simple responses. You can see this in their discussion which has a lengthy list of points about the study’s limitations.

The ambiguity dilemma: more evidence does not produce more agreement

‘The most costly adults in our cohort started the race of life from a starting block somewhere behind the rest, and while carrying a heavy handicap in brain health’.

The first limitation is that evidence does not help us adjudicate between competing attempts to define the problem. For some, it reinforces the idea of an ‘underclass’ or small collection of problem/ troubled families that should be blamed for society’s ills (it’s the fault of families and individuals). For others, it reinforces the idea that socio-economic inequalities harm the life chances of people as soon as they are born (it is out of the control of individuals).

The intervention dilemma: we know more about the problem than its solution

The second limitation is that this study tells us a lot about a problem but not its solution. Perhaps there is some common ground on the need to act, and to invest in similar interventions, but:

  1. The evidence on the effectiveness of solutions is not as strong or systematic as this new evidence on the problem.
  2. There are major dilemmas involved in ‘scaling up’ such solutions and transferring them from one area to another.
  3. The overall ‘tone’ of debate still matters to policy delivery, to determine for example if any intervention should be punitive and compulsory (you will cause the problem, so you have to engage with the solution) or supportive and voluntary (you face disadvantages, so we’ll try to help you if you let us).

The moral dilemma: we may only pay attention to the problem if there is a feasible solution

Prevention and early intervention policy agendas often seem to fail because the issues they raise seem too difficult to solve. Governments make the commitment to 'prevention' in the abstract but 'do not know what it means or appreciate the scale of their task'.

A classic policymaker heuristic described by Kingdon is that policymakers only pay attention to problems they think they can solve. So, they might initially show enthusiasm, only to lose interest when problems seem intractable or there is high opposition to specific solutions.

This may be true of most policies, but prevention and early intervention also seem to magnify the big moral question that can stop policy in its tracks: to what extent is it appropriate to intervene in people’s lives to change their behaviour?

Some may vocally oppose interventions based on their concern about the controlling nature of the state, particularly when it intervenes to prevent (say, criminal) behaviour that will not necessarily occur. It may be easier to make the case for intervening to help children, but difficult to look like you are not second guessing their parents.

Others may quietly oppose interventions based on an unresolved economic question: does it really save money to intervene early? Put bluntly, a key ‘economic burden’ relates to population longevity; the ‘20%’ may cause economic problems in their working years but die far earlier than the 80%. Put less bluntly by the authors:

'This is an important question because the health-care burden of developed societies concentrates in older age groups. To the extent that factors such as smoking, excess weight and health problems during midlife foretell health-care burden and social dependency, findings here should extend to later life (keeping in mind that midlife smoking, weight problems and health problems also forecast premature mortality)'.

So, policymakers initially find that 'early intervention' is a valence issue only in the abstract – who wouldn't want to intervene as early as possible in a child's life to protect them or improve their life chances? – but not when they try to deliver concrete policies.

The evidence-based policymaking dilemma

Overall, we are left with the sense that even the best available evidence of a problem may not help us solve it. Choosing to do nothing may be just as ‘evidence based’ as choosing a solution with minimal effects. Choosing to do something requires us to use far more limited evidence of solution effectiveness and to act in the face of high uncertainty. Add into the mix that prevention policy does not seem to be particularly popular and you might wonder why any policymaker would want to do anything with the best evidence of a profound societal problem.

 


Filed under Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

Imagine this as your 'early intervention' policy choice: (a) a universal and non-stigmatising programme for all parents/ children, with minimal evidence of effectiveness, high cost, and potential public opposition to the state intervening in family life; or (b) a targeted, stigmatising programme for a small number, with more evidence, less cost, but the sense that you are not really intervening 'early' (instead, you are waiting for problems to arise before you intervene). What would you do, and how would you sell your choice to the public?

I ask this question because 'early intervention' seems to be the classic valence issue with a twist. Most people seem to want it in the abstract: isn't it best to intervene as early as possible in a child's life to protect them or improve their life chances?

However, profound problems or controversies arise when governments try to pursue it. There are many more choices than I presented, but the same basic trade-offs arise in each case. So, at the start, it looks like you have lucked onto a policy that almost everyone loves. At the end, you realise that you can’t win. There is no such thing as a valence issue at the point of policy choice and delivery.

To expand on these dilemmas in more depth, I compare cases of Scottish and UK Government ‘families policies’. In previous posts, I portrayed their differences – at least in the field of prevention and early intervention policies – as more difficult to pin down than you might think. Often, they either say the same things but ‘operationalise’ them in very different ways, or describe very different problems then select very similar solutions.

This basic description sums up very similar waves of key ‘families policies’ since devolution: an initial focus on social inclusion, then anti-social behaviour, followed by a contemporary focus on ‘whole family’ approaches and early intervention. I will show how they often go their own ways, but note the same basic context for choice, and similar choices, which help qualify that picture.

Early intervention & prevention policies are valence issues …

A valence (or 'motherhood and apple pie') issue is one in which you can generate huge support because the aim seems, to most people, to be obviously good. Broad aims include 'freedom' and 'democracy'. In the UK, specific aims include a national health service free at the point of use. We often focus on valence issues to highlight the importance of a political party's or leader's image of governing competence: it is not so much what we want (when the main parties support very similar things), but whom we trust to deliver it.

Early intervention seems to fit the bill: who would want to intervene late, or too late, in someone's life when you can intervene early, to boost their life chances at as early a stage as possible? All we have to do is work out how to do it well, with reference to some good evidence. Yet, as I discuss below, things get complicated as soon as we consider the types of early intervention available, generally described as a rough spectrum from primary (stop a problem occurring, with a focus on the whole population – like a virus inoculation), to secondary (address a problem at an early stage, using proxy indicators to identify high-risk groups), to tertiary (stop a problem getting worse in already affected groups).

Similarly, look at how Emily St Denny and I describe prevention policy. Would many people object to the basic principles?

"In the name of prevention, the UK and Scottish Governments propose to radically change policy and policymaking across the whole of government. Their deceptively simple definition of 'prevention policy' is: a major shift in resources, from the delivery of reactive public services to solve acute problems, to the prevention of those problems before they occur. The results they promise are transformative, to address three crises in politics simultaneously: a major reduction in socioeconomic inequalities by focusing on their 'root causes'; a solution to unsustainable public spending which is pushing public services to breaking point; and, new forms of localised policymaking, built on community and service user engagement, to restore trust in politics".

… but the evidence on their effectiveness is inconvenient …

A good simple rule about ‘evidence-based policymaking’ is that there is never a ‘magic bullet’ to tell you what to do or take the place of judgement. Politics is about making choices which benefit some people while others lose out. You can use evidence to help clarify those choices, but not produce a ‘technical’ solution. A further rule with ‘wicked’ problems is that the evidence is not good enough even to generate clarity about the cause of the problem. Or, you simply find out things you don’t want to hear.

Early intervention seems to be a good candidate for the latter, for three main reasons:

  1. Very few interventions live up to high evidence standards

There are two main types of relevant ‘evidence based’ interventions in this field. The first are ‘family intervention projects’ (FIPs). They generally focus on low income, often lone parent, families at risk of eviction linked to factors such as antisocial behaviour, and provide two forms of intervention: intensive 24/7 support, including after school clubs for children and parenting skills classes, and treatment for addiction or depression in some cases, in dedicated core accommodation with strict rules on access and behaviour; and an outreach model of support and training. The evidence of success comes from evaluation and a counterfactual: this intervention is expensive, but we think that it would have cost far more money and heartache if we had not intervened to prevent (for example) family homelessness. There is generally no randomised control trial (RCT) to establish the cause of improved outcomes, or demonstrate that those outcomes would not have happened without an intervention of this sort.

The second are projects imported from other countries (primarily the US and Australia) based on their reputation for success. This reputation has been generated according to evidential rules associated with 'evidence based medicine' (EBM), in which there is relatively strong adherence to a hierarchy of evidence, with RCTs and their systematic review at the top, and the belief that there should be 'fidelity' to programmes to make sure that the 'dosage' of the intervention is delivered properly and its effect measured. Key examples include the Family Nurse Partnership (although its first UK RCT evaluation was not promising), Triple P (although James Coyne has his doubts!), and Incredible Years (but note the importance of 'indicated' versus 'selective' programmes, below). In this approach, there may be more quantitative evidence of success, but it is still difficult to know if the project can be transferred effectively and if its success can be replicated in another country with very different political drivers, problems, and levels of existing services. We know that some interventions are associated with positive outcomes, but we struggle to establish definitively that they caused them (solely, separate from their context).

  2. The evidence on 'scaling up' for primary prevention is relatively weak

Kenneth Dodge (2009) sums up a general problem with primary prevention in this field. It is difficult to see much evidence of success because: there are few examples of taking effective specialist projects ‘to scale’; there are major issues around ‘fidelity’ to the original project when you scale up (including the need to oversee a major expansion in well-trained practitioners); and, it is difficult to predict the effect of a programme, which showed promise when applied to one population, to a new and different population.

  3. The evidence on secondary early intervention is also weak

This point about different populations with different motivations is demonstrated in a more recent (published 2014) study by Stephen Scott et al of two Incredible Years interventions – to address ‘oppositional defiant disorder symptoms and antisocial personality character traits’ in children aged 3-7 (for a wider discussion of such programmes see the Early Intervention Foundation’s Foundations for life: what works to support parent child interaction in the early years?).

They highlight a classic dilemma in early intervention: the evidence of effectiveness is only clear when children have been clinically referred (‘indicated approach’), but unclear when children have been identified as high risk using socioeconomic predictors (‘selective approach’):

An indicated approach is simpler to administer, as there are fewer children with severe problems, they are easier to identify, and their parents are usually prepared to engage in treatment; however, the problems may already be too entrenched to treat. In contrast, a selective approach targets milder cases, but because problems are less established, whole populations have to be screened and fewer cases will go on to develop serious problems.

For our purposes, this may represent the most inconvenient form of evidence on early intervention: you can intervene early on the back of very limited evidence of likely success, or have a far higher likelihood of success when you intervene later, when you are running out of time to call it ‘early intervention’.

… so governments have to make and defend highly ‘political’ choices …

I think this is key context in which we can try to understand the often-different choices by the UK and Scottish Governments. Faced with the same broad aim, to intervene early to prevent poor outcomes, the same uncertainty and lack of evidence that their interventions will produce the desired effect, and the same need to DO SOMETHING rather than wait for the evidence that may never arise, what do they do?

Both governments often did remarkably similar things before they did different things

From the late 1990s, both governments placed primary emphasis initially on a positive social inclusion agenda, followed by a relatively negative focus on anti-social behaviour (ASB), before a renewed focus on the social determinants of inequalities and the use of early intervention to prevent poor outcomes.

Both governments link families policies strongly to parenting skills, reinforcing the idea that parents are primarily responsible for the life chances of their children.

Both governments talk about getting away from deficit models of intervention (the Scottish Government in particular focuses on the 'assets' of individuals, families, and communities) but use deficit-model proxies to identify families in need of support, including: lone parenthood, debt problems, ill health (including disability and depression), and at least one member subject to domestic abuse or intergenerational violence, as well as professional judgements on the 'chaotic' or 'dysfunctional' nature of family life and of the likelihood of 'family breakdown' when, for example, a child is taken into care.

So, when we consider their headline-grabbing differences, note this common set of problems and drivers, and similar responses.

… and selling their early intervention choices is remarkably difficult …

Although our starting point was valence politics, prevention and early intervention policies are incredibly hard to get off the ground. As Emily St Denny and I describe elsewhere, when policymakers ‘make a sincere commitment to prevention, they do not know what it means or appreciate the scale of their task. They soon find a set of policymaking constraints that will always be present. When they ‘operationalise’ prevention, they face several fundamental problems, including: the identification of ‘wicked’ problems which are difficult to define and seem impossible to solve; inescapable choices on how far they should go to redistribute income, distribute public resources, and intervene in people’s lives; major competition from more salient policy aims which prompt them to maintain existing public services; and, a democratic system which limits their ability to reform the ways in which they make policy. These problems may never be overcome. More importantly, policymakers soon think that their task is impossible. Therefore, there is high potential for an initial period of enthusiasm and activity to be replaced by disenchantment and inactivity, and for this cycle to be repeated without resolution’.

These constraints refer to the broad idea of prevention policy, while specific policies can involve different drivers and constraints. With general prevention policy, it is difficult to know what government policy is and how you measure its success. ‘Prevention’ is vague, plus governments encourage local discretion to adapt the evidence of ‘what works’ to local circumstances.

Governments don’t get away with this regarding specific policies. Instead, Westminster politics is built on a simple idea of accountability in which you know who is in charge and therefore to blame. UK central governments have to maintain some semblance of control because they know that people will try to hold them to account in elections and general debate. This ‘top down’ perspective has an enduring effect, particularly in the UK, but also the Scottish, government.

… so the UK Government goes for it and faces the consequences ….

‘Troubled Families’ in England: the massive expansion of secondary prevention?

So, although prevention policy is vague, individual programmes such as ‘troubled families’ contain enough detail to generate intense debate on central government policy and performance and contain elements which emphasise high central direction, including sustained ministerial commitment, a determination to demonstrate early success to justify a further rollout of policy, and performance management geared towards specific measurable short term outcomes – even if the broader aim is to encourage local discretion and successful long term outcomes.

In the absence of unequivocally supportive evidence (which may never appear), the UK government relied on a crisis (the London riots in 2011) to sell policy, and ridiculous processes of estimation of the size of the problem and performance measurement to sell the success of its solution. In this system, ministers perceive the need to display strength, show certainty that they have correctly diagnosed a problem and its solution, and claim success using the ‘currency’ of Westminster politics – and to do these things far more quickly than the people gathering evidence of more substantive success. There is a lot of criticism of the programme in terms of its lack, or cynical use, of evidence but little of it considers policy from an elected government’s perspective.

…while the Scottish Government is more careful, but faces unintended consequences

This particular UK Government response has no parallel in Scotland. The UK Government is far more likely than its Scottish counterpart to link families policies to a moral agenda in response to crisis, and there is no Scottish Government equivalent to ‘payment by results’ and massive programme expansion. Instead, it continued more modest roll-outs in partnership with local public bodies. Indeed, if we ‘zoom in’ to this one example, at this point in time, the comparison confirms the idea of a ‘Scottish Approach’ to policy and policymaking.

Yet, the Scottish Government has not solved the problems I describe in this post: it has not found an alternative ‘evidence based’ way to ‘scale up’ early intervention significantly and move from secondary/ tertiary forms of prevention to the more universal/ primary initiatives that you might associate intuitively with prevention policy.

Instead, its different experiences have highlighted different issues. For example, its key vehicle for early intervention and prevention is the ‘collaborative’ approach, such as in the Early Years Collaborative. Possibly, it represents the opposite of the UK’s attempt to centralise and performance-manage-the-hell-out-of the direction of major expansion.

[Image: Table 1, three ideal types of EBBP]

Certainly, with this approach, your main aim is not to generate evidence of the success of interventions – at least not in the way we associate with 'evidence based medicine', randomised control trials, and the star ratings developed by the Early Intervention Foundation. Rather, the aim is to train local practitioners to use existing evidence and adapt it to local circumstances, experimenting as you go, and gathering/using data on progress in ways not associated with, for example, the family nurse partnership.

So, in terms of the discussion so far, perhaps its main advantage is that a government does not have to sell its political choices (it is more of a delivery system than a specific intervention) or back them up with evidence of success elsewhere. In the absence of much public, media, or political party attention, maybe it’s a nice pragmatic political solution built more on governance principles than specific evidence.

Yet, despite our fixation with the constitution, some policy issues do occasionally get discussed. For our purposes, the most relevant is the ‘named person’ scheme because it looks like a way to ‘scale up’ an initiative to support a universal or primary prevention approach and avoid stigmatising some groups by offering a service to everyone (in this respect, it is the antithesis to ‘troubled families’). In this case, all children in Scotland (and their parents or guardians) get access to a senior member of a public service, and that person acts as a way to ‘join up’ a public sector response to a child’s problems.

Interestingly, this universal approach has its own problems. 'Troubled families' sets up a distinction between troubled/ untroubled to limit its proposed intervention in family life. Its problem is the potential to stigmatise and demoralise 'troubled' families. 'Named person' shows the potential for greater outcry when governments try not to identify and stigmatise specific families. The scheme is largely a response to the continuous suggestion – made after high profile cases of child abuse or neglect – that children can suffer when no agency takes overall responsibility for their care, but it has been opposed as an excessive infringement on normal family life and data protection, successfully enough to delay its implementation.

[Update 20.9.19: Named person scheme scrapped by Scottish Government]

The punchline to early intervention as a valence issue

Problems arise almost instantly when you try to turn a valence issue into something concrete. A vague and widely-supported policy, to intervene early to prevent bad outcomes, becomes a set of policy choices based on how governments frame the balance between ideology, stigma, and the evidence of the impact and cost-effectiveness of key interventions (which is often very limited).

Their experiences are not always directly comparable, but the UK and Scottish Governments have helped show us the pitfalls of concrete approaches to prevention and early intervention. They help us show that your basic policy choices include: (a) targeted programmes which increase stigma; (b) 'indicated' approaches which don't always look like early intervention; (c) 'selective' approaches which seem to be less effective despite intervening at an earlier stage; (d) universal programmes which might cross a notional line between the state and the family; and (e) approaches which focus primarily on local experimentation with uncertain outcomes.

None of these approaches provide a solution to the early intervention dilemmas that all governments face, and there is no easy way to choose between approaches. We can make these choices more informed and systematic, by highlighting how all of the pieces of the jigsaw fit together, and somehow comparing their intended and unintended consequences. However, this process does not replace political judgement – and quite right too – because there is no such thing as a valence issue at the point of policy choice and delivery.

See also:

Paul Cairney (2019) ‘The UK government’s imaginative use of evidence to make policy’, British Politics, 14, 1, 1-22 Open Access PDF

Paul Cairney and Emily St Denny (in press, January 2020) Why Isn’t Government Policy More Preventive? (Oxford: Oxford University Press) Preview Introduction Preview Conclusion

Filed under agenda setting, Evidence Based Policymaking (EBPM), Prevention policy, public policy, Scottish politics, UK politics and policy

Governments think it’s OK to use bad evidence to make good policy: the case of the UK Government’s ‘troubled families’

The UK Government’s ‘troubled families’ policy appears to be a classic top-down, evidence-free, and quick emotional reaction to crisis. It developed after riots in England (primarily in London) in August 2011. Within one week, and before announcing an inquiry into them, then Prime Minister David Cameron made a speech linking behaviour directly to ‘thugs’ and immorality – ‘people showing indifference to right and wrong…people with a twisted moral code…people with a complete absence of self-restraint’ – before identifying a breakdown in family life as a major factor (Cameron, 2011a).

Although the development of parenting programmes was already government policy, Cameron used the riots to raise parenting to the top of the agenda:

We are working on ways to help improve parenting – well now I want that work accelerated, expanded and implemented as quickly as possible. This has got to be right at the top of our priority list. And we need more urgent action, too, on the families that some people call ‘problem’, others call ‘troubled’. The ones that everyone in their neighbourhood knows and often avoids …Now that the riots have happened I will make sure that we clear away the red tape and the bureaucratic wrangling, and put rocket boosters under this programme …with a clear ambition that within the lifetime of this Parliament we will turn around the lives of the 120,000 most troubled families in the country.

Cameron reinforced this agenda in December 2011 by stressing the need for individuals and families to take moral responsibility for their actions, and for the state to intervene earlier in their lives to reduce public spending in the long term:

Officialdom might call them 'families with multiple disadvantages'. Some in the press might call them 'neighbours from hell'. Whatever you call them, we've known for years that a relatively small number of families are the source of a large proportion of the problems in society. Drug addiction. Alcohol abuse. Crime. A culture of disruption and irresponsibility that cascades through generations. We've always known that these families cost an extraordinary amount of money…but now we've come up with the actual figures. Last year the state spent an estimated £9 billion on just 120,000 families…that is around £75,000 per family.

The policy – primarily of expanding the provision of ‘family intervention’ approaches – is often described as a ‘classic case of policy based evidence’: policymakers cherry pick or tell tall tales about evidence to justify action. It is a great case study for two reasons:

  1. Within this one programme are many different kinds of evidence-use which attract the ire of academic commentators, from an obviously dodgy estimate and performance management system to a more-sincere-but-still-criticised use of evaluations and neuroscience.
  2. It is easy to criticise the UK government’s actions but more difficult to say – when viewing the policy problem from its perspective – what the government should do instead.

In other words, it is useful to note that the UK government is not winning awards for ‘evidence-based policymaking’ (EBPM) in this area, but less useful to deny the politics of EBPM and hold it up to a standard that no government can meet.

The UK Government’s problematic use of evidence

Take your pick from the following ways in which the UK Government has been criticised for its use of evidence to make and defend ‘troubled families’ policy.

Its identification of the most troubled families: cherry picking or inventing evidence

At the heart of the programme is the assertion that we know who the 'troubled families' are, what causes their behaviour, and how to stop it. Yet, much of the programme is built on value judgements about feckless parents, a tipping of the balance from support to sanctions, and unsubstantiated anecdotes about key aspects such as the tendency of 'worklessness' or 'welfare dependency' to pass from one generation to another.

The UK government’s target of almost 120,000 families was based speculatively on previous Cabinet Office estimates in 2006 that about ‘2% of families in England experience multiple and complex difficulties’. This estimate was based on limited survey data and modelling to identify families who met five of seven criteria relating to unemployment, poor housing, parental education, the mental health of the mother, the chronic illness or disability of either parent, an income below 60% of the median, and an inability to buy certain items of food or clothing.

It then gave locally specific estimates to each local authority and asked them to find that number of families, identifying households with: (1) at least one under-18-year-old who has committed an offence in the last year or is subject to an ASBO; and/ or (2) at least one child who has been excluded from school permanently, suspended in three consecutive terms, placed in a Pupil Referral Unit, taken off the school roll, or with over 15% unauthorised absences over three consecutive terms; and (3) an adult on out of work benefits.

If the household met all three criteria, it would automatically be included. Otherwise, local authorities had the discretion to identify further troubled families meeting two of the criteria plus other indicators of concern about the ‘high costs’ of late intervention, such as ‘a child who is on a Child Protection Plan’, ‘Families subject to frequent police call-outs or arrests’, and ‘Families with health problems’ linked to mental health, addiction, chronic conditions, domestic abuse, and teenage pregnancy.
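To make the mechanical nature of this exercise concrete, here is a minimal sketch of the identification logic described above (in Python; all field names are hypothetical placeholders I have invented for illustration, not the DCLG’s actual data fields or code):

```python
# Minimal sketch of the 'troubled families' identification logic described above.
# Field names are hypothetical; the real DCLG guidance was more detailed.

def meets_crime_asb_criterion(household: dict) -> bool:
    """National criterion 1: a child offended in the last year or is subject to an ASBO."""
    return bool(household.get("child_offence_last_year")) or bool(household.get("child_asbo"))


def meets_education_criterion(household: dict) -> bool:
    """National criterion 2: a child is excluded, suspended, off-roll, or persistently absent."""
    return (
        bool(household.get("permanent_exclusion"))
        or bool(household.get("suspended_three_consecutive_terms"))
        or bool(household.get("in_pupil_referral_unit"))
        or bool(household.get("off_school_roll"))
        or household.get("unauthorised_absence_rate", 0.0) > 0.15
    )


def meets_worklessness_criterion(household: dict) -> bool:
    """National criterion 3: an adult is on out-of-work benefits."""
    return bool(household.get("adult_out_of_work_benefits"))


def identify(household: dict, local_high_cost_concern: bool = False) -> str:
    """Classify a household as 'automatic', 'discretionary', or 'not identified'."""
    criteria_met = sum([
        meets_crime_asb_criterion(household),
        meets_education_criterion(household),
        meets_worklessness_criterion(household),
    ])
    if criteria_met == 3:
        return "automatic"        # meets all three national criteria
    if criteria_met == 2 and local_high_cost_concern:
        return "discretionary"    # two criteria plus a local indicator of 'high cost'
    return "not identified"
```

The point of the sketch is not the detail but the logic: local authorities were asked to find a pre-set number of families by running households through a checklist of proxy indicators.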

Its measure of success: ‘turning around’ troubled families

The UK government declared almost-complete success without convincing evidence. A family counted as ‘turned around’ over ‘the last 6 months’ if it met one of two main measures: (1) the child no longer having three exclusions in a row, a reduction in the child offending rate of 33% or in the ASB rate of 60%, and/or the adult entering a relevant ‘progress to work’ programme; or (2) at least one adult moving from out of work benefits to continuous employment. Success was self-declared by local authorities, and both parties had a high incentive to declare it: local authorities received payments of £4,000 per family and the UK government received a temporary way to declare progress without long term evidence.
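Sketched in the same hypothetical style (and simplifying the ‘and/or’ combination in the official guidance), the ‘turned around’ test is another checklist:

```python
# Sketch of the two 'turned around' routes described above; field names are hypothetical
# and the exact combination rule in the official guidance is simplified here.

def turned_around(family: dict) -> bool:
    # Route 1: improvements in exclusions, offending/ASB, and/or progress to work.
    no_exclusion_run = not family.get("three_exclusions_in_a_row", False)
    offending_down = family.get("child_offending_reduction", 0.0) >= 0.33
    asb_down = family.get("asb_reduction", 0.0) >= 0.60
    progress_to_work = bool(family.get("adult_on_progress_to_work_programme"))
    route_1 = no_exclusion_run and (offending_down or asb_down or progress_to_work)

    # Route 2: at least one adult moved from out-of-work benefits to continuous employment.
    route_2 = bool(family.get("adult_moved_to_continuous_employment"))

    return route_1 or route_2
```

Either route could be self-declared by a local authority, which is why the payment-by-results incentive described above matters so much.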

The declaration is in stark contrast to an allegedly suppressed report to the government which stated that the programme had ‘no discernible effect on unemployment, truancy or criminality’. This lack of impact was partly confirmed by FOI requests by The Guardian – demonstrating that at least 8000 families received no intervention, but showed improvement anyway – and analysis by Levitas and Crossley which suggests that local authorities could only identify families by departing from the DCLG’s initial criteria.

Its investment in programmes with limited evidence of success

The UK government’s massive expansion of ‘family intervention projects’, and related initiatives, is based on limited evidence of success from a small sample of people in a small number of pilots. The ‘evidence for the effectiveness of family intervention projects is weak’ and a government-commissioned systematic review suggests that there are no good quality evaluations to demonstrate (well) the effectiveness or value-for-money of key processes such as coordinated service provision. The impact of other interventions, previously with good reputations, has been unclear, such as the Family Nurse Partnership imported from the US, which so far has produced ‘no additional short-term benefit’. Overall, Crossley and Lambert suggest that “the weight of evidence surrounding ‘family intervention’ and similar approaches, over the longue durée, actually suggests that the approach doesn’t work”. There is also no evidence to support its heroic claim that spending £10,000 per family will save £65,000.

Its faith in sketchy neuroscientific evidence on the benefits of early intervention

The government is driven partly by a belief in the benefits of early intervention in the lives of children (from 0-3, or even before birth), which is based partly on the ‘now or never’ argument found in key reviews by Munro and Allen (one and two).

[Image: 'normal brain' scan comparison]

Policymakers take liberties with neuroscientific evidence to emphasise the profound effect of stress on early brain development (measured, for example, by levels of cortisol found in hair samples). These accounts underpinning the urgency of early intervention are received far more critically in fields such as social science, neuroscience, and psychology. For example, Wastell and White find no good quality scientific evidence behind the comparison of child brain development reproduced in Allen’s reports.

Now let’s try to interpret and explain these points partly from a government perspective

Westminster politics necessitates this presentation of ‘prevention’ policies

If you strip away the rhetoric, the troubled families programme is a classic attempt at early intervention to prevent poor outcomes. In this general field, it is difficult to know what government policy is – what it stands for and how you measure its success. ‘Prevention’ is vague, plus governments make a commitment to meaningful local discretion and the sense that local actors should be guided by a combination of the evidence of ‘what works’ and its applicability to local circumstances.

This approach is not tolerated in Westminster politics, built on the simple idea of accountability in which you know who is in charge and therefore to blame! UK central governments have to maintain some semblance of control because they know that people will try to hold them to account in elections and general debate. This ‘top down’ perspective has an enduring effect: although prevention policy is vague, individual programmes such as ‘troubled families’ contain enough detail to generate intense debate on central government policy and performance and contain elements which emphasise high central direction, including sustained ministerial commitment, a determination to demonstrate early success to justify a further rollout of policy, and performance management geared towards specific measurable outcomes – even if the broader aim is to encourage local discretion.

This context helps explain why governments appear to exploit crises to sell existing policies, and pursue ridiculous processes of estimation and performance measurement. They need to display strength, show certainty that they have correctly diagnosed a problem and its solution, and claim success using the ‘currency’ of Westminster politics – and they have to do these things very quickly.

Consequently, they will not worry much about academics complaining about ‘policy-based evidence’ – they are more concerned about their media and public reception and the ability of the opposition to exploit their failures – and few people in politics have the time (that many academics take for granted) to wait for research. This is the lens through which we should view all discussions of the use of evidence in politics and policy.

Unequivocal evidence is impossible to produce and we can’t wait forever

The argument for evidence-based policy rather than policy-based evidence suggests that we know what the evidence is. Yet, in this field in particular, there is potential for major disagreement about the ‘bar’ we set for evidence.

[Table 1: Three ideal types of EBBP]

For some, it relates to a hierarchy of evidence in which randomised controlled trials (RCTs), and systematic reviews of them, sit at the top: the aim is to demonstrate that an intervention’s effect was positive, and more positive than another intervention or non-intervention. This requires experiments which compare the effects of interventions in controlled settings, in ways that are directly comparable with other experiments.

As Table 1 suggests, other academics do not adhere to this hierarchy, and some reject it outright. This context highlights three major issues for policymakers:

  1. In general, when they seek evidence, they find this debate about how to gather and analyse it (and the implications for policy delivery).
  2. When seeking evidence on interventions, they find some academics using the hierarchy to argue that the ‘evidence for the effectiveness of family intervention projects is weak’. Adherence to a hierarchy as the measure of research value also doomed a government-commissioned systematic review to failure: the review applied a hierarchy of evidence to reports whose authors did not adhere to the same model. Those authors tend to be more pragmatic in their research design (and often more positive about their findings), and their government audience rarely adheres to an evidential standard built on a hierarchy. Unless someone gives ground, some researchers will never be satisfied with the available evidence, and elected policymakers are unlikely to listen to them.
  3. The evidence generated from RCTs is often disappointing. The so-far-discouraging experience of the Family Nurse Partnership has a particularly symbolic impact, and policymakers can easily pick up a general sense of uncertainty about the best policies in which to invest.

So, if your main viewpoint is academic, you can easily conclude that the available evidence does not yet justify a massive expansion of the troubled families programme (perhaps you might prefer the Scottish approach of smaller-scale piloting, or for the government to abandon certain interventions altogether).

However, if you are a UK government policymaker feeling the need to act – and knowing that you always have to make decisions despite uncertainty – you may also feel that there will never be enough evidence on which to draw. Given the problems outlined above, you may as well act now rather than wait for years for little to change.

The ends justify the means

Policymakers may feel that the ends of such policies – investment in early intervention by shifting funds from late intervention – justify the means, which can include a ridiculous oversimplification of the evidence. It may seem almost impossible for governments to find other ways to secure this shift, given the multiple factors which undermine its progress.

Governments sometimes hint at this approach when simplifying key figures – effectively to argue that late intervention costs £9bn while early intervention will only cost £448m – to reinforce policy change: ‘the critical point for the Government was not necessarily the precise figure, but whether a sufficiently compelling case for a new approach was made’.

Similarly, the vivid comparison of healthy versus neglected brains provides shocking reference points to justify early intervention: their rhetorical value far outweighs their evidential value. As in all EBPM, the choice for policymakers is to play the game – to generate some influence in less-than-ideal circumstances – or to hope that science and reason will save the day (and the latter tends to be based on hope rather than evidence). So, the UK appeared to follow the example of the US, in which neuroscience ‘was chosen as the scientific vehicle for the public relations campaign to promote early childhood programs more for rhetorical, than scientific reasons’, partly because a focus on, for example, permanent damage to brain circuitry is less abstract than a focus on behaviour.

Overall, policymakers seem willing to build their case on major simplifications and partial truths to secure what they believe to be a worthy programme (although it would be interesting to find out which policymakers actually believe the things they say). If so, pointing out their mistakes or alleging lies can often have a minimal impact (or worse, if policymakers ‘double down’ in the face of criticism).

Implications for academics, practitioners, and ‘policy based evidence’

I have been writing on ‘troubled families’ while encouraging academics and practitioners to describe pragmatic strategies to increase the use of evidence in policy.

[Image: Palgrave Communications special collection]

Our starting point is relevant to this discussion, since it asks what we should do if policymakers don’t think like academics:

  • They worry more about Westminster politics – their media and public reception and the ability of the opposition party to exploit their failures – than what academics think of their actions.
  • They do not follow the same rules of evidence generation and analysis.
  • They do not have the luxury of uncertainty and time.

Generally, this is a useful lens through which to view discussions of the realistic use of evidence in politics and policy. Without being pragmatic – recognising that policymakers will never think like scientists and always face different pressures – we might simply declare ‘policy-based evidence’ in all cases. Although a commitment to pragmatism does not solve these problems, at least it prompts us to be more specific about the categories of PBE, the criteria we use to identify it, whether our colleagues share a commitment to those criteria, what we can reasonably expect of policymakers, and how we might respond.

In disciplines like social policy we might identify a further issue, linked to:

  1. A tradition of providing critical accounts of government policy to help hold elected policymakers to account. On this view, the primary aim may be to publicise key flaws without engaging directly with policymakers to help fix them – and perhaps even to criticise other scholars for doing so – because effective criticism requires critical distance.
  2. A tendency of many other social policy scholars to engage directly in evaluations of government policy, with the potential to influence and be influenced by policymakers.

This dynamic highlights well the difficulty of separating empirical and normative evaluation: critics point to the inappropriate nature of the programmes while interrogating the evidence for their effectiveness. The difficulty is often less visible in other fields, but it is always a factor.

For example, Parr noted in 2009 that ‘despite ostensibly favourable evidence … it has been argued that the apparent benign-welfarism of family and parenting-based antisocial behaviour interventions hide a growing punitive authoritarianism’. The most extreme version of the latter argument comes from Garrett in 2007, who compares residential FIPs (‘sin bins’) to post-war Dutch programmes resembling Nazi social engineering, and who criticises social policy scholars for giving them favourable evaluations – an argument criticised in turn by Nixon and Bennister et al.

For present purposes, note Nixon’s identification of ‘an unusual case of policy being directly informed by independent research’, referring to the possible impact of favourable evaluations of FIPs on the UK Government’s move away from (a) an intense focus on anti-social behaviour and sanctions towards (b) greater support. While it would be a stretch to suggest that academics can set government agendas, they can at least enhance their impact by framing their analysis in a way that secures policymaker interest. If academics seek influence rather than critical distance, they may need to get their hands dirty: seeking to understand policymakers in order to find alternative policies that still give them what they want.

