Tag Archives: Evaluation

We need better descriptions than ‘evidence-based policy’ and ‘policy-based evidence’: the case of UK government ‘troubled families’ policy

Here is the dilemma for ‘evidence-based’ ‘troubled families’ policy: there are many indicators of ‘policy-based evidence’ but few (if any) feasible and ‘evidence-based’ alternatives.

Viewed from the outside, the ‘troubled families’ (TF) programme looks like a cynical attempt to produce a quick fix to the London riots, stigmatise vulnerable populations, and hoodwink the public into thinking that the central government is controlling local outcomes and generating success.

Viewed from the inside, it is a pragmatic policy solution, informed by promising evidence, which needs to be sold in the right way. For the UK government there may seem to be little alternative to this policy, given the available evidence, the need to do something for the long term, and the need to account for itself in a Westminster system in the short term.

So, in this draft paper, I outline this disconnect between interpretations of ‘evidence-based policy’ and ‘policy-based evidence’ to help provide some clarity on the pragmatic use of evidence in politics:

cairney-offshoot-troubled-families-ebpm-5-9-16

See also:

Governments think it’s OK to use bad evidence to make good policy: the case of the UK Government’s ‘troubled families’

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

In each of these posts, I note that it is difficult to know how, for example, social policy scholars should respond to these issues – but that policy studies help us identify a choice between strategies. In general, pragmatic strategies to influence the use of evidence in policy include: framing issues to catch the attention of policymakers or exploit their biases, identifying where the ‘action’ is in multi-level policymaking systems, and forming coalitions with like-minded and well-connected actors. In other words, to influence rather than just comment on policy, we need to understand how policymakers would respond to external evaluation. So, a greater understanding of the routine motives of policymakers can help produce more effective criticism of their problematic use of evidence. In social policy, there is an acute dilemma about the choice between engagement, to influence and be influenced by policymakers, and detachment, to ensure critical distance. If we choose the latter, we need to think harder about how criticism of ‘policy-based evidence’ makes a difference.


Filed under agenda setting, Evidence Based Policymaking (EBPM), Prevention policy, public policy, UK politics and policy

The politics of implementing evidence-based policies

This post by Kathryn Oliver and me appeared in the Guardian political science blog on 27.4.16: ‘If scientists want to influence policymaking, they need to understand it’. It builds on this discussion of ‘evidence-based best practice’ in Evidence and Policy. There is further reading at the end of the post.

Three things to remember when you are trying to close the ‘evidence-policy gap’

Last week, a major new report on The Science of Using Science: Researching the Use of Research Evidence in Decision-Making suggested that there is very limited evidence of ‘what works’ to turn scientific evidence into policy. There are many publications out there on how to influence policy, but few are proven to work.

This is because scientists think about how to produce the best possible evidence rather than about how different policymakers use evidence in complex policymaking systems (what the report describes as the ‘capability, motivation, and opportunity’ to use evidence). For example, scientists identify, from their perspective, a cultural gap between themselves and policymakers. This story tells us that we need to overcome differences in the languages used to communicate findings, the timescales to produce recommendations, and the incentives to engage.

This scientist perspective tends to assume that there is one arena in which policymakers and scientists might engage. Yet, the action takes place in many venues, at many levels, involving many types of policymaker. So, if we view the process from many different perspectives, we see new ways to understand the use of evidence.

Examples from the delivery of health and social care interventions show us why we need to understand policymaker perspectives. We identify three main issues to bear in mind.

First, we must choose what counts as ‘the evidence’. In some academic disciplines there is a strong belief that some kinds of evidence are better than others: the best evidence is gathered using randomised control trials (RCTs) and accumulated in systematic reviews. In others, these ideas have limited appeal or are rejected outright, in favour of (say) practitioner experience and service user feedback as the knowledge on which to base policies. Most importantly, policymakers may not care about these debates; they tend to beg, borrow, or steal information from readily available sources.

Second, we must choose the lengths to which we are prepared to go to ensure that scientific evidence is the primary influence on policy delivery. When we open up the ‘black box’ of policymaking we find a tendency of central governments to juggle many models of government – sometimes directing policy from the centre but often delegating delivery to public, third, and private sector bodies. Those bodies can retain some degree of autonomy during service delivery, often based on governance principles such as ‘localism’ and the need to include service users in the design of public services.

This presents a major dilemma for scientists because policy solutions based on RCTs are likely to come with conditions that limit local discretion. For example, a condition of the UK government’s licence of the ‘family nurse partnership’ is that there is ‘fidelity’ to the model, to ensure the correct ‘dosage’ and that an RCT can establish its effect. It contrasts with approaches that focus on governance principles, such as ‘my home life’, in which evidence – as practitioner stories – may or may not be used by new audiences. Policymakers may not care about the profound differences underpinning these approaches, preferring to use a variety of models in different settings rather than use scientific principles to choose between them.

Third, we scientists must recognise that these choices are not ours to make. We have our own ideas about the balance between maintaining evidential hierarchies and governance principles, but no ability to impose these choices on policymakers.

This point has profound consequences for the ways in which we engage in strategies to create impact. A research design that combines scientific evidence and governance principles seems like a good idea that few pragmatic scientists would oppose. However, this decision does not come close to settling the matter, because these compromises look very different when designed by scientists or by policymakers.

Take, for example, the case of ‘improvement science’, in which local practitioners are trained to use evidence to experiment with local pilots and to learn from and adapt to their experiences. Improvement science-inspired approaches have become very common in the health sciences, but in many examples the research agenda is set by research leads and focuses on how to optimise the delivery of evidence-based practice.

In contrast, models such as the Early Years Collaborative reverse this emphasis, using scholarship as one of many sources of information (based partly on scepticism about the practical value of RCTs) and focusing primarily on the assets of practitioners and service users.

Consequently, improvement science appears to offer pragmatic solutions to the gap between divergent approaches, but only because it means different things to different people. Its adoption is only one step towards negotiating the trade-offs between RCT-driven and story-telling approaches.

These examples help explain why we know so little about how to influence policy. They take us beyond the bland statement – there is a gap between evidence and policy – trotted out whenever scientists try to maximise their own impact. The alternative is to try to understand the policy process, and the likely demand for and uptake of evidence, before working out how to produce evidence that would fit into the process. This different mind-set requires a far more sophisticated knowledge of the policy process than we see in most studies of the evidence-policy gap. Before trying to influence policymaking, we should try to understand it.

Further reading

The initial further reading uses this table to explore three ways in which policymakers, scientists, and other groups have tried to resolve the problems we discuss:

Table 1: Three ideal types of EBBP

  1. This academic journal article (in Evidence and Policy) highlights the dilemmas faced by policymakers when they have to make two choices at once: (1) what is the best evidence, and (2) how strongly they should insist that local policymakers use it. It uses the case study of the ‘Scottish Approach’ to show that the Scottish Government often seems to favour one approach (‘approach 3’) but actually maintains all three. What interests me is the extent to which each approach contradicts the others. We might then consider the cause: is it an explicit decision to ‘let a thousand flowers bloom’ or an unintended outcome of complex government?
  2. I explore some of the scientific issues in more depth in posts on the political significance of the family nurse partnership (as a symbol of the value of randomised control trials in government) and on the assumptions we make about levels of control in the use of RCTs in policy.
  3. For local governments, I outline three ways to gather and use evidence of best practice (for example, on interventions to support prevention policy).
  4. For students and fans of policy theory, I show the links between the use of evidence and policy transfer.

You can also explore these links to discussions of EBPM, policy theory, and specific policy fields such as prevention:

  1. My academic articles on these topics
  2. The Politics of Evidence Based Policymaking
  3. Key policy theories and concepts in 1000 words
  4. Prevention policy

 


Filed under ESRC Scottish Centre for Constitutional Change, Evidence Based Policymaking (EBPM), public policy

The politics of evidence-based best practice: 4 messages

Well, it’s really a set of messages, geared towards slightly different audiences, and summed up by this table:

Table 1: Three ideal types of EBBP

  1. This academic journal article (in Evidence and Policy) highlights the dilemmas faced by policymakers when they have to make two choices at once: (1) what is the best evidence, and (2) how strongly they should insist that local policymakers use it. It uses the case study of the ‘Scottish Approach’ to show that the Scottish Government often seems to favour one approach (‘approach 3’) but actually maintains all three. What interests me is the extent to which each approach contradicts the others. We might then consider the cause: is it an explicit decision to ‘let a thousand flowers bloom’ or an unintended outcome of complex government?
  2. I explore some of the scientific issues in more depth in posts on the political significance of the family nurse partnership (as a symbol of the value of randomised control trials in government) and on the assumptions we make about levels of control in the use of RCTs in policy.
  3. For local governments, I outline three ways to gather and use evidence of best practice (for example, on interventions to support prevention policy).
  4. For students and fans of policy theory, I show the links between the use of evidence and policy transfer.

Further reading (links):

My academic articles on these topics

The Politics of Evidence Based Policymaking

Key policy theories and concepts in 1000 words

Prevention policy


Filed under 1000 words, ESRC Scottish Centre for Constitutional Change, Evidence Based Policymaking (EBPM), Prevention policy, Scottish politics, UK politics and policy

How can you tell the difference between policy-based-evidence and evidence-based-policymaking?

‘Policy-based evidence’ (PBE) is a great phrase because you instantly know what it means: a policymaker decided what they wanted to do, then sought any old evidence to back up their policy.

‘Evidence-based policymaking’ (EBPM) is a rotten phrase, largely because no one knows or agrees what it means. So, you can never be sure if a policy was ‘evidence-based’ even when policymakers support apparently well-evidenced programmes.

The binary distinction only works politically: we call something we don’t like PBE when it fails to meet a standard of EBPM that we can’t properly describe.

To give you a sense of what happens when you try to move away from the binary distinction, consider the following examples. Which relate to PBE and which to EBPM?

  1. The evidence on problems and solutions comes first, and policymakers select the best interventions according to narrow scientific criteria (e.g. on the effectiveness of an intervention).
  2. The evidence comes first, but policymakers do not select the best interventions according to narrow scientific criteria. For example, their decision may be based primarily on economic factors such as value for money (VFM).
  3. The evidence comes first, but policymakers do not select the best interventions according to narrow scientific criteria or VFM. They have ideological or party political/ electoral reasons for rejecting evidence-based interventions.
  4. The evidence comes first, then policymakers inflate the likely success of interventions. They get ahead of the limited evidence when pushing a programme.
  5. Policymakers first decide what they want to do, then seek evidence to back up their decisions.
  6. Policymakers recognise a problem, the evidence base is not well developed, but policymakers act quickly anyway.
  7. Policymakers recognise a problem, the evidence is highly contested, and policymakers select the recommendations of one group of experts and reject those of another.
  8. Policymakers recognise a problem after it is demonstrated by scientific evidence, then select a solution built on evidence (on its effectiveness) from randomised control trials.
  9. Policymakers recognise a problem after it is demonstrated by scientific evidence, then select a solution built on evidence (on its effectiveness) from qualitative data (feedback from service users and professional experience).
  10. Policymakers recognise a problem after it is demonstrated by scientific evidence, then select a solution built on their personal experience and assessment of what is politically feasible.
  11. Policymakers recognise a problem without the use of scientific evidence, then select a solution built on evidence (on its effectiveness) from randomised control trials.
  12. Policymakers recognise a problem without the use of scientific evidence, then select a solution built on evidence (on its effectiveness) from qualitative data (feedback from service users and professional experience).
  13. Policymakers recognise a problem without the use of scientific evidence, then select a solution built on their personal experience and assessment of what is politically feasible.

I could go on, but you get the idea. Things get more confusing when you find combinations of these descriptions in fluid policies, such as when a programme is built partially on promising evidence (e.g. from pilots), only to be rejected or supported completely when events prompt policymakers to act more quickly than expected. Or, an evidence-based programme may be imported without enough evaluation to check if it works as intended in a new arena.

In a nutshell, the problem is that neither description captures these processes well. So, people use the phrase ‘evidence-informed’ policymaking – but does this phrase help us much more? Have a look at the 13 examples and see which ones you’d describe as evidence-informed.

Is it example 12?

If so, when you use the phrase ‘evidence-informed’ policy or policymaking, which example do you mean?

For more reading on EBPM see: https://paulcairney.wordpress.com/ebpm/


Filed under Evidence Based Policymaking (EBPM), public policy

The politics of evidence-based policymaking: focus on ambiguity as much as uncertainty

There is now a large literature on the gaps between the production of scientific evidence and a policy or policymaking response. However, the literature in key fields – such as health and environmental sciences – does not use policy theory to help explain the gap. In this book, and in work that I am developing with Kathryn Oliver and Adam Wellstead, I explain why this matters by identifying the difference between empirical uncertainty and policy ambiguity. Both concepts relate to ‘bounded rationality’: policymakers do not have the ability to consider all evidence relevant to policy problems. Instead, they employ two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, and habits to make decisions quickly. This takes place in a complex policymaking system in which policymaker attention can lurch from issue to issue, policy is made routinely in subsystems, and the ‘rules of the game’ take time to learn.

The key problem in the health and environmental sciences is that studies focus only on the first shortcut. They identify the problem of uncertainty that arises when policymakers have incomplete information, and seek to solve it by improving the supply of information and by encouraging academic-practitioner networks and workshops. They ignore the importance of a wider process of debate, coalition formation, lobbying, and manipulation to reduce ambiguity and establish a dominant way to frame policy problems. Further, while scientific evidence cannot solve the problem of ambiguity, persuasion and framing can help determine the demand for scientific evidence.

Therefore, the second solution is to engage in a process of framing and persuasion by, for example, forming coalitions with actors who share the same aims or beliefs, and accompanying scientific information with simple stories to exploit or adapt to the emotional and ideological biases of policymakers. This is less about packaging information to make it simpler to understand, and more about responding to the ways in which policymakers think – in general, and in relation to emerging issues – and, therefore, how they demand information.

In the book, I present this argument in three steps. First, I bring together a range of insights from policy theory, to show the huge amount of accumulated knowledge of policymaking on which other scientists and evidence advocates should draw. Second, I discuss two systematic reviews – one by Oliver et al, and one that Wellstead and I developed – of the literature on ‘barriers’ to evidence and policy in health and environmental studies. They show that the vast majority of studies in each field employ minimal policy theory and present solutions which focus only on uncertainty. Third, I identify the practical consequences for actors trying to maximize the uptake of scientific evidence within government.

My conclusion has profound implications for the role of science and scientific experts in policymaking. Scientists have a stark choice: to produce information and accept that it will have a limited impact (but that they will maintain an often-useful image of objectivity), or to go beyond their comfort zone, and expertise, to engage in a normative enterprise that can increase impact at the expense of objectivity.


Filed under Evidence Based Policymaking (EBPM), Public health, public policy

Revisiting the main ‘barriers’ between evidence and policy: focus on ambiguity, not uncertainty

The case studies of health and environmental policy, discussed in this book, largely confirm the concern that I raise in the introduction: it is too easy to bemoan the lack of evidence-based policymaking without being clear on what it means. There is great potential to conflate a series of problems that should be separated analytically:

  • The lack of reliable or uncontested evidence on the nature of a policy problem. In some cases, (a) complaints that policymakers do not respond quickly or proportionately to ‘the evidence’ go hand in hand with (b) admissions that the evidence of problems is equivocal. In turn, patchy evidence feeds into a wider political process in which actors compete to provide the dominant way to frame or understand policy problems.
  • The tendency of policymakers to pay insufficient attention to pressing, well-evidenced, problems. In other cases, the evidence of a problem is relatively clear, but policymakers are unable to understand it, unwilling to address it, or more likely to pay attention to other problems.
  • The lack of reliable or uncontested evidence on the effectiveness of policy solutions. In some cases, scientists are clear on the size and nature of the problem, but the evidence on policy solutions is patchy. Consequently, policymakers may be reluctant to act, or invest in expensive solutions, even if they recognise that there is a pressing problem to solve.
  • The tendency of policymakers to ignore or reject the most effective or best-evidenced policy solutions.
  • The tendency of policymakers to decide what they want to do, then seek enough evidence, or distort that evidence, to support their decision.

This lack of clarity combines with a lack of appreciation of the key ‘barriers’ to the use of evidence in policymaking. A large part of the literature, produced by health and environmental scientists with limited reference to policy theory, identifies a gulf in cultures between scientists and policymakers, and suggests that solving this problem would address a key issue in EBPM. Scientific information, provided in the right way, can address the problem of ‘bounded rationality’ in policymakers. If so, the failure of politicians to act accordingly indicates a lack of ‘political will’ to do the right thing.

Yet, the better translation of scientific evidence contributes primarily to one aspect of bounded rationality: the reduction of empirical uncertainty. It contributes less to a wider process of debate, competition, and persuasion to reduce ambiguity and establish a dominant way to frame policy problems. Scientific evidence cannot solve the problem of ambiguity, but persuasion and framing can help determine the demand for scientific evidence. To address this second aspect of bounded rationality, we need to understand how policymakers use emotional, ideological, and habitual shortcuts to understand policy problems. This is less about packaging information to make it simpler to understand, and more about responding to the ways in which policymakers think and, therefore, how they demand information.


Filed under Evidence Based Policymaking (EBPM)

Can a Policy Fail and Succeed at the Same Time?

In Policy Concepts in 1000 Words: Success and Failure, I argue that evaluation is party political. Parties compete to describe policies as successes or failures based on their beliefs and their selective use of evidence. There is often a lot of room for debate because the aims of policymakers are not always clear. In this post, I argue that this room still exists even if a policymaker’s aims appear to be clear. The complication is that a policy aim consists of an explicit statement of intent plus an often-implicit set of assumptions about what that statement of intent means in practice. This complication is exploited by parties in the same way as they exploit ambiguities and their selective use of evidence.

Let’s take the example of class sizes in Scottish schools, partly because it is often highlighted by opposition parties as a clear example of policy failure. The SNP manifesto 2007 (p52) seems crystal clear:

We will reduce class sizes in Primary 1, 2 and 3 to eighteen pupils or less (sic)

Further, the SNP Scottish Government did not appear to fulfil the spirit of its commitment. There is some wiggle room because it does not say all classes or set a deadline, but it is reasonable to assume that the pledge refers to extensive progress by 2011 (the end of the parliamentary session). Indeed, the lack of progress was seized upon by opposition parties, who seemed to be partly responsible for the removal of the Education Secretary from her post in 2009. The issue arose again at the end of 2013 when average class sizes appeared to be higher than when the pledge was made.

My magic trick will be to persuade you that, in an important way, the reduction of class sizes was not the SNP’s aim. What I mean is this:

  1. Each policy aim is part of a wider set of aims which may undermine rather than reinforce each other. In general, for example, spending on one aim comes at the expense of another. In this specific case, another SNP aim was to promote a new relationship with local authorities. It sought to set an overall national strategy and fund programmes via local authorities, but not impose policy outputs or outcomes on implementing bodies. Those two aims could be compatible: the Scottish Government could persuade local authorities to share its aims and spend money on achieving them. Or, they could be contradictory, forcing the Scottish Government to pursue one aim at the expense of another: either imposing policy on local authorities, or accepting the partial loss of one aim to secure a particular relationship with local authorities.
  2. Class sizes are not aims in themselves. Instead, they are means to an end, or headline-grabbing proxy measures for performance. The broader aim is to improve learning and/or education attainment (and to address learning-based inequalities). Further, local authorities may have their own ideas about how to make this happen, perhaps by spending their ‘class size’ money on a different project with the same broader aim (I have not made up this point – a lot of teaching professionals are not keen on these targets). Again, the Scottish Government has a choice: impose its own aim, or trust some local authorities to do things their own way – which might produce a lack of implementation of a specific aim but the pursuit of a broader one.
  3. The assumption is always that nothing will go wrong between the promise and the action. Yet, things almost always go wrong, because policy outcomes are often out of the control of policymakers. We like to pretend that governments are infallible so that we can hold them responsible and blame them for being fallible.

Consequently, a key question about policy success is this: how far would you go to achieve it in each case? Would you sacrifice one aim for another? How do you prioritise a large set of aims which may not be compatible with each other? Would you accept the unintended consequences of a too-rigid attachment to a policy aim? Or, would you set a broad strategy and accept that implementing authorities should have considerable say in how to carry it out?

In this sense, it is possible to succeed and fail simultaneously – either by successfully achieving a narrow policy aim but with unintended consequences, or by accepting a level of defeat for the greater good.*

*Or, I suppose, if you are not of the bright-side persuasion, you can fail and fail.

Further Reading:

Policy Concepts in 1000 Words: Success and Failure (Evaluation)

How Big is the Incentive for Politicians to Look Dishonest or Stupid?

Further Reading: class sizes

http://www.scotsman.com/news/education/hyslop-admits-government-has-failed-on-class-sizes-1-779106

http://www.scotsman.com/news/politics/top-stories/fiona-hyslop-sacked-as-education-secretary-1-1224513

http://www.bbc.co.uk/news/uk-scotland-25332478

http://www.scotland.gov.uk/Topics/Education/Schools/Teaching/classes

http://www.scotland.gov.uk/Topics/Statistics/Browse/School-Education/ClassSizeDatasets

http://www.scotsman.com/news/education/class-sizes-up-as-teacher-numbers-fall-in-scotland-1-3228707

http://www.heraldscotland.com/news/education/dismay-as-primary-class-sizes-larger-since-snp-took-power.22933389


Filed under public policy, Scottish politics