Tag Archives: Evaluation

The Politics of Evidence

This is a draft of my review of Justin Parkhurst (2017) The Politics of Evidence (Routledge, Open Access)

Justin Parkhurst’s aim is to identify key principles to take forward the ‘good governance of evidence’. The good governance of scientific evidence in policy and policymaking requires us to address two fundamentally important ‘biases’:

  1. Technical bias. Some organisations produce bad evidence, some parts of government cherry-pick, manipulate, or ignore evidence, and some politicians misinterpret the implications of evidence when calculating risk. Sometimes, these things are done deliberately for political gain. Sometimes they result from cognitive biases that prompt us to interpret evidence in problematic ways: for example, seeking out evidence that confirms your position, and/or only believing the evidence that confirms it.
  2. Issue bias. Some evidence advocates use the mantra of ‘evidence based policy’ to depoliticise issues or downplay the need to resolve conflicts over values. They also focus on the problems most conducive to study via their most respected methods, such as randomised control trials (RCTs). Methodological rigour trumps policy relevance, and simple experiments trump the exploration of complex solutions. So, we lose sight of the unintended consequences of producing the ‘best’ evidence to address a small number of problems, and of the choices being made about the allocation of research resources and attention. Again, this can be deliberate or caused by cognitive biases, such as the tendency to seek simpler, more answerable questions rather than complex questions with no obvious answer.

To address both problems, Parkhurst seeks pragmatic ways to identify principles to decide what counts as ‘good evidence to inform policy’ and ‘what constitutes the good use of evidence within a policy process’:

‘it is necessary to consider how to establish evidence advisory systems that promote the good governance of evidence – working to ensure that rigorous, systematic and technically valid pieces of evidence are used within decision-making processes that are inclusive of, representative of and accountable to the multiple social interests of the population served’ (p8).

Parkhurst identifies some ways in which to bring evidence and policy closer together. First, to produce evidence more appropriate for, or relevant to, policymaking (‘good evidence for policy’):

  1. Relate evidence more closely to policy goals.
  2. Modify research approaches and methods to answer policy relevant questions.
  3. Ensure that the evidence relates to the local or relevant context.

Second, to produce the ‘good use of evidence’, combine three forms of ‘legitimacy’:

  1. Input, to ensure democratic representative bodies have the final say.
  2. Throughput, to ensure widespread deliberation.
  3. Output, to ensure proper consideration of the use of the most systematic, unbiased and rigorously produced scientific evidence relevant to the problem.

In the final chapter, Parkhurst suggests that these aims can be pursued in many ways depending on how governments want to design evidence advisory systems, but that it’s worth drawing on the examples of good practice he identifies. Parkhurst also explores the role for Academies of science, or initiatives such as the Cochrane Collaboration, to provide independent advice. He then outlines the good governance of evidence built on key principles: appropriate evidence, accountability in evidence use, transparency, and contestability (to ensure sufficient debate).

The overall result is a book full of interesting discussion and very sensible, general advice for people new to the topic of evidence and policy. This is no mean feat: most readers will seek a clearly explained and articulate account of the subject, and they get it here.

For me, the most interesting thing about Parkhurst’s book is the untold story, or often-implicit reasoning, behind the way in which it is framed. We can infer that it is not a study aimed primarily at a political science or social science audience, because most of that audience would take its starting point for granted: the use of evidence is political, and politics involves values. Yet, Parkhurst feels the need to remind the reader of this point, in specific (‘it is worth noting that the US presidency is a decidedly political role’, p43) and general circumstances (‘the nature of policymaking is inherently political’, p65). Throughout, the audience appears to be academics who begin with a desire for ‘evidence based policy’ without fully thinking through the implications: the lack of a magic bullet of evidence to solve a policy problem, how we might maintain a political system conducive to democratic principles and good evidence use, how we might design a system to reduce key ‘barriers’ between the supply of evidence by scientists and its demand by policymakers, and why few such designs have taken off.

In other words, the book appeals primarily to scientists trained outside social science, some of whom think about politics in their spare time, or experience it in dispiriting encounters with policymakers. It appeals to that audience with a statement on the crucial role of high quality evidence in policymaking, highlights barriers to its use, tells scientists that they might be part of the problem, but then provides them with the comforting assurance that we can design better systems to overcome at least some of those barriers. For people trained in policy studies, this concluding discussion seems like a tall order, and I think most would read it with great scepticism.

Policy scientists might also be sceptical about the extent to which scientists from other fields think this way about hierarchies of scientific evidence and the desire to depoliticise politics with a primary focus on ‘what works’. Yet, I too hear this language regularly in interdisciplinary workshops (often while standing next to Justin!), and it is usually accompanied by descriptions of the pathology of policymaking, the rise of post-truth politics and rejection of experts, and the need to focus on the role of objective facts in deciding what policy solutions work best. Indeed, I was impressed recently by the skilled way in which another colleague prepared this audience for some provocative remarks when he suggested that the production and use of evidence is about power, not objectivity. OMG: who knew that policymaking was political and about power?!

So, the insights from this book are useful to a large audience of scientists while, for a smaller audience of policy scientists, they remind us that there is an audience out there for many of the statements that many of us would take for granted. Some evidence advocates use the language of ‘evidence based policymaking’ strategically, to get what they want. Others appear to use it because they believe it can exist. Keep this in mind when you read the book.



Filed under Evidence Based Policymaking (EBPM)

I know my audience, but does my other audience know I know my audience?

‘Know your audience’ is a key phrase for anyone trying to convey a message successfully. To ‘know your audience’ is to understand the rules they use to make sense of your message, and therefore the adjustments you have to make to produce an effective message. Simple examples include:

  • The sarcasm rules. The first rule is fairly explicit. If you want to insult someone’s shirt, you (a) say ‘nice shirt, pal’, but also (b) use facial expressions or unusual speech patterns to signal that you mean the opposite of what you are saying. Otherwise, you’ve inadvertently paid someone a compliment, which is just not on. The second rule is implicit: sarcasm is sometimes OK – as a joke or as some nice passive aggression – whereas a direct insult (‘that shirt is shite, pal’) is harder to pull off as a joke.
  • The joke rule. If you say that you went to the doctor because a strawberry was growing out of your arse and the doctor gave you some cream for it, you’d expect your audience to know you were joking because it’s such a ridiculous scenario and there’s a pun. Still, there’s a chance that – if you say it quickly, with a straight face, your audience is not expecting a joke, and/or your audience’s first language is not English – your audience will take you seriously, if only for a second. It’s hilarious if your audience goes along with you, and a bit awkward if your audience asks kindly about your welfare.
  • Keep it simple stupid. If someone says KISS, or some modern equivalent – ‘it’s the economy, stupid’ – the rule is that, generally, they are not calling you stupid (even though the insertion of the comma, in modern phrases, makes it look like they are). They are referring to the value of a simple design or explanation that as many people as possible can understand. If your audience doesn’t know the phrase, they may think you’re calling them stupid, stupid.

These rules can be analysed from various perspectives: linguistics, focusing on how and why rules of language develop; and philosophy, to help articulate how and why rules matter in sense making.

There is also a key role for psychological insights, since – for example – a lot of these rules relate to the routine ways in which people engage emotionally with the ‘signals’ or information they receive.

Think of the simple example of twitter engagement, in which people with emotional attachments to one position over another (say, pro- or anti- Brexit), respond instantly to a message (say, pro- or anti- Brexit). While some really let themselves down when they reply with their own tweet, and others don’t say a word, neither audience is immune from that emotional engagement with information. So, to ‘know your audience’ is to anticipate and adapt to the ways in which they will inevitably engage ‘rationally’ and ‘irrationally’ with your message.

I say this partly because I’ve been messing around with some simple ‘heuristics’ built on insights from psychology, including Psychology Based Policy Studies: 5 heuristics to maximise the use of evidence in policymaking.

Two audiences in the study of ‘evidence based policymaking’

I also say it because I’ve started to notice a big unintended consequence of knowing my audience: my one audience doesn’t like the message I’m giving the other. It’s a bit like gossip: maybe you only get away with it if only one audience is listening. If they are both listening, one audience seems to appreciate some new insights, while the other wonders if I’ve ever read a political science book.

The problem here is that two audiences have different rules to understand the messages that I help send. Let’s call them ‘science’ and ‘political science’ (please humour me – you’ve come this far). Then, let’s make some heroic binary distinctions in the rules each audience would use to interpret similar issues in a very different way.

I could go on with these provocative distinctions, but you get the idea. A belief taken for granted in one field will be treated as controversial in another. In one day, you can go to one workshop and hear the story of objective evidence, post-truth politics, and irrational politicians with low political will to select evidence-based policies, then go to another workshop and hear the story of subjective knowledge claims.

Or, I can give the same presentation and get two very different reactions. If these are the expectations of each audience, they will interpret and respond to my messages in very different ways.

So, imagine I use some psychology insights to appeal to the ‘science’ audience. I know that, to keep it on side and receptive to my ideas, I should begin by being sympathetic to its aims. So, my implicit story is along the lines of, ‘if you believe in the primacy of science and seek evidence-based policy, here is what you need to do: adapt to irrational policymaking and find out where the action is in a complex policymaking system’. Then, if I’m feeling energetic and provocative, I’ll slip in some discussion about knowledge claims by saying something like, ‘politicians (and, by the way, some other scholars) don’t share your views on the hierarchy of evidence’, or inviting my audience to reflect on how far they’d go to override the beliefs of other people (such as the local communities or service users most affected by the evidence-based policies that seem most effective).

The problem with this story is that key parts are implicit and, by appearing to go along with my audience, I provoke a reaction in another audience: don’t you know that many people have valid knowledge claims? Politics is about values and power, don’t you know?

So, that’s where I am right now. I feel like I ‘know my audience’ but I am struggling to explain to my original political science audience that I need to describe its insights in a very particular way to have any traction in my other science audience. ‘Know your audience’ can only take you so far unless your other audience knows that you are engaged in knowing your audience.

If you want to know more, see:

Kathryn Oliver and I have just published an article on the relationship between evidence and policy

How far should you go to secure academic ‘impact’ in policymaking? From ‘honest brokers’ to ‘research purists’ and Machiavellian manipulators

Why doesn’t evidence win the day in policy and policymaking?

The Science of Evidence-based Policymaking: How to Be Heard

When presenting evidence to policymakers, engage with the policy process that exists, not the process you wish existed



Filed under Academic innovation or navel gazing, agenda setting, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling

We need better descriptions than ‘evidence-based policy’ and ‘policy-based evidence’: the case of UK government ‘troubled families’ policy

Here is the dilemma for ‘evidence-based’ ‘troubled families’ (TF) policy: there are many indicators of ‘policy based evidence’ but few (if any) feasible and ‘evidence based’ alternatives.

Viewed from the outside, TF looks like a cynical attempt to produce a quick fix to the London riots, stigmatise vulnerable populations, and hoodwink the public into thinking that the central government is controlling local outcomes and generating success.

Viewed from the inside, it is a pragmatic policy solution, informed by promising evidence, which needs to be sold in the right way. For the UK government, there may seem to be little alternative to this policy, given the available evidence, the need to do something for the long term, and the need to account for itself in a Westminster system in the short term.

So, in this draft paper, I outline this disconnect between interpretations of ‘evidence based policy’ and ‘policy based evidence’ to help provide some clarity on the pragmatic use of evidence in politics:

cairney-offshoot-troubled-families-ebpm-5-9-16

See also:

Governments think it’s OK to use bad evidence to make good policy: the case of the UK Government’s ‘troubled families’

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

In each of these posts, I note that it is difficult to know how, for example, social policy scholars should respond to these issues – but that policy studies help us identify a choice between strategies. In general, pragmatic strategies to influence the use of evidence in policy include: framing issues to catch the attention of policymakers or manipulate their biases, identifying where the ‘action’ is in multi-level policymaking systems, and forming coalitions with like-minded and well-connected actors. In other words, to influence rather than just comment on policy, we need to understand how policymakers would respond to external evaluation. So, a greater understanding of the routine motives of policymakers can help produce more effective criticism of their problematic use of evidence. In social policy, there is an acute dilemma about the choice between engagement, to influence and be influenced by policymakers, and detachment, to ensure critical distance. If choosing the latter, we need to think harder about how criticism of PBE makes a difference.


Filed under agenda setting, Evidence Based Policymaking (EBPM), Prevention policy, public policy, UK politics and policy

The politics of implementing evidence-based policies

This post by me and Kathryn Oliver appeared in the Guardian political science blog on 27.4.16: If scientists want to influence policymaking, they need to understand it. It builds on this discussion of ‘evidence based best practice’ in Evidence and Policy. There is further reading at the end of the post.

Three things to remember when you are trying to close the ‘evidence-policy gap’

Last week, a major new report on The Science of Using Science: Researching the Use of Research Evidence in Decision-Making suggested that there is very limited evidence of ‘what works’ to turn scientific evidence into policy. There are many publications out there on how to influence policy, but few are proven to work.

This is because scientists think about how to produce the best possible evidence rather than how different policymakers use evidence differently in complex policymaking systems (what the report describes as the ‘capability, motivation, and opportunity’ to use evidence). For example, scientists identify, from their perspective, a cultural gap between them and policymakers. This story tells us that we need to overcome differences in the languages used to communicate findings, the timescales to produce recommendations, and the incentives to engage.

This scientist perspective tends to assume that there is one arena in which policymakers and scientists might engage. Yet, the action takes place in many venues at many levels involving many types of policymaker. So, if we view the process from many different perspectives we see new ways in which to understand the use of evidence.

Examples from the delivery of health and social care interventions show us why we need to understand policymaker perspectives. We identify three main issues to bear in mind.

First, we must choose what counts as ‘the evidence’. In some academic disciplines there is a strong belief that some kinds of evidence are better than others: the best evidence is gathered using randomised control trials and accumulated in systematic reviews. In others, these ideas have limited appeal or are rejected outright, in favour of (say) practitioner experience and service user-based feedback as the knowledge on which to base policies. Most importantly, policymakers may not care about these debates; they tend to beg, borrow, or steal information from readily available sources.

Second, we must choose the lengths to which we are prepared to go to ensure that scientific evidence is the primary influence on policy delivery. When we open up the ‘black box’ of policymaking, we find a tendency of central governments to juggle many models of government – sometimes directing policy from the centre but often delegating delivery to public, third, and private sector bodies. Those bodies can retain some degree of autonomy during service delivery, often based on governance principles such as ‘localism’ and the need to include service users in the design of public services.

This presents a major dilemma for scientists because policy solutions based on RCTs are likely to come with conditions that limit local discretion. For example, a condition of the UK government’s licence for the ‘family nurse partnership’ is that there is ‘fidelity’ to the model, to ensure the correct ‘dosage’ and that an RCT can establish its effect. It contrasts with approaches that focus on governance principles, such as ‘my home life’, in which evidence – as practitioner stories – may or may not be used by new audiences. Policymakers may not care about the profound differences underpinning these approaches, preferring to use a variety of models in different settings rather than use scientific principles to choose between them.

Third, scientists must recognise that these choices are not ours to make. We have our own ideas about the balance between maintaining evidential hierarchies and governance principles, but have no ability to impose these choices on policymakers.

This point has profound consequences for the ways in which we engage in strategies to create impact. A research design to combine scientific evidence and governance seems like a good idea that few pragmatic scientists would oppose. However, this decision does not come close to settling the matter because these compromises look very different when designed by scientists or policymakers.

Take for example the case of ‘improvement science’, in which local practitioners are trained to use evidence to experiment with local pilots and learn and adapt to their experiences. Improvement science-inspired approaches have become very common in health sciences, but in many examples the research agenda is set by research leads and focuses on how to optimise the delivery of evidence-based practice.

In contrast, models such as the Early Years Collaborative reverse this emphasis, using scholarship as one of many sources of information (based partly on scepticism about the practical value of RCTs) and focusing primarily on the assets of practitioners and service users.

Consequently, improvement science appears to offer pragmatic solutions to the gap between divergent approaches, but only because it means different things to different people. Its adoption is only one step towards negotiating the trade-offs between RCT-driven and story-telling approaches.

These examples help explain why we know so little about how to influence policy. They take us beyond the bland statement – there is a gap between evidence and policy – trotted out whenever scientists try to maximise their own impact. The alternative is to try to understand the policy process, and the likely demand for and uptake of evidence, before working out how to produce evidence that would fit into the process. This different mind-set requires a far more sophisticated knowledge of the policy process than we see in most studies of the evidence-policy gap. Before trying to influence policymaking, we should try to understand it.

Further reading

The initial further reading uses this table to explore three ways in which policymakers, scientists, and other groups have tried to resolve the problems we discuss:

Table 1 Three ideal types EBBP

  1. This academic journal article (in Evidence and Policy) highlights the dilemmas faced by policymakers when they have to make two choices at once, to decide: (1) what is the best evidence, and (2) how strongly they should insist that local policymakers use it. It uses the case study of the ‘Scottish Approach’ to show that it often seems to favour one approach (‘approach 3’) but actually maintains three approaches. What interests me is the extent to which each approach contradicts the other. We might then consider the cause: is it an explicit decision to ‘let a thousand flowers bloom’ or an unintended outcome of complex government?
  2. I explore some of the scientific issues in more depth in posts which explore: the political significance of the family nurse partnership (as a symbol of the value of randomised control trials in government), and the assumptions we make about levels of control in the use of RCTs in policy.
  3. For local governments, I outline three ways to gather and use evidence of best practice (for example, on interventions to support prevention policy).
  4. For students and fans of policy theory, I show the links between the use of evidence and policy transfer.

You can also explore these links to discussions of EBPM, policy theory, and specific policy fields such as prevention:

  1. My academic articles on these topics
  2. The Politics of Evidence Based Policymaking
  3. Key policy theories and concepts in 1000 words
  4. Prevention policy


Filed under ESRC Scottish Centre for Constitutional Change, Evidence Based Policymaking (EBPM), public policy

The politics of evidence-based best practice: 4 messages

Well, it’s really a set of messages, geared towards slightly different audiences, and summed up by this table:

Table 1 Three ideal types EBBP

  1. This academic journal article (in Evidence and Policy) highlights the dilemmas faced by policymakers when they have to make two choices at once, to decide: (1) what is the best evidence, and (2) how strongly they should insist that local policymakers use it. It uses the case study of the ‘Scottish Approach’ to show that it often seems to favour one approach (‘approach 3’) but actually maintains three approaches. What interests me is the extent to which each approach contradicts the other. We might then consider the cause: is it an explicit decision to ‘let a thousand flowers bloom’ or an unintended outcome of complex government?
  2. I explore some of the scientific issues in more depth in posts which explore: the political significance of the family nurse partnership (as a symbol of the value of randomised control trials in government), and the assumptions we make about levels of control in the use of RCTs in policy.
  3. For local governments, I outline three ways to gather and use evidence of best practice (for example, on interventions to support prevention policy).
  4. For students and fans of policy theory, I show the links between the use of evidence and policy transfer.

Further reading (links):

My academic articles on these topics

The Politics of Evidence Based Policymaking

Key policy theories and concepts in 1000 words

Prevention policy


Filed under 1000 words, ESRC Scottish Centre for Constitutional Change, Evidence Based Policymaking (EBPM), Prevention policy, Scottish politics, UK politics and policy

How can you tell the difference between policy-based-evidence and evidence-based-policymaking?

‘Policy-based evidence’ (PBE) is a great phrase because you instantly know what it means: a policymaker decided what they wanted to do, then sought any old evidence to back up their policy.

‘Evidence-based policymaking’ (EBPM) is a rotten phrase, largely because no one knows or agrees what it means. So, you can never be sure if a policy was ‘evidence-based’, even when policymakers support apparently well-evidenced programmes.

The binary distinction only works politically, to call something you don’t like PBE when it doesn’t meet a standard of EBPM we can’t properly describe.

To give you a sense of what happens when you try to move away from the binary distinction, consider this large number of examples. Which relate to PBE and which to EBPM?

  1. The evidence on problems and solutions comes first, and policymakers select the best interventions according to narrow scientific criteria (e.g. on the effectiveness of an intervention).
  2. The evidence comes first, but policymakers do not select the best interventions according to narrow scientific criteria. For example, their decision may be based primarily on economic factors such as value for money (VFM).
  3. The evidence comes first, but policymakers do not select the best interventions according to narrow scientific criteria or VFM. They have ideological or party political/ electoral reasons for rejecting evidence-based interventions.
  4. The evidence comes first, then policymakers inflate the likely success of interventions. They get ahead of the limited evidence when pushing a programme.
  5. Policymakers first decide what they want to do, then seek evidence to back up their decisions.
  6. Policymakers recognise a problem, the evidence base is not well developed, but policymakers act quickly anyway.
  7. Policymakers recognise a problem, the evidence is highly contested, and policymakers select the recommendations of one group of experts and reject those of another.
  8. Policymakers recognise a problem after it is demonstrated by scientific evidence, then select a solution built on evidence (on its effectiveness) from randomised control trials.
  9. Policymakers recognise a problem after it is demonstrated by scientific evidence, then select a solution built on evidence (on its effectiveness) from qualitative data (feedback from service users and professional experience).
  10. Policymakers recognise a problem after it is demonstrated by scientific evidence, then select a solution built on their personal experience and assessment of what is politically feasible.
  11. Policymakers recognise a problem without the use of scientific evidence, then select a solution built on evidence (on its effectiveness) from randomised control trials.
  12. Policymakers recognise a problem without the use of scientific evidence, then select a solution built on evidence (on its effectiveness) from qualitative data (feedback from service users and professional experience).
  13. Policymakers recognise a problem without the use of scientific evidence, then select a solution built on their personal experience and assessment of what is politically feasible.

I could go on, but you get the idea. Things get more confusing when you find combinations of these descriptions in fluid policies, such as when a programme is built partially on promising evidence (e.g. from pilots), only to be rejected or supported completely when events prompt policymakers to act more quickly than expected. Or, an evidence-based programme may be imported without enough evaluation to check if it works as intended in a new arena.

In a nutshell, the problem is that neither description captures these processes well. So, people use the phrase ‘evidence informed’ policymaking – but does this phrase help us much more? Have a look at the 13 examples and see which ones you’d describe as evidence informed.

Is it 12?

If so, when you use the phrase ‘evidence-informed’ policy or policymaking, which example do you mean?

For more reading on EBPM see: https://paulcairney.wordpress.com/ebpm/


Filed under Evidence Based Policymaking (EBPM), public policy

The politics of evidence-based policymaking: focus on ambiguity as much as uncertainty

There is now a large literature on the gaps between the production of scientific evidence and a policy or policymaking response. However, the literature in key fields – such as health and environmental sciences – does not use policy theory to help explain the gap. In this book, and in work that I am developing with Kathryn Oliver and Adam Wellstead, I explain why this matters by identifying the difference between empirical uncertainty and policy ambiguity. Both concepts relate to ‘bounded rationality’: policymakers do not have the ability to consider all evidence relevant to policy problems. Instead, they employ two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, and habits to make decisions quickly. This takes place in a complex policymaking system in which policymaker attention can lurch from issue to issue, policy is made routinely in subsystems, and the ‘rules of the game’ take time to learn.

The key problem in the health and environmental sciences is that studies focus only on the first short cut. They identify the problem of uncertainty that arises when policymakers have incomplete information, and seek to solve it by improving the supply of information and encouraging academic-practitioner networks and workshops. They ignore the importance of a wider process of debate, coalition formation, lobbying, and manipulation, to reduce ambiguity and establish a dominant way to frame policy problems. Further, while scientific evidence cannot solve the problem of ambiguity, persuasion and framing can help determine the demand for scientific evidence.

Therefore, the second solution is to engage in a process of framing and persuasion by, for example, forming coalitions with actors with the same aims or beliefs, and accompanying scientific information with simple stories to exploit or adapt to the emotional and ideological biases of policymakers. This is less about packaging information to make it simpler to understand, and more about responding to the ways in which policymakers think – in general, and in relation to emerging issues – and, therefore, how they demand information.

In the book, I present this argument in three steps. First, I bring together a range of insights from policy theory, to show the huge amount of accumulated knowledge of policymaking on which other scientists and evidence advocates should draw. Second, I discuss two systematic reviews – one by Oliver et al, and one that Wellstead and I developed – of the literature on ‘barriers’ to evidence and policy in health and environmental studies. They show that the vast majority of studies in each field employ minimal policy theory and present solutions which focus only on uncertainty. Third, I identify the practical consequences for actors trying to maximize the uptake of scientific evidence within government.

My conclusion has profound implications for the role of science and scientific experts in policymaking. Scientists have a stark choice: to produce information and accept that it will have a limited impact (but that scientists will maintain an often-useful image of objectivity), or to go beyond one’s comfort zone, and expertise, to engage in a normative enterprise that can increase impact at the expense of objectivity.


Filed under Evidence Based Policymaking (EBPM), Public health, public policy

Revisiting the main ‘barriers’ between evidence and policy: focus on ambiguity, not uncertainty

The case studies of health and environmental policy, discussed in this book, largely confirm the concern that I raise in the introduction: it is too easy to bemoan the lack of evidence-based policymaking without being clear on what it means. There is great potential to conflate a series of problems that should be separated analytically:

  • The lack of reliable or uncontested evidence on the nature of a policy problem. In some cases, (a) complaints that policymakers do not respond quickly or proportionately to ‘the evidence’ go hand in hand with (b) admissions that the evidence of problems is equivocal. In turn, patchy evidence feeds into a wider political process in which actors compete to provide the dominant way to frame or understand policy problems.
  • The tendency of policymakers to pay insufficient attention to pressing, well-evidenced, problems. In other cases, the evidence of a problem is relatively clear, but policymakers are unable to understand it, unwilling to address it, or more likely to pay attention to other problems.
  • The lack of reliable or uncontested evidence on the effectiveness of policy solutions. In some cases, scientists are clear on the size and nature of the problem, but the evidence on policy solutions is patchy. Consequently, policymakers may be reluctant to act, or invest in expensive solutions, even if they recognise that there is a pressing problem to solve.
  • The tendency of policymakers to ignore or reject the most effective or best-evidenced policy solutions.
  • The tendency of policymakers to decide what they want to do, then seek enough evidence, or distort that evidence, to support their decision.

This lack of clarity combines with a lack of appreciation of the key ‘barriers’ to the use of evidence in policymaking. A large part of the literature, produced by health and environmental scientists with limited reference to policy theory, identifies a gulf in cultures between scientists and policymakers, and suggests that solving this problem would address a key issue in EBPM. Scientific information, provided in the right way, can address the problem of ‘bounded rationality’ in policymakers. If so, the failure of politicians to act accordingly indicates a lack of ‘political will’ to do the right thing.

Yet, the better translation of scientific evidence contributes primarily to one aspect of bounded rationality: the reduction of empirical uncertainty. It contributes less to a wider process of debate, competition, and persuasion, to reduce ambiguity and establish a dominant way to frame policy problems. Scientific evidence cannot solve the problem of ambiguity, but persuasion and framing can help determine the demand for scientific evidence. To address this second aspect of bounded rationality, we need to understand how policymakers use emotional, ideological, and habitual short cuts to understand policy problems. This is less about packaging information to make it simpler to understand, and more about responding to the ways in which policymakers think and, therefore, how they demand information.


Filed under Evidence Based Policymaking (EBPM)

Can a Policy Fail and Succeed at the Same Time?

In Policy Concepts in 1000 Words: Success and Failure, I argue that evaluation is party political. Parties compete to describe policies as successes or failures based on their beliefs and their selective use of evidence. There is often a lot of room for debate because the aims of policymakers are not always clear. In this post, I argue that this room still exists even if a policymaker’s aims appear to be clear. The complication is that a policy aim consists of an explicit statement of intent plus an often-implicit set of assumptions about what that statement of intent means in practice. This complication is exploited by parties in the same way as they exploit ambiguities and their selective use of evidence.

Let’s take the example of class sizes in Scottish schools, partly because it is often highlighted by opposition parties as a clear example of policy failure. The SNP manifesto 2007 (p52) seems crystal clear:

We will reduce class sizes in Primary 1, 2 and 3 to eighteen pupils or less (sic)

Further, the SNP Scottish Government did not appear to fulfil the spirit of its commitment. There is some wiggle room because it does not say all classes or set a deadline, but it is reasonable to assume that the pledge refers to extensive progress by 2011 (the end of the parliamentary session). Indeed, the lack of progress was seized upon by opposition parties, who seemed to be partly responsible for the removal of the Education Secretary from her post in 2009. The issue arose again at the end of 2013 when average class sizes appeared to be higher than when the pledge was made.

My magic trick will be to persuade you that, in an important way, the reduction of class sizes was not the SNP’s aim. What I mean is this:

  1. Each policy aim is part of a wider set of aims which may undermine rather than reinforce each other. In general, for example, spending on one aim comes at the expense of another. In this specific case, another SNP aim was to promote a new relationship with local authorities. It sought to set an overall national strategy and fund programmes via local authorities, but not impose policy outputs or outcomes on implementing bodies. Those two aims could be compatible: the Scottish Government could persuade local authorities to share its aims and spend money on achieving them. Or, they could be contradictory, forcing the Scottish Government to pursue one aim at the expense of another: either imposing policy on local authorities, or accepting the partial loss of one aim to secure a particular relationship with local authorities.
  2. Class sizes are not aims in themselves. Instead, they are means to an end, or headline-grabbing proxy measures for performance. The broader aim is to improve learning and/ or education attainment (and to address learning-based inequalities). Further, local authorities may have their own ideas about how to make this happen, perhaps by spending their ‘class size’ money on a different project with the same broader aim (I have not made up this point – a lot of teaching professionals are not keen on these targets). Again, the Scottish Government has a choice: impose their own aim or trust some local authorities to do things their own way – which might produce a lack of implementation of a specific aim but the pursuit of a broader one.
  3. The assumption is always that nothing will go wrong between the promise and the action. Yet, things almost always go wrong, because policy outcomes are often out of the control of policymakers. We like to pretend that governments are infallible so that we can hold them responsible and blame them for being fallible.

Consequently, a key question about policy success is this: how far would you go to achieve it in each case? Would you sacrifice one aim for another? How do you prioritise a large set of aims which may not be compatible with each other? Would you accept the unintended consequences of a too-rigid attachment to a policy aim? Or, would you set a broad strategy and accept that implementing authorities should have considerable say in how to carry it out?

In this sense, it is possible to succeed and fail simultaneously – either by successfully achieving a narrow policy aim but with unintended consequences, or by accepting a level of defeat for the greater good.*

*Or, I suppose, if you are not of the bright-side persuasion, you can fail and fail.

Further Reading:

Policy Concepts in 1000 Words: Success and Failure (Evaluation)

How Big is the Incentive for Politicians to Look Dishonest or Stupid?

Further Reading: class sizes

Class sizes

http://www.scotsman.com/news/education/hyslop-admits-government-has-failed-on-class-sizes-1-779106

http://www.scotsman.com/news/politics/top-stories/fiona-hyslop-sacked-as-education-secretary-1-1224513

http://www.bbc.co.uk/news/uk-scotland-25332478

http://www.scotland.gov.uk/Topics/Education/Schools/Teaching/classes

http://www.scotland.gov.uk/Topics/Statistics/Browse/School-Education/ClassSizeDatasets

http://www.scotsman.com/news/education/class-sizes-up-as-teacher-numbers-fall-in-scotland-1-3228707

http://www.heraldscotland.com/news/education/dismay-as-primary-class-sizes-larger-since-snp-took-power.22933389


Filed under public policy, Scottish politics

Policy Concepts in 1000 Words: Success and Failure (Evaluation)

(podcast download)

Policy success is in the eye of the beholder. The evaluation of success is political in several ways. It can be party political, when election campaigns focus on the record of the incumbent government. Policy decisions produce winners and losers, prompting disputes about success between actors with different aims. Evaluation can also be political in subtler but equally important ways, involving scientific disputes about:

  • How long we wait to evaluate.
  • How well-resourced our evaluation should be.
  • The best way to measure and explain outcomes.
  • The ‘benchmarks’ to use – should we compare outcomes with the past or other countries?
  • How we can separate the effect of policy from other causes, in a complex world where randomised-controlled-trials are often difficult to use.

In this more technical-looking discussion, the trade-off is between the selection of a large mixture of measures that are hard to work with, and a small number of handpicked measures that represent no more than crude proxies for success.

Evaluation is political because we set the agenda with the measures we use, by prompting a focus on some aims at the expense of others. A classic example is the aim to reduce healthcare waiting times, which represent a small part of health service activity but generate disproportionate attention and action, partly because outcomes are relatively visible and easy to measure. Many policies are implemented and evaluated using such proxies: the government publishes targets to provide an expectation of implementer behaviour, and regulatory bodies exist to monitor compliance.

Let’s consider success in terms of the aims of the person responsible for the policy. It raises four interesting issues:

  1. The aims of that policymaker may not be clear. For example, they may not say why they made particular choices, they may have many reasons, their reasons may not be specific enough to be meaningful, and/or they may not be entirely truthful.
  2. Policymaking is a group effort, which magnifies the problem of identifying a single, clear, aim.
  3. Aims are not necessarily noble. Marsh and McConnell describe three types of success. Process describes a policy’s popularity among particular groups and its ease of passage through the legislature. Political describes its effect on the government’s popularity. Programmatic describes its implementation in terms of original aims, its effect in terms of intended outcomes, and the extent to which it represented an ‘efficient use of resources’. Elected policymakers may justify their actions in programmatic terms, but be more concerned with politics and process. Or, their aims may be unambitious. We could identify success in their terms but still feel that major problems remain unsolved.
  4. Responsibility is a slippery concept. In a Westminster system, we may hold ministers to be ultimately responsible but, in practice, responsibility is shared with a range of people in various types and levels of government. In multi-level political systems, responsibility may be shared with several elected bodies with their own mandates and claims to pursue distinctive aims.

Traditionally, these responsibility issues were played out in top-down and bottom-up discussions of policy implementation. For the sake of simplicity, the ‘top’ is the policymaker at the heart of central government and we try to explain success or failure according to the extent to which policy implementation met these criteria:

  1. The policy’s objectives are clear, consistent and well communicated and understood.
  2. The policy will work as intended when implemented.
  3. The required resources are committed to the programme.
  4. Policy is implemented by skilful and compliant officials.
  5. Success does not depend on cooperation from many bodies.
  6. Support from influential groups is maintained.
  7. Demographic and socioeconomic conditions, and unpredictable events beyond the control of policymakers, do not significantly undermine the process.

Such explanations for success still have some modern day traction, such as in recommendations by the Institute for Government:

  1. Understand the past and learn from failure.
  2. Open up the policy process.
  3. Be rigorous in analysis and use of evidence.
  4. Take time and build in scope for iteration and adaptation.
  5. Recognise the importance of individual leadership and strong personal relationships.
  6. Create new institutions to overcome policy inertia.
  7. Build a wider constituency of support.

Alternatively, ‘bottom-up’ studies prompted a shift of analysis, towards a larger number of organisations which made policy as they carried it out – and had legitimate reasons to diverge from the aims set at the ‘top’. Indeed, central governments might encourage a bottom up approach, by setting a broad strategy and accepting that other bodies will implement policy in their own way. However, this is difficult to do in Westminster systems, where government success is measured in terms of ministerial and party manifesto aims.

Examples of success and failure?

Many implementation studies focus on failure, including Pressman and Wildavsky’s ‘How Great Expectations in Washington are Dashed in Oakland’ and Marsh & Rhodes’ focus on the ‘implementation gap’ during the Thatcher Government era (1979-90).

In contrast, the IFG report focuses on examples of success, derived partly from a vote by UK political scientists, including: the national minimum wage, Scottish devolution, and privatisation.

Note the respondents’ reasons for declaring success, based on a mix of their personal values and their assessment of process, political and programmatic factors. They declare success in very narrow terms: successful delivery in the short term.

So, privatisation is a success because the government succeeded in raising money, boosting its popularity and international reputation – not because we have established that the formerly nationalised industries work better in the private sector.

Similarly, devolution was declared a success because it solved a problem (local demand for self-government), not because devolved governments are better at making policy or because their policies have improved the lives of the Scottish population (Neil McGarvey and I discuss this here).

Individual policy instruments like the smoking ban are often treated in similar ways – we declare instant success when the bill passes and public compliance is high, then consider the longer term successes (less smoking, less secondhand smoke) later.

Further reading and watching: (1) Can a Policy Fail and Succeed at the Same Time?

(2)  http://blogs.lse.ac.uk/politicsandpolicy/archives/34735

Why should you read and watch this case study? I hesitate to describe UK tobacco control as a success because it instantly looks like I am moralising, and because it is based on a narrow set of policymaking criteria rather than an outcome in the population (it is up to you to decide if the UK’s policies are appropriate and whether its current level of smoking and health marks policy success). However, it represents a way to explore success in terms of several ‘causal factors’ (Peter John) that arise in each 1000 Words post: institutions, networks, socioeconomic conditions and ideas. Long-term tobacco control ‘success’ happened because:

  • the department of health took the policy lead (replacing trade and treasury departments);
  • tobacco is ‘framed’ as a pressing public health problem, not an economic good;
  • public health groups are consulted at the expense of tobacco companies;
  • socioeconomic conditions (including the value of tobacco taxation, and public attitudes to tobacco control) are conducive to policy change;
  • and, the scientific evidence on the harmful effects of smoking and secondhand smoke is ‘set in stone’ within governments.

The ‘take home’ message here is that ‘success’ depends as much on a policy environment conducive to change as the efficacy of political instruments and leadership qualities of politicians.

Update September 2019

I have now written up this UK tobacco discussion in this book:

Paul Cairney (2019) ‘The transformation of UK tobacco control’ in (eds) Mallory Compton and Paul ‘t Hart Great Policy Successes: How Governments Get It Right in a Big Way at Least Some of the Time (Oxford: Oxford University Press) Preview PDF

Each chapter is accompanied by a case study, such as the one on UK tobacco by 


Filed under 1000 words, agenda setting, Evidence Based Policymaking (EBPM), public policy, UK politics and policy