Tag Archives: EBP

Evidence-based policymaking: political strategies for scientists living in the real world

Note: I wrote the following discussion (last year) to be a Nature Comment but it was not to be!

Nature articles on evidence-based policymaking often present what scientists would like to see: rules to minimise bias caused by the cognitive limits of policymakers, and a simple policy process in which we know how and when to present the best evidence.[1]  What if neither requirement is ever met? Scientists will despair of policymaking while their competitors engage pragmatically and more effectively.[2]

Alternatively, if scientists learned from successful interest groups, or used insights from policy studies, they could develop three ‘take home messages’: understand and engage with policymaking in the real world; learn how and when evidence ‘wins the day’; and decide how far you should go to maximise the use of scientific evidence. Political science helps explain this process[3], and new systematic and thematic reviews add further insights.[4] [5] [6] [7]

Understand and engage with policymaking in the real world

Scientists are drawn to the ‘policy cycle’, because it offers a simple – but misleading – model for engagement with policymaking.[3] It identifies a core group of policymakers at the ‘centre’ of government, perhaps giving the impression that scientists should identify the correct ‘stages’ in which to engage (such as ‘agenda setting’ and ‘policy formulation’) to ensure the best use of evidence at the point of authoritative choice. This is certainly the image generated most frequently by health and environmental scientists when they seek insights from policy studies.[8]

Yet, this model does not describe reality. Many policymakers, in many levels and types of government, adopt and implement many measures at different times. For simplicity, we call the result ‘policy’ but almost no modern policy theory retains the linear policy cycle concept. In fact, it is more common to describe counterintuitive processes in which, for example, by the time policymaker attention rises to a policy problem at the ‘agenda setting’ stage, it is too late to formulate a solution. Instead, ‘policy entrepreneurs’ develop technically and politically feasible solutions then wait for attention to rise and for policymakers to have the motive and opportunity to act.[9]

Experienced government science advisors recognise this inability of the policy cycle image to describe real world policymaking. For example, Sir Peter Gluckman presents an amended version of this model, in which there are many interacting cycles in a kaleidoscope of activity, defying attempts to produce simple flow charts or decision trees. He describes the ‘art and craft’ of policy engagement, using simple heuristics to deal with a complex and ‘messy’ policy system.[10]

Policy studies help us identify two such heuristics or simple strategies.

First, respond to policymaker psychology by adapting to the short cuts they use to gather enough information quickly: ‘rational’, via trusted sources of oral and written evidence, and ‘irrational’, via their beliefs, emotions, and habits. Policy theories describe many interest group or ‘advocacy coalition’ strategies, including a tendency to combine evidence with emotional appeals, romanticise their own cause and demonise their opponents, or tell simple emotional stories with a hero and moral to exploit the biases of their audience.[11]

Second, adapt to complex ‘policy environments’ including: many policymakers at many levels and types of government, each with their own rules of evidence gathering, network formation, and ways of understanding policy problems and relevant socioeconomic conditions.[2] For example, advocates of international treaties often find that the evidence-based arguments their international audience takes for granted become hotly contested at national or subnational levels (even if the national government is a signatory), while the same interest groups presenting the same evidence of a problem can be key insiders in one government department but ignored in another.[3]

Learn the conditions under which evidence ‘wins the day’ in policymaking

Consequently, the availability and supply of scientific evidence, on the nature of problems and effectiveness of solutions, is a necessary but insufficient condition for evidence-informed policy. Three others must be met: actors use scientific evidence to persuade policymakers to pay attention to, and shift their understanding of, policy problems; the policy environment becomes broadly conducive to policy change; and, actors exploit attention to a problem, the availability of a feasible solution, and the motivation of policymakers, during a ‘window of opportunity’ to adopt specific policy instruments.[10]

Tobacco control represents a ‘best case’ example (Box 1) from which we can draw key lessons for ecological and environmental policies, giving us a sense of perspective by highlighting the long term potential for major evidence-informed policy change. However, unlike their colleagues in public health, environmental scientists have not developed a clear sense of how to produce policy instruments that are technically and politically feasible, so the delivery of comparable policy change is not inevitable.[12]

Box 1: Tobacco policy as a best case and cautionary tale of evidence-based policymaking

Tobacco policy is a key example – and useful comparator for ecological and environmental policies – since it represents a best case scenario and cautionary tale.[13] On the one hand, the scientific evidence on the links between smoking, mortality, and preventable death forms the basis for modern tobacco control policy. Leading countries – and the World Health Organisation, which oversees the Framework Convention on Tobacco Control (FCTC) – frame tobacco use as a public health ‘epidemic’ and allow their health departments to take the policy lead. Health departments foster networks with public health and medical groups at the expense of the tobacco industry, and emphasise the socioeconomic conditions – reductions in (a) smoking prevalence, (b) opposition to tobacco control, and (c) economic benefits to tobacco – most supportive of tobacco control. This framing, and conducive policymaking environment, helps give policymakers the motive and opportunity to choose policy instruments, such as bans on smoking in public places, which would otherwise seem politically infeasible.

On the other hand, even in a small handful of leading countries such as the UK, it took twenty to thirty years to go from the supply of the evidence to a proportionate government response: from the early evidence on smoking in the 1950s prompting major changes from the 1980s, to the evidence on passive smoking in the 1980s prompting public bans from the 2000s onwards. In most countries, the production of a ‘comprehensive’ set of policy measures is not yet complete, even though most signed the FCTC.

Decide how far you’ll go to maximise the use of scientific evidence in policymaking

These insights help challenge the naïve position that, if policymaking can change to become less dysfunctional[1], scientists can be ‘honest brokers’[14] and expect policymakers to use their evidence quickly, routinely, and sincerely. Even in the best case scenario, evidence-informed change takes hard work, persistence, and decades to achieve.

Since policymaking will always appear ‘irrational’ and ‘complex’[3], scientists need to think harder about their role, then choose to engage more effectively or accept their lack of influence.

To deal with ‘irrational’ policymakers, they should combine evidence with persuasion, simple stories, and emotional appeals, and frame their evidence to make the implications consistent with policymakers’ beliefs.

To deal with complex environments, they should engage for the long term to work out how to form alliances with influencers who share their beliefs, understand in which ‘venues’ authoritative decisions are made and carried out, the rules of information processing in those venues, and the ‘currency’ used by policymakers when they describe policy problems and feasible solutions.[2] In other words, develop skills that do not come with scientific training, avoid waiting for others to share your scientific mindset or respect for scientific evidence, and plan for the likely eventuality that policymaking will never become ‘evidence based’.

This approach may be taken for granted in policy studies[15], but it raises uncomfortable dilemmas about how far scientists should go, using persuasion and coalition-building, to maximise the use of scientific evidence in policy.

These dilemmas are too frequently overshadowed by claims – more comforting to scientists – that politicians are to blame because they do not understand how to generate, analyse, and use the best evidence. Scientists may only become effective in politics if they apply the same critical analysis to themselves.

[1] Sutherland, W.J. & Burgman, M. Nature 526, 317–318 (2015).

[2] Cairney, P. et al. Public Administration Review 76, 3, 399–402 (2016).

[3] Cairney, P. The Politics of Evidence-Based Policy Making (Palgrave Springer, 2016).

[4] Langer, L. et al. The Science of Using Science (EPPI, 2016).

[5] Breckon, J. & Dodson, J. Using Evidence. What Works? (Alliance for Useful Evidence, 2016).

[6] The politics of evidence-based policymaking, Palgrave Communications series (ed. Cairney, P.).

[7] Practical lessons from policy theories (eds. Weible, C. & Cairney, P.), Policy and Politics (April 2018).

[8] Oliver, K. et al. Health Research Policy and Systems 12, 34 (2014).

[9] Kingdon, J. Agendas, Alternatives and Public Policies (Harper Collins, 1984).

[10] Gluckman, P. Understanding the challenges and opportunities at the science-policy interface.

[11] Cairney, P. & Kwiatkowski, R. ‘How to communicate effectively with policymakers’, Palgrave Communications (2017).

[12] Biesbroek, R. et al. Nature Climate Change 5, 6, 493–494 (2015).

[13] Cairney, P. & Yamazaki, M. Journal of Comparative Policy Analysis.

[14] Pielke Jr, R. The Honest Broker (Cambridge University Press, 2007). Pielke originated the specific term, but this role is described more loosely by other commentators.

[15] Cairney, P. & Oliver, K. Health Research Policy and Systems 15, 35 (2017).


Filed under Evidence Based Policymaking (EBPM), public policy

The UK government’s imaginative use of evidence to make policy

This post describes a new article published in British Politics (Open Access).

In retrospect, I think the title was too subtle and clever-clever. I wanted to convey two meanings: ‘imaginative’ as a euphemism for ridiculous and often cynical uses of evidence, and ‘imaginative’ as a necessity, because a government has to be imaginative with evidence. The latter has two meanings of its own: imaginative (1) in the presentation and framing of an evidence-informed agenda, and (2) when facing pressure to go beyond the evidence and envisage policy outcomes.

So I describe two cases in which the government’s use of evidence seems cynical:

  1. Declaring complete success in turning around the lives of ‘troubled families’
  2. Exploiting vivid neuroscientific images to support ‘early intervention’

Then I describe more difficult cases in which supportive evidence is not clear:

  1. Family intervention project evaluations are of limited value and only tentatively positive
  2. Successful projects like FNP and Incredible Years have limited applicability or ‘scalability’

As scientists, we can shrug our shoulders about the uncertainty, but elected policymakers in government have to do something. So what do they do?

At this point of the article it will look like I have become an apologist for David Cameron’s government. Instead, I’m trying to demonstrate the value of comparing sympathetic/ unsympathetic interpretations and highlight the policy problem from a policymaker’s perspective:

[Image: extract from the discussion section of Cairney (2018) British Politics]

I suggest that they use evidence in a mix of ways to: describe an urgent problem, present an image of success and governing competence, and provide cover for more evidence-informed long term action.

The result is the appearance of top-down ‘muscular’ government and ‘a tendency for policy to change as it is implemented, such as when mediated by local authority choices and social workers maintaining a commitment to their professional values when delivering policy’.

I conclude by arguing that ‘evidence-based policy’ and ‘policy-based evidence’ are political slogans with minimal academic value. The binary divide between EBP/ PBE distracts us from more useful categories which show us the trade-offs policymakers have to make when faced with the need to act despite uncertainty.

[Image: Table 1 from Cairney (2018) British Politics]

As such, it forms part of a far wider body of work …

In both cases, the common theme is that, although (1) the world of top-down central government gets most attention, (2) central governments don’t even know what problem they are trying to solve, far less (3) how to control policymaking and outcomes.

See also:

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

Why doesn’t evidence win the day in policy and policymaking?

(found by searching for early intervention)

See also:

Here’s why there is always an expectations gap in prevention policy

Social investment, prevention and early intervention: a ‘window of opportunity’ for new ideas?

(found by searching for prevention)

Powerpoint for guest lecture: Paul Cairney UK Government Evidence Policy


Filed under Evidence Based Policymaking (EBPM), POLU9UK, Prevention policy, UK politics and policy

The Politics of Evidence revisited

This is a guest post by Dr Justin Parkhurst, responding to a review of our books by Dr Joshua Newman, and my reply to that review.

I really like that Joshua Newman has done this synthesis of 3 recent books covering aspects of evidence use in policy. Too many book reviews these days just describe the content, so some critical comments are welcome, as is the comparative perspective.

I’m also honoured that my book was included in the shortlist (it is available here, free as an ebook: bit.ly/2gGSn0n for interested readers) and I’d like to follow on from Paul to add some discussion points to the debate here – with replies to both Joshua and Paul (hoping first names are acceptable).

Have we heard all this before?

Firstly, I agree with Paul that saying ‘we’ve heard this all before’ risks speaking about a small community of active researchers who study these issues, and not the wider community. But I’d also add that what we’ve heard before is a starting point to many of these books, not where they end up.

In terms of where we start: I’m sure many of us who work in this field are somewhat frustrated at meetings when we hear people making statements that are well established in the literature. Some examples include:

  • “There can be many types of evidence, not just scientific research…”
  • “In the legal field, ‘evidence’ means something different…”
  • “We need evidence-based policy, not policy-based evidence…”
  • “We need to know ‘what works’ to get evidence into policy…”

Thus, I do think there is still a need to cement the foundations of the field more strongly – in essence, to establish a disciplinary baseline that people weighing in on a subject should be expected to know about before providing additional opinions. One way to help do this is for scholars to continue to lay out the basic starting points in our books – typically in the first chapter or two.

Of course, other specialist fields and disciplines have managed to establish their expertise to a point that individuals with opinions on a subject typically have some awareness that there is a field of study out there which they don’t necessarily know about. This is most obvious in the natural sciences (and perhaps in economics). For example, most people (current presidents of some large North American countries aside?) are aware that they don’t know a lot about engineering, medicine, or quantum physics – so they won’t offer speculative or instinctive opinions about why airplanes stay in the air, how to do bypass surgery, or what was wrong with the ‘Ant-Man’ film. Or, when individuals do offer views, they are typically expected to know the basics of the subject.

For the topic of evidence and policy, I often point people to Huw Davies, Isabel Walter, and Sandra Nutley’s book Using Evidence, which is a great introduction to much of this field, as well as Carol Weiss’ insights from the late 70s on the many meanings of research utilisation. I also routinely point people to read The Honest Broker by Roger Pielke Jr. (which I, myself, failed to read before writing my book and, as such, ended up repeating many of his points – I’ve apologised to him personally).

So yes, I think there is space for work like ours to continue to establish a baseline, even if some of us know this, because the expertise of the field is not yet widely recognised or established. Yet I think it is not accurate for Joshua to argue we end up repeating what is known, considering our books diverge in key ways after laying out some of the core foundations.

Where do we go from there?

More interesting for this discussion, then, is to reflect on what our various books try to do beyond simply laying out the basics of what we know about evidence use and policy. It is here where I would disagree with Joshua’s point claiming we don’t give a clear picture about the ‘problem’ that ‘evidence-based policy’ (his term – one I reject) is meant to address. Speaking only for my own book, I lay out the problem of bias in evidence use as the key motivation driving both advocates of greater evidence use as well as policy scholars critical of (oversimplified) knowledge translation efforts. But I distinguish between two forms of bias: technical bias – whereby evidence is used in ways that do not adhere to scientific best practice and thus produce sub-optimal social outcomes; and issue bias – whereby pieces of evidence, or mechanisms of evidence use, can obscure the important political choices in decision making, skewing policy choices towards those things that have been measured, or are conducive to measurement. Both of these forms of bias are violations of widely held social values – values of scientific fidelity on the one hand, and of democratic representation on the other. As such, for me, these are the problems that I try to consider in my book, exploring the political and cognitive origins of both, in order to inform thinking on how to address them.

That said, I think Joshua is right in some of the distinctions he makes between our works in how we try to take this field forward, or move beyond current challenges in differing ways. Paul takes the position that researchers need to do something, and one thing they can do is better understand politics and policy making. I think Paul’s writings about policy studies for students are superb (see his book and blog posts about policy concepts). But in terms of applying these insights to evidence use, this is where we most often diverge. I feel that keeping the focus on researchers puts too much emphasis on achieving ‘uptake’ of researchers’ own findings. In my view, I would point to three potential (overlapping) problems with this.

  • First – I do not think it is the role or responsibility of researchers to do this; rather, the deeper problem is a failure to establish the right system of evidence provision;
  • Second – I feel it leaves unstated the important but oft ignored normative question of how ‘should’ evidence be used to inform policy;
  • Third – I believe these calls rest on often unstated assumptions about the answer to the second point which we may wish to challenge.

In terms of the first point: I’m more of an institutionalist (as Joshua points out). My view is that the problems around non-use or misuse of evidence can be seen as resulting from a failure to establish appropriate systems that govern the use of evidence in policy processes. As such, the solution would have to lie with institutional development and changes (my final chapter advocates for this) that establish systems which serve to achieve the good governance of evidence.

Paul’s response to Joshua says that researchers are demanding action, so he speaks to them. He wants researchers to develop “useful knowledge of the policy process in which they might want to engage” (as he says above).  Yet while some researchers may wish to engage with policy processes, I think it needs to be clear that doing so is inherently a political act – and can take on a role of issue advocacy by promoting those things you researched or measured over other possible policy considerations (points made well by Roger Pielke Jr. in The Honest Broker). The alternative I point towards is to consider what good systems of evidence use would look like. This is the difference between arguing for more uptake of research, vs. arguing for systems through which all policy relevant evidence can be seen and considered in appropriate ways – regardless of the political savvy, networking, or activism of any given researcher (in my book I have chapters reflecting on what appropriate evidence for policy might be, and what a good process for its use might be, based on particular widely shared values).

In terms of the second and third points – my book might be the most explicit in its discussion of the normative values guiding efforts to improve evidence, and I am more critical than some about the assumption that getting researchers work ‘used’ by policymakers is a de-facto good thing. This is why I disagree with Joshua’s conclusion that my work frames the problem as ‘bridging the gap’. Rather I’d say I frame the problem as asking the question of ‘what does a better system of evidence use look like from a political perspective?’ My ‘good governance of evidence’ discussion presents an explicitly normative framework based on the two sets of values mentioned above – those around democratic accountability and around fidelity to scientific good practice – both of which have been raised as important in discussions about evidence use in political processes.

Is the onus on researchers?

Finally, I also would argue against Joshua’s conclusion that my work places the burden of resolving the problems on researchers. Paul argues above that he does this but with good reason. I try not to do this. This is again because my book is not making an argument for more evidence to be ‘used’ per se (and I don’t expect policymakers to just want to use it either). Rather I focus on identifying principles by which we can judge systems of evidence use, calling for guided incremental changes within national systems.

While I think academics can play an important role in establishing ‘best practice’ ideas, I explicitly argue that the mandate to establish, build, or incrementally change evidence advisory systems lies with the representatives of the people. Indeed, I include ‘stewardship’ as a core principle of my good governance of evidence framework to show that it should be those individuals who are accountable to the public that build these systems in different countries. Thus, the burden lies not with academics, but rather with our representatives – and, indirectly with all of us through the demands we make on them – to improve systems of evidence use.



Filed under Evidence Based Policymaking (EBPM), Uncategorized

Debating the politics of evidence-based policy

Joshua Newman has provided an interesting review of three recent books on evidence/ policy (click here). One of those books is mine: The Politics of Evidence-Based Policy Making (which you can access here).

His review is very polite, for which I thank him. I hope my brief response can be seen in a similarly positive light (well, I had hoped to make it brief). Maybe we disagree on one or two things, but often these discussions are about the things we emphasize and the way we describe similar points.

There are 5 points to which I respond because I have 5 digits on my right hand. I’d like you to think of me counting them out on my fingers. In doing so, I’ll use ‘Newman’ throughout, because that’s the academic convention, but I’d also like you to imagine me reading my points aloud and whispering ‘Joshua’ before each ‘Newman’.

  1. Do we really need to ‘take the debate forward’ so often?

I use this phrase myself, knowingly, to keep a discussion catchy, but I think it’s often misleading. I suggest not getting your hopes up too high when Newman raises the possibility of taking the debate forward with his concluding questions. We won’t resolve the relationship between evidence, politics & policy by pretending to reframe the same collection of questions about the prospect of political reform that people have been asking for centuries. It is useful to envisage better political systems (the subject of Newman’s concluding remarks) but I don’t think we should pretend that this is a new concern or that it will get us very far.

Indeed, my usual argument is that researchers need to do something (such as improve how we engage in the policy process) while we wait for political system reforms to happen (while doubting if they will ever happen).

Further, Newman does not produce any political reforms to address the problems he raises. Rather, for example, he draws attention to Trump to describe modern democracies as ‘not pluralist utopias’ and to identify examples in which policymakers draw primarily on beliefs, not evidence. By restating these problems, he does not solve them. So, what are researchers supposed to do after they grow tired of complaining that the world does not meet their hopes or expectations?

In other words, for me, (a) promoting political change and (b) acting during its absence are two sides of the same coin. We go round and round more often than we take things forward.

  2. What debate are we renaming?

Newman’s ‘we’ve heard it before’ argument seems more useful, but there is a lot to hear and relatively few people have heard it. I’d warn against the assumption that ‘I’ve heard this before’ can ever equal ‘we’ve heard it before’ (unless ‘we’ refers to a tiny group of specialists talking only to each other).

Rather, one of the most important things we can do as academics is to tell the same story to each other (to check if we understand the same story, in the same way, and if it remains useful) and to wider audiences (in a way that they can pick up and use without dedicating their career to our discipline).

Some of our most important insights endure for decades and they sometimes improve in the retelling. We apply them to new eras, and often come to the same basic conclusions, but it seems unhelpful to criticise a lack of complete novelty in individual texts (particularly when they are often designed to be syntheses). Why not use them to occasionally take a step back to discuss and clarify what we know?

Perhaps more importantly, I don’t think Newman is correct when he says that each book retells the story of the ‘research utilization’ literature. I’m retelling the story of policy theory, which describes how policymakers deal with bounded rationality in a complex policymaking environment. Policy theory’s intellectual histories often provide a very different perspective – of the policymaker trying to make good enough decisions, rather than the researcher trying to improve the uptake of their research – from the agenda inspired by Weiss et al (see for example The New Policy Sciences).

  3. Don’t just ‘get political’; understand the policy process

I draw on policy theory because it helps people understand policymaking. It would be a mistake to conclude from my book that I simply want researchers to ‘get political’. Rather, I want them to develop useful knowledge of the policy process in which they might want to engage. This knowledge is not freely available; it takes time to understand the discipline and reflect on policy dynamics.

Yet, the payoff can be profound, if only because it helps people think about the difference between two analytically separate causes of a notional ‘evidence policy gap’: (a) individuals making choices based on their beliefs and limited information (which is relatively easy to understand but also to caricature), and (b) systemic or ‘environmental’ causes (which are far more difficult to conceptualise and explain, but often more useful to understand).

  4. Don’t throw out the ‘two communities’ phrase without explaining why

Newman criticises the phrase ‘two communities’ as a description of silos in policymaking versus research, partly because (a) many policymakers use research frequently, and (b) the real divide is often between users/ non-users of research within policymaking organisations. In short, there are more than two communities.

I’d back up his published research with my anecdotal experience of giving talks to government audiences: researchers and analysts within government are often very similar in outlook to academics and they often talk in the same way as academics about the disconnect between their (original or synthetic) research and its use by their ‘operational’ colleagues.

Still, I’m not sure why Newman concludes that the ‘two communities’ phrase is ‘deeply flawed and probably counter-productive’. Yes, the world is more nuanced and less binary than ‘two communities’ suggests. Yes, the real divide may be harder to spot. Still, as Newman et al suggest: ‘Policy makers and academics should focus on bridging instruments that can bring their worlds closer together’. This bullet point from their article seems, to me, to be the point of using the phrase ‘two communities’. Maybe Caplan used the phrase differently in 1979, but to assert its historic meaning then reject the phrase’s use in modern discussion seems less useful than simply clarifying the argument in ways such as:

  • There is no simple policymaker/ academic divide, but note the major difference in requirements between (a) people who produce or distribute research without taking action, which allows them (for example) to be more comfortable with uncertainty, and (b) people who need to make choices despite having incomplete information to hand.
  • You might find a more receptive audience in one part of government (e.g. research/ analytical) than another (e.g. operational), so be careful about generalising from singular experiences.
  5. Should researchers engage in the policy process?

Newman says that each book, ‘unfairly places the burden of resolving the problem in the hands of an ill-equipped group of academics, operating outside the political system’.

I agree with Newman when he says that many researchers do not possess the skills to engage effectively in the policy process. Scientific training does not equip us with political skills. Indeed, I think you could read a few of my blog posts and conclude, reasonably, that you would want nothing more to do with the policy process because you’d be more effective by focusing on research.

The reason I put the onus back on researchers is that I am engaging with arguments like the one expressed by Newman (in other words, part of the meaning comes from the audience). Many people conclude their evidence policy discussions by identifying (or ‘reframing’) the problem primarily as the need for political reform. For me, the focus on other people changing to suit your preferences seems unrealistic and misplaced. In that context, I present the counter-argument that it may be better to adapt effectively to the policy process that exists, not the one you’d like to see. Sometimes it’s more useful to wear a coat than complain about the weather.

See also: The Politics of Evidence

The Politics of Evidence revisited




Filed under Evidence Based Policymaking (EBPM), public policy

What do we need to know about the politics of evidence-based policymaking?

Today, I’m helping to deliver a new course – Engaging Policymakers Training Programme – piloted by the Alliance for Useful Evidence and UCL. Right now, it’s for UCL staff (and mostly early career researchers). My bit is about how we can better understand the policy process so that we can engage in it more effectively. I have reproduced the brief guide below (for my two 2-hour sessions as part of a wider block). If anyone else is delivering something similar, please let me know. We could compare notes.

This module will be delivered in two parts to combine theory and practice

Part 1: What do we need to know about the politics of evidence-based policymaking?

Policy theories provide a wealth of knowledge about the role of evidence in policymaking systems. They prompt us to understand and respond to two key dynamics:

  1. Policymaker psychology. Policymakers combine rational and irrational shortcuts to gather information and make good enough decisions quickly. To appeal to rational shortcuts and minimise cognitive load, we reduce uncertainty by providing syntheses of the available evidence. To appeal to irrational shortcuts and engage emotional interest, we reduce ambiguity by telling stories or framing problems in specific ways.
  2. Complex policymaking environments. These processes take place in the context of a policy environment out of the control of individual policymakers. Environments consist of: many actors in many levels and types of government; engaging with institutions and networks, each with their own informal and formal rules; responding to socioeconomic conditions and events; and, learning how to engage with dominant ideas or beliefs about the nature of the policy problem. In other words, there is no policy cycle or obvious stage in which to get involved.

In this seminar, we discuss how to respond effectively to these dynamics. We focus on unresolved issues:

  1. Effective engagement with policymakers requires storytelling skills, but do we possess them?
  2. It requires a combination of evidence and emotional appeals, but is it ethical to do more than describe the evidence?
  3. The absence of a policy cycle, and presence of an ever-shifting context, requires us to engage for the long term, to form alliances, learn the rules, and build up trust in the messenger. However, do we have the time, and how should we invest it?

The format will be relatively informal. Cairney will begin by making some introductory points (not a powerpoint driven lecture) and encourage participants to relate the three questions to their research and engagement experience.

Gateway to further reading:

  • Paul Cairney and Richard Kwiatkowski (2017) ‘How to communicate effectively with policymakers: combine insights from psychology and policy studies’, Palgrave Communications
  • Paul Cairney and Kathryn Oliver (2017) ‘Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy?’ Health Research Policy and Systems (HARPS), DOI: 10.1186/s12961-017-0192-x
  • Paul Cairney, Kathryn Oliver, and Adam Wellstead (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review, 76, 3, 399–402, DOI:10.1111/puar.12555

Part 2: How can we respond pragmatically and effectively to the politics of EBPM?

In this seminar, we move from abstract theory and general advice to concrete examples and specific strategies. Each participant should come prepared to speak about their research and present a theoretically informed policy analysis in 3 minutes (without the aid of powerpoint). Their analysis should address:

  1. What policy problem does my research highlight?
  2. What are the most technically and politically feasible solutions?
  3. How should I engage in the policy process to highlight these problems and solutions?

After each presentation, each participant should be prepared to ask questions about the problem raised and the strategy to engage. Finally, to encourage learning, we will reflect on the memorability and impact of presentations.

Powerpoint: Paul Cairney A4UE UCL 2017


Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy

#EU4Facts: 3 take-home points from the JRC annual conference

See EU4FACTS: Evidence for policy in a post-fact world

The JRC’s annual conference has become a key forum in which to discuss the use of evidence in policy. At this scale, in which many hundreds of people attend plenary discussions, it feels like an annual mass rally for science; a ‘call to arms’ to protect the role of science in the production of evidence, and the protection of evidence in policy deliberation. There is not much discussion of storytelling, but we tell each other a fairly similar story about our fears for the future unless we act now.

Last year, the main story was of fear for the future of heroic scientists: the rise of Trump and the Brexit vote prompted many discussions of post-truth politics and reduced trust in experts. An immediate response was to describe attempts to come together, and stick together, to support each other’s scientific endeavours during a period of crisis. There was little call for self-analysis and reflection on the contribution of scientists and experts to barriers between evidence and policy.

This year was a bit different. There was the same concern for reduced trust in science, evidence, and/ or expertise, and some references to post-truth politics and populism, but with some new voices describing the positive value of politics, often when discussing the need for citizen engagement, and of the need to understand the relationship between facts, values, and politics.

For example, a panel on psychology opened up the possibility that we might consider our own politics and cognitive biases while we identify them in others, and one panellist spoke eloquently about the importance of narrative and storytelling in communicating to audiences such as citizens and policymakers.

A focus on narrative is not new, but it provides a challenging agenda when interacting with a sticky story of scientific objectivity. For the unusually self-reflective, it also reminds us that our annual discussions are not particularly scientific; the usual rules to assess our statements do not apply.

As in studies of policymaking, we can say that there is high support for such stories when they remain vague and driven more by emotion than the pursuit of precision. When individual speakers try to make sense of the same story, they do it in different – and possibly contradictory – ways. As in policymaking, the need to deliver something concrete helps focus the mind, and prompts us to make choices between competing priorities and solutions.

I describe these discussions in two ways: tables, in which I try to boil down each speaker’s speech into a sentence or two (you can get their full details in the programme and the speaker bios); and a synthetic discussion of the top 3 concerns, paraphrasing and combining arguments from many speakers:

1. What are facts?

The key distinction began as a three-way divide between politics, values, and facts, which is impossible to maintain in practice.

Yet, subsequent discussion revealed a more straightforward distinction between facts and opinion, ‘fake news’, and lies. The latter sums up an ever-present fear of the diminishing role of science in an alleged ‘post truth’ era.

2. What exactly is the problem, and what is its cause?

The tables below provide a range of concerns about the problem, from threats to democracy to the need to communicate science more effectively. A theme of growing importance is the need to deal with the cognitive biases and informational shortcuts of people receiving evidence: communicate with reference to values, beliefs, and emotions; build up trust in your evidence via transparency and reliability; and, be prepared to discuss science with citizens and to be accountable for your advice. There was less discussion of the cognitive biases of the suppliers of evidence.

3. What is the role of scientists in relation to this problem?

Not all speakers described scientists as the heroes of this story:

  • Some described scientists as the good people acting heroically to change minds with facts.
  • Some described their potential to co-produce important knowledge with citizens (although primarily with like-minded citizens who learn the value of scientific evidence?).
  • Some described the scientific ego as a key barrier to action.
  • Some identified their low confidence to engage, their uncertainty about what to do with their evidence, and/ or their scientist identity which involves defending science as a cause/profession and drawing the line between providing information and advocating for policy. This hope to be an ‘honest broker’ was pervasive in last year’s conference.
  • Some (rightly) rejected the idea of separating facts/ values and science/ politics, since evidence is never context free (and gathering evidence without thought to context is amoral).

Often in such discussions it is difficult to know if some scientists are naïve actors or sophisticated political strategists, because their public statements could be identical. For the former, an appeal to objective facts and the need to privilege science in EBPM may be sincere. Scientists are, and should be, separate from/ above politics. For the latter, the same appeal – made again and again – may be designed to energise scientists and maximise the role of science in politics.

Yet, energy is only the starting point, and it remains unclear how exactly scientists should communicate and how to ‘know your audience’: would many scientists know who to speak to, in governments or the Commission, if they had something profoundly important to say?

Keynotes and introductory statements from panel chairs
Vladimír Šucha: We need to understand the relationship between politics, values, and facts. Facts are not enough. To make policy effectively, we need to combine facts and values.
Tibor Navracsics: Politics is swayed more by emotions than carefully considered arguments. When making policy, we need to be open and inclusive of all stakeholders (including citizens), communicate facts clearly and at the right time, and be aware of our own biases (such as groupthink).
Sir Peter Gluckman: ‘Post-truth’ politics is not new, but it is pervasive and easier to achieve via new forms of communication. People rely on like-minded peers, religion, and anecdote as forms of evidence underpinning their own truth. When describing the value of science, to inform policy and political debate, note that it is more than facts; it is a mode of thinking about the world, and a system of verification to reduce the effect of personal and group biases on evidence production. Scientific methods help us define problems (e.g. in discussion of cause/ effect) and interpret data. Science advice involves expert interpretation, knowledge brokerage, a discussion of scientific consensus and uncertainty, and standing up for the scientific perspective.
Carlos Moedas: Safeguard trust in science by (1) explaining the process you use to come to your conclusions; (2) providing safe and reliable places for people to seek information (e.g. when they Google); and (3) making sure that science is robust and scientific bodies have integrity (such as when dealing with a small number of rogue scientists).
Pascal Lamy: 1. ‘Deep change or slow death’: we need to involve more citizens in the design of publicly financed projects such as major investments in science. Many scientists complain that there is already too much political interference, drowning scientists in extra work. However, we will face a major backlash – akin to the backlash against ‘globalisation’ – if we do not subject key debates on the future of science and technology-driven change (e.g. on AI, vaccines, drone weaponry) to democratic processes involving citizens. 2. The world changes rapidly, and evidence gathering is context-dependent, so we need to monitor regularly the fitness of our scientific measures (of e.g. trade).
Jyrki Katainen: ‘Wicked problems’ have no perfect solution, so we need the courage to choose the best imperfect solution. Technocratic policymaking is not the solution; it does not meet the democratic test. We need the language of science to be understandable to citizens: ‘a new age of reason reconciling the head and heart’.

Panel: Why should we trust science?
Jonathan Kimmelman: Some experts make outrageous and catastrophic claims. We need a toolbox to decide which experts are most reliable, by comparing their predictions with actual outcomes. Prompt them to make precise probability statements and test them. Only those who are willing to be held accountable should be involved in science advice.
Johannes Vogel: We should devote 15% of science funding to public dialogue. Scientific discourse, and a science-literate population, is crucial for democracy. EU Open Society Policy is a good model for stakeholder inclusiveness.
Tracey Brown: Create a more direct link between society and evidence production, to ensure discussions involve more than the ‘usual suspects’. An ‘evidence transparency framework’ helps create a space in which people can discuss facts and values. ‘Be open, speak human’ describes showing people how you make decisions. How can you expect the public to trust you if you don’t trust them enough to tell them the truth?
Francesco Campolongo: Jean-Claude Juncker’s starting point is that Commission proposals and activities should be ‘based on sound scientific evidence’. Evidence comes in many forms. For example, economic models provide simplified versions of reality to make decisions. Economic calculations inform profoundly important policy choices, so we need to make the methodology transparent, communicate probability, and be self-critical and open to change.

Panel: the politician’s perspective
Janez Potočnik: The shift of the JRC’s remit allowed it to focus on advocating science for policy rather than policy for science. Still, such arguments need to be backed by an economic argument (this policy will create growth and jobs). A narrow focus on facts and data ignores the context in which we gather facts, such as a system which undervalues human capital and the environment.
Máire Geoghegan-Quinn: Policy should be ‘solidly based on evidence’ and we need well-communicated science to change the hearts and minds of people who would otherwise rely on their beliefs. Part of the solution is to get, for example, kids to explain what science means to them.

Panel: Redesigning policymaking using behavioural and decision science
Steven Sloman: The world is complex. People overestimate their understanding of it, and this illusion is burst when they try to explain its mechanisms. People who know the least feel the strongest about issues, but if you ask them to explain the mechanisms their strength of feeling falls. Why? People confuse their knowledge with that of their community. The knowledge is not in their heads, but communicated across groups. If people around you feel they understand something, you feel like you understand, and people feel protective of the knowledge of their community. Implications? 1. Don’t rely on ‘bubbles’; generate more diverse and better coordinated communities of knowledge. 2. Don’t focus on giving people full information; focus on the information they need at the point of decision.
Stephan Lewandowsky: 97% of scientists agree that human-caused climate change is a problem, but the public thinks it’s roughly 50-50. We have a false-balance problem. One solution is to ‘inoculate’ people against its cause (science denial). We tell people the real figures and facts, warn them of the rhetorical techniques employed by science denialists (e.g. use of false experts on smoking), and mock the false balance argument. This allows you to reframe the problem as an investment in the future, not cost now (and find other ways to present facts in a non-threatening way). In our lab, it usually ‘neutralises’ misinformation, although with the risk that a ‘corrective message’ to challenge beliefs can entrench them.
Françoise Waintrop: It is difficult to experiment when public policy is handed down from on high. Or, experimentation is alien to established ways of thinking. However, our 12 new public innovation labs across France allow us to immerse ourselves in the problem (to define it well) and nudge people to action, working with their cognitive biases.
Simon Kuper: Stories combine facts and values. To change minds: persuade the people who are listening, not the sceptics; find go-betweens to link suppliers and recipients of evidence; speak in stories, not jargon; don’t overpromise the role of scientific evidence; and, never suggest science will side-line human beings (e.g. when technology costs jobs).

Panel: The way forward
Jean-Eric Paquet: We describe ‘fact based evidence’ rather than ‘science based’. A key aim is to generate ‘ownership’ of policy by citizens. Politicians are more aware of their cognitive biases than we technocrats are.
Anne Bucher: In the European Commission we used evidence initially to make the EU more accountable to the public, via systematic impact assessment and quality control. It was a key motivation for better regulation. We now focus more on generating inclusive and interactive ways to consult stakeholders.
Ann Mettler: Evidence-based policymaking is at the heart of democracy. How else can you legitimise your actions? How else can you prepare for the future? How else can you make things work better? Yet, a lot of our evidence presentation is so technical; even difficult for specialists to follow. The onus is on us to bring it to life, to make it clearer to the citizen and, in the process, defend scientists (and journalists) during a period in which Western democracies seem to be at risk from anti-democratic forces.
Mariana Kotzeva: Our facts are now considered from an emotional and perception point of view. The process does not just involve our comfortable circle of experts; we are now challenged to explain our numbers. Attention to our numbers can be unpredictable (e.g. on migration). We need to build up trust in our facts, partly to anticipate or respond to the quick spread of poor facts.
Rush Holt: In society we can find the erosion of the feeling that science is relevant to ‘my life’, and few US policymakers ask ‘what does science say about this?’ partly because scientists set themselves above politics. Politicians have had too many bad experiences with scientists who might say ‘let me explain this to you in a way you can understand’. Policy is not about science based evidence; more about asking a question first, then asking what evidence you need. Then you collect evidence in an open way to be verified.

Phew!

That was 10 hours of discussion condensed into one post. If you can handle more discussion from me, see:

Psychology and policymaking: Three ways to communicate more effectively with policymakers

The role of evidence in policy: EBPM and How to be heard  

Practical Lessons from Policy Theories

The generation of many perspectives to help us understand the use of evidence

How to be an ‘entrepreneur’ when presenting evidence



Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling

A 5-step strategy to make evidence count

Let’s imagine a heroic researcher, producing the best evidence and fearlessly ‘speaking truth to power’. Then, let’s place this person in four scenarios, each of which combines a discussion of evidence, policy, and politics in different ways.

  1. Imagine your hero presents to HM Treasury an evidence-based report concluding that a unitary UK state would be far more efficient than a union state guaranteeing Scottish devolution. The evidence is top quality and the reasoning is sound, but the research question is ridiculous. The result of political deliberation and electoral choice suggests that your hero is asking a research question that does not deserve to be funded in the current political climate. Your hero is a clown.
  2. Imagine your hero presents to the Department of Health a report based on the systematic review of multiple randomised control trials. It recommends that you roll out an almost-identical early years or public health intervention across the whole country. We need high ‘fidelity’ to the model to ensure the correct ‘dosage’ and to measure its effect scientifically. The evidence is of the highest quality, but the research question is not quite right. The government has decided to devolve this responsibility to local public bodies and/ or encourage the co-production of public service design by local public bodies, communities, and service users. So, to focus narrowly on fidelity would be to ignore political choices (perhaps backed by different evidence) about how best to govern. If you don’t know the politics involved, you will ask the wrong questions or provide evidence with unclear relevance. Your hero is either a fool, naïve to the dynamics of governance, or a villain willing to ignore governance principles.        
  3. Imagine two fundamentally different – but equally heroic – professions with their own ideas about evidence. One favours a hierarchy of evidence in which RCTs and their systematic review sit at the top, and service user and practitioner feedback sits near the bottom. The other rejects this hierarchy completely, identifying the unique, complex relationship between practitioner and service user, which requires high discretion to make choices in situations that will differ each time. Trying to resolve a debate between them with reference to ‘the evidence’ makes no sense. This is about a conflict between two heroes with opposing beliefs and preferences that can only be resolved through compromise or political choice. This is, oh I don’t know, Batman v Superman, saved by Wonder Woman.
  4. Imagine you want the evidence on hydraulic fracturing for shale oil and gas. We know that ‘the evidence’ follows the question: how much can we extract? How much revenue will it produce? Is it safe, from an engineering point of view? Is it safe, from a public health point of view? What will be its impact on climate change? What proportion of the public supports it? What proportion of the electorate supports it? Who will win and lose from the decision? It would be naïve to think that there is some kind of neutral way to produce an evidence-based analysis of such issues. The commissioning and integration of evidence has to be political. To pretend otherwise is a political strategy. Your hero may be another person’s villain.

Now, let’s use these scenarios to produce a 5-step way to ‘make evidence count’.

Step 1. Respect the positive role of politics

A narrow focus on making the supply of evidence count, via ‘evidence-based policymaking’, will always be dispiriting because it ignores politics or treats political choice as an inconvenience. If we:

  • begin with a focus on why we need political systems to make authoritative choices between conflicting preferences, and take governance principles seriously, we can
  • identify the demand for evidence in that context, then be more strategic and pragmatic about making evidence count, and
  • be less dispirited about the outcome.

In other words, think about the positive and necessary role of democratic politics before bemoaning post-truth politics and policy-based-evidence-making.

Step 2. Reject simple models of evidence-based policymaking

Policy is not made in a cycle containing a linear series of separate stages, and we won’t ‘make evidence count’ by using that image to inform our practices.

[Image: the policy cycle]

You might not want to give up the cycle image because it presents a simple account of how you should make policy. It suggests that we elect policymakers who then identify their aims, identify policies to achieve those aims, select a policy measure, ensure that the selection is legitimised by the population or its legislature, identify the necessary resources, implement, and then evaluate. Or, policymakers aided by expert policy analysts make and legitimise choices, skilful public servants carry them out, and policy analysts assess the results using evidence.

One compromise is to keep the cycle image then show how messy the process is in practice.

However, there comes a point when there is too much mess, and the image no longer helps you explain (a) to the public what you are doing, or (b) to providers of evidence how they should engage in political systems. By this point, simple messages from more complicated policy theories may be more useful.

Or, we may no longer want a cycle to symbolise a single source of policymaking authority. In a multi-level system, with many ‘centres’ possessing their own sources of legitimate authority, a single and simple policy cycle seems too artificial to be useful.

Step 3. Tell a simple story about your evidence

People are ‘cognitive misers’ seeking ‘rational’ and ‘irrational’ shortcuts to gather information for action, so you won’t get far if you bombard them with too much evidence. Policymakers already have too much evidence and they seek ways to reduce their cognitive load, relying on: (a) trusted sources of concise evidence relevant to their aims, and (b) their own experience, gut instinct, beliefs, and emotions.

The implication of both shortcuts is that we need to tell simple and persuasive stories about the substance and implications of the evidence we present. To say that ‘the evidence does not speak for itself’ may seem trite, but I’ve met too many people who assume naively that it will somehow ‘win the day’. In contrast, civil servants know that the evidence-informed advice they give to ministers needs to relate to the story that government ministers tell to the public.


Step 4.  Tailor your story to many audiences

In a complex or multi-level environment, one story to one audience (such as a minister) is not enough. If there are many key sources of policymaking authority – including public bodies with high autonomy, organisations and practitioners with the discretion to deliver services, and service users involved in designing services – there are many stories being told about what we should be doing and why. We may convince one audience and alienate (or fail to inspire) another with the same story.

Step 5. Clarify and address key dilemmas with political choice, not evidence

Let me give you one example of the dilemmas that must arise when you combine evidence and politics to produce policy: how do you produce a model of ‘evidence based best practice’ which combines evidence and governance principles in a consistent way? Here are 3 ideal-type models which answer the question in very different ways.

[Image: Table 1 – three ideal types of ‘evidence-based best practice’]

The table helps us think through the tensions between models, built on very different principles of good evidence and governance.

In practice, you may want to combine different elements, perhaps while arguing that the loss of consistency is lower than the gain from flexibility. Or, the dynamics of political systems limit such choice or prompt ad hoc and inconsistent choices.

I built a lot of this analysis on the experiences of the Scottish Government, which juggles all three models, including a key focus on the improvement method in its Early Years Collaborative.

However, Kathryn Oliver and I show that the UK government faces the same basic dilemma and addresses it in similar ways.

The example freshest in my mind is Sure Start. Its rationale was built on RCT evidence and systematic review. However, its roll-out was built more on local flexibility and service design than insistence on fidelity to a model. More recently, the Troubled Families programme initially set the policy agenda and criteria for inclusion, but increasingly invites local public bodies to select the most appropriate interventions, aided by the Early Intervention Foundation which reviews the evidence but does not insist on one-best-way. Emily St Denny and I explore these issues further in our forthcoming book on prevention policy, an exemplar case study of a field in which it is difficult to know how to ‘make evidence count’.

If you prefer a 3-step take home message:

  1. I think we use phrases like ‘impact’ and ‘make evidence count’ to reflect a vague and general worry about a decline in respect for evidence and experts. Certainly, when I go to large conferences of scientists, they usually tell a story about ‘post-truth’ politics.
  2. Usually, these stories do not acknowledge the difference between two different explanations for an evidence-policy gap: (a) pathological policymaking and corrupt politicians, versus (b) complex policymaking and politicians having to make choices despite uncertainty.
  3. To produce evidence with ‘impact’, and know how to ‘make evidence count’, we need to understand the policy process and the demand for evidence within it.

*Background. This is a post for my talk at the Government Economic Service and Government Social Research Service Annual Training Conference (15th September 2017). This year’s theme is ‘Impact and Future-Proofing: Making Evidence Count’. My brief is to discuss evidence use in the Scottish Government, but it faces the same basic question as the UK Government: how do you combine principles of evidence quality with principles of good governance? In other words, if you were in a position to design (a) an evidence-gathering system and (b) a political system, you’d soon find major points of tension between them. Resolving those tensions involves political choice, not more evidence. Of course, you are not in a position to design both systems, so the more complicated question is: how do you satisfy principles of evidence and governance in a complex policy process, often driven by policymaker psychology, over which you have little control? Here are 7 different ‘answers’.

Powerpoint Paul Cairney @ GES GSRS 2017


Filed under Evidence Based Policymaking (EBPM), public policy, Scottish politics, UK politics and policy