Tag Archives: defining evidence

Theory and Practice: How to Communicate Policy Research beyond the Academy

Notes for my first talk at the University of Queensland, Wednesday 24th October, 12.30pm, Graduate Centre, room 402.

Here is the powerpoint that I tend to use to inform discussions with civil servants (CS). I first used it for discussions with CS in the Scottish and UK governments, followed by remarkably similar discussions in parts of the New Zealand and Australian governments. Partly, it provides a way into common explanations for gaps between the supply of, and demand for, research evidence. However, it also provides a wider context within which to compare abstract and concrete reasons for those gaps, which inform a discussion of possible responses at individual, organisational, and systemic levels. Some of the gap is caused by a lack of effective communication, but we should also discuss the wider context in which such communication takes place.

I begin by telling civil servants about the message I give to academics about why policymakers might ignore their evidence:

  1. There are many claims to policy-relevant knowledge.
  2. Policymakers have to ignore most evidence.
  3. There is no simple policy cycle in which we all know at what stage to provide what evidence.

[Slide 3, 24.10.18]

In such talks, I go into different images of policymaking, comparing the simple policy cycle with images of ‘messy’ policymaking, then introducing my own image which describes the need to understand the psychology of choice within a complex policymaking environment.

Under those circumstances, key responses include:

  • framing evidence in terms of the ways in which your audience understands policy problems
  • engaging in networks to identify and exploit the right time to act, and
  • venue shopping to find sympathetic audiences in different parts of political systems.

However, note the context of those discussions. I tend to be speaking with scientific researcher audiences to challenge some preconceptions about: what counts as good evidence, how much evidence we can reasonably expect policymakers to process, and how easy it is to work out where and when to present evidence. It’s generally a provocative talk, to identify the massive scale of the evidence-to-policy task, not a simple ‘how to do it’ guide.

In that context, I suggest to civil servants that many academics might be interested in more CS engagement, but might be put off by the overwhelming scale of their task, and – even if they remained undeterred – would face some practical obstacles:

  1. They may not know where to start: who should they contact to start making connections with policymakers?
  2. The incentives and rewards for engagement may not be clear. The UK’s ‘impact’ agenda has changed things, but not to the extent that any engagement is good engagement. Researchers need to tell a convincing story that they made an impact on policy/ policymakers with their published research, so there is a notional tipping point at which engagement reaches a scale that makes it worth doing.
  3. The costs are clearer. For example, any time spent doing engagement is time away from writing grant proposals and journal articles (in other words, the things that still make careers).
  4. The rewards and costs are not spread evenly. Put most simply, white male professors may have the most opportunities and face the fewest penalties for engagement in policymaking and social media. Or, the opportunities and rewards may vary markedly by discipline. In some, engagement is routine. In others, it is time away from core work.

In that context, I suggest that CS should:

  • provide clarity on what they expect from academics, and when they need information
  • describe what they can offer in return (which might be as simple as a written and signed acknowledgement of impact, or formal inclusion on an advisory committee)
  • show some flexibility: you may have a tight deadline, but can you reasonably expect an academic to drop what they are doing at short notice?
  • engage routinely with academics, to help form networks and identify the people you need at the right time

These introductory discussions provide a way into common descriptions of the gaps between academics and policymakers:

  • Technical languages/ jargon to describe their work
  • Timescales to supply and demand information
  • Professional incentives (such as to value scientific novelty in academia but evidential synthesis in government)
  • Comfort with uncertainty (often, scientists project relatively high uncertainty and don’t want to get ahead of the evidence; often policymakers need to project certainty and decisiveness)
  • Assessments of the relative value of scientific evidence compared to other forms of policy-relevant information
  • Assessments of the role of values and beliefs (some scientists want to draw the line between providing evidence and advice; some policymakers want them to go much further)

To discuss possible responses, I use the European Commission Joint Research Centre’s ‘knowledge management for policy’ project, in which they identify the 8 core skills of organisations bringing together the suppliers and demanders of policy-relevant knowledge.

[Figure 1: the JRC’s 8 core skills]

However, I also use the following table to highlight some caution about the things we can achieve with general skills development and organisational reforms. Sometimes, the incentives to engage will remain low. Further, engagement is no guarantee of agreement.

In a nutshell, the table provides three very different models of ‘evidence-informed policymaking’ when we combine political choices about what counts as good evidence, and what counts as good policymaking (discussed at length in teaching evidence-based policy to fly). Discussion and clearer communication may help clarify our views on what makes a good model, but I doubt it will produce any agreement on what to do.

[Table 1: 3 ideal types of EBPM]

In the latter part of the talk, I go beyond that powerpoint into two broad examples of practical responses:

  1. Storytelling

The Narrative Policy Framework describes the ‘science of stories’: we can identify stories with a 4-part structure (setting, characters, plot, moral) and measure their relative impact.  Jones/ Crow and Crow/Jones provide an accessible way into these studies. Also look at Davidson’s article on the ‘grey literature’ as a rich source of stories on stories.

On one hand, I think that storytelling is a great possibility for researchers: it helps them produce a core – and perhaps emotionally engaging – message that they can share with a wider audience. Indeed, I’d see it as an extension of the process that academics are used to: identifying an audience and framing an argument according to the ways in which that audience understands the world.

On the other hand, it is important to not get carried away by the possibilities:

  • My reading of the NPF empirical work is that the most impactful stories reinforce the beliefs of the audience – mobilising them to act – rather than change their minds.
  • Also look at the work of the Frameworks Institute, which experiments with individual versus thematic stories because people react to them in very different ways. Some might empathise with an individual story; some might judge harshly. For example, they discuss stories about low income families and healthy eating, in which they use the theme of a maze to help people understand the lack of good choices available to people in areas with limited access to healthy food.

See: Storytelling for Policy Change: promise and problems

  2. Evidence for advocacy

The article I co-authored with Oxfam staff helps identify the lengths to which we might think we have to go to maximise the impact of research evidence. Their strategies include:

  1. Identifying the policy change they would like to see.
  2. Identifying the powerful actors they need to influence.
  3. A mixture of tactics: insider, outsider, and supporting others by, for example, boosting local civil society organisations.
  4. A mix of ‘evidence types’ for each audience.

[Table 2 from the Oxfam article: evidence types for each audience]

  5. Wider public campaigns to address the political environment in which policymakers consider choices
  6. Engaging stakeholders in the research process (often called the ‘co-production of knowledge’)
  7. Framing: personal stories, ‘killer facts’, visuals, credible messengers
  8. Exploiting ‘windows of opportunity’
  9. Monitoring, learning, trial and error

In other words, a source of success stories may provide a model for engagement or the sense that we need to work with others to engage effectively. Clear communication is one thing. Clear impact at a significant scale is another.

See: Using evidence to influence policy: Oxfam’s experience

1 Comment

Filed under agenda setting, Evidence Based Policymaking (EBPM)

Teaching evidence based policy to fly: how to deal with the politics of policy learning and transfer

This post provides (a generous amount of) background for my ANZSOG talk Teaching evidence based policy to fly: transferring sound policies across the world.

The event’s description sums up key conclusions in the literature on policy learning and policy transfer:

  1. technology and ‘entrepreneurs’ help ideas spread internationally, and domestic policymakers can use them to be more informed about global policy innovation, but
  2. there can be major unintended consequences to importing ideas, such as the adoption of policy solutions with poorly-evidenced success, or a broader sense of failed transfer caused by factors such as a poor fit between the aims of the exporter/importer.

In this post, I connect these conclusions to broader themes in policy studies, which suggest that:

  1. policy learning and policy transfer are political processes, not ‘rational’ or technical searches for information
  2. the use of evidence to spread policy innovation requires two interconnected choices: what counts as good evidence, and what role central governments should play.
  3. the following ’11 question guide’ to evidence based policy transfer serves more as a way to reflect than as a blueprint for action.

As usual, I suggest that we focus less on how we think we’d like to do it, and more on how people actually do it.

[Image: ANZSOG Auckland policy transfer event ad]

Policy transfer describes the use of evidence about policy in one political system to help develop policy in another. Taken at face value, it sounds like a great idea: why would a government try to reinvent the wheel when another government has shown how to do it?

Therefore, wouldn’t it be nice if I turned up to the lecture, equipped with a ‘blueprint’ for ‘evidence based’ policy transfer, and declared how to do it in a series of realistic and straightforward steps? Unfortunately, there are three main obstacles:

  1. ‘Evidence based’ is a highly misleading description of the use of information in policy.
  2. To transfer a policy blueprint completely, in this manner, would require all places and contexts to be the same, and for the policy process to be technocratic and apolitical.
  3. There are general academic guides on how to learn lessons from others systematically – such as Richard Rose’s ‘practical guide’ – but most academic work on learning and transfer does not suggest that policymakers follow this kind of advice.

[Image: Rose’s 10 lessons]

Instead, policy learning is a political process – involving the exercise of power to determine what and how to learn – and it is difficult to separate policy transfer from the wider use of evidence and ideas in policy processes.

Let’s take each of these points in turn, before reflecting on their implications for any X-step guide:

3 reasons why ‘evidence based’ does not describe policymaking

In a series of ANZSOG talks on ‘evidence based policymaking’ (EBPM), I describe three main factors, all of which are broadly relevant to transfer:

  1. There are many forms of policy-relevant evidence and few policymakers adhere to a strict ‘hierarchy’ of knowledge.

Therefore, it is unclear how one government can, or should, generate evidence of another government’s policy success.

  2. Policymakers must find ways to ignore most evidence – such as by combining ‘rational’ and ‘irrational’ cognitive shortcuts – to be able to act quickly.

The generation of policy transfer lessons is a highly political process in which actors adapt to this need to prioritise information while competing with each other. They exercise power to: prioritise some information and downplay the rest, define the nature of the policy problem, and evaluate the success of another government’s solutions. There is a strong possibility that policymakers will import policy solutions without knowing if, and why, they were successful.

  3. They do not control the policy process in which they engage.

We should not treat ‘policy transfer’ as separate from the policy process in which policymakers and influencers engage. Rather, the evidence of international experience competes with many other sources of ideas and evidence within a complex policymaking system.

The literature on ‘policy learning’ tells a similar story

Studies of the use of evaluation evidence (perhaps to answer the question: was this policy successful?) have long described policymakers using the research process for many different purposes, from short-term problem-solving and long-term enlightenment, to putting off decisions or using evidence cynically to support an existing policy.

We should therefore reject the temptation to (a) equate ‘policy learning’ with a simplistic process that we might associate with teachers transmitting facts to children, or (b) assume that adults simply change their beliefs when faced with new evidence. Rather, Dunlop and Radaelli describe policy learning as a political process in the following ways:

1. It is collective and rule-bound

Individuals combine cognition and emotion to process information, in organisations with rules that influence their motive and ability to learn, and in wider systems, in which many actors cooperate and compete to establish the rules of evidence gathering and analysis, or policymaking environments that constrain or facilitate their action.

2. ’Evidence based’ is one of several types of policy learning

  • Epistemic. Primarily by scientific experts transmitting knowledge to policymakers.
  • Reflection. Open dialogue to incorporate diverse forms of knowledge and encourage cooperation.
  • Bargaining. Actors learn how to cooperate and compete effectively.
  • Hierarchy. Actors with authority learn how to impose their aims; others learn the limits to their discretion.

3. The process can be ‘dysfunctional’: driven by groupthink, limited analysis, and learning how to dominate policymaking, not improve policy.

Their analysis can produce relevant take-home points such as:

  • Experts will be ineffective if they assume that policy learning is epistemic. The assumption will leave them ill-prepared to deal with bargaining.
  • There is more than one legitimate way to learn, such as via deliberative processes that incorporate more perspectives and forms of knowledge.

What does the literature on transfer tell us?

‘Policy transfer’ can describe a spectrum of activity:

  • driven voluntarily, by a desire to learn from the story of another government’s policy success. In such cases, importers use shortcuts to learning, such as by restricting their search to systems with which they have something in common (such as geography or ideology), learning via intermediaries such as ‘entrepreneurs’, or limiting their searches for evidence of success.
  • driven by various forms of pressure, including encouragement by central (or supranational) governments, international norms or agreements, ‘spillover’ effects causing one system to respond to innovation by another, or demands by businesses to minimise the cost of doing business.

In that context, some of the literature focuses on warning against unsuccessful policy transfer caused by factors such as:

  • Failing to generate or use enough evidence on what made the initial policy successful
  • Failing to adapt that policy to local circumstances
  • Failing to back policy change with sufficient resources

However, other studies highlight some major qualifications:

  • If the process is about using ideas about one system to inform another, our attention may shift from ‘transfer’ to ‘translation’ or ‘transformation’, and the idea of ‘successful transfer’ makes less sense
  • Transfer success is not the same as implementation success, which depends on a wider range of factors
  • Nor is it the same as ‘policy success’, which can be assessed by a mix of questions to reflect political reality: did it make the government more re-electable, was the process of change relatively manageable, and did it produce intended outcomes?

The use of evidence to spread policy innovation requires a combination of profound political and governance choices

When encouraging policy diffusion within a political system, choices about (a) what counts as ‘good’ evidence of policy success have a major connection to (b) what counts as good governance.

For example, consider these ideal-types or models in table 1:

[Table 1: 3 ideal types of EBPM]

In one scenario, we begin by relying primarily on RCT evidence (multiple international trials) and import a relatively fixed model, to ensure ‘fidelity’ to a proven intervention and allow us to measure its effect in a new context. This choice of good evidence limits the ability of subnational policymakers to adapt policy to local contexts.

In another scenario, we begin by relying primarily on governance principles, such as to respect local discretion as well as incorporate practitioner and user experience as important knowledge claims. The choice of governance model relates closely to a broader sense of what counts as good evidence, but also a more limited ability to evaluate policy success scientifically.

In other words, the political choice to privilege some forms of evidence is difficult to separate from another political choice to privilege the role of one form of government.

Telling a policy transfer story: 11 questions to encourage successful evidence based policy transfer  

In that context, these steps to evidence-informed policy transfer serve more to encourage reflection than provide a blueprint for action. I accept that 11 is less catchy than 10.

  1. What problem did policymakers say they were trying to solve, and why?
  2. What solution(s) did they produce?
  3. Why?

Points 1-3 represent the classic and necessary questions from policy studies: (1) ‘what is policy?’ (2)  ‘how much did policy change?’ and (3) why? Until we have a good answer, we do not know how to draw comparable lessons. Learning from another government’s policy choices is no substitute for learning from more meaningful policy change.

4. Was the project introduced in a country or region which is sufficiently comparable? Comparability can relate to the size and type of country, the nature of the problem, the aims of the borrowing/ lending government and their measures of success.

5. Was it introduced nationwide, or in a region which is sufficiently representative of the national experience (it is not an outlier)?

6. How do we account for the role of scale, and the different cultures and expectations in each policy field?

Points 4-6 inform initial background discussions of case study reports. We need to focus on comparability when describing the context in which the original policy developed. It is not enough to state that two political systems are different. We need to identify the relevance and implications of the differences, from another government’s definition of the problem to the logistics of their task.

7. Has the project been evaluated independently, subject to peer review and/ or using measures deemed acceptable to the government?

8. Has the evaluation been of a sufficient period in proportion to the expected outcomes?

9. Are we confident that this project has been evaluated the most favourably – i.e. that our search for relevant lessons has been systematic, based on recognisable criteria (rather than reputations)?

10. Are we identifying ‘Good practice’ based on positive experience, ‘Promising approaches’ based on positive but unsystematic findings, ‘Research–based’ or based on ‘sound theory informed by a growing body of empirical research’, or ‘Evidence–based’, when ‘the programme or practice has been rigorously evaluated and has consistently been shown to work’?

Points 7-10 raise issues about the relationships between (a) what evidence we should use to evaluate success or potential, and (b) how long we should wait to declare success.

11. What will be the relationship between evidence and governance?

Should we identify the same basic model and transfer it uniformly, tell a qualitative story about the model and invite people to adapt it, or focus pragmatically on an eclectic range of evidential sources and focus on the training of the actors who will implement policy?

In conclusion

Information technology has allowed us to gather a huge amount of policy-relevant information across the globe. However, it has not solved the limitations we face in defining policy problems clearly, gathering evidence on policy solutions systematically, and generating international lessons that we can use to inform domestic policy processes.

This rise in available evidence is not a substitute for policy analysis and political choice. These choices range from how to adjudicate between competing policy preferences, to how to define good evidence and good government. A lack of attention to these wider questions helps explain why – at least from some perspectives – policy transfer seems to fail.

Paul Cairney Auckland Policy Transfer 12.10.18

4 Comments

Filed under Evidence Based Policymaking (EBPM), Policy learning and transfer

The Politics of Evidence-Based Policymaking: ANZSOG talks

This post introduces a series of related talks on ‘the politics of evidence-based policymaking’ (EBPM) that I’m giving as part of a larger series of talks during this ANZSOG-funded/organised trip.

The EBPM talks begin with a discussion of the same three points: what counts as evidence, why we must ignore most of it (and how), and the policy process in which policymakers use some of it. However, the framing of these points, and the ways in which we discuss the implications, varies markedly by audience. So, in this post, I provide a short discussion of the three points, then show how the audience matters (referring to the city as a shorthand for each talk).

The overall take-home points are highly practical, in the same way that critical thinking has many practical applications (in other words, I’m not offering a map, toolbox, or blueprint):

  • If you begin with (a) the question ‘why don’t policymakers use my evidence?’ I like to think you will end with (b) the question ‘why did I ever think they would?’.
  • If you begin by taking the latter as (a) a criticism of politics and policymakers, I hope you will end by taking it as (b) a statement of the inevitability of the trade-offs that must accompany political choice.
  • We may address these issues by improving the supply and use of evidence. However, it is more important to maintain the legitimacy of the politicians and political systems in which policymakers choose to ignore evidence. Technocracy is no substitute for democracy.

3 ways to describe the use of evidence in policymaking

  1. Discussions of the use of evidence in policy often begin as a valence issue: who wouldn’t want to use good evidence when making policy?

However, it only remains a valence issue when we refuse to define evidence and justify what counts as good evidence. After that, you soon see the political choices emerge. A reference to ‘evidence’ is often shorthand for scientific research evidence, and ‘good’ often refers to specific research methods (such as randomised controlled trials). Or, you find people arguing very strongly in the almost-opposite direction, criticising this shorthand as exclusionary and questioning the ability of scientists to justify claims to superior knowledge. Somewhere in the middle, we find that a focus on evidence is a good way to think about the many forms of information or knowledge on which we might make decisions, including: a wider range of research methods and analyses, knowledge from experience, and data relating to the local context with which policy would interact.

So, what begins as a valence issue becomes a gateway to many discussions about how to understand profound political choices regarding: how we make knowledge claims, how to ‘co-produce’ knowledge via dialogue among many groups, and the relationship between choices about evidence and governance.

  2. It is impossible to pay attention to all policy-relevant evidence.

There is far more information about the world than we are able to process. A focus on evidence gaps often gives way to the recognition that we need to find effective ways to ignore most evidence.

There are many ways to describe how individuals combine cognition and emotion to limit their attention enough to make choices, and policy studies describe equivalent processes – for example, as ‘institutions’ or rules – in organisations and systems.

One shortcut between information and choice is to set aims and priorities; to focus evidence gathering on a small number of problems or one way to define a problem, and identify the most reliable or trustworthy sources of evidence (often via evidence ‘synthesis’). Another is to make decisions quickly by relying on emotion, gut instinct, habit, and existing knowledge or familiarity with evidence.

Either way, agenda setting and problem definition are political processes that address uncertainty and ambiguity. We gather evidence to reduce uncertainty, but first we must reduce ambiguity by exercising power to define the problem we seek to solve.

  3. It is impossible to control the policy process in which people use evidence.

Policy textbooks (well, my textbook at least!) provide a contrast between:

  • The model of a ‘policy cycle’ that sums up straightforward policymaking, through a series of stages, over which policymakers have clear control. At each stage, you know where evidence fits in: to help define the problem, generate solutions, and evaluate the results to set the agenda for the next cycle.
  • A more complex ‘policy process’, or policymaking environment, of which policymakers have limited knowledge and even less control. In this environment, it is difficult to know with whom to engage, the rules of engagement, or the likely impact of evidence.

Overall, policy theories have much to offer people with an interest in evidence-use in policy, but primarily as a way to (a) manage expectations, in order to (b) produce more realistic strategies and less dispiriting conclusions. It is useful to frame our aim as to analyse the role of evidence within a policy process that (a) we don’t quite understand, rather than (b) we would like to exist.

The events themselves

Below, you will find a short discussion of the variations of audience and topic. I’ll update and reflect on this discussion (in a revised version of this post) after taking part in the events.

Social science and policy studies: knowledge claims, bounded rationality, and policy theory

For Auckland and Wellington A, I’m aiming for an audience containing a high proportion of people with a background in social science and policy studies. I describe the discussion as ‘meta’ because I am talking about how I talk about EBPM to other audiences, then inviting discussion on key parts of that talk, such as how to conceptualise the policy process and present conceptual insights to people who have no intention of deep dives into policy theory.

I often use the phrase ‘I’ve read it, so you don’t have to’ partly as a joke, but also to stress the importance of disciplinary synthesis when we engage in interdisciplinary (and inter-professional) discussion. If so, it is important to discuss how to produce such ‘synthetic’ accounts.

I tend to describe key components of a policymaking environment quickly: many policy makers and influencers spread across many levels and types of government, institutions, networks, socioeconomic factors and events, and ideas. However, each of these terms represents a shorthand to describe a large and diverse literature. For example, I can describe an ‘institution’ in a few sentences, but the study of institutions contains a variety of approaches.

Background post: I know my audience, but does my other audience know I know my audience?

Academic-practitioner discussions: improving the use of research evidence in policy

For Wellington B and Melbourne, the audience is an academic-practitioner mix. We discuss ways in which we can encourage the greater use of research evidence in policy, perhaps via closer collaboration between suppliers and users.

Discussions with scientists: why do policymakers ignore my evidence?

Sydney UNSW focuses more on researchers in scientific fields (often not in social science).  I frame the question in a way that often seems central to scientific researcher interest: why do policymakers seem to ignore my evidence, and what can I do about it?

Then, I tend to push back on the idea that the fault lies with politics and policymakers, to encourage researchers to think more about the policy process and how to engage effectively in it. If I’m trying to be annoying, I’ll suggest to a scientific audience that they see themselves as ‘rational’ and politicians as ‘irrational’. However, the more substantive discussion involves comparing (a) ‘how to make an impact’ advice drawn from the personal accounts of experienced individuals, giving advice to individuals, and (b) the sort of advice you might draw from policy theories which focus more on systems.

Background post: What can you do when policymakers ignore your evidence?

Early career researchers: the need to build ‘impact’ into career development

Canberra UNSW is more focused on early career researchers. I think this is the most difficult talk because I don’t rely on the same joke about my role: to turn up at the end of research projects to explain why they failed to have a non-academic impact.  Instead, my aim is to encourage intelligent discussion about situating the ‘how to’ advice for individual researchers into a wider discussion of policymaking systems.

Similarly, Brisbane A and B are about how to engage with practitioners, and communicate well to non-academic audiences, when most of your work and training is about something else entirely (such as learning about research methods and how to engage with the technical language of research).

Background posts:

What can you do when policymakers ignore your evidence? Tips from the ‘how to’ literature from the science community

What can you do when policymakers ignore your evidence? Encourage ‘knowledge management for policy’

See also:

European Health Forum Gastein 2018 ‘Policy in Evidence’ (from 6 minutes)

https://webcasting.streamdis.eu/Mediasite/Play/8143157d976146b4afd297897c68be5e1d?catalog=62e4886848394f339ff678a494afd77f21&playFrom=126439&autoStart=true

See also:

Evidence-based policymaking and the new policy sciences

5 Comments

Filed under Evidence Based Policymaking (EBPM)

Evidence-based policymaking: political strategies for scientists living in the real world

Note: I wrote the following discussion (last year) to be a Nature Comment but it was not to be!

Nature articles on evidence-based policymaking often present what scientists would like to see: rules to minimise bias caused by the cognitive limits of policymakers, and a simple policy process in which we know how and when to present the best evidence.[1]  What if neither requirement is ever met? Scientists will despair of policymaking while their competitors engage pragmatically and more effectively.[2]

Alternatively, if scientists learned from successful interest groups, or by using insights from policy studies, they could develop three ‘take home messages’: understand and engage with policymaking in the real world; learn how and when evidence ‘wins the day’; and, decide how far you should go to maximise the use of scientific evidence. Political science helps explain this process[3], and new systematic and thematic reviews add new insights.[4] [5] [6] [7]

Understand and engage with policymaking in the real world

Scientists are drawn to the ‘policy cycle’, because it offers a simple – but misleading – model for engagement with policymaking.[3] It identifies a core group of policymakers at the ‘centre’ of government, perhaps giving the impression that scientists should identify the correct ‘stages’ in which to engage (such as ‘agenda setting’ and ‘policy formulation’) to ensure the best use of evidence at the point of authoritative choice. This is certainly the image generated most frequently by health and environmental scientists when they seek insights from policy studies.[8]

Yet, this model does not describe reality. Many policymakers, in many levels and types of government, adopt and implement many measures at different times. For simplicity, we call the result ‘policy’ but almost no modern policy theory retains the linear policy cycle concept. In fact, it is more common to describe counterintuitive processes in which, for example, by the time policymaker attention rises to a policy problem at the ‘agenda setting’ stage, it is too late to formulate a solution. Instead, ‘policy entrepreneurs’ develop technically and politically feasible solutions then wait for attention to rise and for policymakers to have the motive and opportunity to act.[9]

Experienced government science advisors recognise this inability of the policy cycle image to describe real world policymaking. For example, Sir Peter Gluckman presents an amended version of this model, in which there are many interacting cycles in a kaleidoscope of activity, defying attempts to produce simple flow charts or decision trees. He describes the ‘art and craft’ of policy engagement, using simple heuristics to deal with a complex and ‘messy’ policy system.[10]

Policy studies help us identify two such heuristics or simple strategies.

First, respond to policymaker psychology by adapting to the short cuts they use to gather enough information quickly: ‘rational’, via trusted sources of oral and written evidence, and ‘irrational’, via their beliefs, emotions, and habits. Policy theories describe many interest group or ‘advocacy coalition’ strategies, including a tendency to combine evidence with emotional appeals, romanticise their own cause and demonise their opponents, or tell simple emotional stories with a hero and moral to exploit the biases of their audience.[11]

Second, adapt to complex ‘policy environments’ including: many policymakers at many levels and types of government, each with their own rules of evidence gathering, network formation, and ways of understanding policy problems and relevant socioeconomic conditions.[2] For example, advocates of international treaties often find that the evidence-based arguments their international audience takes for granted become hotly contested at national or subnational levels (even if the national government is a signatory), while the same interest groups presenting the same evidence of a problem can be key insiders in one government department but ignored in another.[3]

Learn the conditions under which evidence ‘wins the day’ in policymaking

Consequently, the availability and supply of scientific evidence, on the nature of problems and effectiveness of solutions, is a necessary but insufficient condition for evidence-informed policy. Three others must be met: actors use scientific evidence to persuade policymakers to pay attention to, and shift their understanding of, policy problems; the policy environment becomes broadly conducive to policy change; and, actors exploit attention to a problem, the availability of a feasible solution, and the motivation of policymakers, during a ‘window of opportunity’ to adopt specific policy instruments.[10]

Tobacco control represents a ‘best case’ example (box 1) from which we can draw key lessons for ecological and environmental policies, giving us a sense of perspective by highlighting the long term potential for major evidence-informed policy change. However, unlike their colleagues in public health, environmental scientists have not developed a clear sense of how to produce policy instruments that are technically and politically feasible, so the delivery of comparable policy change is not inevitable.[12]

Box 1: Tobacco policy as a best case and cautionary tale of evidence-based policymaking

Tobacco policy is a key example – and useful comparator for ecological and environmental policies – since it represents a best case scenario and cautionary tale.[13] On the one hand, the scientific evidence on the links between smoking, mortality, and preventable death forms the basis for modern tobacco control policy. Leading countries – and the World Health Organisation, which oversees the Framework Convention on Tobacco Control (FCTC) – frame tobacco use as a public health ‘epidemic’ and allow their health departments to take the policy lead. Health departments foster networks with public health and medical groups at the expense of the tobacco industry, and emphasise the socioeconomic conditions – reductions in (a) smoking prevalence, (b) opposition to tobacco control, and (c) economic benefits to tobacco – most supportive of tobacco control. This framing, and conducive policymaking environment, helps give policymakers the motive and opportunity to choose policy instruments, such as bans on smoking in public places, which would otherwise seem politically infeasible.

On the other hand, even in a small handful of leading countries such as the UK, it took twenty to thirty years to go from the supply of the evidence to a proportionate government response: from the early evidence on smoking in the 1950s prompting major changes from the 1980s, to the evidence on passive smoking in the 1980s prompting public bans from the 2000s onwards. In most countries, the production of a ‘comprehensive’ set of policy measures is not yet complete, even though most signed the FCTC.

Decide how far you’ll go to maximise the use of scientific evidence in policymaking

These insights help challenge the naïve position that, if policymaking can change to become less dysfunctional[1], scientists can be ‘honest brokers’[14] and expect policymakers to use their evidence quickly, routinely, and sincerely. Even in the best case scenario, evidence-informed change takes hard work, persistence, and decades to achieve.

Since policymaking will always appear ‘irrational’ and ‘complex’[3], scientists need to think harder about their role, then choose to engage more effectively or accept their lack of influence.

To deal with ‘irrational’ policymakers, they should combine evidence with persuasion, simple stories, and emotional appeals, and frame their evidence to make the implications consistent with policymakers’ beliefs.

To deal with complex environments, they should engage for the long term to work out how to form alliances with influencers who share their beliefs, understand in which ‘venues’ authoritative decisions are made and carried out, the rules of information processing in those venues, and the ‘currency’ used by policymakers when they describe policy problems and feasible solutions.[2] In other words, develop skills that do not come with scientific training, avoid waiting for others to share your scientific mindset or respect for scientific evidence, and plan for the likely eventuality that policymaking will never become ‘evidence based’.

This approach may be taken for granted in policy studies[15], but it raises uncomfortable dilemmas regarding how far scientists should go, to maximise the use of scientific evidence in policy, using persuasion and coalition-building.

These dilemmas are too frequently overshadowed by claims – more comforting to scientists – that politicians are to blame because they do not understand how to generate, analyse, and use the best evidence. Scientists may only become effective in politics if they apply the same critical analysis to themselves.

[1] Sutherland, W.J. & Burgman, M. Nature 526, 317–318 (2015).

[2] Cairney, P. et al. Public Administration Review 76, 3, 399-402 (2016)

[3] Cairney, P. The Politics of Evidence-Based Policy Making (Palgrave Springer, 2016).

[4] Langer, L. et al. The Science of Using Science (EPPI, 2016)

[5] Breckon, J. & Dodson, J. Using Evidence. What Works? (Alliance for Useful Evidence, 2016)

[6] Palgrave Communications series The politics of evidence-based policymaking (ed. Cairney, P.)

[7] Practical lessons from policy theories (eds. Weible, C & Cairney, P.) Policy and Politics April 2018

[8] Oliver, K. et al. Health Research Policy and Systems, 12, 34 (2016)

[9] Kingdon, J. Agendas, Alternatives and Public Policies (Harper Collins, 1984)

[10] Gluckman, P. Understanding the challenges and opportunities at the science-policy interface

[11] Cairney, P. & Kwiatkowski, R. Palgrave Communications.

[12] Biesbroek et al. Nature Climate Change, 5, 6, 493–494 (2015)

[13] Cairney, P. & Yamazaki, M. Journal of Comparative Policy Analysis

[14] Pielke Jr, R. originated the specific term The honest broker (Cambridge University Press, 2007) but this role is described more loosely by other commentators.

[15] Cairney, P. & Oliver, K. Health Research Policy and Systems 15:35 (2017)

4 Comments

Filed under Evidence Based Policymaking (EBPM), public policy

Policy in 500 words: uncertainty versus ambiguity

In policy studies, there is a profound difference between uncertainty and ambiguity:

  • Uncertainty describes a lack of knowledge or a worrying lack of confidence in one’s knowledge.
  • Ambiguity describes the ability to entertain more than one interpretation of a policy problem.

Both concepts relate to ‘bounded rationality’: policymakers do not have the ability to process all information relevant to policy problems. Instead, they employ two kinds of shortcut:

  • ‘Rational’. Pursuing clear goals and prioritizing certain sources of information.
  • ‘Irrational’. Drawing on emotions, gut feelings, deeply held beliefs, and habits.

I make an artificially binary distinction, uncertain versus ambiguous, and relate it to another binary, rational versus irrational, to point out the pitfalls of focusing too much on one aspect of the policy process:

  1. Policy actors seek to resolve uncertainty by generating more information or drawing greater attention to the available information.

Actors can try to solve uncertainty by: (a) improving the quality of evidence, and (b) making sure that there are no major gaps between the supply of and demand for evidence. Relevant debates include: what counts as good evidence?, focusing on the criteria to define scientific evidence and their relationship with other forms of knowledge (such as practitioner experience and service user feedback), and what are the barriers between supply and demand?, focusing on the need for better ways to communicate.

  2. Policy actors seek to resolve ambiguity by focusing on one interpretation of a policy problem at the expense of another.

Actors try to solve ambiguity by exercising power to increase attention to, and support for, their favoured interpretation of a policy problem. You will find many examples of such activity spread across the 500 and 1000 words series.

A focus on reducing uncertainty gives the impression that policymaking is a technical process in which people need to produce the best evidence and deliver it to the right people at the right time.

In contrast, a focus on reducing ambiguity gives the impression of a more complicated and political process in which actors are exercising power to compete for attention and dominance of the policy agenda. Uncertainty matters, but primarily to describe the role of a complex policymaking system in which no actor truly understands where they are or how they should exercise power to maximise their success.

Further reading:

Framing

The politics of evidence-based policymaking

To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty

How to communicate effectively with policymakers: combine insights from psychology and policy studies

Here is the relevant opening section in UPP:

[Image: p. 234 of UPP, on ambiguity]

4 Comments

Filed under 500 words, agenda setting, Evidence Based Policymaking (EBPM), public policy, Storytelling

The UK government’s imaginative use of evidence to make policy

This post describes a new article published in British Politics (Open Access).

In retrospect, I think the title was too subtle and clever-clever. I wanted to convey two meanings: imaginative as a euphemism for ridiculous/ often cynical, and to argue that a government has to be imaginative with evidence. The latter has two meanings: imaginative (1) in the presentation and framing of an evidence-informed agenda, and (2) when facing pressure to go beyond the evidence and envisage policy outcomes.

So I describe two cases in which its evidence-use seems cynical, when:

  1. Declaring complete success in turning around the lives of ‘troubled families’
  2. Exploiting vivid neuroscientific images to support ‘early intervention’

Then I describe more difficult cases in which supportive evidence is not clear:

  1. Family intervention project evaluations are of limited value and only tentatively positive
  2. Successful projects like FNP and Incredible Years have limited applicability or ‘scalability’

As scientists, we can shrug our shoulders about the uncertainty, but elected policymakers in government have to do something. So what do they do?

At this point of the article it will look like I have become an apologist for David Cameron’s government. Instead, I’m trying to demonstrate the value of comparing sympathetic/ unsympathetic interpretations and highlight the policy problem from a policymaker’s perspective:

[Image: discussion section of Cairney (2018) in British Politics]

I suggest that they use evidence in a mix of ways to: describe an urgent problem, present an image of success and governing competence, and provide cover for more evidence-informed long term action.

The result is the appearance of top-down ‘muscular’ government and ‘a tendency for policy to change as it is implemented, such as when mediated by local authority choices and social workers maintaining a commitment to their professional values when delivering policy’.

I conclude by arguing that ‘evidence-based policy’ and ‘policy-based evidence’ are political slogans with minimal academic value. The binary divide between EBP/ PBE distracts us from more useful categories which show us the trade-offs policymakers have to make when faced with the need to act despite uncertainty.

[Image: Table 1 from Cairney (2018) in British Politics]

As such, it forms part of a far wider body of work …

In both cases, the common theme is that, although (1) the world of top-down central government gets most attention, (2) central governments don’t even know what problem they are trying to solve, far less (3) how to control policymaking and outcomes.

See also:

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

Why doesn’t evidence win the day in policy and policymaking?

(found by searching for early intervention)

See also:

Here’s why there is always an expectations gap in prevention policy

Social investment, prevention and early intervention: a ‘window of opportunity’ for new ideas?

(found by searching for prevention)

Powerpoint for guest lecture: Paul Cairney UK Government Evidence Policy

3 Comments

Filed under Evidence Based Policymaking (EBPM), POLU9UK, Prevention policy, UK politics and policy

The Politics of Evidence revisited

This is a guest post by Dr Justin Parkhurst, responding to a review of our books by Dr Joshua Newman, and my reply to that review.

I really like that Joshua Newman has done this synthesis of 3 recent books covering aspects of evidence use in policy. Too many book reviews these days just describe the content, so some critical comments are welcome, as is the comparative perspective.

I’m also honoured that my book was included in the shortlist (it is available here, free as an ebook: bit.ly/2gGSn0n for interested readers) and I’d like to follow on from Paul to add some discussion points to the debate here – with replies to both Joshua and Paul (hoping first names are acceptable).

Have we heard all this before?

Firstly, I agree with Paul that saying ‘we’ve heard this all before’ risks speaking about a small community of active researchers who study these issues, and not the wider community. But I’d also add that what we’ve heard before is a starting point to many of these books, not where they end up.

In terms of where we start: I’m sure many of us who work in this field are somewhat frustrated at meetings when we hear people making statements that are well established in the literature. Some examples include:

  • “There can be many types of evidence, not just scientific research…”
  • “In the legal field, ‘evidence’ means something different…”
  • “We need evidence-based policy, not policy-based evidence…”
  • “We need to know ‘what works’ to get evidence into policy…”

Thus, I do think there is still a need to cement the foundations of the field more strongly – in essence, to establish a disciplinary baseline that people weighing in on a subject should be expected to know about before providing additional opinions. One way to help do this is for scholars to continue to lay out the basic starting points in our books – typically in the first chapter or two.

Of course, other specialist fields and disciplines have managed to establish their expertise to a point that individuals with opinions on a subject typically have some awareness that there is a field of study out there which they don’t necessarily know about. This is most obvious in the natural sciences (and perhaps in economics). E.g. most people (current presidents of some large North American countries aside?) are aware that they don’t know a lot about engineering, medicine, or quantum physics – so they won’t offer speculative or instinctive opinions about why airplanes stay in the air, how to do bypass surgery, or what was wrong with the ‘Ant-Man’ film. Or when individuals do offer views, they are typically expected to know the basics of the subject.

For the topic of evidence and policy, I often point people to Huw Davies, Isabel Walter, and Sandra Nutley’s book Using Evidence, which is a great introduction to much of this field, as well as Carol Weiss’ insights from the late 70s on the many meanings of research utilisation. I also routinely point people to read The Honest Broker by Roger Pielke Jr. (which I, myself, failed to read before writing my book and, as such, end up repeating many of his points – I’ve apologised to him personally).

So yes, I think there is space for work like ours to continue to establish a baseline, even if some of us know this, because the expertise of the field is not yet widely recognised or established. Yet I think it is not accurate for Joshua to argue we end up repeating what is known, considering our books diverge in key ways after laying out some of the core foundations.

Where do we go from there?

More interesting for this discussion, then, is to reflect on what our various books try to do beyond simply laying out the basics of what we know about evidence use and policy. It is here where I would disagree with Joshua’s point claiming we don’t give a clear picture about the ‘problem’ that ‘evidence-based policy’ (his term – one I reject) is meant to address. Speaking only for my own book, I lay out the problem of bias in evidence use as the key motivation driving both advocates of greater evidence use as well as policy scholars critical of (oversimplified) knowledge translation efforts. But I distinguish between two forms of bias: technical bias – whereby evidence is used in ways that do not adhere to scientific best practice and thus produce sub-optimal social outcomes; and issue bias – whereby pieces of evidence, or mechanisms of evidence use, can obscure the important political choices in decision making, skewing policy choices towards those things that have been measured, or are conducive to measurement. Both of these forms of bias are violations of widely held social values – values of scientific fidelity on the one hand, and of democratic representation on the other. As such, for me, these are the problems that I try to consider in my book, exploring the political and cognitive origins of both, in order to inform thinking on how to address them.

That said, I think Joshua is right in some of the distinctions he makes between our works in how we try to take this field forward, or move beyond current challenges in differing ways. Paul takes the position that researchers need to do something, and one thing they can do is better understand politics and policy making. I think Paul’s writings about policy studies for students are superb (see his book and blog posts about policy concepts). But in terms of applying these insights to evidence use, this is where we most often diverge. I feel that keeping the focus on researchers puts too much emphasis on achieving ‘uptake’ of researchers’ own findings. In my view, I would point to three potential (overlapping) problems with this.

  • First – I do not think it is the role or responsibility of researchers to do this, but rather a failure to establish the right system of evidence provision;
  • Second – I feel it leaves unstated the important but oft ignored normative question of how ‘should’ evidence be used to inform policy;
  • Third – I believe these calls rest on often unstated assumptions about the answer to the second point which we may wish to challenge.

In terms of the first point: I’m more of an institutionalist (as Joshua points out). My view is that the problems around non-use or misuse of evidence can be seen as resulting from a failure to establish appropriate systems that govern the use of evidence in policy processes. As such, the solution would have to lie with institutional development and changes (my final chapter advocates for this) that establish systems which serve to achieve the good governance of evidence.

Paul’s response to Joshua says that researchers are demanding action, so he speaks to them. He wants researchers to develop “useful knowledge of the policy process in which they might want to engage” (as he says above).  Yet while some researchers may wish to engage with policy processes, I think it needs to be clear that doing so is inherently a political act – and can take on a role of issue advocacy by promoting those things you researched or measured over other possible policy considerations (points made well by Roger Pielke Jr. in The Honest Broker). The alternative I point towards is to consider what good systems of evidence use would look like. This is the difference between arguing for more uptake of research, vs. arguing for systems through which all policy relevant evidence can be seen and considered in appropriate ways – regardless of the political savvy, networking, or activism of any given researcher (in my book I have chapters reflecting on what appropriate evidence for policy might be, and what a good process for its use might be, based on particular widely shared values).

In terms of the second and third points – my book might be the most explicit in its discussion of the normative values guiding efforts to improve evidence, and I am more critical than some about the assumption that getting researchers’ work ‘used’ by policymakers is a de-facto good thing. This is why I disagree with Joshua’s conclusion that my work frames the problem as ‘bridging the gap’. Rather I’d say I frame the problem as asking the question of ‘what does a better system of evidence use look like from a political perspective?’ My ‘good governance of evidence’ discussion presents an explicitly normative framework based on the two sets of values mentioned above – those around democratic accountability and around fidelity to scientific good practice – both of which have been raised as important in discussions about evidence use in political processes.

Is the onus on researchers?

Finally, I also would argue against Joshua’s conclusion that my work places the burden of resolving the problems on researchers. Paul argues above that he does this but with good reason. I try not to do this. This is again because my book is not making an argument for more evidence to be ‘used’ per se (and I don’t expect policymakers to just want to use it either). Rather I focus on identifying principles by which we can judge systems of evidence use, calling for guided incremental changes within national systems.

While I think academics can play an important role in establishing ‘best practice’ ideas, I explicitly argue that the mandate to establish, build, or incrementally change evidence advisory systems lies with the representatives of the people. Indeed, I include ‘stewardship’ as a core principle of my good governance of evidence framework to show that it should be those individuals who are accountable to the public that build these systems in different countries. Thus, the burden lies not with academics, but rather with our representatives – and, indirectly with all of us through the demands we make on them – to improve systems of evidence use.

1 Comment

Filed under Evidence Based Policymaking (EBPM), Uncategorized