Tag Archives: evidence

Evidence-based policymaking and the ‘new policy sciences’

image policy process round 2 25.10.18

[I wasn’t happy with the first version, so this is version 2]

In the ‘new policy sciences’, Chris Weible and I advocate:

  • a return to Lasswell’s vision of combining policy analysis (to recommend policy change) and policy theory (to explain policy change), but
  • focusing on a far larger collection of actors (beyond a small group at the centre),
  • recognising new developments in studies of the psychology of policymaker choice, and
  • building into policy analysis the recognition that any policy solution is introduced in a complex policymaking environment over which no-one has control.

However, there is a lot of policy theory out there, and we can’t put policy theory together like Lego to produce consistent insights to inform policy analysis.

Rather, each concept in my image of the policy process represents its own literature: see these short explainers on the psychology of policymaking, actors spread across multi-level governance, institutions, networks, ideas, and socioeconomic factors/ events.

What the explainers don’t really project is the sense of debate within the literature about how best to conceptualise each concept. You can pick up their meaning in a few minutes but would need a few years to appreciate the detail and often-fundamental debate.

Ideally, we would put all of the concepts together to help explain policymaker choice within a complex policymaking environment (how else could I put up the image and present it as one source of accumulated wisdom from policy studies?). Peter John describes such accounts as ‘synthetic’. I have also co-authored work with Tanya Heikkila – in 2014 and 2017 – to compare the different ways in which ‘synthetic’ theories conceptualise the policy process.

However, note the difficulty of putting together a large collection of separate and diverse literatures into one simple model (e.g. while doing a PhD).

On that basis, I’d encourage you to think of these attempts to synthesise as stories. I tell these stories a lot, but someone else could describe theory very differently (perhaps by relying on fewer male authors or US-derived theories in which there are very specific reference points and positivism is well represented).

The example of EBPM

I have given a series of talks to explain why we should think of ‘evidence-based policymaking’ as a myth or political slogan, not an ideal scenario or something to expect from policymaking in the real world. They usually involve encouraging framing and storytelling rather than expecting evidence to speak for itself, and rejecting the value of simple models like the policy cycle. I then put up an image of my own and encourage people to think about the implications of each concept:

SLIDE simple advice from hexagon image policy process 24.10.18

I describe the advice as simple-sounding and feasible at first glance, but actually a series of Herculean* tasks:

  • There are many policymakers and influencers spread across government, so find out where the action is, or the key venues in which people are making authoritative decisions.
  • Each venue has its own ‘institutions’ – the formal and written, or informal and unwritten rules of policymaking – so learn the rules of each venue in which you engage.
  • Each venue is guided by a fundamental set of ideas – as paradigms, core beliefs, monopolies of understanding – so learn that language.
  • Each venue has its own networks – the relationships between policy makers and influencers – so build trust and form alliances within networks.
  • Policymaking attention is often driven by changes in socioeconomic factors, or routine/ non-routine events, so be prepared to exploit the ‘windows of opportunity’ to present your solution during heightened attention to a policy problem.

Further, policy theories/ studies help us understand the context in which people make such choices. For example, consider the story that Kathryn Oliver and I tell about the role of evidence in policymaking environments:

If there are so many potential authoritative venues, devote considerable energy to finding where the ‘action’ is (and someone specific to talk to). Even if you find the right venue, you will not know the unwritten rules unless you study them intensely. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour some sources of evidence. Research advocates can be privileged insiders in some venues and excluded completely in others. If your evidence challenges an existing paradigm, you need a persuasion strategy good enough to prompt a shift of attention to a policy problem and a willingness to understand that problem in a new way. You can try to find the right time to use evidence to exploit a crisis leading to major policy change, but the opportunities are few and chances of success low. In that context, policy studies recommend investing your time over the long term – to build up alliances, trust in the messenger, knowledge of the system, and to seek ‘windows of opportunity’ for policy change – but offer no assurances that any of this investment will ever pay off.

As described, this focus on the new policy sciences and synthesising insights helps explain why ‘the politics of evidence-based policymaking’ is as important to civil servants (my occasional audience) as to researchers (my usual audience).

To engage in skilled policy analysis, and give good advice, is to recognise the ways in which policymakers combine cognition/emotion to engage with evidence, and must navigate a complex policymaking environment when designing or selecting technically and politically feasible solutions. To give good advice is to recognise what you want policymakers to do, but also that they are not in control of the consequences.

From one story to many?

However, I tell these stories without my audience having the time to look further into each theory and its individual insights. If they do have a little more time, I go into the possible contribution of individual insights to debate.

For example, these theories adapt insights from psychology in different ways …

  • PET shows the overall effect of policymaker psychology on policy change: they combine cognition and emotion to pay disproportionate attention to a small number of issues (contributing to major change) and ignore the rest (contributing to ‘hyperincremental’ change).
  • The IAD focuses partly on the rules and practices that actors develop to build up trust in each other.
  • The ACF describes actors going into politics to turn their beliefs into policy, forming coalitions with people who share their beliefs, then often romanticising their own cause and demonising their opponents.
  • The NPF describes the relative impact of stories on audiences who use cognitive shortcuts to (for example) identify with a hero and draw a simple moral.
  • SCPD describes policymakers drawing on gut feeling to identify good and bad target populations.
  • Policy learning involves using cognition and emotion to acquire new knowledge and skills.

… even though the pace of change in psychological research often seems faster than the pace at which policy studies can incorporate new and reliable insights.

They also present different conceptions of the policymaking environment in which actors make choices. See this post for more on this discussion in relation to EBPM.

My not-brilliant conclusion is that:

  1. Policy theory/ policy studies has a lot to offer other disciplines and professions, particularly in fields like EBPM in which we need to account for politics and, more importantly, policymaking systems, but
  2. Beware any policy theory story that presents the source literature as coherent and consistent.
  3. Rather, any story of the field involves a series of choices about what counts as a good theory and good insight.
  4. In other words, the exhortation to think more about what counts as ‘good evidence’ applies just as much to political science as to any other discipline.

Postscript: well, that is the last of the posts for my ANZSOG talks. If I’ve done this properly, there should now be a loop of talks. It should be possible to go back to the first one and see it as a sequel to this one!

Or, for more on theory-informed policy analysis – in other words, where the ‘new policy sciences’ article is taking us – here is how I describe it to students doing a policy analysis paper (often for the first time).

Or, have a look at the earlier discussion of images of the policy process. You may have noticed that there is a different image in this post (knocked up in my shed at the weekend). It’s because I am experimenting with shapes. Does the image with circles look more relaxing? Does the hexagonal structure look complicated even though it is designed to simplify? Does it matter? I think so. People engage emotionally with images. They share them. They remember them. So, I need an image more memorable than the policy cycle.

 

Paul Cairney Brisbane EBPM New Policy Sciences 25.10.18

*I welcome suggestions on another word to describe almost-impossibly-hard


Filed under agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, Psychology Based Policy Studies, public policy

Evidence-based policymaking and the ‘new policy sciences’

Circle image policy process 24.10.18

I have given a series of talks to explain why we should think of ‘evidence-based policymaking’ as a myth or political slogan, not an ideal scenario or something to expect from policymaking in the real world. They usually involve encouraging framing and storytelling rather than expecting evidence to speak for itself, and rejecting the value of simple models like the policy cycle. I then put up an image of my own and encourage people to think about the implications of each concept:

SLIDE simple advice from hexagon image policy process 24.10.18

I describe the advice as simple-sounding and feasible at first glance, but actually a series of Herculean* tasks:

  • There are many policymakers and influencers spread across government, so find out where the action is, or the key venues in which people are making authoritative decisions.
  • Each venue has its own ‘institutions’ – the formal and written, or informal and unwritten rules of policymaking – so learn the rules of each venue in which you engage.
  • Each venue is guided by a fundamental set of ideas – as paradigms, core beliefs, monopolies of understanding – so learn that language.
  • Each venue has its own networks – the relationships between policy makers and influencers – so build trust and form alliances within networks.
  • Policymaking attention is often driven by changes in socioeconomic factors, or routine/ non-routine events, so be prepared to exploit the ‘windows of opportunity’ to present your solution during heightened attention to a policy problem.

In most cases, we don’t have time to discuss a more fundamental issue (at least for researchers using policy theory and political science concepts):

From where did these concepts come, and how well do we know them?

To cut a long story short, each concept represents its own literature: see these short explainers on the psychology of policymaking, actors spread across multi-level governance, institutions, networks, ideas, and socioeconomic factors/ events. What the explainers don’t really project is the sense of debate within the literature about how best to conceptualise each concept. You can pick up their meaning in a few minutes but would need a few years to appreciate the detail and often-fundamental debate.

Ideally, we would put all of the concepts together to help explain policymaker choice within a complex policymaking environment (how else could I put up the image and present it as one source of accumulated wisdom from policy studies?). Peter John describes such accounts as ‘synthetic’. I have also co-authored work with Tanya Heikkila – in 2014 and 2017 – to compare the different ways in which ‘synthetic’ theories conceptualise the policy process. However, note the difficulty of putting together a large collection of separate and diverse literatures into one simple model (e.g. while doing a PhD).

The new policy sciences

More recently, in the ‘new policy sciences’, Chris Weible and I present a more provocative story of these efforts, in which we advocate:

  • a return to Lasswell’s vision of combining policy analysis (to recommend policy change) and policy theory (to explain policy change), but
  • focusing on a far larger collection of actors (beyond a small group at the centre),
  • recognising new developments in studies of the psychology of policymaker choice, and
  • building into policy analysis the recognition that any policy solution is introduced in a complex policymaking environment over which no-one has control.

This focus on psychology is not new …

  • PET shows the overall effect of policymaker psychology on policy change: they combine cognition and emotion to pay disproportionate attention to a small number of issues (contributing to major change) and ignore the rest (contributing to ‘hyperincremental’ change).
  • The IAD focuses partly on the rules and practices that actors develop to build up trust in each other.
  • The ACF describes actors going into politics to turn their beliefs into policy, forming coalitions with people who share their beliefs, then often romanticising their own cause and demonising their opponents.
  • The NPF describes the relative impact of stories on audiences who use cognitive shortcuts to (for example) identify with a hero and draw a simple moral.
  • SCPD describes policymakers drawing on gut feeling to identify good and bad target populations.
  • Policy learning involves using cognition and emotion to acquire new knowledge and skills.

… but the pace of change in psychological research often seems faster than the pace at which policy studies can incorporate new and reliable insights.

Perhaps more importantly, policy studies help us understand the context in which people make such choices. For example, consider the story that Kathryn Oliver and I tell about the role of evidence in policymaking environments:

If there are so many potential authoritative venues, devote considerable energy to finding where the ‘action’ is (and someone specific to talk to). Even if you find the right venue, you will not know the unwritten rules unless you study them intensely. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour some sources of evidence. Research advocates can be privileged insiders in some venues and excluded completely in others. If your evidence challenges an existing paradigm, you need a persuasion strategy good enough to prompt a shift of attention to a policy problem and a willingness to understand that problem in a new way. You can try to find the right time to use evidence to exploit a crisis leading to major policy change, but the opportunities are few and chances of success low. In that context, policy studies recommend investing your time over the long term – to build up alliances, trust in the messenger, knowledge of the system, and to seek ‘windows of opportunity’ for policy change – but offer no assurances that any of this investment will ever pay off.

Then, have a look at this discussion of ‘synthetic’ policy theories, designed to prompt people to consider how far they would go to get their evidence into policy.

Theory-driven policy analysis

As described, this focus on the new policy sciences helps explain why ‘the politics of evidence-based policymaking’ is as important to civil servants (my occasional audience) as to researchers (my usual audience).

To engage in skilled policy analysis, and give good advice, is to recognise the ways in which policymakers combine cognition/emotion to engage with evidence, and must navigate a complex policymaking environment when designing or selecting technically and politically feasible solutions. To give good advice is to recognise what you want policymakers to do, but also that they are not in control of the consequences.

Epilogue

Well, that is the last of the posts for my ANZSOG talks. If I’ve done this properly, there should now be a loop of talks. It should be possible to go back to the first one in Auckland and see it as a sequel to this one in Brisbane!

Or, for more on theory-informed policy analysis – in other words, where the ‘new policy sciences’ article is taking us – here is how I describe it to students doing a policy analysis paper (often for the first time).

Or, have a look at the earlier discussion of images of the policy process. You may have noticed that there is a different image in this post (knocked up in my shed at the weekend). It’s because I am experimenting with shapes. Does the image with circles look more relaxing? Does the hexagonal structure look complicated even though it is designed to simplify? Does it matter? I think so. People engage emotionally with images. They share them. They remember them. So, I need an image more memorable than the policy cycle.

 

Paul Cairney Brisbane EBPM New Policy Sciences 25.10.18

*I welcome suggestions on another word to describe almost-impossibly-hard


Filed under agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, Psychology Based Policy Studies, public policy, Storytelling

Theory and Practice: How to Communicate Policy Research beyond the Academy

Notes for my first talk at the University of Queensland, Wednesday 24th October, 12.30pm, Graduate Centre, room 402.

Here is the powerpoint that I tend to use to inform discussions with civil servants (CS). I first used it for discussion with CS in the Scottish and UK governments, followed by remarkably similar discussions in parts of the New Zealand and Australian governments. Partly, it provides a way into common explanations for gaps between the supply of, and demand for, research evidence. However, it also provides a wider context within which to compare abstract and concrete reasons for those gaps, which inform a discussion of possible responses at individual, organisational, and systemic levels. Some of the gap is caused by a lack of effective communication, but we should also discuss the wider context in which such communication takes place.

I begin by telling civil servants about the message I give to academics about why policymakers might ignore their evidence:

  1. There are many claims to policy relevant knowledge.
  2. Policymakers have to ignore most evidence.
  3. There is no simple policy cycle in which we all know at what stage to provide what evidence.

slide 3 24.10.18

In such talks, I go into different images of policymaking, comparing the simple policy cycle with images of ‘messy’ policymaking, then introducing my own image which describes the need to understand the psychology of choice within a complex policymaking environment.

Under those circumstances, key responses include:

  • framing evidence in terms of the ways in which your audience understands policy problems
  • engaging in networks to identify and exploit the right time to act, and
  • venue shopping to find sympathetic audiences in different parts of political systems.

However, note the context of those discussions. I tend to be speaking with scientific researcher audiences to challenge some preconceptions about: what counts as good evidence, how much evidence we can reasonably expect policymakers to process, and how easy it is to work out where and when to present evidence. It’s generally a provocative talk, to identify the massive scale of the evidence-to-policy task, not a simple ‘how to do it’ guide.

In that context, I suggest to civil servants that many academics might be interested in more CS engagement, but might be put off by the overwhelming scale of their task, and – even if they remained undeterred – would face some practical obstacles:

  1. They may not know where to start: who should they contact to start making connections with policymakers?
  2. The incentives and rewards for engagement may not be clear. The UK’s ‘impact’ agenda has changed things, but not to the extent that any engagement is good engagement. Researchers need to tell a convincing story that they made an impact on policy/ policymakers with their published research, so there is a notional tipping point at which engagement reaches a scale that makes it worth doing.
  3. The costs are clearer. For example, any time spent doing engagement is time away from writing grant proposals and journal articles (in other words, the things that still make careers).
  4. The rewards and costs are not spread evenly. Put most simply, white male professors may have the most opportunities and face the fewest penalties for engagement in policymaking and social media. Or, the opportunities and rewards may vary markedly by discipline. In some, engagement is routine. In others, it is time away from core work.

In that context, I suggest that CS should:

  • provide clarity on what they expect from academics, and when they need information
  • describe what they can offer in return (which might be as simple as a written and signed acknowledgement of impact, or formal inclusion on an advisory committee).
  • show some flexibility: you may have a tight deadline, but can you reasonably expect an academic to drop what they are doing at short notice?
  • engage routinely with academics, to help form networks and identify the right people at the right time.

These introductory discussions provide a way into common descriptions of the gaps between academics and policymakers:

  • Technical languages/ jargon to describe their work
  • Timescales to supply and demand information
  • Professional incentives (such as to value scientific novelty in academia but evidential synthesis in government)
  • Comfort with uncertainty (often, scientists project relatively high uncertainty and don’t want to get ahead of the evidence; often policymakers need to project certainty and decisiveness)
  • Assessments of the relative value of scientific evidence compared to other forms of policy-relevant information
  • Assessments of the role of values and beliefs (some scientists want to draw the line between providing evidence and advice; some policymakers want them to go much further)

To discuss possible responses, I use the European Commission Joint Research Centre’s ‘knowledge management for policy’ project, in which they identify the 8 core skills of organisations bringing together the suppliers and demanders of policy-relevant knowledge.

Figure 1

However, I also use the following table to highlight some caution about the things we can achieve with general skills development and organisational reforms. Sometimes, the incentives to engage will remain low. Further, engagement is no guarantee of agreement.

In a nutshell, the table provides three very different models of ‘evidence-informed policymaking’ when we combine political choices about what counts as good evidence, and what counts as good policymaking (discussed at length in teaching evidence-based policy to fly). Discussion and clearer communication may help clarify our views on what makes a good model, but I doubt it will produce any agreement on what to do.

Table 1 3 ideal types of EBBP

In the latter part of the talk, I go beyond that powerpoint into two broad examples of practical responses:

  1. Storytelling

The Narrative Policy Framework describes the ‘science of stories’: we can identify stories with a 4-part structure (setting, characters, plot, moral) and measure their relative impact.  Jones/ Crow and Crow/Jones provide an accessible way into these studies. Also look at Davidson’s article on the ‘grey literature’ as a rich source of stories on stories.

On one hand, I think that storytelling is a great possibility for researchers: it helps them produce a core – and perhaps emotionally engaging – message that they can share with a wider audience. Indeed, I’d see it as an extension of the process that academics are used to: identifying an audience and framing an argument according to the ways in which that audience understands the world.

On the other hand, it is important to not get carried away by the possibilities:

  • My reading of the NPF empirical work is that the most impactful stories reinforce the beliefs of the audience – mobilising them to act – rather than changing their minds.
  • Also look at the work of the Frameworks Institute, which experiments with individual versus thematic stories because people react to them in very different ways. Some might empathise with an individual story; some might judge harshly. For example, they discuss stories about low income families and healthy eating, in which they use the theme of a maze to help people understand the lack of good choices available to people in areas with limited access to healthy food.

See: Storytelling for Policy Change: promise and problems

  2. Evidence for advocacy

The article I co-authored with Oxfam staff helps identify the lengths to which we might think we have to go to maximise the impact of research evidence. Their strategies include:

  1. Identifying the policy change they would like to see.
  2. Identifying the powerful actors they need to influence.
  3. A mixture of tactics: insider, outsider, and supporting others by, for example, boosting local civil society organisations.
  4. A mix of ‘evidence types’ for each audience

oxfam table 2

  5. Wider public campaigns to address the political environment in which policymakers consider choices
  6. Engaging stakeholders in the research process (often called the ‘co-production of knowledge’)
  7. Framing: personal stories, ‘killer facts’, visuals, credible messenger
  8. Exploiting ‘windows of opportunity’
  9. Monitoring, learning, trial and error

In other words, a source of success stories may provide a model for engagement or the sense that we need to work with others to engage effectively. Clear communication is one thing. Clear impact at a significant scale is another.

See: Using evidence to influence policy: Oxfam’s experience


Filed under agenda setting, Evidence Based Policymaking (EBPM)

Taking lessons from policy theory into practice: 3 examples

Notes for ANZSOG/ ANU Crawford School/ UNSW Canberra workshop. Powerpoint here. The recording of the lecture (skip to 2m30) and Q&A is here (right click to download the mp3, or use the dropbox link).

The context for this workshop is the idea that policy theories could be more helpful to policymakers/ practitioners if we could all communicate more effectively with each other. Academics draw general and relatively abstract conclusions from multiple cases. Practitioners draw very similar conclusions from rich descriptions of direct experience in a smaller number of cases. How can we bring together their insights and use a language that we all understand? Or, more ambitiously, how can we use policy theory-based insights to inform the early career development training that civil servants and researchers receive?

The first step is to translate policy theories into a non-technical language by trying to speak with an audience beyond our immediate peers (see for example Practical Lessons from Policy Theories).

However, translation is not enough. A second crucial step is to consider how policymakers and practitioners are likely to make sense of theoretical insights when they apply them to particular aims or responsibilities. For example:

  1. Central government policymakers may accept the descriptive accuracy of policy theories emphasising limited central control, but not the recommendation that they should let go, share power, and describe their limits to the public.
  2. Scientists may accept key limitations to ‘evidence based policymaking’ but reject the idea that they should respond by becoming better storytellers or more manipulative operators.
  3. Researchers and practitioners struggle to resolve hard choices when combining evidence and ‘coproduction’ while ‘scaling up’ policy interventions. Evidence choice is political choice. Can we do more than merely encourage people to accept this point?

I discuss these examples below because they are closest to my heart (especially example 1). Note throughout that I am presenting one interpretation about: (1) the most promising insights, and (2) their implications for practice. Other interpretations of the literature and its implications are available. They are just a bit harder to find.

Example 1: the policy cycle endures despite its descriptive inaccuracy

cycle

The policy cycle does not describe and explain the policy process well:

  • If we insist on keeping the cycle metaphor, it is more accurate to see the process as a huge set of policy cycles that connect with each other in messy and unpredictable ways.
  • The cycle approach also links strongly to the idea of ‘comprehensive rationality’ in which a small group of policymakers and analysts are in full possession of the facts and full control of the policy process. They carry out their aims through a series of stages.

Policy theories provide more descriptive and explanatory usefulness. Their insights include:

  • Limited choice. Policymakers inherit organisations, rules, and choices. Most ‘new’ choice is a revision of the old.
  • Limited attention. Policymakers must ignore almost all of the policy problems for which they are formally responsible. They pay attention to some, and delegate most responsibility to civil servants. Bureaucrats rely on other actors for information and advice, and they build relationships on trust and information exchange.
  • Limited central control. Policy may appear to be made at the ‘top’ or in the ‘centre’, but in practice policymaking responsibility is spread across many levels and types of government (many ‘centres’). ‘Street level’ actors make policy as they deliver. Policy outcomes appear to ‘emerge’ locally despite central government attempts to control their fate.
  • Limited policy change. Most policy change is minor, made and influenced by actors who interpret new evidence through the lens of their beliefs. Well-established beliefs limit the opportunities of new solutions. Governments tend to rely on trial-and-error, based on previous agreements, rather than radical policy change based on a new agenda. New solutions succeed only during brief and infrequent windows of opportunity.

However, the cycle metaphor endures because:

  • It provides a simple model of policymaking with stages that map onto important policymaking functions.
  • It provides a way to project policymaking to the public. You know how we make policy, and that we are in charge, so you know who to hold to account.

In that context, we may want to be pragmatic about our advice:

  1. One option is via complexity theory, in which scholars generally encourage policymakers to accept and describe their limits:
  • Accept routine error, reduce short-term performance management, engage more in trial and error, and ‘let go’ to allow local actors the flexibility to adapt and respond to their context.
  • However, would a government in the Westminster tradition really embrace this advice? No. They need to balance (a) pragmatic policymaking, and (b) an image of governing competence.
  2. Another option is to try to help improve an existing approach.

Further reading (blog posts):

The language of complexity does not mix well with the language of Westminster-style accountability

Making Sense of Policymaking: why it’s always someone else’s fault and nothing ever changes

Two stories of British politics: the Westminster model versus Complex Government

Example 2: how to deal with a lack of ‘evidence based policymaking’

I used to read many papers on tobacco policy, with the same basic message: we have the evidence of tobacco harm, and evidence of which solutions work, but there is an evidence-policy gap caused by too-powerful tobacco companies, low political will, and pathological policymaking. These accounts are not informed by theories of policymaking.

I then read Oliver et al’s paper on the lack of policy theory in health/ environmental scholarship on the ‘barriers’ to the use of evidence in policy. Very few articles rely on policy concepts, and most of the few rely on the policy cycle. This lack of policy theory is clear in their description of possible solutions – better communication, networking, timing, and more science literacy in government – which does not describe well the need to respond to policymaker psychology and a complex policymaking environment.

So, I wrote The Politics of Evidence-Based Policymaking and one zillion blog posts to help identify the ways in which policy theories could help explain the relationship between evidence and policy.

Since then, the highest demand to speak about the book has come from government/ public servant, NGO, and scientific audiences outside my discipline. The feedback is generally that: (a) the book’s description sums up their experience of engagement with the policy process, and (b) maybe it opens up discussion about how to engage more effectively.

But how exactly do we turn empirical descriptions of policymaking into practical advice?

For example, scientist/researcher audiences want to know the answer to a question like ‘Why don’t policymakers listen to your evidence?’, so I focus on three conversation starters:

  1. they have a broader view on what counts as good evidence (see ANZSOG description)
  2. they have to ignore almost all information (a nice way into bounded rationality and policymaker psychology)
  3. they do not understand or control the process in which they seek to use evidence (a way into ‘the policy process’)

Cairney 2017 image of the policy process

We can then consider many possible responses in the sequel What can you do when policymakers ignore your evidence?

Examples include:

  • ‘How to do it’ advice. I compare tips for individuals (from experienced practitioners) with tips based on policy concepts. They are quite similar-looking tips – e.g. find out where the action is, learn the rules, tell good stories, engage allies, seek windows of opportunity – but I describe mine as 5 impossible tasks!
  • Organisational reform. I describe work with the European Commission Joint Research Centre to identify 8 skills or functions of an organisation bringing together the supply/demand of knowledge.
  • Ethical dilemmas. I use key policy theories to ask people how far they want to go to privilege evidence in policy. It’s fun to talk about these things with the type of scientist who sees any form of storytelling as manipulation.

Further reading:

Is Evidence-Based Policymaking the same as good policymaking?

A 5-step strategy to make evidence count

Political science improves our understanding of evidence-based policymaking, but does it produce better advice?

Principles of science advice to government: key problems and feasible solutions

Example 3: how to encourage realistic evidence-informed policy transfer

This focus on EBPM is useful context for discussions of ‘policy learning’ and ‘policy transfer’, and it was the focus of my ANZSOG talk entitled (rather ambitiously) ‘teaching evidence-based policy to fly’.

I’ve taken a personal interest in this one because I’m part of a project – called IMAJINE – in which we have to combine academic theory and practical responses. We are trying to share policy solutions across Europe rather than explain why few people share them!

For me, the context is potentially overwhelming.

So, when we start to focus on sharing lessons, we will have three things to discover:

  1. What is the evidence for success, and from where does it come? Governments often project success without backing it up.
  2. What story do policymakers tell about the problem they are trying to solve, the solutions they produced, and why? Two different governments may be framing and trying to solve the same problem in very different ways.
  3. Was the policy introduced in a comparable policymaking system? People tend to focus on political system comparability (e.g. is it unitary or federal?), but I think the key is in policymaking system comparability (e.g. what are the rules and dominant ideas?).

To be honest, when one of our external assessors asked me how well I thought I would do, we both smiled because the answer may be ‘not very’. In other words, the most practical lesson may be the hardest to take, although I find it comforting: the literature suggests that policymakers might ignore you for 20 years then suddenly become very (but briefly) interested in your work.

 

The slides are a bit wonky because I combined my old ppt to the Scottish Government with a new one for UNSW: Paul Cairney ANU Policy practical 22 October 2018

I wanted to compare how I describe things to (1) civil servants, (2) practitioners/researchers, and (3) me, but who has the time/desire to listen to 3 powerpoints in one go? If the answer is you, let me know and we’ll set up a Zoom call.


Filed under agenda setting, Evidence Based Policymaking (EBPM), IMAJINE, Policy learning and transfer

The Politics of Evidence-Based Policymaking: ANZSOG talks

This post introduces a series of related talks on ‘the politics of evidence-based policymaking’ (EBPM) that I’m giving as part of a larger series of talks during this ANZSOG-funded/organised trip.

The EBPM talks begin with a discussion of the same three points: what counts as evidence, why we must ignore most of it (and how), and the policy process in which policymakers use some of it. However, the framing of these points, and the ways in which we discuss the implications, varies markedly by audience. So, in this post, I provide a short discussion of the three points, then show how the audience matters (referring to the city as a shorthand for each talk).

The overall take-home points are highly practical, in the same way that critical thinking has many practical applications (in other words, I’m not offering a map, toolbox, or blueprint):

  • If you begin with (a) the question ‘why don’t policymakers use my evidence?’ I like to think you will end with (b) the question ‘why did I ever think they would?’.
  • If you begin by taking the latter as (a) a criticism of politics and policymakers, I hope you will end by taking it as (b) a statement of the inevitability of the trade-offs that must accompany political choice.
  • We may address these issues by improving the supply and use of evidence. However, it is more important to maintain the legitimacy of the politicians and political systems in which policymakers choose to ignore evidence. Technocracy is no substitute for democracy.

3 ways to describe the use of evidence in policymaking

  1. Discussions of the use of evidence in policy often begin as a valence issue: who wouldn’t want to use good evidence when making policy?

However, it only remains a valence issue when we refuse to define evidence and justify what counts as good evidence. After that, you soon see the political choices emerge. A reference to evidence is often a shorthand for scientific research evidence, and good often refers to specific research methods (such as randomised control trials). Or, you find people arguing very strongly in the almost-opposite direction, criticising this shorthand as exclusionary and questioning the ability of scientists to justify claims to superior knowledge. Somewhere in the middle, we find that a focus on evidence is a good way to think about the many forms of information or knowledge on which we might make decisions, including: a wider range of research methods and analyses, knowledge from experience, and data relating to the local context with which policy would interact.

So, what begins as a valence issue becomes a gateway to many discussions about how to understand profound political choices regarding: how we make knowledge claims, how to ‘co-produce’ knowledge via dialogue among many groups, and the relationship between choices about evidence and governance.

  2. It is impossible to pay attention to all policy relevant evidence.

There is far more information about the world than we are able to process. A focus on evidence gaps often gives way to the recognition that we need to find effective ways to ignore most evidence.

There are many ways to describe how individuals combine cognition and emotion to limit their attention enough to make choices, and policy studies (to all intents and purposes) describe equivalent processes – described, for example, as ‘institutions’ or rules – in organisations and systems.

One shortcut between information and choice is to set aims and priorities; to focus evidence gathering on a small number of problems or one way to define a problem, and identify the most reliable or trustworthy sources of evidence (often via evidence ‘synthesis’). Another is to make decisions quickly by relying on emotion, gut instinct, habit, and existing knowledge or familiarity with evidence.

Either way, agenda setting and problem definition are political processes that address uncertainty and ambiguity. We gather evidence to reduce uncertainty, but first we must reduce ambiguity by exercising power to define the problem we seek to solve.

  3. It is impossible to control the policy process in which people use evidence.

Policy textbooks (well, my textbook at least!) provide a contrast between:

  • The model of a ‘policy cycle’ that sums up straightforward policymaking, through a series of stages, over which policymakers have clear control. At each stage, you know where evidence fits in: to help define the problem, generate solutions, and evaluate the results to set the agenda for the next cycle.
  • A more complex ‘policy process’, or policymaking environment, of which policymakers have limited knowledge and even less control. In this environment, it is difficult to know with whom to engage, the rules of engagement, or the likely impact of evidence.

Overall, policy theories have much to offer people with an interest in evidence-use in policy, but primarily as a way to (a) manage expectations, and (b) produce more realistic strategies and less dispiriting conclusions. It is useful to frame our aim as analysing the role of evidence within a policy process that (a) we don’t quite understand, rather than (b) the one we would like to exist.

The events themselves

Below, you will find a short discussion of the variations of audience and topic. I’ll update and reflect on this discussion (in a revised version of this post) after taking part in the events.

Social science and policy studies: knowledge claims, bounded rationality, and policy theory

For Auckland and Wellington A, I’m aiming for an audience containing a high proportion of people with a background in social science and policy studies. I describe the discussion as ‘meta’ because I am talking about how I talk about EBPM to other audiences, then inviting discussion on key parts of that talk, such as how to conceptualise the policy process and present conceptual insights to people who have no intention of deep dives into policy theory.

I often use the phrase ‘I’ve read it, so you don’t have to’ partly as a joke, but also to stress the importance of disciplinary synthesis when we engage in interdisciplinary (and inter-professional) discussion. In either case, it is important to discuss how to produce such ‘synthetic’ accounts.

I tend to describe key components of a policymaking environment quickly: many policy makers and influencers spread across many levels and types of government, institutions, networks, socioeconomic factors and events, and ideas. However, each of these terms represents a shorthand to describe a large and diverse literature. For example, I can describe an ‘institution’ in a few sentences, but the study of institutions contains a variety of approaches.

Background post: I know my audience, but does my other audience know I know my audience?

Academic-practitioner discussions: improving the use of research evidence in policy

For Wellington B and Melbourne, the audience is an academic-practitioner mix. We discuss ways in which we can encourage the greater use of research evidence in policy, perhaps via closer collaboration between suppliers and users.

Discussions with scientists: why do policymakers ignore my evidence?

Sydney UNSW focuses more on researchers in scientific fields (often not in social science).  I frame the question in a way that often seems central to scientific researcher interest: why do policymakers seem to ignore my evidence, and what can I do about it?

Then, I tend to push back on the idea that the fault lies with politics and policymakers, to encourage researchers to think more about the policy process and how to engage effectively in it. If I’m trying to be annoying, I’ll suggest to a scientific audience that they see themselves as ‘rational’ and politicians as ‘irrational’. However, the more substantive discussion involves comparing (a) ‘how to make an impact’ advice drawn from the personal accounts of experienced individuals, giving advice to individuals, and (b) the sort of advice you might draw from policy theories which focus more on systems.

Background post: What can you do when policymakers ignore your evidence?

Early career researchers: the need to build ‘impact’ into career development

Canberra UNSW is more focused on early career researchers. I think this is the most difficult talk because I don’t rely on the same joke about my role: to turn up at the end of research projects to explain why they failed to have a non-academic impact.  Instead, my aim is to encourage intelligent discussion about situating the ‘how to’ advice for individual researchers into a wider discussion of policymaking systems.

Similarly, Brisbane A and B are about how to engage with practitioners, and communicate well to non-academic audiences, when most of your work and training is about something else entirely (such as learning about research methods and how to engage with the technical language of research).

Background posts:

What can you do when policymakers ignore your evidence? Tips from the ‘how to’ literature from the science community

What can you do when policymakers ignore your evidence? Encourage ‘knowledge management for policy’

See also:

European Health Forum Gastein 2018 ‘Policy in Evidence’ (from 6 minutes)

https://webcasting.streamdis.eu/Mediasite/Play/8143157d976146b4afd297897c68be5e1d?catalog=62e4886848394f339ff678a494afd77f21&playFrom=126439&autoStart=true

 

See also:

Evidence-based policymaking and the new policy sciences

 


Filed under Evidence Based Policymaking (EBPM)

Managing expectations about the use of evidence in policy

Notes for the #transformURE event hosted by Nuffield, 25th September 2018

I like to think that I can talk with authority on two topics that, much like a bottle of Pepsi and a pack of Mentos, you should generally keep separate:

  1. When talking at events on the use of evidence in policy, I say that you need to understand the nature of policy and policymaking to understand the role of evidence in it.
  2. When talking with students, we begin with the classic questions ‘what is policy?’ and ‘what is the policy process’, and I declare that we don’t know the answer. We define policy to show the problems with all definitions of policy, and we discuss many models and theories that only capture one part of the process. There is no ‘general theory’ of policymaking.

The problem, when you put together those statements, is that you need to understand the role of evidence within a policy process that we don’t really understand.

It’s an OK conclusion if you just want to declare that the world is complicated, but not if you seek ways to change it or operate more effectively within it.

Put less gloomily:

  • We have ways to understand key parts of the policy process. They are not ready-made to help us understand evidence use, but we can use them intelligently.
  • Most policy theories exist to explain policy dynamics, not to help us adapt effectively to them, but we can derive general lessons with often-profound implications.

Put even less gloomily, it is not too difficult to extract/ synthesise key insights from policy theories, explain their relevance, and use them to inform discussions about how to promote your preferred form of evidence use.

The only remaining problem is that, although the resultant advice looks quite straightforward, it is far easier said than done. The proposed actions are more akin to the Labours of Hercules than [PAC: insert reference to something easier].

They include:

  1. Find out where the ‘action’ is, so that you can find the right audience for your evidence. Why? There are many policymakers and influencers spread across many levels and types of government.
  2. Learn and follow the ‘rules of the game’. Why? Each policymaking venue has its own rules of engagement and evidence gathering, and the rules are often informal and unwritten.
  3. Gain access to ‘policy networks’. Why? Most policy is processed at a low level of government, beyond the public spotlight, between relatively small groups of policymakers and influencers. They build up trust as they work together, learning who is reliable and authoritative, and converging on how to use evidence to understand the nature and solution to policy problems.
  4. Learn the language. Why? Each venue has its own language to reflect dominant ideas, beliefs, or ways to understand a policy problem. In some arenas, there is a strong respect for a ‘hierarchy’ of evidence. In others, the key reference point may be value for money. In some cases, the language reflects the closing-off of some policy solutions (such as redistributing resources from one activity to another).
  5. Exploit windows of opportunity. Why? Events, and changes in socioeconomic conditions, often prompt shifts of attention to policy issues. ‘Policy entrepreneurs’ lie in wait for the right time to exploit a shift in the motive and opportunity of a policymaker to pay attention to and try to solve a problem.

So far so good, until you consider the effort it would take to achieve any of these things: you may need to devote the best part of your career to these tasks with no guarantee of success.

Put more positively, it is better to be equipped with these insights, and to appreciate the limits to our actions, than to think we can use top tips to achieve ‘research impact’ in a more straightforward way.

Kathryn Oliver and I describe these ‘how to’ tips in this post and, in a forthcoming article in Political Studies Review, use a wider focus on policymaking environments to produce a more realistic sense of what individual researchers – and research-producing organisations – could achieve.

There is some sensible-enough advice out there for individuals – produce good evidence, communicate it well, form relationships with policymakers, be available, and so on – but I would exercise caution when it begins to recommend being ‘entrepreneurial’. The opportunities to be entrepreneurial are not shared equally, most entrepreneurs fail, and we can likely better explain their success with reference to their environment than their skill.



Filed under agenda setting, Evidence Based Policymaking (EBPM), public policy, UK politics and policy

Evidence-based policymaking: political strategies for scientists living in the real world

Note: I wrote the following discussion (last year) to be a Nature Comment but it was not to be!

Nature articles on evidence-based policymaking often present what scientists would like to see: rules to minimise bias caused by the cognitive limits of policymakers, and a simple policy process in which we know how and when to present the best evidence.[1]  What if neither requirement is ever met? Scientists will despair of policymaking while their competitors engage pragmatically and more effectively.[2]

Alternatively, if scientists learned from successful interest groups, or by using insights from policy studies, they could develop three ‘take home messages’: understand and engage with policymaking in the real world; learn how and when evidence ‘wins the day’; and, decide how far you should go to maximise the use of scientific evidence. Political science helps explain this process[3], and new systematic and thematic reviews add new insights.[4] [5] [6] [7]

Understand and engage with policymaking in the real world

Scientists are drawn to the ‘policy cycle’, because it offers a simple – but misleading – model for engagement with policymaking.[3] It identifies a core group of policymakers at the ‘centre’ of government, perhaps giving the impression that scientists should identify the correct ‘stages’ in which to engage (such as ‘agenda setting’ and ‘policy formulation’) to ensure the best use of evidence at the point of authoritative choice. This is certainly the image generated most frequently by health and environmental scientists when they seek insights from policy studies.[8]

Yet, this model does not describe reality. Many policymakers, in many levels and types of government, adopt and implement many measures at different times. For simplicity, we call the result ‘policy’ but almost no modern policy theory retains the linear policy cycle concept. In fact, it is more common to describe counterintuitive processes in which, for example, by the time policymaker attention rises to a policy problem at the ‘agenda setting’ stage, it is too late to formulate a solution. Instead, ‘policy entrepreneurs’ develop technically and politically feasible solutions then wait for attention to rise and for policymakers to have the motive and opportunity to act.[9]

Experienced government science advisors recognise this inability of the policy cycle image to describe real world policymaking. For example, Sir Peter Gluckman presents an amended version of this model, in which there are many interacting cycles in a kaleidoscope of activity, defying attempts to produce simple flow charts or decision trees. He describes the ‘art and craft’ of policy engagement, using simple heuristics to deal with a complex and ‘messy’ policy system.[10]

Policy studies help us identify two such heuristics or simple strategies.

First, respond to policymaker psychology by adapting to the short cuts they use to gather enough information quickly: ‘rational’, via trusted sources of oral and written evidence, and ‘irrational’, via their beliefs, emotions, and habits. Policy theories describe many interest group or ‘advocacy coalition’ strategies, including a tendency to combine evidence with emotional appeals, romanticise their own cause and demonise their opponents, or tell simple emotional stories with a hero and moral to exploit the biases of their audience.[11]

Second, adapt to complex ‘policy environments’ including: many policymakers at many levels and types of government, each with their own rules of evidence gathering, network formation, and ways of understanding policy problems and relevant socioeconomic conditions.[2] For example, advocates of international treaties often find that the evidence-based arguments their international audience takes for granted become hotly contested at national or subnational levels (even if the national government is a signatory), while the same interest groups presenting the same evidence of a problem can be key insiders in one government department but ignored in another.[3]

Learn the conditions under which evidence ‘wins the day’ in policymaking

Consequently, the availability and supply of scientific evidence, on the nature of problems and effectiveness of solutions, is a necessary but insufficient condition for evidence-informed policy. Three others must be met: actors use scientific evidence to persuade policymakers to pay attention to, and shift their understanding of, policy problems; the policy environment becomes broadly conducive to policy change; and, actors exploit attention to a problem, the availability of a feasible solution, and the motivation of policymakers, during a ‘window of opportunity’ to adopt specific policy instruments.[10]

Tobacco control represents a ‘best case’ example (box 1) from which we can draw key lessons for ecological and environmental policies, giving us a sense of perspective by highlighting the long term potential for major evidence-informed policy change. However, unlike their colleagues in public health, environmental scientists have not developed a clear sense of how to produce policy instruments that are technically and politically feasible, so the delivery of comparable policy change is not inevitable.[12]

Box 1: Tobacco policy as a best case and cautionary tale of evidence-based policymaking

Tobacco policy is a key example – and useful comparator for ecological and environmental policies – since it represents a best case scenario and cautionary tale.[13] On the one hand, the scientific evidence on the links between smoking, mortality, and preventable death forms the basis for modern tobacco control policy. Leading countries – and the World Health Organisation, which oversees the Framework Convention on Tobacco Control (FCTC) – frame tobacco use as a public health ‘epidemic’ and allow their health departments to take the policy lead. Health departments foster networks with public health and medical groups at the expense of the tobacco industry, and emphasise the socioeconomic conditions – reductions in (a) smoking prevalence, (b) opposition to tobacco control, and (c) economic benefits to tobacco – most supportive of tobacco control. This framing, and conducive policymaking environment, helps give policymakers the motive and opportunity to choose policy instruments, such as bans on smoking in public places, which would otherwise seem politically infeasible.

On the other hand, even in a small handful of leading countries such as the UK, it took twenty to thirty years to go from the supply of the evidence to a proportionate government response: from the early evidence on smoking in the 1950s prompting major changes from the 1980s, to the evidence on passive smoking in the 1980s prompting public bans from the 2000s onwards. In most countries, the production of a ‘comprehensive’ set of policy measures is not yet complete, even though most signed the FCTC.

Decide how far you’ll go to maximise the use of scientific evidence in policymaking

These insights help challenge the naïve position that, if policymaking can change to become less dysfunctional[1], scientists can be ‘honest brokers’[14] and expect policymakers to use their evidence quickly, routinely, and sincerely. Even in the best case scenario, evidence-informed change takes hard work, persistence, and decades to achieve.

Since policymaking will always appear ‘irrational’ and ‘complex’[3], scientists need to think harder about their role, then choose to engage more effectively or accept their lack of influence.

To deal with ‘irrational’ policymakers, they should combine evidence with persuasion, simple stories, and emotional appeals, and frame their evidence to make the implications consistent with policymakers’ beliefs.

To deal with complex environments, they should engage for the long term to work out how to form alliances with influencers who share their beliefs, understand in which ‘venues’ authoritative decisions are made and carried out, the rules of information processing in those venues, and the ‘currency’ used by policymakers when they describe policy problems and feasible solutions.[2] In other words, develop skills that do not come with scientific training, avoid waiting for others to share your scientific mindset or respect for scientific evidence, and plan for the likely eventuality that policymaking will never become ‘evidence based’.

This approach may be taken for granted in policy studies[15], but it raises uncomfortable dilemmas regarding how far scientists should go, to maximise the use of scientific evidence in policy, using persuasion and coalition-building.

These dilemmas are too frequently overshadowed by claims – more comforting to scientists – that politicians are to blame because they do not understand how to generate, analyse, and use the best evidence. Scientists may only become effective in politics if they apply the same critical analysis to themselves.

[1] Sutherland, W.J. & Burgman, M. Nature 526, 317–318 (2015).

[2] Cairney, P. et al. Public Administration Review 76, 3, 399-402 (2016)

[3] Cairney, P. The Politics of Evidence-Based Policy Making (Palgrave Springer, 2016).

[4] Langer, L. et al. The Science of Using Science (EPPI, 2016)

[5] Breckon, J. & Dodson, J. Using Evidence. What Works? (Alliance for Useful Evidence, 2016)

[6] Palgrave Communications series The politics of evidence-based policymaking (ed. Cairney, P.)

[7] Practical lessons from policy theories (eds. Weible, C & Cairney, P.) Policy and Politics April 2018

[8] Oliver, K. et al. Health Research Policy and Systems, 12, 34 (2016)

[9] Kingdon, J. Agendas, Alternatives and Public Policies (Harper Collins, 1984)

[10] Gluckman, P. Understanding the challenges and opportunities at the science-policy interface

[11] Cairney, P. & Kwiatkowski, R. Palgrave Communications.

[12] Biesbroek et al. Nature Climate Change, 5, 6, 493–494 (2015)

[13] Cairney, P. & Yamazaki, M. Journal of Comparative Policy Analysis

[14] Pielke Jr, R. originated the specific term The honest broker (Cambridge University Press, 2007) but this role is described more loosely by other commentators.

[15] Cairney, P. & Oliver, K. Health Research Policy and Systems 15:35 (2017)


Filed under Evidence Based Policymaking (EBPM), public policy