Tag Archives: story telling

Theory and Practice: How to Communicate Policy Research beyond the Academy

Notes (and audio) for my first talk at the University of Queensland, Wednesday 24th October, 12.30pm, Graduate Centre, room 402.

Here is the powerpoint that I tend to use to inform discussions with civil servants (CS). I first used it for discussions with CS in the Scottish and UK governments, followed by remarkably similar discussions in parts of the New Zealand and Australian governments. Partly, it provides a way into common explanations for gaps between the supply of, and demand for, research evidence. However, it also provides a wider context within which to compare abstract and concrete reasons for those gaps, which inform a discussion of possible responses at individual, organisational, and systemic levels. Some of the gap is caused by a lack of effective communication, but we should also discuss the wider context in which such communication takes place.

I begin by telling civil servants about the message I give to academics about why policymakers might ignore their evidence:

  1. There are many claims to policy relevant knowledge.
  2. Policymakers have to ignore most evidence.
  3. There is no simple policy cycle in which we all know at what stage to provide what evidence.

[Slide 3, 24.10.18]

In such talks, I go into different images of policymaking, comparing the simple policy cycle with images of ‘messy’ policymaking, then introducing my own image which describes the need to understand the psychology of choice within a complex policymaking environment.

Under those circumstances, key responses include:

  • framing evidence in terms of the ways in which your audience understands policy problems
  • engaging in networks to identify and exploit the right time to act, and
  • venue shopping to find sympathetic audiences in different parts of political systems.

However, note the context of those discussions. I tend to be speaking with scientific researcher audiences to challenge some preconceptions about: what counts as good evidence, how much evidence we can reasonably expect policymakers to process, and how easy it is to work out where and when to present evidence. It’s generally a provocative talk, to identify the massive scale of the evidence-to-policy task, not a simple ‘how to do it’ guide.

In that context, I suggest to civil servants that many academics might be interested in more CS engagement, but might be put off by the overwhelming scale of their task, and – even if they remained undeterred – would face some practical obstacles:

  1. They may not know where to start: who should they contact to start making connections with policymakers?
  2. The incentives and rewards for engagement may not be clear. The UK’s ‘impact’ agenda has changed things, but not to the extent that any engagement is good engagement. Researchers need to tell a convincing story that they made an impact on policy/ policymakers with their published research, so there is a notional tipping point at which engagement reaches a scale that makes it worth doing.
  3. The costs are clearer. For example, any time spent doing engagement is time away from writing grant proposals and journal articles (in other words, the things that still make careers).
  4. The rewards and costs are not spread evenly. Put most simply, white male professors may have the most opportunities and face the fewest penalties for engagement in policymaking and social media. Or, the opportunities and rewards may vary markedly by discipline. In some, engagement is routine. In others, it is time away from core work.

In that context, I suggest that CS should:

  • provide clarity on what they expect from academics, and when they need information
  • describe what they can offer in return (which might be as simple as a written and signed acknowledgement of impact, or formal inclusion on an advisory committee)
  • show some flexibility: you may have a tight deadline, but can you reasonably expect an academic to drop what they are doing at short notice?
  • engage routinely with academics, to help form networks and identify the right people at the right time

These introductory discussions provide a way into common descriptions of the gaps between academics and policymakers, who often differ in their:

  • Technical languages/ jargon used to describe their work
  • Timescales for supplying and demanding information
  • Professional incentives (such as valuing scientific novelty in academia but evidential synthesis in government)
  • Comfort with uncertainty (scientists often project relatively high uncertainty and don’t want to get ahead of the evidence; policymakers often need to project certainty and decisiveness)
  • Assessments of the relative value of scientific evidence compared to other forms of policy-relevant information
  • Assessments of the role of values and beliefs (some scientists want to draw the line between providing evidence and giving advice; some policymakers want them to go much further)

To discuss possible responses, I use the European Commission Joint Research Centre’s ‘knowledge management for policy’ project, which identifies the eight core skills of organisations that bring together the suppliers and demanders of policy-relevant knowledge.

Figure 1

However, I also use the following table to highlight some caution about the things we can achieve with general skills development and organisational reforms. Sometimes, the incentives to engage will remain low. Further, engagement is no guarantee of agreement.

In a nutshell, the table provides three very different models of ‘evidence-informed policymaking’ when we combine political choices about what counts as good evidence and what counts as good policymaking (discussed at length in ‘Teaching evidence-based policy to fly’). Discussion and clearer communication may help clarify our views on what makes a good model, but I doubt they will produce agreement on what to do.

[Table 1: three ideal types of EBPM]

In the latter part of the talk, I go beyond that powerpoint into two broad examples of practical responses:

  1. Storytelling

The Narrative Policy Framework describes the ‘science of stories’: we can identify stories with a four-part structure (setting, characters, plot, moral) and measure their relative impact. Jones/ Crow and Crow/ Jones provide an accessible way into these studies. Also look at Davidson’s article on the ‘grey literature’ as a rich source of stories about stories.

On one hand, I think that storytelling is a great possibility for researchers: it helps them produce a core – and perhaps emotionally engaging – message that they can share with a wider audience. Indeed, I’d see it as an extension of the process that academics are used to: identifying an audience and framing an argument according to the ways in which that audience understands the world.

On the other hand, it is important not to get carried away by the possibilities:

  • My reading of the NPF empirical work is that the most impactful stories reinforce the beliefs of the audience – mobilising them to act – rather than changing their minds.
  • Also look at the work of the FrameWorks Institute, which experiments with individual versus thematic stories because people react to them in very different ways. Some might empathise with an individual story; some might judge it harshly. For example, they discuss stories about low income families and healthy eating, in which they use the theme of a maze to help people understand the lack of good choices available to people in areas with limited access to healthy food.

See: Storytelling for Policy Change: promise and problems

  2. Evidence for advocacy

The article I co-authored with Oxfam staff helps identify the lengths to which we might think we have to go to maximise the impact of research evidence. Their strategies include:

  1. Identifying the policy change they would like to see.
  2. Identifying the powerful actors they need to influence.
  3. A mixture of tactics: insider, outsider, and supporting others by, for example, boosting local civil society organisations.
  4. A mix of ‘evidence types’ for each audience.

[Table 2 from the Oxfam article]

  5. Wider public campaigns to address the political environment in which policymakers consider choices
  6. Engaging stakeholders in the research process (often called the ‘co-production of knowledge’)
  7. Framing: personal stories, ‘killer facts’, visuals, credible messengers
  8. Exploiting ‘windows of opportunity’
  9. Monitoring, learning, trial and error

In other words, a source of success stories may provide a model for engagement or the sense that we need to work with others to engage effectively. Clear communication is one thing. Clear impact at a significant scale is another.

See: Using evidence to influence policy: Oxfam’s experience


Filed under agenda setting, Evidence Based Policymaking (EBPM)

Is politics and policymaking about sharing evidence and facts or telling good stories? Two very silly examples from #SP16

Sometimes, in politics, people know and agree about basic facts. This agreement provides the basis on which they can articulate their values, debate policy choices, and sell their choices to a reasonably well informed public. There are winners and losers from the choices, but at least it is based on a process in which facts or evidence play a major part.

Sometimes, people don’t seem to agree on anything. The extent to which they disagree seems wacky (as in the devil shift). So, there is no factual basis for a debate. Instead, people tell stories to each other and the debate hinges on the extent to which (a) someone tells a persuasive story, and (b) you already agree with its ‘moral’ and/ or the person telling you the story.*

Silly example one: the Scottish rate of income tax (SRIT)

The SRIT is a great example because it shows you that people can’t even agree on how to describe the arithmetic underpinning policy choices. My favourite example is here, on how to describe % increases on percentages:

[Image: Calum C and Blair M debate how to describe a percentage increase in the SRIT]
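
To see why even the arithmetic becomes contested, here is a minimal worked example (the figures are illustrative assumptions, not the numbers in the exchange above): the same 1p rise can be described, correctly, in three different ways depending on the baseline you choose.

```python
# Illustrative assumptions: a 1p rise, with a 20p overall basic rate of
# income tax, of which 10p is notionally set via the SRIT.
overall_rate = 20.0   # pence in the pound, overall (assumed)
scottish_rate = 10.0  # pence of that set by the SRIT (assumed)
rise = 1.0            # the proposed 1p rise

print(f"a {rise:.0f} percentage point rise in the overall rate (20p to 21p)")
print(f"a {100 * rise / overall_rate:.0f}% increase in the overall rate")    # 5%
print(f"a {100 * rise / scottish_rate:.0f}% increase in the Scottish rate")  # 10%
```

All three descriptions are arithmetically defensible, which is precisely what lets each side pick the framing that suits its story.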

This problem amplifies the more important problem: the income tax is toxic, few politicians want to touch it, and they would rather show you the dire effects of other people using it. Currently, the best way to do this is to worry about the effect of any tax rise on the pay of nurses (almost always the heroes of the NHS and most-uncalled-for victims of policy change). So, if you combine the arithmetic debate with the focus on nurses, you get this:

[Image: Calum C and others debate the effect of a SRIT rise on nurses’ pay]

What you make of it will, I think, depend largely on who you trust, such as Calum C for the SNP/ Yes versus Blair M for Labour/ No. Then, if you want to read more, you can, for example, choose some Scottish Labour-friendly analysis of its plans to increase the SRIT by 1p while compensating lower earners (a, b), see it as a disaster not criticised enough by the BBC, or take your pick of two stories on the extent to which it did a ‘U-turn’.

This is before we even get to the big debate! What could have been a values-driven discussion about the benefits and costs of raising income tax to fund services, or about who should win and lose from taxation changes, has generally turned into a pedantic and (deliberately?) confusing debate about the meaning of ‘progressive’ taxation (David Eiser describes a rise in SRIT as ‘slightly progressive’), the likely income from each 1p change in taxation, and the unintended consequences of greater higher-rate taxation in Scotland.

So, your choice is to (a) do a lot of reading and critical analysis to get your head around the SRIT, or (b) decide who to trust to tell you what’s what.

Silly example two: who should you give your ‘second vote’ to?

The SNP will gain a majority in the Scottish Parliament despite an electoral system (‘mixed-member proportional’) designed to be far more proportional than the plurality system of Westminster: 56 seats from regional lists, allocated using the d’Hondt divisor, offset some of the disproportionality of the 73 constituency seats determined by plurality vote. Yet the regional seats only make the result more proportional; they do not guarantee proportionality. The SNP’s 50% share of the vote secured 56 of 59 MPs (95%) in the 2015 UK general election. If, as seems likely from the polls, it can maintain that level of support in constituency votes, it might secure a majority before the regional votes are even counted.
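
To make the mechanics concrete, here is a minimal sketch of the d’Hondt allocation (the function name, party names, and vote totals are my own hypothetical illustrations, not real results): each party’s list vote is divided by one plus the seats it already holds, so constituency success directly reduces the value of a party’s ‘second vote’.

```python
# A sketch of the d'Hondt divisor used for Scottish regional list seats.
# All party names and vote totals below are hypothetical, not real #SP16 data.

def allocate_list_seats(list_votes, constituency_wins, n_seats=7):
    """Allocate n_seats regional list seats one at a time.

    Each party's quotient is votes / (total seats already held + 1),
    counting constituency wins, so sweeping the constituencies
    depresses a party's quotient from the first round."""
    list_seats = {party: 0 for party in list_votes}
    for _ in range(n_seats):
        quotients = {
            party: votes / (constituency_wins.get(party, 0) + list_seats[party] + 1)
            for party, votes in list_votes.items()
        }
        winner = max(quotients, key=quotients.get)  # ties resolved arbitrarily here
        list_seats[winner] += 1
    return list_seats

# Hypothetical region: the SNP sweeps all 9 constituencies, so its first
# list quotient is already votes/10 and it wins no list seats at all.
votes = {"SNP": 120_000, "Labour": 60_000, "Conservative": 40_000, "Green": 20_000}
print(allocate_list_seats(votes, constituency_wins={"SNP": 9}))
# -> {'SNP': 0, 'Labour': 4, 'Conservative': 2, 'Green': 1}
```

This is the arithmetic behind the ‘second vote’ debate below: a party that expects to sweep the constituencies gets very little marginal return from its list vote.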

So, if the SNP wins almost all of the constituency seats, the competition for votes has taken on an unusual dimension: all the other parties will be getting all or most of their seats from the regional vote.

This situation has prompted some debate about the extent to which SNP-voters should (a) vote SNP twice (#bothvotessnp) to secure a very small number of extra seats in the regions where they don’t win all constituency contests, or (b) give their ‘second’/regional vote to a Yes-supporting smaller party like the Scottish Greens or RISE.

Here comes the silly bit. When John Curtice sort-of-seemed-not-really to suggest that people should choose option b (see original report by the ERS, described in The Herald) you’d think that he’d put a bag of shit on the SNP’s doorstep and the ERS had set fire to it and rung the doorbell.

So, unless you are willing to read about the kind of sophisticated calculations discussed in the ERS report, your next choice is to listen to a story about (a) people out to get the heroic SNP by duping voters into increasing the chances of more Union-loving MSPs (e.g. Labour or Conservative) getting in through the back door, or (b) those plucky heroes, such as the Greens or RISE, standing up to the villainous SNP.

In both cases, it is inevitable that many people will base their decisions on such stories, which is why they look so silly but matter so much.

*For the most part, the cause is the ‘complexity’ of the world and our need to adapt to it by ignoring most of it. To do so, we (just like policymakers) use major cognitive short cuts – including our emotional, gut, and habitual responses – to turn too-much information into a manageable amount. This process helps make us susceptible to ‘framing’ when people present that information to us in a particular way.


Filed under Evidence Based Policymaking (EBPM), Scottish politics

Policy Concepts in 1000 Words: the intersection between evidence and policy transfer

(podcast download)

We can generate new insights on policymaking by connecting the dots between many separate concepts. However, don’t underestimate the obstacles, or how hard these dot-connecting exercises are for others to understand. They may seem clear in your head, but describing them (and getting people to go along with your description) is another matter. You need to set out these links clearly and in a set of logical steps. I give one example – of the links between evidence and policy transfer – which I have been struggling with for some time.

In this post, I combine three concepts – policy transfer, bounded rationality, and ‘evidence-based policymaking’ – to identify the major dilemmas faced by central government policymakers when they use evidence to identify a successful policy solution and consider how to import it and ‘scale it up’ within their jurisdiction. For example, do they use randomised control trials (RCTs) to establish the effectiveness of interventions and require uniform national delivery (to ensure the correct ‘dosage’), or tell stories of good practice and invite people to learn and adapt to local circumstances? I use these examples to demonstrate that our judgement of good evidence influences our judgement on the mode of policy transfer.

Insights from each concept

From studies of policy transfer, we know that central governments (a) import policies from other countries and/ or (b) encourage the spread (‘diffusion’) of successful policies which originated in regions within their country: but how do they use evidence to identify success and decide how to deliver programs?

From studies of ‘evidence-based policymaking’ (EBPM), we know that providers of scientific evidence identify an ‘evidence-policy gap’ in which policymakers ignore the evidence of a problem and/ or do not select the best evidence-based solution: but can policymakers simply identify the ‘best’ evidence and ‘roll-out’ the ‘best’ evidence-based solutions?

From studies of bounded rationality and the policy cycle (compared with alternative theories, such as multiple streams analysis or the advocacy coalition framework), we know that it is unrealistic to think that a policymaker at the heart of government can simply identify then select a perfect solution, click their fingers, and see it carried out. This limitation is more pronounced when we identify multi-level governance, or the diffusion of policymaking power across many levels and types of government. Even if they were not limited by bounded rationality, they would still face: (a) practical limits to their control of the policy process, and (b) a normative dilemma about how far you should seek to control subnational policymaking to ensure the delivery of policy solutions.

The evidence-based policy transfer dilemma

If we combine these insights we can identify a major policy transfer dilemma for central government policymakers:

  1. If subject to bounded rationality, they need to use short cuts to identify (what they perceive to be) the best sources of evidence on the policy problem and its solution.
  2. At the same time, they need to determine if there is convincing evidence of success elsewhere, to allow them to: (a) import policy from another country, and/ or (b) ‘scale up’ a solution that seems to be successful in one of its regions.
  3. Then they need to decide how to ‘spread success’, either by (a) ensuring that the best policy is adopted by all regions within its jurisdiction, or (b) accepting that their role in policy transfer is limited: they identify ‘best practice’ and merely encourage subnational governments to adopt particular policies.

Note how closely connected these concerns are: our judgement of the ‘best evidence’ can produce a judgement on how to ‘scale up’ success.

Here are three ideal-type approaches to using evidence to transfer or ‘scale up’ successful interventions. In at least two cases, the choice of ‘best evidence’ seems linked inextricably to the choice of transfer strategy:

[Table: three ideal types of EBPM]

With approach 1, you gather evidence of effectiveness with reference to a hierarchy of evidence, with systematic reviews and RCTs at the top (see pages 4, 15, 33). This has a knock-on effect for ‘scaling up’: you introduce the same model in each area, requiring ‘fidelity’ to the model to ensure you administer the correct ‘dosage’ and measure its effectiveness with RCTs.

With approach 2, you reject this hierarchy and place greater value on practitioner and service user testimony. You do not necessarily ‘scale up’. Instead, you identify good practice (or good governance principles) by telling stories based on your experience and inviting other people to learn from them.

With approach 3, you gather evidence of effectiveness based on a mix of evidence. You seek to ‘scale up’ best practice through local experimentation and continuous data gathering (by practitioners trained in ‘improvement methods’).

The comparisons between approaches 1 and 2 (in particular) show us the strong link between a judgement on evidence and transfer. Approach 1 requires particular methods to gather evidence and high policy uniformity when you transfer solutions, while approach 2 places more faith in the knowledge and judgement of practitioners.

Therefore, our choice of what counts as EBPM can determine our policy transfer strategy. Or, a different transfer strategy may – if you adhere to an evidential hierarchy – preclude EBPM.

Further reading

I describe these issues, with concrete examples of each approach here, and in far more depth here:

Evidence-based best practice is more political than it looks: ‘National governments use evidence selectively to argue that a successful policy intervention in one local area should be emulated in others (‘evidence-based best practice’). However, the value of such evidence is always limited because there is: disagreement on the best way to gather evidence of policy success, uncertainty regarding the extent to which we can draw general conclusions from specific evidence, and local policymaker opposition to interventions not developed in local areas. How do governments respond to this dilemma? This article identifies the Scottish Government response: it supports three potentially contradictory ways to gather evidence and encourage emulation’.

Both articles relate to ‘prevention policy’ and the examples (so far) are from my research in Scotland, but in a future paper I’ll try to convince you that the issues are ‘universal’.


Filed under 1000 words, Evidence Based Policymaking (EBPM), Prevention policy, public policy