Tag Archives: Storytelling

Policy Analysis in 750 words: Deborah Stone (2012) Policy Paradox

Please see the Policy Analysis in 750 words series overview before reading the summary. This post is 750 words plus a bonus 750 words plus some further reading that doesn’t count in the word count even though it does.


Deborah Stone (2012) Policy Paradox: The Art of Political Decision Making 3rd edition (Norton)

‘Whether you are a policy analyst, a policy researcher, a policy advocate, a policy maker, or an engaged citizen, my hope for Policy Paradox is that it helps you to go beyond your job description and the tasks you are given – to think hard about your own core values, to deliberate with others, and to make the world a better place’ (Stone, 2012: 15)

Stone (2012: 379-85) rejects the image of policy analysis as a ‘rationalist’ project, driven by scientific and technical rules, and separable from politics. Rather, every policy analyst’s choice is a political choice – to define a problem and solution, and in doing so choosing how to categorise people and behaviour – backed by strategic persuasion and storytelling.

The Policy Paradox: people entertain multiple, contradictory, beliefs and aims

Stone (2012: 2-3) describes the ways in which policy actors compete to define policy problems and public policy responses. The ‘paradox’ is that it is possible to define the same policies in contradictory ways.

‘Paradoxes are nothing but trouble. They violate the most elementary principle of logic: something can’t be two different things at once. Two contradictory interpretations can’t both be true. A paradox is just such an impossible situation, and political life is full of them’ (Stone, 2012: 2).

This paradox does not refer simply to a competition between different actors to define policy problems and the success or failure of solutions. Rather:

  • The same actor can entertain very different ways to understand problems, and can juggle many criteria to decide that a policy outcome was a success and a failure (2012: 3).
  • Surveys of the same population can report contradictory views – encouraging a specific policy response and its complete opposite – when respondents are asked different questions in the same poll (2012: 4; compare with Riker).

Policy analysts: you don’t solve the Policy Paradox with a ‘rationality project’

Like many posts in this series (Smith, Bacchi, Hindess), Stone (2012: 9-11) rejects the misguided notion of objective scientists using scientific methods to produce one correct answer (compare with Spiegelhalter and Weimer & Vining). A policy paradox cannot be solved by ‘rational, analytical, and scientific methods’.

Further, Stone (2012: 10-11) rejects the over-reliance, in policy analysis, on the misleading claim that:

  • policymakers are engaging primarily with markets rather than communities (see 2012: 35 on the comparison between a ‘market model’ and ‘polis model’),
  • economic models can sum up political life, and
  • cost-benefit-analysis can reduce a complex problem into the sum of individual preferences using a single unambiguous measure.

Rather, many factors undermine such simplicity:

  1. People do not simply act in their own individual interest. Nor can they rank-order their preferences in a straightforward manner according to their values and self-interest.
  • Instead, they maintain a contradictory mix of objectives, which can change according to context and their way of thinking – combining cognition and emotion – when processing information (2012: 12; 30-4).
  2. People are social actors. Politics is characterised by ‘a model of community where individuals live in a dense web of relationships, dependencies, and loyalties’ and exercise power with reference to ideas as much as material interests (2012: 10; 20-36; compare with Ostrom, more Ostrom, and Lubell).
  3. Morals and emotions matter. If people juggle contradictory aims and measures of success, then a story infused with ‘metaphor and analogy’, and appealing to values and emotions, prompts people ‘to see a situation as one thing rather than another’ and therefore draw attention to one aim at the expense of the others (2012: 11; compare with Gigerenzer).

Policy analysis reconsidered: the ambiguity of values and policy goals

Stone (2012: 14) identifies the ambiguity of the criteria for success used in 5-step policy analyses. They do not form part of a solely technical or apolitical process to identify trade-offs between well-defined goals (compare Bardach, Weimer and Vining, and Mintrom). Rather, ‘behind every policy issue lurks a contest over conflicting, though equally plausible, conceptions of the same abstract goal or value’ (2012: 14). Examples of competing interpretations of valence issues include definitions of:

  1. Equity, according to: (a) which groups should be included, how to assess merit, how to identify key social groups, if we should rank populations within social groups, how to define need and account for different people placing different values on a good or service, (b) which method of distribution to use (competition, lottery, election), and (c) how to balance individual, communal, and state-based interventions (2012: 39-62).
  2. Efficiency, to use the least resources to produce the same objective, according to: (a) who determines the main goal and how to balance multiple objectives, (b) who benefits from such actions, and (c) how to define resources while balancing equity and efficiency – for example, do a public sector job and a social security payment represent a sunk cost to the state or a social investment in people? (2012: 63-84).
  3. Welfare or Need, according to factors including (a) the material and symbolic value of goods, (b) short term support versus a long term investment in people, (c) measures of absolute poverty or relative inequality, and (d) debates on ‘moral hazard’ or the effect of social security on individual motivation (2012: 85-106).
  4. Liberty, according to (a) a general balancing of freedom from coercion and freedom from the harm caused by others, (b) debates on individual and state responsibilities, and (c) decisions on whose behaviour to change to reduce harm to what populations (2012: 107-28).
  5. Security, according to (a) our ability to measure risk scientifically (see Spiegelhalter and Gigerenzer), (b) perceptions of threat and experiences of harm, (c) debates on how much risk to safety to tolerate before intervening, (d) who to target and imprison, and (e) the effect of surveillance on perceptions of democracy (2012: 129-53).

Policy analysis as storytelling for collective action

Actors use policy-relevant stories to influence the ways in which their audience understands (a) the nature of policy problems and feasibility of solutions, within (b) a wider context of policymaking in which people contest the proper balance between state, community, and market action. Stories can influence key aspects of collective action, including:

  1. Defining interests and mobilising actors, by drawing attention to – and framing – issues with reference to an imagined social group and its competition (e.g. the people versus the elite; the strivers versus the skivers) (2012: 229-47).
  2. Making decisions, by framing problems and solutions (2012: 248-68). Stone (2012: 260) contrasts the ‘rational-analytic model’ with real-world processes in which actors deliberately frame issues ambiguously, shift goals, keep feasible solutions off the agenda, and manipulate analyses to make their preferred solution seem the most efficient and popular.
  3. Defining the role and intended impact of policies, such as when balancing punishments versus incentives to change behaviour, or individual versus collective behaviour (2012: 271-88).
  4. Setting and enforcing rules (see institutions), in a complex policymaking system where a multiplicity of rules interact to produce uncertain outcomes, and a powerful narrative can draw attention to the need to enforce some rules at the expense of others (2012: 289-310).
  5. Persuasion, drawing on reason, facts, and indoctrination. Stone (2012: 311-30) highlights the context in which actors construct stories to persuade: people engage emotionally with information, people take certain situations for granted even though they produce unequal outcomes, facts are socially constructed, and there is unequal access to resources – held in particular by government and business – to gather and disseminate evidence.
  6. Defining human and legal rights, when (a) there are multiple, ambiguous, and intersecting rights (in relation to their source, enforcement, and the populations they serve), (b) actors compete to make sure that theirs are enforced, (c) inevitably at the expense of others, because the enforcement of rights requires a disproportionate share of limited resources (such as policymaker attention and court time) (2012: 331-53).
  7. Influencing debate on the powers of each potential policymaking venue – in relation to factors including (a) the legitimate role of the state in market, community, family, and individual life, (b) how to select leaders, (c) the distribution of power between levels and types of government – and who to hold to account for policy outcomes (2012: 354-77).

Key elements of storytelling include:

  1. Symbols, which sum up an issue or an action in a single picture or word (2012: 157-8).
  2. Characters, such as heroes or villains, who symbolise the cause of a problem or source of solution (2012: 159).
  3. Narrative arcs, such as a battle by your hero to overcome adversity (2012: 160-8).
  4. Synecdoche, to highlight one example of an alleged problem to sum up its whole (2012: 168-71; compare the ‘welfare queen’ example with SCPD).
  5. Metaphor, to create an association between a problem and something relatable, such as a virus or disease, a natural occurrence (e.g. earthquake), something broken, something about to burst if overburdened, or war (2012: 171-78; e.g. is crime a virus or a beast?).
  6. Ambiguity, to give people different reasons to support the same thing (2012: 178-82).
  7. Using numbers to tell a story, based on political choices about how to: categorise people and practices, select the measures to use, interpret the figures to evaluate or predict the results, project the sense that complex problems can be reduced to numbers, and assign authority to the counters (2012: 183-205; compare with Spiegelhalter).
  8. Assigning causation, in relation to categories including accidental or natural, ‘mechanical’ or automatic (or in relation to institutions or systems), and human-guided causes that have intended or unintended consequences (such as malicious intent versus recklessness).
  • ‘Causal strategies’ include: emphasising a natural versus human cause, relating it to ‘bad apples’ rather than systemic failure, and suggesting that the problem was too complex to anticipate or influence.
  • Actors use these arguments to influence rules, assign blame, identify ‘fixers’, and generate alliances among victims or potential supporters of change (2012: 206-28).

Wider Context and Further Reading: 1. Policy analysis

This post connects to several other 750 Words posts, which suggest that facts don’t speak for themselves. Rather, effective analysis requires you to ‘tell your story’, in a concise way, tailored to your audience.

For example, consider two ways to establish cause and effect in policy analysis:

One is to conduct and review multiple randomised control trials.

Another is to use a story of a hero or a villain (perhaps to mobilise actors in an advocacy coalition).

  2. Evidence-based policymaking

Stone (2012: 10) argues that analysts who try to impose one worldview on policymaking will find that ‘politics looks messy, foolish, erratic, and inexplicable’. For analysts who are more open-minded, politics opens up possibilities for creativity and cooperation (2012: 10).

This point is directly applicable to the ‘politics of evidence based policymaking’. A common question to arise from this worldview is ‘why don’t policymakers listen to my evidence?’ and one answer is ‘you are asking the wrong question’.

  3. Policy theories highlight the value of stories (to policy analysts and academics)

Policy problems and solutions necessarily involve ambiguity:

  1. There are many ways to interpret problems, and we resolve such ambiguity by exercising power to attract attention to one way to frame a policy problem at the expense of others (in other words, not with reference to one superior way to establish knowledge).
  2. Policy is actually a collection of – often contradictory – policy instruments and institutions, interacting in complex systems or environments, to produce unclear messages and outcomes. As such, what we call ‘public policy’ (for the sake of simplicity) is subject to interpretation and manipulation as it is made and delivered, and we struggle to conceptualise and measure policy change. Indeed, it makes more sense to describe competing narratives of policy change.


  4. Policy theories and storytelling

People communicate meaning via stories. Stories help us turn (a) a complex world, which provides a potentially overwhelming amount of information, into (b) something manageable, by identifying its most relevant elements and guiding action (compare with Gigerenzer on heuristics).

The Narrative Policy Framework identifies the storytelling strategies of actors seeking to exploit other actors’ cognitive shortcuts, using a particular format – containing the setting, characters, plot, and moral – to focus on some beliefs over others, and reinforce someone’s beliefs enough to encourage them to act.

Compare with Tuckett and Nikolic on the stories that people tell to themselves.


Understanding Public Policy 2nd edition

All going well, it will be out in November 2019. We are now at the proofing stage.

I have included below the summaries of the chapters (and each chapter should also have its own entry (or multiple entries) in the 1000 Words and 500 Words series).

[Images: the 2nd edition cover and summaries of the title page and chapters 1-13]


Theory and Practice: How to Communicate Policy Research beyond the Academy

Notes (and audio) for my first talk at the University of Queensland, Wednesday 24th October, 12.30pm, Graduate Centre, room 402.

Here is the powerpoint that I tend to use to inform discussions with civil servants (CS). I first used it for discussions with CS in the Scottish and UK governments, followed by remarkably similar discussions in parts of the New Zealand and Australian governments. Partly, it provides a way into common explanations for gaps between the supply of, and demand for, research evidence. However, it also provides a wider context within which to compare abstract and concrete reasons for those gaps, which informs a discussion of possible responses at individual, organisational, and systemic levels. Some of the gap is caused by a lack of effective communication, but we should also discuss the wider context in which such communication takes place.

I begin by telling civil servants about the message I give to academics about why policymakers might ignore their evidence:

  1. There are many claims to policy relevant knowledge.
  2. Policymakers have to ignore most evidence.
  3. There is no simple policy cycle in which we all know at what stage to provide what evidence.


In such talks, I go into different images of policymaking, comparing the simple policy cycle with images of ‘messy’ policymaking, then introducing my own image which describes the need to understand the psychology of choice within a complex policymaking environment.

Under those circumstances, key responses include:

  • framing evidence in terms of the ways in which your audience understands policy problems
  • engaging in networks to identify and exploit the right time to act, and
  • venue shopping to find sympathetic audiences in different parts of political systems.

However, note the context of those discussions. I tend to be speaking with scientific researcher audiences to challenge some preconceptions about: what counts as good evidence, how much evidence we can reasonably expect policymakers to process, and how easy it is to work out where and when to present evidence. It’s generally a provocative talk, to identify the massive scale of the evidence-to-policy task, not a simple ‘how to do it’ guide.

In that context, I suggest to civil servants that many academics might be interested in more CS engagement, but might be put off by the overwhelming scale of their task, and – even if they remained undeterred – would face some practical obstacles:

  1. They may not know where to start: who should they contact to start making connections with policymakers?
  2. The incentives and rewards for engagement may not be clear. The UK’s ‘impact’ agenda has changed things, but not to the extent that any engagement is good engagement. Researchers need to tell a convincing story that they made an impact on policy/ policymakers with their published research, so there is a notional tipping point at which engagement reaches a scale that makes it worth doing.
  3. The costs are clearer. For example, any time spent doing engagement is time away from writing grant proposals and journal articles (in other words, the things that still make careers).
  4. The rewards and costs are not spread evenly. Put most simply, white male professors may have the most opportunities and face the fewest penalties for engagement in policymaking and social media. Or, the opportunities and rewards may vary markedly by discipline. In some, engagement is routine. In others, it is time away from core work.

In that context, I suggest that CS should:

  • provide clarity on what they expect from academics, and when they need information
  • describe what they can offer in return (which might be as simple as a written and signed acknowledgement of impact, or formal inclusion on an advisory committee)
  • show some flexibility: you may have a tight deadline, but can you reasonably expect an academic to drop what they are doing at short notice?
  • engage routinely with academics, to help form networks and identify the right people at the right time.

These introductory discussions provide a way into common descriptions of the gap between academics and policymakers:

  • Technical languages/ jargon to describe their work
  • Timescales to supply and demand information
  • Professional incentives (such as to value scientific novelty in academia but evidential synthesis in government)
  • Comfort with uncertainty (often, scientists project relatively high uncertainty and don’t want to get ahead of the evidence; often policymakers need to project certainty and decisiveness)
  • Assessments of the relative value of scientific evidence compared to other forms of policy-relevant information
  • Assessments of the role of values and beliefs (some scientists want to draw the line between providing evidence and advice; some policymakers want them to go much further)

To discuss possible responses, I use the European Commission Joint Research Centre’s ‘knowledge management for policy’ project, in which they identify the 8 core skills of organisations bringing together the suppliers and demanders of policy-relevant knowledge.


However, I also use the following table to highlight some caution about the things we can achieve with general skills development and organisational reforms. Sometimes, the incentives to engage will remain low. Further, engagement is no guarantee of agreement.

In a nutshell, the table provides three very different models of ‘evidence-informed policymaking’ when we combine political choices about what counts as good evidence, and what counts as good policymaking (discussed at length in teaching evidence-based policy to fly). Discussion and clearer communication may help clarify our views on what makes a good model, but I doubt it will produce any agreement on what to do.

[Table 1: three ideal types of ‘evidence-based best practice’]

In the latter part of the talk, I go beyond that powerpoint into two broad examples of practical responses:

  1. Storytelling

The Narrative Policy Framework describes the ‘science of stories’: we can identify stories with a 4-part structure (setting, characters, plot, moral) and measure their relative impact.  Jones/ Crow and Crow/Jones provide an accessible way into these studies. Also look at Davidson’s article on the ‘grey literature’ as a rich source of stories on stories.

On one hand, I think that storytelling is a great possibility for researchers: it helps them produce a core – and perhaps emotionally engaging – message that they can share with a wider audience. Indeed, I’d see it as an extension of the process that academics are used to: identifying an audience and framing an argument according to the ways in which that audience understands the world.

On the other hand, it is important to not get carried away by the possibilities:

  • My reading of the NPF empirical work is that the most impactful stories are reinforcing the beliefs of the audience – to mobilise them to act – not changing their minds.
  • Also look at the work of the FrameWorks Institute, which experiments with individual versus thematic stories because people react to them in very different ways. Some might empathise with an individual story; some might judge harshly. For example, they discuss stories about low income families and healthy eating, in which they use the theme of a maze to help people understand the lack of good choices available to people in areas with limited access to healthy food.

See: Storytelling for Policy Change: promise and problems

  2. Evidence for advocacy

The article I co-authored with Oxfam staff helps identify the lengths to which we might think we have to go to maximise the impact of research evidence. Their strategies include:

  1. Identifying the policy change they would like to see.
  2. Identifying the powerful actors they need to influence.
  3. A mixture of tactics: insider, outsider, and supporting others by, for example, boosting local civil society organisations.
  4. A mix of ‘evidence types’ for each audience.


  5. Wider public campaigns to address the political environment in which policymakers consider choices.
  6. Engaging stakeholders in the research process (often called the ‘co-production of knowledge’).
  7. Framing: personal stories, ‘killer facts’, visuals, credible messengers.
  8. Exploiting ‘windows of opportunity’.
  9. Monitoring, learning, trial and error.

In other words, a source of success stories may provide a model for engagement or the sense that we need to work with others to engage effectively. Clear communication is one thing. Clear impact at a significant scale is another.

See: Using evidence to influence policy: Oxfam’s experience


A 5-step strategy to make evidence count

Let’s imagine a heroic researcher, producing the best evidence and fearlessly ‘speaking truth to power’. Then, let’s place this person in four scenarios, each of which combines a discussion of evidence, policy, and politics in different ways.

  1. Imagine your hero presents to HM Treasury an evidence-based report concluding that a unitary UK state would be far more efficient than a union state guaranteeing Scottish devolution. The evidence is top quality and the reasoning is sound, but the research question is ridiculous. The result of political deliberation and electoral choice suggests that your hero is asking a research question that does not deserve to be funded in the current political climate. Your hero is a clown.
  2. Imagine your hero presents to the Department of Health a report based on the systematic review of multiple randomised control trials. It recommends that you roll out an almost-identical early years or public health intervention across the whole country. We need high ‘fidelity’ to the model to ensure the correct ‘dosage’ and to measure its effect scientifically. The evidence is of the highest quality, but the research question is not quite right. The government has decided to devolve this responsibility to local public bodies and/ or encourage the co-production of public service design by local public bodies, communities, and service users. So, to focus narrowly on fidelity would be to ignore political choices (perhaps backed by different evidence) about how best to govern. If you don’t know the politics involved, you will ask the wrong questions or provide evidence with unclear relevance. Your hero is either a fool, naïve to the dynamics of governance, or a villain willing to ignore governance principles.        
  3. Imagine two fundamentally different – but equally heroic – professions with their own ideas about evidence. One favours a hierarchy of evidence in which RCTs and their systematic review sit at the top, and service user and practitioner feedback is near the bottom. The other rejects this hierarchy completely, identifying the unique, complex relationship between practitioner and service user which requires high discretion to make choices in situations that will differ each time. Trying to resolve a debate between them with reference to ‘the evidence’ makes no sense. This is about a conflict between two heroes with opposing beliefs and preferences that can only be resolved through compromise or political choice. This is, oh I don’t know, Batman v Superman, saved by Wonder Woman.
  4. Imagine you want the evidence on hydraulic fracturing for shale oil and gas. We know that ‘the evidence’ follows the question: how much can we extract? How much revenue will it produce? Is it safe, from an engineering point of view? Is it safe, from a public health point of view? What will be its impact on climate change? What proportion of the public supports it? What proportion of the electorate supports it? Who will win and lose from the decision? It would be naïve to think that there is some kind of neutral way to produce an evidence-based analysis of such issues. The commissioning and integration of evidence has to be political. To pretend otherwise is a political strategy. Your hero may be another person’s villain.

Now, let’s use these scenarios to produce a 5-step way to ‘make evidence count’.

Step 1. Respect the positive role of politics

A narrow focus on making the supply of evidence count, via ‘evidence-based policymaking’, will always be dispiriting because it ignores politics or treats political choice as an inconvenience. If we:

  • begin with a focus on why we need political systems to make authoritative choices between conflicting preferences, and take governance principles seriously, we can
  • identify the demand for evidence in that context, then be more strategic and pragmatic about making evidence count, and
  • be less dispirited about the outcome.

In other words, think about the positive and necessary role of democratic politics before bemoaning post-truth politics and policy-based-evidence-making.

Step 2. Reject simple models of evidence-based policymaking

Policy is not made in a cycle containing a linear series of separate stages, and we won’t ‘make evidence count’ by using that image to inform our practices.


You might not want to give up the cycle image because it presents a simple account of how you should make policy. It suggests that we elect policymakers who then: identify their aims, identify policies to achieve those aims, select a policy measure, ensure that the selection is legitimised by the population or its legislature, identify the necessary resources, implement, and then evaluate. Or, policymakers aided by expert policy analysts make and legitimise choices, skilful public servants carry them out, and policy analysts assess the results using evidence.

One compromise is to keep the cycle, then show how messy it is in practice.

However, there comes a point when there is too much mess, and the image no longer helps you explain (a) to the public what you are doing, or (b) to providers of evidence how they should engage in political systems. By this point, simple messages from more complicated policy theories may be more useful.

Or, we may no longer want a cycle to symbolise a single source of policymaking authority. In a multi-level system, with many ‘centres’ possessing their own sources of legitimate authority, a single and simple policy cycle seems too artificial to be useful.

Step 3. Tell a simple story about your evidence

People are ‘cognitive misers’ seeking ‘rational’ and ‘irrational’ shortcuts to gather information for action, so you won’t get far if you bombard them with too much evidence. Policymakers already have too much evidence and they seek ways to reduce their cognitive load, relying on: (a) trusted sources of concise evidence relevant to their aims, and (b) their own experience, gut instinct, beliefs, and emotions.

The implication of both shortcuts is that we need to tell simple and persuasive stories about the substance and implications of the evidence we present. To say that ‘the evidence does not speak for itself’ may seem trite, but I’ve met too many people who assume naively that it will somehow ‘win the day’. In contrast, civil servants know that the evidence-informed advice they give to ministers needs to relate to the story that government ministers tell to the public.


Step 4. Tailor your story to many audiences

In a complex or multi-level environment, one story to one audience (such as a minister) is not enough. If there are many key sources of policymaking authority – including public bodies with high autonomy, organisations and practitioners with the discretion to deliver services, and service users involved in designing services – there are many stories being told about what we should be doing and why. We may convince one audience and alienate (or fail to inspire) another with the same story.

Step 5. Clarify and address key dilemmas with political choice, not evidence

Let me give you one example of the dilemmas that must arise when you combine evidence and politics to produce policy: how do you produce a model of ‘evidence-based best practice’ which combines evidence and governance principles in a consistent way? Here are 3 ideal-type models which answer the question in very different ways.

[Table 1: three ideal types of ‘evidence-based best practice’]

The table helps us think through the tensions between models, built on very different principles of good evidence and governance.

In practice, you may want to combine different elements, perhaps while arguing that the loss of consistency is lower than the gain from flexibility. Or, the dynamics of political systems limit such choice or prompt ad hoc and inconsistent choices.

I built a lot of this analysis on the experiences of the Scottish Government, which juggles all three models, including a key focus on the improvement method in its Early Years Collaborative.

However, Kathryn Oliver and I show that the UK government faces the same basic dilemma and addresses it in similar ways.

The example freshest in my mind is Sure Start. Its rationale was built on RCT evidence and systematic review. However, its roll-out was built more on local flexibility and service design than insistence on fidelity to a model. More recently, the Troubled Families programme initially set the policy agenda and criteria for inclusion, but increasingly invites local public bodies to select the most appropriate interventions, aided by the Early Intervention Foundation which reviews the evidence but does not insist on one-best-way. Emily St Denny and I explore these issues further in our forthcoming book on prevention policy, an exemplar case study of a field in which it is difficult to know how to ‘make evidence count’.

If you prefer a 3-step take home message:

  1. I think we use phrases like ‘impact’ and ‘make evidence count’ to reflect a vague and general worry about a decline in respect for evidence and experts. Certainly, when I go to large conferences of scientists, they usually tell a story about ‘post-truth’ politics.
  2. Usually, these stories do not acknowledge the difference between two different explanations for an evidence-policy gap: (a) pathological policymaking and corrupt politicians, versus (b) complex policymaking and politicians having to make choices despite uncertainty.
  3. To produce evidence with ‘impact’, and know how to ‘make evidence count’, we need to understand the policy process and the demand for evidence within it.

*Background. This is a post for my talk at the Government Economic Service and Government Social Research Service Annual Training Conference (15th September 2017). This year’s theme is ‘Impact and Future-Proofing: Making Evidence Count’. My brief is to discuss evidence use in the Scottish Government, but it faces the same basic question as the UK Government: how do you combine principles of evidence quality with principles of good governance? In other words, if you were in a position to design (a) an evidence-gathering system and (b) a political system, you’d soon find major points of tension between them. Resolving those tensions involves political choice, not more evidence. Of course, you are not in a position to design both systems, so the more complicated question is: how do you satisfy principles of evidence and governance in a complex policy process, often driven by policymaker psychology, over which you have little control? Here are 7 different ‘answers’.

Powerpoint Paul Cairney @ GES GSRS 2017


Telling Stories that Shape Public Policy

This is a guest post by Michael D. Jones and Deserai Anderson Crow, discussing how to use insights from the Narrative Policy Framework to think about how to tell effective stories to achieve policy goals. The full paper has been submitted to the Policy and Politics series called Practical Lessons from Policy Theories.

Imagine. You are an ecologist. You recently discovered that a chemical that is discharged from a local manufacturing plant is threatening a bird that locals love to watch every spring. Now, imagine that you desperately want your research to be relevant and make a difference to help save these birds. All of your training gives you depth of expertise that few others possess. Your training also gives you the ability to communicate and navigate things such as probabilities, uncertainty, and p-values with ease.

But as NPR’s Robert Krulwich argues, focusing on this very specialized training when you communicate policy problems could lead you in the wrong direction. While being true to the science and best practices of your training, you must also be able to tell a compelling story. Perhaps combine your scientific findings with the story about the little old ladies who feed the birds in their backyards on spring mornings, emphasizing the beauty and majesty of these avian creatures, their role in the community, and how the toxic chemicals are not just a threat to the birds, but are also a threat to the community’s understanding of itself and its sense of place. The latest social science is showing that if you tell a good story, your policy communications are likely to be more effective.

Why focus on stories?

The world is complex. We are bombarded with information as we move through our lives and we seek patterns within that information to simplify complexity and reduce ambiguity, so that we can make sense of the world and act within it.

The primary means by which human beings render complexity understandable and reduce ambiguity is through the telling of stories. We “fit” the world around us, and the myriad objects and people therein, into story patterns. We are by nature storytelling creatures. And if it is true of us as individuals, then we can also safely assume that storytelling matters for public policy, where complexity and ambiguity abound.

Based on our (hopefully) forthcoming article (which has a heavy debt to Jones and Peterson, 2017, and Catherine Smith’s popular textbook), here we offer some abridged advice synthesizing some of the most current social science findings about how best to engage in public policy storytelling. We break it down into five easy steps and offer a short discussion of likely intervention points within the policy process.

The 5 Steps of Good Policy Narrating

  1. Tell a Story: Remember, facts never speak for themselves. If you are presenting best practices, relaying scientific information, or detailing cost/benefit analyses, you are telling or contributing to a story. Engage your storytelling deliberately.
  2. Set the Stage: Policy narratives have a setting, and in this setting you will find specific evidence, geography, legal parameters, and other policy-consequential items and information. Think of these setting items as props. Not all stages can hold every relevant prop. Be true to science; be true to your craft, but set your stage with props that maximize the potency of your story, which always includes making your setting amenable to your audience.
  3. Establish the Plot: In public policy, plots usually define the problem (and policies do not exist without at least a potential problem). Define your problem. Doing so determines the causes, which establishes blame.
  4. Cast the Characters: Having established a plot and defined your problem, the roles you will need your characters to play become apparent. Determine who the victim is (who is harmed by the problem), who is responsible (the villain), and who can bring relief (the hero). Cast characters your audience will appreciate in their roles.
  5. Clearly Specify the Moral: Postmodern films might get away without having a point. Policy narratives usually do not. Let your audience know what the solution is.

Public Policy Intervention Points

There are crucial points in the policy process where actors can use narratives to achieve their goals. We call these “intervention points”, and all of them should be viewed as opportunities to tell a good policy story, although each will have its own constraints.

These intervention points include the most formal types of policy communication such as crafting of legislation or regulation, expert testimony or statements, and evaluation of policies. They also include less formal communications through the media and by citizens to government.

Each of these interventions can frequently be dry and jargon-laden, but it’s important to remember that by employing effective narratives within any of them, you are much more likely to see your policy goals met.

When considering how to construct your story within one or more of the various intervention points, we urge you to first consider several aspects of your role as a narrator.

  1. Who are you and what are your goals? Are you an outsider trying to effect change to solve a problem or push an agency to do something it might not be inclined to do? Are you an insider trying to evaluate and improve policy making and implementation? Understanding your role and your goals is essential to both selecting an appropriate intervention point and optimizing your narrative therein.
  2. Carefully consider your audience. Who are they and what is their posture towards your overall goal? Understanding your audience’s values and beliefs is essential for avoiding invoking defensiveness.
  3. Consider the intervention point itself: what is the best way to reach your audience? What are the rules for the type of communication you plan to use? For example, media communications can be done with lengthy press releases, interviews with the press, or in the confines of a simple tweet. All of these methods have both formal and informal constraints that will determine what you can and can’t do.

Without deliberate consideration of your role, audience, the intervention point, and how your narrative links all of these pieces together, you are relying on chance to tell a compelling policy story.

On the other hand, thoughtful and purposeful storytelling that remains true to you, your values, your craft, and your best understanding of the facts, can allow you to be both the ecologist and the bird lover.


Writing for Impact: what you need to know, and 5 ways to know it

This is a post for my talk at the ‘Politheor: European Policy Network’ event Write For Impact: Training In Op-Ed Writing For Policy Advocacy. There are other speakers with more experience of, and advice on, ‘op-ed’ writing. My aim is to describe key aspects of politics and policymaking to help the audience learn why they should write op-eds in a particular way for particular audiences.

A key rule in writing is to ‘know your audience’, but it’s easier said than done if you seek many sympathetic audiences in many parts of a complex policy process. Two simple rules should help make this process somewhat clearer:

  1. Learn how policymakers simplify their world, and
  2. Learn how policy environments influence their attention and choices.

We can use the same broad concepts to help explain both processes, in which many policymakers and influencers interact across many levels and types of government to produce what we call ‘policy’:

  1. Policymaker psychology: tell an evidence-informed story

Policymakers receive too much information, and seek ways to ignore most of it while making decisions. To do so, they use ‘rational’ and ‘irrational’ means: selecting a limited number of regular sources of information, and relying on emotion, gut instinct, habit, and familiarity with information. In other words, your audience combines cognition and emotion to deal with information, and they can ignore information for long periods then quickly shift their attention towards it, even if that information has not really changed.

Consequently, an op-ed focusing solely on ‘the facts’ can be relatively ineffective compared to an evidence-informed story, perhaps with a notional setting, plot, hero, and moral. Your aim shifts from providing more and more evidence to reduce uncertainty about a problem, to providing a persuasive reason to reduce ambiguity. Ambiguity relates to the fact that policymakers can understand a policy problem in many different ways – such as tobacco as an economic good, issue of civil liberties, or public health epidemic – but often pay exclusive attention to one.

So, your aim may be to influence the simple ways in which people understand the world, to influence their demand for more information. An emotional appeal can transform a factual case, but only if you know how people engage emotionally with information. Sometimes, the same story can succeed with one audience but fail with another.

  2. Institutions: learn the ‘rules of the game’

Institutions are the rules people use in policymaking, including the formal, written down, and well understood rules setting out who is responsible for certain issues, and the informal, unwritten, and unclear rules informing action. The rules used by policymakers can help define the nature of a policy problem, who is best placed to solve it, who should be consulted routinely, and who can safely be ignored. These rules can endure for long periods and become like habits, particularly if policymakers pay little attention to a problem or why they define it in a particular way.

  3. Networks and coalitions: build coalitions and establish trust

Such informal rules, about how to understand a problem and who to speak with about it, can be reinforced in networks of policymakers and influencers.

‘Policy community’ partly describes a sense that most policymaking is processed out of the public spotlight, often despite minimal high level policymaker interest. Senior policymakers delegate responsibility for policymaking to bureaucrats, who seek information and advice from groups. Groups exchange information for access to, and potential influence within, government, and policymakers have ‘standard operating procedures’ that favour particular sources of evidence and some participants over others.

‘Policy community’ also describes a sense that the network seems fairly stable, built on high levels of trust between participants, based on factors such as reliability (the participant was a good source of information, and did not complain too much in public about decisions), a common aim or shared understanding of the problem, or the sense that influencers represent important groups.

So, the same policy case can have a greater impact if told by a well trusted actor in a policy community. Or, that community member may use networks to build key coalitions behind a case, use information from the network to understand which cases will have most impact, or know which audiences to seek.

  4. Ideas: learn the ‘currency’ of policy argument

This use of networks relates partly to learning the language of policy debate in particular ‘venues’, to learn what makes a convincing case. This language partly reflects a well-established ‘world view’ or the ‘core beliefs’ shared by participants. For example, a very specific ‘evidence-based’ language is used frequently in public health, while treasury departments look for some recognition of ‘value for money’ (according to a particular understanding of how you determine VFM). So, knowing your audience is knowing the terms of debate that are often so central to their worldview that they take them for granted and, in contrast, the forms of argument that are more difficult to pursue because they are challenging or unfamiliar to some audiences. Imagine a case that challenges completely someone’s world view, or one which is entirely consistent with it.

  5. Socioeconomic factors and events: influence how policymakers see the outside world

Some worldviews can be shattered by external events or crises, but this is a rare occurrence. It may be possible to generate a sense of crisis with reference to socioeconomic changes or events, but people will interpret these developments through the ‘lens’ of their own beliefs. In some cases, events seem impossible to ignore, but we may not agree on their implications for action. In others, an external event only matters if policymakers pay attention to it. Indeed, we began this discussion with the insight that policymakers have to ignore almost all such information available to them.

Know your audience revisited: practical lessons from policy theories

To take into account all of these factors, while trying to make a very short and persuasive case, may seem impossible. Instead, we might pick up some basic rules of thumb from particular theories or approaches. We can discuss a few examples from ongoing work on ‘practical lessons from policy theories’.

Storytelling for policy impact

If you are telling a story with a setting, plot, hero, and moral, it may be more effective to focus on a hero than a villain. More importantly, imagine two contrasting audiences: one is moved by a personal story told to highlight some structural barriers to the wellbeing of key populations; another is unmoved, judges that person harshly, and thinks they would have done better in their shoes (perhaps they prefer to build policy on stereotypes of target populations). ‘Knowing your audience’ may involve some trial-and-error to determine which stories work under which circumstances.

Appealing to coalitions

Or, you may decide that it is impossible to write anything to appeal to all relevant audiences. Instead, you might tailor it to one, to reinforce its beliefs and encourage people to act. The ‘advocacy coalition framework’ describes such activities as routine: people go into politics to translate their beliefs into policy, they interpret the world through those beliefs, and they romanticise their own cause while demonising their opponents. If so, would a bland op-ed have much effect on any audience?

Learning from entrepreneurs

‘Policy entrepreneurs’ draw on three rules, two of which seem counterintuitive:

  1. Don’t focus on bombarding policymakers with evidence. Scientists focus on making more evidence to reduce uncertainty, but put people off with too much information. Entrepreneurs tell a good story, grab the audience’s interest, and the audience demands information.
  2. By the time people pay attention to a problem it’s too late to produce a solution. So, you produce your solution then chase problems.
  3. When your environment changes, your strategy changes. For example, at the US federal level, you’re in the sea, and you’re a surfer waiting for the big wave. At the smaller subnational level, on a low attention and low budget issue, you can be Poseidon moving the ‘streams’. At the US federal level, you need to ‘soften’ up solutions over a long time to generate support. In subnational venues, or in other countries, you have more opportunity to import and adapt ready-made solutions.

It all adds up to one simple piece of advice – timing and luck matter when making a policy case – but policy entrepreneurs know how to influence timing and help create their own luck.

On the day, we can use such concepts to help us think through the factors that you might think about while writing op-eds, even though it is very unlikely that you would mention them in your written work.


Storytelling for Policy Change: promise and problems

I went to a fantastic workshop on storytelling for policy change. It was hosted by the Open Society Foundations in New York (25-26 October), and brought together a wide range of people from different backgrounds: Narativ, people experienced in telling their own story, advocacy and professional groups using stories to promote social or policy change, major funders, journalists, and academics. There was already a lot of goodwill in the room at the beginning, and by the end there was more than a lot!

The OSF plans to write up a summary of the whole discussion, so my aim is to highlight the relevance for ‘evidence-based policymaking’ and for scientists and academics seeking more ‘impact’ for their research. In short, although I recommend that scientists ‘turn a large amount of scientific evidence into simple and effective stories that appeal to the biases of policymakers’, it’s easier said than done, and not something scientists are trained in. Good storytellers might enthuse people already committed to the idea of storytelling for policy, but what about scientists more committed to the language of scientific evidence and perhaps sceptical about the need to develop this new skill (particularly those who describe stories pejoratively as ‘anecdata’)? What would make them take a leap in the dark, to give up precious research time to develop skills in storytelling?

So, let me tell you why I thought the workshop was brilliant – including outlining its key insights – and why you might not!

Why I thought it was brilliant

Academic conferences can be horrible: a seemingly never-ending list of panels with 4-5 paper givers and a discussant, taking up almost all of the talking time with too-long, often self-indulgent PowerPoint presentations and leaving little time for meaningful discussion. It’s a test of meeting deadlines for the presenter and an endurance test for the listener.

This workshop was different: the organisers thought about what it means to talk and listen, and therefore how to encourage people to talk in an interesting way and encourage high attention and engagement.

There were three main ‘listening exercises’: a personal exercise in which you closed your eyes and thought about the obstacles to listening (I confess that I cheated on that one); a paired exercise in which one person listened and thought of three poses to sum up the other’s short story; and a group exercise in which people paired up, told and then summarised each other’s stories, and spoke as a group about the implications.

This final exercise was powerful: we told often-revealing stories to strangers, built up trust very quickly, and became emotionally involved in each other’s accounts. It was interesting to watch how quickly we could become personally invested in each other’s discussion, form networks, and listen intently to each other.

For me, it was a good exercise in demonstrating what you need in a policymaker audience: ideally, they should care about the problem you raise, be personally invested in trying to solve it, and trust you and therefore your description of the most feasible solutions. If it helps recreate these conditions, a storytelling scientist may be more effective than an ‘honest broker’. Without a good story to engage your audience, your evidence will be like a drop in the ocean and your audience might be checking its email or playing Pokemon Go while you present.

Key insights and impressions

Most participants expressed strong optimism about the effect of stories on society and policy, particularly when the aim is more expressive than instrumental: the act itself of telling one’s story and being heard can be empowering, particularly within marginalised groups from which we hear few voices. It can also be remarkably powerful, remarkably quickly: most of us were crying or laughing instantly and frequently as we heard many moving stories about many issues. It’s hard to overstate just how effective many of these stories were when you heard them in person.

When discussing more instrumental concerns – can we use a story to get what we want? – the optimism was more cautious and qualified. Key themes included:

  • The balance between effective and ethical storytelling, accepting that if a story is a commodity it can be used by people less sympathetic to our aims, and exploring the ethics of using the lens of sympathetic characters (e.g. white grandparents) to make the case for marginalised groups.
  • This ethical dimension was reinforced continuously by stories of vulnerable storytellers and the balance between telling their story and protecting their safety.
  • The importance of context: the same story may have more or less impact depending on the nature of the audience; many examples conveyed the sense that a story with huge impact now would have failed 10 or 20 years ago and/ or in a different region.
  • The importance of tailoring stories to the biases of audiences and trying to reframe the implications of your audience’s beliefs (one particularly interesting example was portraying equal marriage in Ireland as the Christian thing to do).
  • Many campaigns used humour and positive stories with heroes, based on the assumption that audiences would be put off by depressing stories of problems with no obvious solution but energised by a message of new possibilities.
  • Many warned against stories that were too personal, identifying the potential for an audience to want to criticise or fix an individual’s life rather than solve a systemic problem (and this individualistic interpretation was most pronounced among people identifying with right-wing parties).
  • Many described the need to experiment/ engage in trial-and-error to identify what works with each audience, including the length of written messages, choice of media, and choice of ‘thin’ stories with clear messages to generate quick attention or ‘thick’ stories which might be more memorable if we have the resources to tell them and people take the time to listen.

Many of these points will seem familiar if you study psychology or the psychology of policymaking. So, the benefit of these experiences is that they tell us how people have applied such insights and how it has helped their cause. Most speakers were confident that they were making an impact.

Why you may not be as impressed: two reasons

The first barrier to getting you enthusiastic is that you weren’t there. If emotional engagement is such a key part of storytelling, and you weren’t there to hear it, why would you care? So, a key barrier to making an ‘impact’ with storytelling is that it is difficult to increase its scale. You might persuade someone if they spent enough time with you, but what if you only had a few seconds in which to impress them or, worse still, you couldn’t impress them because they weren’t interested in the first place? Our worry may be that we can only influence people who are already open to our idea. This isn’t the end of the world, since a key political aim may be to enthuse people who share your beliefs and get them to act (for example, to spread the word to their friends). However, it prompts us to wonder about the varying effect of the same message and the extent to which our message’s power comes from our audience rather than our story.

The second barrier is that, when the question is framed for an academic audience – what is the scientific evidence on the impact of stories? – the answer is not clear.

On the panel devoted to this question (and in a previous session), there were some convincing accounts of the impact of initiatives such as: the Women’s Policy Institute ‘grass roots’ training in California (leading to advocacy prompting 2 dozen bills to be signed over 13 years); Purpose’s branding campaign for the White Helmets (including the miracle baby video which has received tens of millions of views); and the FrameWorks Institute’s ability to change minds with very brief interventions (for example, getting people to think in terms of problem systems more than problem individuals in areas like criminal justice).

However, the academic analysis – with contributions from Francesca Polletta, Jeff Niederdeppe, Douglas Storey, Michael Jones – tended to stress caution or note limited effects:

  • Stories work when they create an empathic reaction, but intensely personal experiences are difficult to recreate when mediated (soap operas and ‘edutainment’ come closest).
  • Randomised control trials suggest that stories have a measurable effect, but it’s small and compared to no intervention at all rather than a competing approach such as providing evidence in reports (note that most experimental studies do not draw on skilful storytellers).
  • Stories work most clearly when they reinforce the beliefs of your allies (and when you refer to heroes, not villains), but the effects are indirect at best with your opponents.

More research required?!

So, you might want more convincing evidence before you take that giant leap to train to become a skilful storyteller: why go for it when its effects are so unclear and difficult to measure?

For me, that response might seem sensible but is also a cop out: unequivocal evidence may never arrive and good science often involves researching as you go. A key insight into policymaking regards a continuous sense of urgency to solve problems: policymakers don’t wait for the evidence to become unequivocal before they act, partly because that sense of clarity may never happen. They feel the need to act on the basis of available evidence. Perhaps scientists should at least think about doing the same when they seek to act on research rather than simply do the research: how long should you postpone potentially valuable action with the old cliché ‘more research required’?


See also: the OSF summary of the workshop
