Almost. I have sent a full draft following external feedback and review (next stage: copy-editing). All going well, it will be out in November 2019.
Here is the PowerPoint that I tend to use to inform discussions with civil servants (CS). I first used it for discussion with CS in the Scottish and UK governments, followed by remarkably similar discussions in parts of the New Zealand and Australian governments. Partly, it provides a way into common explanations for gaps between the supply of, and demand for, research evidence. However, it also provides a wider context within which to compare abstract and concrete reasons for those gaps, which inform a discussion of possible responses at individual, organisational, and systemic levels. Some of the gap is caused by a lack of effective communication, but we should also discuss the wider context in which such communication takes place.
I begin by telling civil servants about the message I give to academics about why policymakers might ignore their evidence:
In such talks, I go into different images of policymaking, comparing the simple policy cycle with images of ‘messy’ policymaking, then introducing my own image which describes the need to understand the psychology of choice within a complex policymaking environment.
Under those circumstances, key responses include:
However, note the context of those discussions. I tend to be speaking with scientific researcher audiences to challenge some preconceptions about: what counts as good evidence, how much evidence we can reasonably expect policymakers to process, and how easy it is to work out where and when to present evidence. It’s generally a provocative talk, to identify the massive scale of the evidence-to-policy task, not a simple ‘how to do it’ guide.
In that context, I suggest to civil servants that many academics might be interested in more CS engagement, but might be put off by the overwhelming scale of their task, and – even if they remained undeterred – would face some practical obstacles:
In that context, I suggest that CS should:
These introductory discussions provide a way into common descriptions of the gap between academic and policymaker:
To discuss possible responses, I use the European Commission Joint Research Centre’s ‘knowledge management for policy’ project, in which they identify the 8 core skills of organisations that bring together the suppliers and demanders of policy-relevant knowledge.
However, I also use the following table to highlight some caution about the things we can achieve with general skills development and organisational reforms. Sometimes, the incentives to engage will remain low. Further, engagement is no guarantee of agreement.
In a nutshell, the table provides three very different models of ‘evidence-informed policymaking’ when we combine political choices about what counts as good evidence and what counts as good policymaking (discussed at length in teaching evidence-based policy to fly). Discussion and clearer communication may help clarify our views on what makes a good model, but I doubt they will produce any agreement on what to do.
In the latter part of the talk, I go beyond that powerpoint into two broad examples of practical responses:
The Narrative Policy Framework describes the ‘science of stories’: we can identify stories with a 4-part structure (setting, characters, plot, moral) and measure their relative impact. Jones/Crow and Crow/Jones provide an accessible way into these studies. Also look at Davidson’s article on the ‘grey literature’ as a rich source of stories.
On one hand, I think that storytelling is a great possibility for researchers: it helps them produce a core – and perhaps emotionally engaging – message that they can share with a wider audience. Indeed, I’d see it as an extension of the process that academics are used to: identifying an audience and framing an argument according to the ways in which that audience understands the world.
On the other hand, it is important not to get carried away by the possibilities:
The article I co-authored with Oxfam staff helps identify the lengths to which we might think we have to go to maximise the impact of research evidence. Their strategies include:
In other words, a source of success stories may provide a model for engagement or the sense that we need to work with others to engage effectively. Clear communication is one thing. Clear impact at a significant scale is another.
Let’s imagine a heroic researcher, producing the best evidence and fearlessly ‘speaking truth to power’. Then, let’s place this person in four scenarios, each of which combines a discussion of evidence, policy, and politics in different ways.
Now, let’s use these scenarios to produce a 5-step way to ‘make evidence count’.
A narrow focus on making the supply of evidence count, via ‘evidence-based policymaking’, will always be dispiriting because it ignores politics or treats political choice as an inconvenience. If we:
In other words, think about the positive and necessary role of democratic politics before bemoaning post-truth politics and policy-based-evidence-making.
Policy is not made in a cycle containing a linear series of separate stages, and we won’t ‘make evidence count’ by using that image to inform our practices.
You might not want to give up the cycle image because it presents a simple account of how you should make policy. It suggests that we elect policymakers who then: identify their aims, identify policies to achieve those aims, select a policy measure, ensure that the selection is legitimised by the population or its legislature, identify the necessary resources, implement, and then evaluate. Or, policymakers aided by expert policy analysts make and legitimise choices, skilful public servants carry them out, and policy analysts assess the results using evidence.
One compromise is to keep the cycle then show how messy it is in practice:
However, there comes a point when there is too much mess, and the image no longer helps you explain (a) to the public what you are doing, or (b) to providers of evidence how they should engage in political systems. By this point, simple messages from more complicated policy theories may be more useful.
Or, we may no longer want a cycle to symbolise a single source of policymaking authority. In a multi-level system, with many ‘centres’ possessing their own sources of legitimate authority, a single and simple policy cycle seems too artificial to be useful.
People are ‘cognitive misers’ seeking ‘rational’ and ‘irrational’ shortcuts to gather information for action, so you won’t get far if you bombard them with too much evidence. Policymakers already have too much evidence and they seek ways to reduce their cognitive load, relying on: (a) trusted sources of concise evidence relevant to their aims, and (b) their own experience, gut instinct, beliefs, and emotions.
The implication of both shortcuts is that we need to tell simple and persuasive stories about the substance and implications of the evidence we present. To say that ‘the evidence does not speak for itself’ may seem trite, but I’ve met too many people who assume naively that it will somehow ‘win the day’. In contrast, civil servants know that the evidence-informed advice they give to ministers needs to relate to the story that government ministers tell to the public.
In a complex or multi-level environment, one story to one audience (such as a minister) is not enough. If there are many key sources of policymaking authority – including public bodies with high autonomy, organisations and practitioners with the discretion to deliver services, and service users involved in designing services – there are many stories being told about what we should be doing and why. We may convince one audience and alienate (or fail to inspire) another with the same story.
Let me give you one example of the dilemmas that must arise when you combine evidence and politics to produce policy: how do you produce a model of ‘evidence-based best practice’ which combines evidence and governance principles in a consistent way? Here are 3 ideal-type models which answer the question in very different ways.
The table helps us think through the tensions between models, built on very different principles of good evidence and governance.
In practice, you may want to combine different elements, perhaps while arguing that the loss of consistency is lower than the gain from flexibility. Or, the dynamics of political systems limit such choice or prompt ad hoc and inconsistent choices.
I built a lot of this analysis on the experiences of the Scottish Government, which juggles all three models, including a key focus on improvement method in its Early Years Collaborative.
However, Kathryn Oliver and I show that the UK government faces the same basic dilemma and addresses it in similar ways.
The example freshest in my mind is Sure Start. Its rationale was built on RCT evidence and systematic review. However, its roll-out was built more on local flexibility and service design than insistence on fidelity to a model. More recently, the Troubled Families programme initially set the policy agenda and criteria for inclusion, but increasingly invites local public bodies to select the most appropriate interventions, aided by the Early Intervention Foundation which reviews the evidence but does not insist on one-best-way. Emily St Denny and I explore these issues further in our forthcoming book on prevention policy, an exemplar case study of a field in which it is difficult to know how to ‘make evidence count’.
*Background. This is a post for my talk at the Government Economic Service and Government Social Research Service Annual Training Conference (15th September 2017). This year’s theme is ‘Impact and Future-Proofing: Making Evidence Count’. My brief is to discuss evidence use in the Scottish Government, but it faces the same basic question as the UK Government: how do you combine principles of evidence quality with principles of good governance? In other words, if you were in a position to design (a) an evidence-gathering system and (b) a political system, you’d soon find major points of tension between them. Resolving those tensions involves political choice, not more evidence. Of course, you are not in a position to design both systems, so the more complicated question is: how do you satisfy principles of evidence and governance in a complex policy process, often driven by policymaker psychology, over which you have little control? Here are 7 different ‘answers’.
PowerPoint: Paul Cairney @ GES GSRS 2017
This is a guest post by Michael D. Jones (left) and Deserai Anderson Crow (right), discussing how to use insights from the Narrative Policy Framework to think about how to tell effective stories to achieve policy goals. The full paper has been submitted to the series for Policy and Politics called Practical Lessons from Policy Theories.
Imagine. You are an ecologist. You recently discovered that a chemical that is discharged from a local manufacturing plant is threatening a bird that locals love to watch every spring. Now, imagine that you desperately want your research to be relevant and make a difference to help save these birds. All of your training gives you depth of expertise that few others possess. Your training also gives you the ability to communicate and navigate things such as probabilities, uncertainty, and p-values with ease.
But as NPR’s Robert Krulwich argues, focusing on this very specialized training when you communicate policy problems could lead you in the wrong direction. While being true to the science and the best practices of your training, you must also be able to tell a compelling story. Perhaps combine your scientific findings with the story about the little old ladies who feed the birds in their backyards on spring mornings, emphasizing the beauty and majesty of these avian creatures, their role in the community, and how the toxic chemicals are not just a threat to the birds, but also a threat to the community’s understanding of itself and its sense of place. The latest social science shows that if you tell a good story, your policy communications are likely to be more effective.
The world is complex. We are bombarded with information as we move through our lives and we seek patterns within that information to simplify complexity and reduce ambiguity, so that we can make sense of the world and act within it.
The primary means by which human beings render complexity understandable and reduce ambiguity is through the telling of stories. We “fit” the world around us, and the myriad objects and people therein, into story patterns. We are by nature storytelling creatures. And if this is true of us as individuals, then we can also safely assume that storytelling matters for public policy, where complexity and ambiguity abound.
Based on our (hopefully) forthcoming article (which has a heavy debt to Jones and Peterson, 2017 and Catherine Smith’s popular textbook), here we offer some abridged advice synthesizing some of the most current social science findings about how best to engage in public policy storytelling. We break it down into five easy steps and offer a short discussion of likely intervention points within the policy process.
There are crucial points in the policy process where actors can use narratives to achieve their goals. We call these “intervention points” and all intervention points should be viewed as opportunities to tell a good policy story, although each will have its own constraints.
These intervention points include the most formal types of policy communication such as crafting of legislation or regulation, expert testimony or statements, and evaluation of policies. They also include less formal communications through the media and by citizens to government.
Each of these interventions can frequently be dry and jargon-laden, but it’s important to remember that by employing effective narratives within any of them, you are much more likely to see your policy goals met.
When considering how to construct your story within one or more of the various intervention points, we urge you to first consider several aspects of your role as a narrator.
Without deliberate consideration of your role, audience, the intervention point, and how your narrative links all of these pieces together, you are relying on chance to tell a compelling policy story.
On the other hand, thoughtful and purposeful storytelling that remains true to you, your values, your craft, and your best understanding of the facts, can allow you to be both the ecologist and the bird lover.
This is a post for my talk at the ‘Politheor: European Policy Network’ event Write For Impact: Training In Op-Ed Writing For Policy Advocacy. There are other speakers with more experience of, and advice on, ‘op-ed’ writing. My aim is to describe key aspects of politics and policymaking to help the audience learn why they should write op-eds in a particular way for particular audiences.
A key rule in writing is to ‘know your audience’, but it’s easier said than done if you seek many sympathetic audiences in many parts of a complex policy process. Two simple rules should help make this process somewhat clearer:
We can use the same broad concepts to help explain both processes, in which many policymakers and influencers interact across many levels and types of government to produce what we call ‘policy’:
Policymakers receive too much information, and seek ways to ignore most of it while making decisions. To do so, they use ‘rational’ and ‘irrational’ means: selecting a limited number of regular sources of information, and relying on emotion, gut instinct, habit, and familiarity with information. In other words, your audience combines cognition and emotion to deal with information, and they can ignore information for long periods then quickly shift their attention towards it, even if that information has not really changed.
Consequently, an op-ed focusing solely on ‘the facts’ can be relatively ineffective compared to an evidence-informed story, perhaps with a notional setting, plot, hero, and moral. Your aim shifts from providing more and more evidence to reduce uncertainty about a problem, to providing a persuasive reason to reduce ambiguity. Ambiguity refers to the fact that policymakers can understand a policy problem in many different ways – such as tobacco as an economic good, an issue of civil liberties, or a public health epidemic – but often pay exclusive attention to one.
So, your aim may be to influence the simple ways in which people understand the world, to influence their demand for more information. An emotional appeal can transform a factual case, but only if you know how people engage emotionally with information. Sometimes, the same story can succeed with one audience but fail with another.
Institutions are the rules people use in policymaking, including the formal, written down, and well understood rules setting out who is responsible for certain issues, and the informal, unwritten, and unclear rules informing action. The rules used by policymakers can help define the nature of a policy problem, who is best placed to solve it, who should be consulted routinely, and who can safely be ignored. These rules can endure for long periods and become like habits, particularly if policymakers pay little attention to a problem or why they define it in a particular way.
Such informal rules, about how to understand a problem and who to speak with about it, can be reinforced in networks of policymakers and influencers.
‘Policy community’ partly describes a sense that most policymaking is processed out of the public spotlight, often despite minimal high level policymaker interest. Senior policymakers delegate responsibility for policymaking to bureaucrats, who seek information and advice from groups. Groups exchange information for access to, and potential influence within, government, and policymakers have ‘standard operating procedures’ that favour particular sources of evidence and some participants over others.
‘Policy community’ also describes a sense that the network seems fairly stable, built on high levels of trust between participants, based on factors such as reliability (the participant was a good source of information, and did not complain too much in public about decisions), a common aim or shared understanding of the problem, or the sense that influencers represent important groups.
So, the same policy case can have a greater impact if told by a well trusted actor in a policy community. Or, that community member may use networks to build key coalitions behind a case, use information from the network to understand which cases will have most impact, or know which audiences to seek.
This use of networks relates partly to learning the language of policy debate in particular ‘venues’, to learn what makes a convincing case. This language partly reflects a well-established ‘world view’, or the ‘core beliefs’ shared by participants. For example, a very specific ‘evidence-based’ language is used frequently in public health, while treasury departments look for some recognition of ‘value for money’ (according to a particular understanding of how you determine VFM). So, knowing your audience means knowing the terms of debate that are often so central to their worldview that they take them for granted and, in contrast, the forms of argument that are more difficult to pursue because they are challenging or unfamiliar to some audiences. Imagine a case that completely challenges someone’s worldview, or one which is entirely consistent with it.
Some worldviews can be shattered by external events or crises, but this is a rare occurrence. It may be possible to generate a sense of crisis with reference to socioeconomic changes or events, but people will interpret these developments through the ‘lens’ of their own beliefs. In some cases, events seem impossible to ignore, but we may not agree on their implications for action. In others, an external event only matters if policymakers pay attention to it. Indeed, we began this discussion with the insight that policymakers have to ignore almost all such information available to them.
Know your audience revisited: practical lessons from policy theories
To take into account all of these factors, while trying to make a very short and persuasive case, may seem impossible. Instead, we might pick up some basic rules of thumb from particular theories or approaches. We can discuss a few examples from ongoing work on ‘practical lessons from policy theories’.
Storytelling for policy impact
If you are telling a story with a setting, plot, hero, and moral, it may be more effective to focus on a hero than a villain. More importantly, imagine two contrasting audiences: one is moved by a personal story told to highlight some structural barriers to the wellbeing of key populations; another is unmoved, judges that person harshly, and thinks they would have done better in their shoes (perhaps they prefer to build policy on stereotypes of target populations). ‘Knowing your audience’ may involve some trial and error to determine which stories work under which circumstances.
Appealing to coalitions
Or, you may decide that it is impossible to write anything to appeal to all relevant audiences. Instead, you might tailor it to one, to reinforce its beliefs and encourage people to act. The ‘advocacy coalition framework’ describes such activities as routine: people go into politics to translate their beliefs into policy, they interpret the world through those beliefs, and they romanticise their own cause while demonising their opponents. If so, would a bland op-ed have much effect on any audience?
Learning from entrepreneurs
‘Policy entrepreneurs’ draw on three rules, two of which seem counterintuitive:
It all adds up to one simple piece of advice – timing and luck matter when making a policy case – but policy entrepreneurs know how to influence timing and help create their own luck.
On the day, we can use such concepts to help us think through the factors that you might think about while writing op-eds, even though it is very unlikely that you would mention them in your written work.
I went to a fantastic workshop on storytelling for policy change. It was hosted by Open Society Foundations New York (25/6 October), and brought together a wide range of people from different backgrounds: Narativ, people experienced in telling their own story, advocacy and professional groups using stories to promote social or policy change, major funders, journalists, and academics. There was already a lot of goodwill in the room at the beginning, and by the end there was more than a lot!
The OSF plans to write up a summary of the whole discussion, so my aim is to highlight the relevance for ‘evidence-based policymaking’ and for scientists and academics seeking more ‘impact’ for their research. In short, although I recommend that scientists ‘turn a large amount of scientific evidence into simple and effective stories that appeal to the biases of policymakers’, it’s easier said than done, and not something scientists are trained in. Good storytellers might enthuse people already committed to the idea of storytelling for policy, but what about scientists more committed to the language of scientific evidence and perhaps sceptical about the need to develop this new skill (particularly those who describe stories pejoratively as ‘anecdata’)? What would make them take a leap in the dark, to give up precious research time to develop skills in storytelling?
So, let me tell you why I thought the workshop was brilliant – including outlining its key insights – and why you might not!
Why I thought it was brilliant
Academic conferences can be horrible: a seemingly never-ending list of panels with 4-5 paper givers and a discussant, taking up almost all of the talking time with too-long, often self-indulgent PowerPoint presentations, leaving little time for meaningful discussion. It’s a test of meeting deadlines for the presenter and an endurance test for the listener.
This workshop was different: the organisers thought about what it means to talk and listen, and therefore how to encourage people to talk in an interesting way and encourage high attention and engagement.
There were three main ‘listening exercises’: a personal exercise in which you closed your eyes and thought about the obstacles to listening (I confess that I cheated on that one); a paired exercise in which one person listened and thought of three poses to sum up the other’s short story; and a group exercise in which people paired up, told and then summarised each other’s stories, and spoke as a group about the implications.
This final exercise was powerful: we told often-revealing stories to strangers, built up trust very quickly, and became emotionally involved in each other’s accounts. It was interesting to watch how quickly we could become personally invested in each other’s discussion, form networks, and listen intently to each other.
For me, it was a good exercise in demonstrating what you need in a policymaker audience: ideally, they should care about the problem you raise, be personally invested in trying to solve it, and trust you and therefore your description of the most feasible solutions. If it helps recreate these conditions, a storytelling scientist may be more effective than an ‘honest broker’. Without a good story to engage your audience, your evidence will be like a drop in the ocean and your audience might be checking its email or playing Pokemon Go while you present.
Key insights and impressions
Most participants expressed strong optimism about the effect of stories on society and policy, particularly when the aim is more expressive than instrumental: the act itself of telling one’s story and being heard can be empowering, particularly within marginalised groups from which we hear few voices. It can also be remarkably powerful, remarkably quickly: most of us were crying or laughing instantly and frequently as we heard many moving stories about many issues. It’s hard to overstate just how effective many of these stories were when you heard them in person.
When discussing more instrumental concerns – can we use a story to get what we want? – the optimism was more cautious and qualified. Key themes included:
Many of these points will seem familiar if you study psychology or the psychology of policymaking. So, the benefit of these experiences is that they tell us how people have applied such insights and how it has helped their cause. Most speakers were confident that they were making an impact.
Why you may not be as impressed: two reasons
The first barrier to getting you enthusiastic is that you weren’t there. If emotional engagement is such a key part of storytelling, and you weren’t there to hear it, why would you care? So, a key barrier to making an ‘impact’ with storytelling is that it is difficult to increase its scale. You might persuade someone if they spent enough time with you, but what if you only had a few seconds in which to impress them or, worse still, you couldn’t impress them because they weren’t interested in the first place? Our worry may be that we can only influence people who are already open to our idea. This isn’t the end of the world, since a key political aim may be to enthuse people who share your beliefs and get them to act (for example, to spread the word to their friends). However, it prompts us to wonder about the varying effect of the same message and the extent to which our message’s power comes from our audience rather than our story.
The second barrier is that, when the question is framed for an academic audience – what is the scientific evidence on the impact of stories? – the answer is not clear.
On the panel devoted to this question (and in a previous session), there were some convincing accounts of the impact of initiatives such as: the Women’s Policy Institute’s ‘grass roots’ training in California (leading to advocacy that prompted 2 dozen bills to be signed over 13 years); Purpose’s branding campaign for the White Helmets (including the ‘miracle baby’ video, which has received tens of millions of views); and the FrameWorks Institute’s ability to change minds with very brief interventions (for example, getting people to think in terms of problem systems more than problem individuals in areas like criminal justice).
However, the academic analysis – with contributions from Francesca Polletta, Jeff Niederdeppe, Douglas Storey, Michael Jones – tended to stress caution or note limited effects:
More research required?!
So, you might want more convincing evidence before you take that giant leap to train to become a skilful storyteller: why go for it when its effects are so unclear and difficult to measure?
For me, that response might seem sensible, but it is also a cop-out: unequivocal evidence may never arrive, and good science often involves researching as you go. A key insight into policymaking concerns a continuous sense of urgency to solve problems: policymakers don’t wait for the evidence to become unequivocal before they act, partly because that sense of clarity may never happen. They feel the need to act on the basis of available evidence. Perhaps scientists should at least think about doing the same when they seek to act on research rather than simply do the research: how long should you postpone potentially valuable action with the old cliché ‘more research required’?
See also: the OSF summary of the workshop