Category Archives: Storytelling

#EU4Facts: 3 take-home points from the JRC annual conference

See EU4FACTS: Evidence for policy in a post-fact world

The JRC’s annual conference has become a key forum in which to discuss the use of evidence in policy. At this scale, in which many hundreds of people attend plenary discussions, it feels like an annual mass rally for science; a ‘call to arms’ to protect the role of science in the production of evidence, and the protection of evidence in policy deliberation. There is not much discussion of storytelling, but we tell each other a fairly similar story about our fears for the future unless we act now.

Last year, the main story was of fear for the future of heroic scientists: the rise of Trump and the Brexit vote prompted many discussions of post-truth politics and reduced trust in experts. An immediate response was to describe attempts to come together, and stick together, to support each other’s scientific endeavours during a period of crisis. There was little call for self-analysis and reflection on the contribution of scientists and experts to barriers between evidence and policy.

This year was a bit different. There was the same concern about reduced trust in science, evidence, and/ or expertise, and some references to post-truth politics and populism, but new voices described the positive value of politics, often when discussing the need for citizen engagement, and the need to understand the relationship between facts, values, and politics.

For example, a panel on psychology opened up the possibility that we might consider our own politics and cognitive biases while we identify them in others, and one panellist spoke eloquently about the importance of narrative and storytelling in communicating to audiences such as citizens and policymakers.

A focus on narrative is not new, but it provides a challenging agenda when interacting with a sticky story of scientific objectivity. For the unusually self-reflective, it also reminds us that our annual discussions are not particularly scientific; the usual rules to assess our statements do not apply.

As in studies of policymaking, we can say that there is high support for such stories when they remain vague and driven more by emotion than the pursuit of precision. When individual speakers try to make sense of the same story, they do it in different – and possibly contradictory – ways. As in policymaking, the need to deliver something concrete helps focus the mind, and prompts us to make choices between competing priorities and solutions.

I describe these discussions in two ways: tables, in which I try to boil down each speaker’s speech into a sentence or two (you can get their full details in the programme and the speaker bios); and a synthetic discussion of the top 3 concerns, paraphrasing and combining arguments from many speakers:

1. What are facts?

The discussion began with a key distinction between politics, values, and facts, a distinction which is impossible to maintain in practice.

Yet, subsequent discussion revealed a more straightforward distinction between facts on the one hand and opinion, ‘fake news’, and lies on the other. The latter sums up an ever-present fear of the diminishing role of science in an alleged ‘post-truth’ era.

2. What exactly is the problem, and what is its cause?

The tables below provide a range of concerns about the problem, from threats to democracy to the need to communicate science more effectively. A theme of growing importance is the need to deal with the cognitive biases and informational shortcuts of people receiving evidence: communicate with reference to values, beliefs, and emotions; build up trust in your evidence via transparency and reliability; and, be prepared to discuss science with citizens and to be accountable for your advice. There was less discussion of the cognitive biases of the suppliers of evidence.

3. What is the role of scientists in relation to this problem?

Not all speakers described scientists as the heroes of this story:

  • Some described scientists as the good people acting heroically to change minds with facts.
  • Some described their potential to co-produce important knowledge with citizens (although primarily with like-minded citizens who learn the value of scientific evidence?).
  • Some described the scientific ego as a key barrier to action.
  • Some identified their low confidence to engage, their uncertainty about what to do with their evidence, and/ or their scientist identity which involves defending science as a cause/profession and drawing the line between providing information and advocating for policy. This hope to be an ‘honest broker’ was pervasive in last year’s conference.
  • Some (rightly) rejected the idea of separating facts/ values and science/ politics, since evidence is never context free (and gathering evidence without thought to context is amoral).

Often in such discussions it is difficult to know whether some scientists are naïve actors or sophisticated political strategists, because their public statements could be identical. For the former, an appeal to objective facts and the need to privilege science in evidence-based policymaking (EBPM) may be sincere: scientists are, and should be, separate from/ above politics. For the latter, the same appeal – made again and again – may be designed to energise scientists and maximise the role of science in politics.

Yet, energy is only the starting point, and it remains unclear how exactly scientists should communicate and how to ‘know your audience’: would many scientists know who to speak to, in governments or the Commission, if they had something profoundly important to say?

Keynotes and introductory statements from panel chairs
Vladimír Šucha: We need to understand the relationship between politics, values, and facts. Facts are not enough. To make policy effectively, we need to combine facts and values.
Tibor Navracsics: Politics is swayed more by emotions than carefully considered arguments. When making policy, we need to be open and inclusive of all stakeholders (including citizens), communicate facts clearly and at the right time, and be aware of our own biases (such as groupthink).
Sir Peter Gluckman: ‘Post-truth’ politics is not new, but it is pervasive and easier to achieve via new forms of communication. People rely on like-minded peers, religion, and anecdote as forms of evidence underpinning their own truth. When describing the value of science, to inform policy and political debate, note that it is more than facts; it is a mode of thinking about the world, and a system of verification to reduce the effect of personal and group biases on evidence production. Scientific methods help us define problems (e.g. in discussion of cause/ effect) and interpret data. Science advice involves expert interpretation, knowledge brokerage, a discussion of scientific consensus and uncertainty, and standing up for the scientific perspective.
Carlos Moedas: Safeguard trust in science by (1) explaining the process you use to come to your conclusions; (2) providing safe and reliable places for people to seek information (e.g. when they Google); and (3) making sure that science is robust and scientific bodies have integrity (such as when dealing with a small number of rogue scientists).
Pascal Lamy: 1. ‘Deep change or slow death’: we need to involve more citizens in the design of publicly financed projects such as major investments in science. Many scientists complain that there is already too much political interference, drowning scientists in extra work. However, we will face a major backlash – akin to the backlash against ‘globalisation’ – if we do not subject key debates on the future of science and technology-driven change (e.g. on AI, vaccines, drone weaponry) to democratic processes involving citizens. 2. The world changes rapidly, and evidence gathering is context-dependent, so we need to monitor regularly the fitness of our scientific measures (e.g. of trade).
Jyrki Katainen: ‘Wicked problems’ have no perfect solution, so we need the courage to choose the best imperfect solution. Technocratic policymaking is not the solution; it does not meet the democratic test. We need the language of science to be understandable to citizens: ‘a new age of reason reconciling the head and heart’.

Panel: Why should we trust science?
Jonathan Kimmelman: Some experts make outrageous and catastrophic claims. We need a toolbox to decide which experts are most reliable, by comparing their predictions with actual outcomes. Prompt them to make precise probability statements and test them. Only those who are willing to be held accountable should be involved in science advice.
Johannes Vogel: We should devote 15% of science funding to public dialogue. Scientific discourse and a science-literate population are crucial for democracy. EU Open Society Policy is a good model for stakeholder inclusiveness.
Tracey Brown: Create a more direct link between society and evidence production, to ensure discussions involve more than the ‘usual suspects’. An ‘evidence transparency framework’ helps create a space in which people can discuss facts and values. ‘Be open, speak human’ describes showing people how you make decisions. How can you expect the public to trust you if you don’t trust them enough to tell them the truth?
Francesco Campolongo: Jean-Claude Juncker’s starting point is that Commission proposals and activities should be ‘based on sound scientific evidence’. Evidence comes in many forms. For example, economic models provide simplified versions of reality to make decisions. Economic calculations inform profoundly important policy choices, so we need to make the methodology transparent, communicate probability, and be self-critical and open to change.

Panel: the politician’s perspective
Janez Potočnik: The shift of the JRC’s remit allowed it to focus on advocating science for policy rather than policy for science. Still, such arguments need to be backed by an economic argument (this policy will create growth and jobs). A narrow focus on facts and data ignores the context in which we gather facts, such as a system which undervalues human capital and the environment.
Máire Geoghegan-Quinn: Policy should be ‘solidly based on evidence’ and we need well-communicated science to change the hearts and minds of people who would otherwise rely on their beliefs. Part of the solution is to get, for example, kids to explain what science means to them.

Panel: Redesigning policymaking using behavioural and decision science
Steven Sloman: The world is complex. People overestimate their understanding of it, and this illusion is burst when they try to explain its mechanisms. People who know the least feel the strongest about issues, but if you ask them to explain the mechanisms their strength of feeling falls. Why? People confuse their knowledge with that of their community. The knowledge is not in their heads, but communicated across groups. If people around you feel they understand something, you feel like you understand, and people feel protective of the knowledge of their community. Implications? 1. Don’t rely on ‘bubbles’; generate more diverse and better coordinated communities of knowledge. 2. Don’t focus on giving people full information; focus on the information they need at the point of decision.
Stephan Lewandowsky: 97% of scientists agree that human-caused climate change is a problem, but the public thinks it’s roughly 50-50. We have a false-balance problem. One solution is to ‘inoculate’ people against its cause (science denial). We tell people the real figures and facts, warn them of the rhetorical techniques employed by science denialists (e.g. the use of false experts on smoking), and mock the false-balance argument. This allows you to reframe the problem as an investment in the future, not a cost now (and to find other ways to present facts in a non-threatening way). In our lab, it usually ‘neutralises’ misinformation, although with the risk that a ‘corrective message’ to challenge beliefs can entrench them.
Françoise Waintrop: It is difficult to experiment when public policy is handed down from on high. Or, experimentation is alien to established ways of thinking. However, our 12 new public innovation labs across France allow us to immerse ourselves in the problem (to define it well) and nudge people to action, working with their cognitive biases.
Simon Kuper: Stories combine facts and values. To change minds: persuade the people who are listening, not the sceptics; find go-betweens to link suppliers and recipients of evidence; speak in stories, not jargon; don’t overpromise the role of scientific evidence; and, never suggest science will side-line human beings (e.g. when technology costs jobs).

Panel: The way forward
Jean-Eric Paquet: We describe ‘fact based evidence’ rather than ‘science based’. A key aim is to generate ‘ownership’ of policy by citizens. Politicians are more aware of their cognitive biases than we technocrats are.
Anne Bucher: In the European Commission we used evidence initially to make the EU more accountable to the public, via systematic impact assessment and quality control. It was a key motivation for better regulation. We now focus more on generating inclusive and interactive ways to consult stakeholders.
Ann Mettler: Evidence-based policymaking is at the heart of democracy. How else can you legitimise your actions? How else can you prepare for the future? How else can you make things work better? Yet, a lot of our evidence presentation is so technical that it is difficult even for specialists to follow. The onus is on us to bring it to life, to make it clearer to the citizen and, in the process, to defend scientists (and journalists) during a period in which Western democracies seem to be at risk from anti-democratic forces.
Mariana Kotzeva: Our facts are now considered from an emotional and perception point of view. The process does not just involve our comfortable circle of experts; we are now challenged to explain our numbers. Attention to our numbers can be unpredictable (e.g. on migration). We need to build up trust in our facts, partly to anticipate or respond to the quick spread of poor facts.
Rush Holt: In society we can find the erosion of the feeling that science is relevant to ‘my life’, and few US policymakers ask ‘what does science say about this?’, partly because scientists set themselves above politics. Politicians have had too many bad experiences with scientists who might say ‘let me explain this to you in a way you can understand’. Policy is not about science-based evidence; it is more about asking a question first, then asking what evidence you need. Then you collect evidence in an open way that can be verified.


That was 10 hours of discussion condensed into one post. If you can handle more discussion from me, see:

Psychology and policymaking: Three ways to communicate more effectively with policymakers

The role of evidence in policy: EBPM and How to be heard  

Practical Lessons from Policy Theories

The generation of many perspectives to help us understand the use of evidence

How to be an ‘entrepreneur’ when presenting evidence




Leave a comment

Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling

Policy Concepts in 500 Words: Social Construction and Policy Design

Why would a democratic political system produce ‘degenerative’ policy that undermines democracy? Social Construction and Policy Design (SCPD) describes two main ways in which policymaking alienates many citizens:

1. The Social Construction of Target Populations

High profile politics and electoral competition can cause alienation:

  1. Political actors compete to tell ‘stories’ to assign praise or blame to groups of people. For example, politicians describe value judgements about who should be rewarded or punished by government. They base them on stereotypes of ‘target populations’, by (a) exploiting the ways in which many people think about groups, or (b) making emotional and superficial judgements, backed up with selective use of facts.
  2. These judgements have a ‘feed-forward’ effect: they are reproduced in policies, practices, and institutions. Such ‘policy designs’ can endure for years or decades. The distribution of rewards and sanctions is cumulative and difficult to overcome.
  3. Policy design has an impact on citizens, who participate in politics according to how they are characterised by government. Many know they will be treated badly; their engagement will be dispiriting.

Some groups have the power to challenge the way they are described by policymakers (and the media and public), and receive benefits behind the scenes despite their poor image. However, many people feel powerless, become disenchanted with politics, and do not engage in the democratic process.

The social construction of target populations (SCTP) depicts this dynamic with a 2-by-2 table in which target populations are described positively/ negatively and are more or less able to respond:

[Figure: SCPD’s 2-by-2 table of target populations]

2. Bureaucratic and expert politics

Most policy issues are not salient and politicised in this way. Yet, low salience can exacerbate problems of citizen exclusion. Policies dominated by bureaucratic interests often alienate citizens receiving services. Or a small elite dominates policymaking when there is high acceptance that (a) the best policy is ‘evidence based’, and (b) the evidence should come from experts.

Overall, SCPD describes a political system with major potential to diminish democracy, containing key actors (a) politicising issues to reward or punish populations or (b) depoliticising issues with reference to science and objectivity. In both cases, policy design is not informed by routine citizen participation.

Take home message for students: SCPD began as Schneider and Ingram’s description of the US political system’s failure to solve major problems including inequality, poverty, crime, racism, sexism, and effective universal healthcare and education. Think about how its key drivers apply elsewhere: (1) some people make and exploit quick and emotional judgements for political gain, and others refer to expertise to limit debate; (2) these judgements inform policy design; and, (3) policy design sends signals to citizens which can diminish or boost their incentive to engage in politics.

For more, see the 1000-word and 5000-word versions. The latter has a detailed guide to further reading.





Leave a comment

Filed under 500 words, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling

Evidence based policymaking: 7 key themes

7 themes of EBPM

I looked back at my blog posts on the politics of ‘evidence based policymaking’ and found that I wrote quite a lot (particularly from 2016). Here is a list based on 7 key themes.

1. Use psychological insights to influence the use of evidence

My most current concern. The basic theme is that (a) people (including policymakers) are ‘cognitive misers’ seeking ‘rational’ and ‘irrational’ shortcuts to gather information for action, so you won’t get far if you (b) bombard them with information, or (c) call them idiots.

Three ways to communicate more effectively with policymakers (shows how to use psychological insights to promote evidence in policymaking)

Using psychological insights in politics: can we do it without calling our opponents mental, hysterical, or stupid? (yes)

The Psychology of Evidence Based Policymaking: Who Will Speak For the Evidence if it Doesn’t Speak for Itself? (older paper, linking studies of psychology with studies of EBPM)

Older posts on the same theme:

Is there any hope for evidence in emotional debates and chaotic government? (yes)

We are in danger of repeating the same mistakes if we bemoan low attention to ‘facts’

These complaints about ignoring science seem biased and naïve – and too easy to dismiss

How can we close the ‘cultural’ gap between the policymakers and scientists who ‘just don’t get it’?

2. How to use policy process insights to influence the use of evidence

I try to simplify key insights about the policy process to show how to use evidence in it. One key message is to give up on the idea of an orderly policy process described by the policy cycle model. What should you do if a far more complicated process exists?

The Politics of Evidence Based Policymaking: 3 messages (3 ways to say that you should engage with the policy process that exists, not a mythical process that will never exist)

Three habits of successful policy entrepreneurs (shows how entrepreneurs are influential in politics)

Why doesn’t evidence win the day in policy and policymaking? and What does it take to turn scientific evidence into policy? Lessons for illegal drugs from tobacco and There is no blueprint for evidence-based policy, so what do you do? (3 posts describing the conditions that must be met for evidence to ‘win the day’)

Writing for Impact: what you need to know, and 5 ways to know it (explains how our knowledge of the policy process helps communicate to policymakers)

How can political actors take into account the limitations of evidence-based policy-making? 5 key points (presentation to European Parliament-European University Institute ‘Policy Roundtable’ 2016)

Evidence Based Policy Making: 5 things you need to know and do (presentation to Open Society Foundations New York 2016)

What 10 questions should we put to evidence for policy experts? (part of a series of videos produced by the European Commission)

3. How to combine principles on ‘good evidence’, ‘good governance’, and ‘good practice’

My argument here is that EBPM is about deciding at the same time what is: (1) good evidence, and (2) a good way to make and deliver policy. If you just focus on one at a time – or consider one while ignoring the other – you cannot produce a defendable way to promote evidence-informed policy delivery.

Kathryn Oliver and I have just published an article on the relationship between evidence and policy (summary of and link to our article on this very topic)

We all want ‘evidence based policy making’ but how do we do it? (presentation to the Scottish Government in 2016)

The ‘Scottish Approach to Policy Making’: Implications for Public Service Delivery

The politics of evidence-based best practice: 4 messages

The politics of implementing evidence-based policies

Policy Concepts in 1000 Words: the intersection between evidence and policy transfer

Key issues in evidence-based policymaking: comparability, control, and centralisation

The politics of evidence and randomised control trials: the symbolic importance of family nurse partnerships

What Works (in a complex policymaking system)?

How Far Should You Go to Make Sure a Policy is Delivered?

4. Face up to your need to make profound choices to pursue EBPM

These posts have arisen largely from my attendance at academic-practitioner conferences on evidence and policy. Many participants tell the same story about the primacy of scientific evidence challenged by post-truth politics and emotional policymakers. I don’t find this argument convincing or useful. So, in many posts, I challenge these participants to think about more pragmatic ways to sum up and do something effective about their predicament.

Political science improves our understanding of evidence-based policymaking, but does it produce better advice? (shows how our knowledge of policymaking clarifies dilemmas about engagement)

The role of ‘standards for evidence’ in ‘evidence informed policymaking’ (argues that a strict adherence to scientific principles may help you become a good researcher but not an effective policy influencer)

How far should you go to secure academic ‘impact’ in policymaking? From ‘honest brokers’ to ‘research purists’ and Machiavellian manipulators (you have to make profound ethical and strategic choices when seeking to maximise the use of evidence in policy)

Principles of science advice to government: key problems and feasible solutions (calling yourself an ‘honest broker’ while complaining about ‘post-truth politics’ is a cop out)

What sciences count in government science advice? (political science, obvs)

I know my audience, but does my other audience know I know my audience? (compares the often profoundly different ways in which scientists and political scientists understand and evaluate EBPM – this matters because, for example, we rarely discuss power in scientist-led debates)

Is Evidence-Based Policymaking the same as good policymaking? (no)

Idealism versus pragmatism in politics and policymaking: … evidence-based policymaking (how to decide between idealism and pragmatism when engaging in politics)

Realistic ‘realist’ reviews: why do you need them and what might they look like? (if you privilege impact you need to build policy relevance into systematic reviews)

‘Co-producing’ comparative policy research: how far should we go to secure policy impact? (describes ways to build evidence advocacy into research design)

The Politics of Evidence (review of – and link to – Justin Parkhurst’s book on the ‘good governance’ of evidence production and use)


5. For students and researchers wanting to read/ hear more

These posts are relatively theory-heavy, linking quite clearly to the academic study of public policy. Hopefully they provide a simple way into the policy literature which can, at times, be dense and jargony.

‘Evidence-based Policymaking’ and the Study of Public Policy

Policy Concepts in 1000 Words: ‘Evidence Based Policymaking’

Practical Lessons from Policy Theories (series of posts on the policy process, offering potential lessons for advocates of evidence use in policy)

Writing a policy paper and blog post 

12 things to know about studying public policy

Can you want evidence based policymaking if you don’t really know what it is? (defines each word in EBPM)

Can you separate the facts from your beliefs when making policy? (no, very no)

Policy Concepts in 1000 Words: Success and Failure (Evaluation) (using evidence to evaluate policy is inevitably political)

Policy Concepts in 1000 Words: Policy Transfer and Learning (so is learning from the experience of others)

Four obstacles to evidence based policymaking (EBPM)

What is ‘Complex Government’ and what can we do about it? (read about it)

How Can Policy Theory Have an Impact on Policy Making? (on translating policy theories into useful advice)

The role of evidence in UK policymaking after Brexit (argues that many challenges/ opportunities for evidence advocates will not change after Brexit)

Why is there more tobacco control policy than alcohol control policy in the UK? (it’s not just because there is more evidence of harm)

Evidence Based Policy Making: If You Want to Inject More Science into Policymaking You Need to Know the Science of Policymaking and The politics of evidence-based policymaking: focus on ambiguity as much as uncertainty and Revisiting the main ‘barriers’ between evidence and policy: focus on ambiguity, not uncertainty and The barriers to evidence based policymaking in environmental policy (early versions of what became the chapters of the book)

6. Using storytelling to promote evidence use

This is increasingly a big interest for me. Storytelling is key to the effective conduct and communication of scientific research. Let’s not pretend we’re objective people just stating the facts (which is the least convincing story of all). So far, so good, except to say that the evidence on the impact of stories (for policy change advocacy) is limited. The major complication is that (a) the story you want to tell and have people hear interacts with (b) the story that your audience members tell themselves.

Combine Good Evidence and Emotional Stories to Change the World

Storytelling for Policy Change: promise and problems

Is politics and policymaking about sharing evidence and facts or telling good stories? Two very silly examples from #SP16

7. The major difficulties in using evidence for policy to reduce inequalities

These posts show how policymakers think about how to combine (a) often-patchy evidence with (b) their beliefs and (c) an electoral imperative to produce policies on inequalities, prevention, and early intervention. I suggest that it’s better to understand and engage with this process than complain about policy-based-evidence from the side-lines. If you do the latter, policymakers will ignore you.

What do you do when 20% of the population causes 80% of its problems? Possibly nothing.

The theory and practice of evidence-based policy transfer: can we learn how to reduce territorial inequalities?

We need better descriptions than ‘evidence-based policy’ and ‘policy-based evidence’: the case of UK government ‘troubled families’ policy

How can you tell the difference between policy-based-evidence and evidence-based-policymaking?

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

Key issues in evidence-based policymaking: comparability, control, and centralisation

The politics of evidence and randomised control trials: the symbolic importance of family nurse partnerships

Two myths about the politics of inequality in Scotland

Social investment, prevention and early intervention: a ‘window of opportunity’ for new ideas?

A ‘decisive shift to prevention’: how do we turn an idea into evidence based policy?

Can the Scottish Government pursue ‘prevention policy’ without independence?

Note: these issues are discussed in similar ways in many countries.


All of this discussion can be found under the EBPM category:

See also the special issue on maximizing the use of evidence in policy


1 Comment

Filed under agenda setting, Evidence Based Policymaking (EBPM), Prevention policy, public policy, Storytelling, UK politics and policy

Telling Stories that Shape Public Policy

This is a guest post by Michael D. Jones and Deserai Anderson Crow, discussing how to use insights from the Narrative Policy Framework to think about how to tell effective stories to achieve policy goals. The full paper has been submitted to the Policy and Politics series called Practical Lessons from Policy Theories.

Imagine. You are an ecologist. You recently discovered that a chemical that is discharged from a local manufacturing plant is threatening a bird that locals love to watch every spring. Now, imagine that you desperately want your research to be relevant and make a difference to help save these birds. All of your training gives you depth of expertise that few others possess. Your training also gives you the ability to communicate and navigate things such as probabilities, uncertainty, and p-values with ease.

But as NPR’s Robert Krulwich argues, focusing on this very specialized training when you communicate policy problems could lead you in the wrong direction. While being true to the science and the best practices of your training, you must also be able to tell a compelling story. Perhaps combine your scientific findings with the story of the little old ladies who feed the birds in their backyards on spring mornings, emphasizing the beauty and majesty of these avian creatures, their role in the community, and how the toxic chemicals are not just a threat to the birds, but also a threat to the community’s understanding of itself and its sense of place. The latest social science shows that if you tell a good story, your policy communications are likely to be more effective.

Why focus on stories?

The world is complex. We are bombarded with information as we move through our lives and we seek patterns within that information to simplify complexity and reduce ambiguity, so that we can make sense of the world and act within it.

The primary means by which human beings render complexity understandable and reduce ambiguity is through the telling of stories. We “fit” the world around us and the myriad of objects and people therein, into story patterns. We are by nature storytelling creatures. And if it is true of us as individuals, then we can also safely assume that storytelling matters for public policy where complexity and ambiguity abound.

Based on our (hopefully) forthcoming article (which owes a heavy debt to Jones and Peterson, 2017 and Catherine Smith’s popular textbook), here we offer some abridged advice synthesizing some of the most current social science findings about how best to engage in public policy storytelling. We break it down into five easy steps and offer a short discussion of likely intervention points within the policy process.

The 5 Steps of Good Policy Narrating

  1. Tell a Story: Remember, facts never speak for themselves. If you are presenting best practices, relaying scientific information, or detailing cost/benefit analyses, you are telling or contributing to a story.  Engage your storytelling deliberately.
  2. Set the Stage: Policy narratives have a setting and in this setting you will find specific evidence, geography, legal parameters, and other policy consequential items and information.  Think of these setting items as props.  Not all stages can hold every relevant prop.  Be true to science; be true to your craft, but set your stage with props that maximize the potency of your story, which always includes making your setting amenable to your audience.
  3. Establish the Plot: In public policy, plots usually define the problem (and policies do not exist without at least a potential problem). Define your problem. Doing so determines the causes, which establishes blame.
  4. Cast the Characters:  Having established a plot and defined your problem, the roles you will need your characters to play become apparent. Determine who the victim is (who is harmed by the problem), who is responsible (the villain) and who can bring relief (the hero). Cast characters your audience will appreciate in their roles.
  5. Clearly Specify the Moral: Postmodern films might get away without having a point.  Policy narratives usually do not. Let your audience know what the solution is.

Public Policy Intervention Points

There are crucial points in the policy process where actors can use narratives to achieve their goals. We call these “intervention points” and all intervention points should be viewed as opportunities to tell a good policy story, although each will have its own constraints.

These intervention points include the most formal types of policy communication such as crafting of legislation or regulation, expert testimony or statements, and evaluation of policies. They also include less formal communications through the media and by citizens to government.

Each of these interventions can be dry and jargon-laden, but it’s important to remember that by employing effective narratives within any of them, you are much more likely to see your policy goals met.

When considering how to construct your story within one or more of the various intervention points, we urge you to first consider several aspects of your role as a narrator.

  1. Who are you and what are your goals? Are you an outsider trying to effect change to solve a problem, or push an agency to do something it might not be inclined to do? Are you an insider trying to evaluate and improve policy making and implementation? Understanding your role and your goals is essential to both selecting an appropriate intervention point and optimizing your narrative therein.
  2. Carefully consider your audience. Who are they and what is their posture towards your overall goal? Understanding your audience’s values and beliefs is essential to avoid provoking defensiveness.
  3. Consider the intervention point itself: what is the best way to reach your audience? What are the rules for the type of communication you plan to use? For example, media communications can take the form of lengthy press releases, interviews with the press, or a simple tweet. All of these methods have both formal and informal constraints that will determine what you can and can’t do.

Without deliberate consideration of your role, audience, the intervention point, and how your narrative links all of these pieces together, you are relying on chance to tell a compelling policy story.

On the other hand, thoughtful and purposeful storytelling that remains true to you, your values, your craft, and your best understanding of the facts, can allow you to be both the ecologist and the bird lover.



Filed under public policy, Storytelling

Three habits of successful policy entrepreneurs

This post is part of a series called Practical Lessons from Policy Theories, and it summarizes this paper.

Policy entrepreneurs invest their time wisely for future reward, and possess key skills that help them adapt particularly well to their environments. They are the agents of policy change who possess the knowledge, power, tenacity, and luck to exploit key opportunities. They draw on three strategies:

1. Don’t focus on bombarding policymakers with evidence.

Scientists focus on producing more evidence to reduce uncertainty, but put people off with too much information. Entrepreneurs tell a good story, grab the audience’s interest, and the audience demands information.

Table 1

2. By the time people pay attention to a problem, it’s too late to produce a solution.

So, you produce your solution then chase problems.

Table 2

3. When your environment changes, your strategy changes.

For example, at the US federal level, you’re in the sea: a surfer waiting for the big wave. At the smaller subnational level, on a low-attention and low-budget issue, you can be Poseidon, moving the ‘streams’ yourself. At the US federal level, you need to ‘soften up’ solutions over a long time to generate support. In subnational venues or other countries, you have more opportunity to import and adapt ready-made solutions.

Table 3

It all adds up to one simple piece of advice – timing and luck matter when making a policy case – but policy entrepreneurs know how to influence timing and help create their own luck.

Click for the full paper

For more on ‘multiple streams’ see:

Paul Cairney and Michael Jones (2016) ‘Kingdon’s Multiple Streams Approach: What Is the Empirical Impact of this Universal Theory?’ Policy Studies Journal, 44, 1, 37-58 PDF (Annex to Cairney Jones 2016) (special issue of PSJ)

Paul Cairney and Nikos Zahariadis (2016) ‘Multiple streams analysis: A flexible metaphor presents an opportunity to operationalize agenda setting processes’ in Zahariadis, N. (eds) Handbook of Public Policy Agenda-Setting (Cheltenham: Edward Elgar) PDF see also

I use a space launch metaphor in the paper. If you prefer different images, have a look at 5 images of the policy process. If you prefer a watery metaphor (it’s your life, I suppose), click Policy Concepts in 1000 Words: Multiple Streams Analysis.


Filed under agenda setting, Evidence Based Policymaking (EBPM), Folksy wisdom, public policy, Storytelling

Writing for Impact: what you need to know, and 5 ways to know it

This is a post for my talk at the ‘Politheor: European Policy Network’ event Write For Impact: Training In Op-Ed Writing For Policy Advocacy. There are other speakers with more experience of, and advice on, ‘op-ed’ writing. My aim is to describe key aspects of politics and policymaking to help the audience learn why they should write op-eds in a particular way for particular audiences.

A key rule in writing is to ‘know your audience’, but it’s easier said than done if you seek many sympathetic audiences in many parts of a complex policy process. Two simple rules should help make this process somewhat clearer:

  1. Learn how policymakers simplify their world, and
  2. Learn how policy environments influence their attention and choices.

We can use the same broad concepts to help explain both processes, in which many policymakers and influencers interact across many levels and types of government to produce what we call ‘policy’:

  1. Policymaker psychology: tell an evidence-informed story

Policymakers receive too much information, and seek ways to ignore most of it while making decisions. To do so, they use ‘rational’ and ‘irrational’ means: selecting a limited number of regular sources of information, and relying on emotion, gut instinct, habit, and familiarity with information. In other words, your audience combines cognition and emotion to deal with information, and they can ignore information for long periods then quickly shift their attention towards it, even if that information has not really changed.

Consequently, an op-ed focusing solely on ‘the facts’ can be relatively ineffective compared to an evidence-informed story, perhaps with a notional setting, plot, hero, and moral. Your aim shifts from providing more and more evidence to reduce uncertainty about a problem, to providing a persuasive reason to reduce ambiguity. Ambiguity refers to the fact that policymakers can understand a policy problem in many different ways – tobacco as an economic good, an issue of civil liberties, or a public health epidemic – but often pay exclusive attention to one.

So, your aim may be to influence the simple ways in which people understand the world, to influence their demand for more information. An emotional appeal can transform a factual case, but only if you know how people engage emotionally with information. Sometimes, the same story can succeed with one audience but fail with another.

  2. Institutions: learn the ‘rules of the game’

Institutions are the rules people use in policymaking, including the formal, written down, and well understood rules setting out who is responsible for certain issues, and the informal, unwritten, and unclear rules informing action. The rules used by policymakers can help define the nature of a policy problem, who is best placed to solve it, who should be consulted routinely, and who can safely be ignored. These rules can endure for long periods and become like habits, particularly if policymakers pay little attention to a problem or why they define it in a particular way.

  3. Networks and coalitions: build coalitions and establish trust

Such informal rules, about how to understand a problem and who to speak with about it, can be reinforced in networks of policymakers and influencers.

‘Policy community’ partly describes a sense that most policymaking is processed out of the public spotlight, often despite minimal high level policymaker interest. Senior policymakers delegate responsibility for policymaking to bureaucrats, who seek information and advice from groups. Groups exchange information for access to, and potential influence within, government, and policymakers have ‘standard operating procedures’ that favour particular sources of evidence and some participants over others.

‘Policy community’ also describes a sense that the network seems fairly stable, built on high levels of trust between participants, based on factors such as reliability (the participant was a good source of information, and did not complain too much in public about decisions), a common aim or shared understanding of the problem, or the sense that influencers represent important groups.

So, the same policy case can have a greater impact if told by a well trusted actor in a policy community. Or, that community member may use networks to build key coalitions behind a case, use information from the network to understand which cases will have most impact, or know which audiences to seek.

  4. Ideas: learn the ‘currency’ of policy argument

This use of networks relates partly to learning the language of policy debate in particular ‘venues’, to learn what makes a convincing case. This language partly reflects a well-established ‘world view’ or the ‘core beliefs’ shared by participants. For example, a very specific ‘evidence-based’ language is used frequently in public health, while treasury departments look for some recognition of ‘value for money’ (according to a particular understanding of how you determine VFM). So, knowing your audience is knowing the terms of debate that are often so central to their worldview that they take them for granted and, in contrast, the forms of argument that are more difficult to pursue because they are challenging or unfamiliar to some audiences. Imagine a case that challenges completely someone’s world view, or one which is entirely consistent with it.

  5. Socioeconomic factors and events: influence how policymakers see the outside world

Some worldviews can be shattered by external events or crises, but this is a rare occurrence. It may be possible to generate a sense of crisis with reference to socioeconomic changes or events, but people will interpret these developments through the ‘lens’ of their own beliefs. In some cases, events seem impossible to ignore, but we may not agree on their implications for action. In others, an external event only matters if policymakers pay attention to it. Indeed, we began this discussion with the insight that policymakers have to ignore almost all of the information available to them.

Know your audience revisited: practical lessons from policy theories

To take into account all of these factors, while trying to make a very short and persuasive case, may seem impossible. Instead, we might pick up some basic rules of thumb from particular theories or approaches. We can discuss a few examples from ongoing work on ‘practical lessons from policy theories’.

Storytelling for policy impact

If you are telling a story with a setting, plot, hero, and moral, it may be more effective to focus on a hero than a villain. More importantly, imagine two contrasting audiences: one is moved by a personal story told to highlight some structural barriers to the wellbeing of key populations; the other is unmoved, judges that person harshly, and thinks they would have done better in their shoes (perhaps they prefer to build policy on stereotypes of target populations). ‘Knowing your audience’ may involve some trial and error to determine which stories work under which circumstances.

Appealing to coalitions

Or, you may decide that it is impossible to write anything to appeal to all relevant audiences. Instead, you might tailor it to one, to reinforce its beliefs and encourage people to act. The ‘advocacy coalition framework’ describes such activities as routine: people go into politics to translate their beliefs into policy, they interpret the world through those beliefs, and they romanticise their own cause while demonising their opponents. If so, would a bland op-ed have much effect on any audience?

Learning from entrepreneurs

Policy entrepreneurs draw on three rules, two of which seem counterintuitive:

  1. Don’t focus on bombarding policymakers with evidence. Scientists focus on producing more evidence to reduce uncertainty, but put people off with too much information. Entrepreneurs tell a good story, grab the audience’s interest, and the audience demands information.
  2. By the time people pay attention to a problem, it’s too late to produce a solution. So, you produce your solution then chase problems.
  3. When your environment changes, your strategy changes. For example, at the US federal level, you’re in the sea: a surfer waiting for the big wave. At the smaller subnational level, on a low-attention and low-budget issue, you can be Poseidon, moving the ‘streams’ yourself. At the US federal level, you need to ‘soften up’ solutions over a long time to generate support. In subnational venues or other countries, you have more opportunity to import and adapt ready-made solutions.

It all adds up to one simple piece of advice – timing and luck matter when making a policy case – but policy entrepreneurs know how to influence timing and help create their own luck.

On the day, we can use such concepts to think through the factors you might consider while writing op-eds, even though it is very unlikely that you would mention them in your written work.


Filed under agenda setting, Evidence Based Policymaking (EBPM), public policy, Storytelling

I know my audience, but does my other audience know I know my audience?

‘Know your audience’ is a key phrase for anyone trying to convey a message successfully. To ‘know your audience’ is to understand the rules they use to make sense of your message, and therefore the adjustments you have to make to produce an effective message. Simple examples include:

  • The sarcasm rules. The first rule is fairly explicit. If you want to insult someone’s shirt, you (a) say ‘nice shirt, pal’, but also (b) use facial expressions or unusual speech patterns to signal that you mean the opposite of what you are saying. Otherwise, you’ve inadvertently paid someone a compliment, which is just not on. The second rule is implicit. Sarcasm is sometimes OK – as a joke or as some nice passive aggression – and a direct insult (‘that shirt is shite, pal’) as a joke is harder to pull off.
  • The joke rule. If you say that you went to the doctor because a strawberry was growing out of your arse and the doctor gave you some cream for it, you’d expect your audience to know you were joking because it’s such a ridiculous scenario and there’s a pun. Still, there’s a chance that, if you say it quickly, with a straight face, your audience is not expecting a joke, and/ or your audience’s first language is not English, your audience will take you seriously, if only for a second. It’s hilarious if your audience goes along with you, and a bit awkward if your audience asks kindly about your welfare.
  • Keep it simple stupid. If someone says KISS, or some modern equivalent – ‘it’s the economy, stupid’, the rule is that, generally, they are not calling you stupid (even though the insertion of the comma, in modern phrases, makes it look like they are). They are referring to the value of a simple design or explanation that as many people as possible can understand. If your audience doesn’t know the phrase, they may think you’re calling them stupid, stupid.

These rules can be analysed from various perspectives: linguistics, focusing on how and why rules of language develop; and philosophy, to help articulate how and why rules matter in sense making.

There is also a key role for psychological insights, since – for example – a lot of these rules relate to the routine ways in which people engage emotionally with the ‘signals’ or information they receive.

Think of the simple example of twitter engagement, in which people with emotional attachments to one position over another (say, pro- or anti- Brexit), respond instantly to a message (say, pro- or anti- Brexit). While some really let themselves down when they reply with their own tweet, and others don’t say a word, neither audience is immune from that emotional engagement with information. So, to ‘know your audience’ is to anticipate and adapt to the ways in which they will inevitably engage ‘rationally’ and ‘irrationally’ with your message.

I say this partly because I’ve been messing around with some simple ‘heuristics’ built on insights from psychology, including Psychology Based Policy Studies: 5 heuristics to maximise the use of evidence in policymaking.

Two audiences in the study of ‘evidence based policymaking’

I also say it because I’ve started to notice a big unintended consequence of knowing my audience: my one audience doesn’t like the message I’m giving the other. It’s a bit like gossip: maybe you only get away with it if only one audience is listening. If they are both listening, one audience seems to appreciate some new insights, while the other wonders if I’ve ever read a political science book.

The problem here is that two audiences have different rules to understand the messages that I help send. Let’s call them ‘science’ and ‘political science’ (please humour me – you’ve come this far). Then, let’s make some heroic binary distinctions in the rules each audience would use to interpret similar issues in a very different way.

I could go on with these provocative distinctions, but you get the idea. A belief taken for granted in one field will be treated as controversial in another. In one day, you can go to one workshop and hear the story of objective evidence, post-truth politics, and irrational politicians with low political will to select evidence-based policies, then go to another workshop and hear the story of subjective knowledge claims.

Or, I can give the same presentation and get two very different reactions. If these are the expectations of each audience, they will interpret and respond to my messages in very different ways.

So, imagine I use some psychology insights to appeal to the ‘science’ audience. I know that, to keep it onside and receptive to my ideas, I should begin by being sympathetic to its aims. So, my implicit story is along the lines of, ‘if you believe in the primacy of science and seek evidence-based policy, here is what you need to do: adapt to irrational policymaking and find out where the action is in a complex policymaking system’. Then, if I’m feeling energetic and provocative, I’ll slip in some discussion about knowledge claims by saying something like, ‘politicians (and, by the way, some other scholars) don’t share your views on the hierarchy of evidence’, or inviting my audience to reflect on how far they’d go to override the beliefs of other people (such as the local communities or service users most affected by the evidence-based policies that seem most effective).

The problem with this story is that key parts are implicit and, by appearing to go along with my audience, I provoke a reaction in another audience: don’t you know that many people have valid knowledge claims? Politics is about values and power, don’t you know?

So, that’s where I am right now. I feel like I ‘know my audience’ but I am struggling to explain to my original political science audience that I need to describe its insights in a very particular way to have any traction in my other science audience. ‘Know your audience’ can only take you so far unless your other audience knows that you are engaged in knowing your audience.

If you want to know more, see:

Kathryn Oliver and I have just published an article on the relationship between evidence and policy

How far should you go to secure academic ‘impact’ in policymaking? From ‘honest brokers’ to ‘research purists’ and Machiavellian manipulators

Why doesn’t evidence win the day in policy and policymaking?

The Science of Evidence-based Policymaking: How to Be Heard

When presenting evidence to policymakers, engage with the policy process that exists, not the process you wish existed




Filed under Academic innovation or navel gazing, agenda setting, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling