
Are politics and policymaking about sharing evidence and facts, or about telling good stories? Two very silly examples from #SP16

Sometimes, in politics, people know and agree about basic facts. This agreement provides the basis on which they can articulate their values, debate policy choices, and sell their choices to a reasonably well informed public. There are winners and losers from the choices, but at least the outcome rests on a process in which facts and evidence play a major part.

Sometimes, people don’t seem to agree on anything. The extent to which they disagree seems wacky (as in the devil shift). So, there is no factual basis for a debate. Instead, people tell stories to each other and the debate hinges on the extent to which (a) someone tells a persuasive story, and (b) you already agree with its ‘moral’ and/ or the person telling you the story.*

Silly example one: the Scottish rate of income tax (SRIT)

The SRIT is a great example because it shows you that people can’t even agree on how to describe the arithmetic underpinning policy choices. My favourite example is here, on how to describe % increases on percentages:

[image: Twitter exchange between Calum Cashley and Blair M on how to describe the SRIT arithmetic]
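The disagreement is easy to reproduce, because the same '1p rise' can be described honestly in at least three ways. The sketch below uses illustrative figures only (the 2016-17 SRIT of 10p stacked on UK rates reduced by 10p, giving a 20p basic rate), not the numbers from the exchange above:

```python
# Illustrative figures, not taken from the Twitter exchange above.
# In 2016-17 the SRIT was set at 10p, stacked on UK rates reduced
# by 10p, so the basic rate actually paid was 20p in the pound.
srit_old, srit_new = 10, 11      # a '1p rise' in the SRIT
basic_old, basic_new = 20, 21    # the total basic rate paid

# Three honest descriptions of the same change:
point_change = srit_new - srit_old                           # 1 percentage point
pct_change_srit = 100 * (srit_new - srit_old) / srit_old     # a 10% rise in the SRIT
pct_change_rate = 100 * (basic_new - basic_old) / basic_old  # a 5% rise in the rate paid

print(point_change, pct_change_srit, pct_change_rate)  # 1 10.0 5.0
```

So '1p', '10%' and '5%' can all be defensible descriptions of one proposal, which is why the arithmetic debate never resolves itself.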

This problem amplifies the more important problem: income tax is toxic, few politicians want to touch it, and they would rather show you the dire effects of other people using it. Currently, the best way to do this is to worry about the effect of any tax rise on the pay of nurses (almost always the heroes of the NHS and the most-uncalled-for victims of policy change). So, if you combine the arithmetic debate with the focus on nurses, you get this:

[image: Twitter exchange on the effect of an SRIT rise on nurses' pay]

What you make of it will, I think, depend largely on who you trust, such as Calum C for the SNP/ Yes versus Blair M for Labour/ No. Then if you want to read more you can, for example, choose to read some Scottish Labour-friendly analysis of its plans to increase the SRIT by 1p while compensating lower earners (a, b), see it as a disaster not criticised enough by the BBC, or take your pick of two stories on the extent to which it did a ‘U-turn’.

This is before we even get to the big debate! What could have been a values-driven discussion about the benefits and costs of raising income tax to fund services, or about who should win and lose from taxation changes, has generally turned into a pedantic and (deliberately?) confusing debate about the meaning of ‘progressive’ taxation (David Eiser describes a rise in SRIT as ‘slightly progressive’), the likely income from each 1p change in taxation, and the unintended consequences of greater higher-rate taxation in Scotland.

So, your choice is to (a) do a lot of reading and critical analysis to get your head around the SRIT, or (b) decide who to trust to tell you what’s what.

Silly example two: who should you give your ‘second vote’ to?

The SNP will gain a majority in the Scottish Parliament despite an electoral system ('mixed-member proportional') designed to be far more proportional than the plurality system of Westminster: 56 seats allocated from regional lists, using the d'Hondt divisor, offset some of the disproportionality of the 73 constituency seats decided by plurality vote. Yet the regional seats only make the result more proportional; they do not guarantee proportionality. The SNP's 50% share of the vote secured 56 of 59 MPs (95%) at the 2015 UK general election. If, as seems likely from the polls, it can maintain that level of support in constituency votes, it might already secure a majority before the regional votes are counted.

So, if the SNP wins almost all of the constituency seats, the competition for votes has taken on an unusual dimension: all the other parties will be getting all or most of their seats from the regional vote.

This situation has prompted some debate about the extent to which SNP-voters should (a) vote SNP twice (#bothvotessnp) to secure a very small number of extra seats in the regions where they don’t win all constituency contests, or (b) give their ‘second’/regional vote to a Yes-supporting smaller party like the Scottish Greens or RISE.
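The arithmetic behind this dilemma is the d'Hondt rule itself: when each region's list seats are allocated, a party's list vote is divided by one plus the seats it already holds (including constituency seats), so a party that sweeps the constituencies starts with a heavily discounted quotient. A minimal sketch, with hypothetical vote totals and region size rather than real polling:

```python
def dhondt(list_votes, constituency_seats, total_list_seats):
    """Allocate regional list seats by the d'Hondt divisor,
    counting constituency seats already won (as in the Scottish
    Parliament's additional member system)."""
    seats = dict(constituency_seats)
    for _ in range(total_list_seats):
        # Each party's quotient: list votes / (1 + seats already held).
        winner = max(list_votes, key=lambda p: list_votes[p] / (1 + seats.get(p, 0)))
        seats[winner] = seats.get(winner, 0) + 1
    return seats

# Hypothetical region: Party A sweeps all 9 constituency seats, so its
# large list vote is divided by 10 before any list seat is allocated.
votes = {"A": 120_000, "B": 64_000, "C": 30_000, "D": 25_000}
won = {"A": 9, "B": 0, "C": 0, "D": 0}
print(dhondt(votes, won, total_list_seats=7))
# → {'A': 9, 'B': 4, 'C': 2, 'D': 1}
```

Here Party A wins no list seats at all despite polling the most list votes, which is exactly why the 'wasted second vote' argument gets so heated.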

Here comes the silly bit. When John Curtice sort-of-seemed-not-really to suggest that people should choose option b (see original report by the ERS, described in The Herald) you’d think that he’d put a bag of shit on the SNP’s doorstep and the ERS had set fire to it and rung the doorbell.

So, unless you are willing to read about the kind of sophisticated calculations discussed in the ERS report, your next choice is to listen to a story about (a) people out to get the heroic SNP by duping voters into increasing the chances of more Union-loving MSPs (e.g. Labour or Conservative) getting in through the back door, or (b) those plucky heroes, such as the Greens or RISE, standing up to the villainous SNP.

In both cases, it is inevitable that many people will base their decisions on such stories, which is why they look so silly but matter so much.


*For the most part, the cause is the ‘complexity’ of the world and our need to adapt to it by ignoring most of it. To do so, we (just like policymakers) use major cognitive short cuts – including our emotional, gut, and habitual responses – to turn too-much information into a manageable amount. This process helps make us susceptible to ‘framing’ when people present that information to us in a particular way.


Filed under Evidence Based Policymaking (EBPM), Scottish politics

Policy Concepts in 1000 Words: the intersection between evidence and policy transfer

(podcast download)

We can generate new insights on policymaking by connecting the dots between many separate concepts. However, don't underestimate the obstacles, or how hard these dot-connecting exercises are to follow. They may seem clear in your head, but describing them (and getting people to go along with your description) is another matter. You need to set out the links clearly, in a series of logical steps. I give one example – the links between evidence and policy transfer – which I have been struggling with for some time.

In this post, I combine three concepts – policy transfer, bounded rationality, and ‘evidence-based policymaking’ – to identify the major dilemmas faced by central government policymakers when they use evidence to identify a successful policy solution and consider how to import it and ‘scale it up’ within their jurisdiction. For example, do they use randomised control trials (RCTs) to establish the effectiveness of interventions and require uniform national delivery (to ensure the correct ‘dosage’), or tell stories of good practice and invite people to learn and adapt to local circumstances? I use these examples to demonstrate that our judgement of good evidence influences our judgement on the mode of policy transfer.

Insights from each concept

From studies of policy transfer, we know that central governments (a) import policies from other countries and/ or (b) encourage the spread (‘diffusion’) of successful policies which originated in regions within their country: but how do they use evidence to identify success and decide how to deliver programs?

From studies of ‘evidence-based policymaking’ (EBPM), we know that providers of scientific evidence identify an ‘evidence-policy gap’ in which policymakers ignore the evidence of a problem and/ or do not select the best evidence-based solution: but can policymakers simply identify the ‘best’ evidence and ‘roll-out’ the ‘best’ evidence-based solutions?

From studies of bounded rationality and the policy cycle (compared with alternative theories, such as multiple streams analysis or the advocacy coalition framework), we know that it is unrealistic to think that a policymaker at the heart of government can simply identify then select a perfect solution, click their fingers, and see it carried out. This limitation is more pronounced when we identify multi-level governance, or the diffusion of policymaking power across many levels and types of government. Even if they were not limited by bounded rationality, they would still face: (a) practical limits to their control of the policy process, and (b) a normative dilemma about how far you should seek to control subnational policymaking to ensure the delivery of policy solutions.

The evidence-based policy transfer dilemma

If we combine these insights we can identify a major policy transfer dilemma for central government policymakers:

  1. If subject to bounded rationality, they need to use short cuts to identify (what they perceive to be) the best sources of evidence on the policy problem and its solution.
  2. At the same time, they need to determine if there is convincing evidence of success elsewhere, to allow them to: (a) import policy from another country, and/ or (b) ‘scale up’ a solution that seems to be successful in one of its regions.
  3. Then they need to decide how to ‘spread success’, either by (a) ensuring that the best policy is adopted by all regions within its jurisdiction, or (b) accepting that their role in policy transfer is limited: they identify ‘best practice’ and merely encourage subnational governments to adopt particular policies.

Note how closely connected these concerns are: our judgement of the 'best evidence' can produce a judgement on how to 'scale up' success.

Here are three ideal-type approaches to using evidence to transfer or ‘scale up’ successful interventions. In at least two cases, the choice of ‘best evidence’ seems linked inextricably to the choice of transfer strategy:

[table: three ideal-type approaches to EBPM and policy transfer]

With approach 1, you gather evidence of effectiveness with reference to a hierarchy of evidence, with systematic reviews and RCTs at the top (see pages 4, 15, 33). This has a knock-on effect for ‘scaling up’: you introduce the same model in each area, requiring ‘fidelity’ to the model to ensure you administer the correct ‘dosage’ and measure its effectiveness with RCTs.

With approach 2, you reject this hierarchy and place greater value on practitioner and service user testimony. You do not necessarily ‘scale up’. Instead, you identify good practice (or good governance principles) by telling stories based on your experience and inviting other people to learn from them.

With approach 3, you gather evidence of effectiveness based on a mix of evidence. You seek to ‘scale up’ best practice through local experimentation and continuous data gathering (by practitioners trained in ‘improvement methods’).

The comparisons between approaches 1 and 2 (in particular) show us the strong link between a judgement on evidence and transfer. Approach 1 requires particular methods to gather evidence and high policy uniformity when you transfer solutions, while approach 2 places more faith in the knowledge and judgement of practitioners.

Therefore, our choice of what counts as EBPM can determine our policy transfer strategy. Or, a different transfer strategy may – if you adhere to an evidential hierarchy – preclude EBPM.

Further reading

I describe these issues, with concrete examples of each approach here, and in far more depth here:

Evidence-based best practice is more political than it looks: ‘National governments use evidence selectively to argue that a successful policy intervention in one local area should be emulated in others (‘evidence-based best practice’). However, the value of such evidence is always limited because there is: disagreement on the best way to gather evidence of policy success, uncertainty regarding the extent to which we can draw general conclusions from specific evidence, and local policymaker opposition to interventions not developed in local areas. How do governments respond to this dilemma? This article identifies the Scottish Government response: it supports three potentially contradictory ways to gather evidence and encourage emulation’.

Both articles relate to 'prevention policy' and the examples (so far) are from my research in Scotland, but in a future paper I'll try to convince you that the issues are 'universal'.




Filed under 1000 words, Evidence Based Policymaking (EBPM), Prevention policy, public policy