Category Archives: Evidence Based Policymaking (EBPM)

What 10 questions should we put to evidence for policy experts?

The European Commission’s Joint Research Centre’s Science Hub is making some videos about evidence and policy, asking 10 questions. Here are my answers (the video will come later):

  1. Who are you?

Paul Cairney, Professor of Politics and Public Policy, University of Stirling. I write about public policy, applying theoretical insight to issues such as ‘the politics of EBPM’.

  2. How did you become interested in evidence for policy?

It was always in the back of my mind because it is the latest version of a long-standing interest (in policy studies) about the absence of ‘comprehensive rationality’: what do policymakers do when they can’t consider all information, and what are the consequences for politics and policy? Do they use ‘irrational’ shortcuts? Does their attention tend to lurch? Does policy become incremental or ‘punctuated’? There are many different answers, explored in this ‘1000 Words series’.

  3. Why is evidence-informed policy important?

It’s part of the broader importance of inclusive policymaking based on a diversity of voices and the generation of knowledge about how the world works (alongside a debate about how it should work).

  4. What is the most common misconception about evidence-informed policy?

I think that many scientists are too quick to dismiss politics – and identify ‘policy based evidence’ driven by ideological and emotional politicians – rather than understand the ever-present limits to the use of evidence in policy. I think many also exaggerate the lack of scientific influence on policy by focusing on the most salient issues.

  5. What are the most common mistakes made by researchers or policymakers?

The classic mistake by researchers is to think that you make a good argument by bombarding people with a lot of information without thinking about how they’ll receive it. An important mistake that policymakers can make is to rely too much on the experts they know and trust, rather than seeking ways to identify diverse and ‘state of the art’ sources of information.

  6. What is the single most important advice to researchers/scientists who want to have policy impact?

Think about your audience and how they demand information: get their attention with a simple story, describe the problem in ways they understand (and that reflect how they think about the world), and show that your solution is technically and politically feasible.

  7. How do you change minds with facts and evidence?

Engage for the long term, recognising your ‘enlightenment’ role. Something dramatic would have to happen to change minds immediately and dramatically – it would be akin to a religious conversion. Or, in politics, it’s about finding a sympathetic audience (different minds) in another policymaking venue or hoping for a change of government. In other words, this is about the power of participants as much as the power of evidence and ideas.

  8. How should you communicate uncertainty about the evidence?

Since I study politics, I’d focus on the political choices here. You can communicate uncertainty in academic journals via ‘limitations’ sections and expect robust challenge on your evidence from your peers. In politics, if you show uncertainty – and your competitor does not – you may be at a disadvantage, and may need to do some soul searching about how much uncertainty you hold back. As soon as you become a scientist and advocate, the rules change.

  9. How do you measure the policy impact of evidence?

In ways that are not conducive to ‘impact’ measurement by research bodies! For example, with colleagues, I tracked how the evidence on the harms of smoking influenced policy. In ‘leading countries’ it took 2-3 decades, and depended on three conditions: (1) key actors ‘frame’ the evidence to set a policy agenda; (2) the policy environment is generally conducive to evidence-informed change; and (3) key actors exploit ‘windows of opportunity’ for each policy change. In most countries, policy change of this scale has not happened. In such cases, we can never say that evidence simply wins the day.

  10. Who or what are your “must-reads”?

I began to take more notice of this topic partly after reading two articles by Kathryn Oliver and colleagues:

Oliver, K., Innvær, S., Lorenc, T., Woodman, J. and Thomas, J. (2014a) ‘A systematic review of barriers to and facilitators of the use of evidence by policymakers’, BMC Health Services Research, 14, 1, 2. http://www.biomedcentral.com/1472-6963/14/2

Oliver, K., Lorenc, T. and Innvær, S. (2014b) ‘New directions in evidence-based policy research: a critical analysis of the literature’, Health Research Policy and Systems, 12, 34. http://www.biomedcentral.com/content/pdf/1478-4505-12-34.pdf

I was struck by the argument here, that policymakers often fund sophisticated models for evidence-based policymaking but don’t understand or use them:

Nilsson, M., Jordan, A., Turnpenny, J., Hertin, J., Nykvist, B. and Russel, D. (2008) ‘The use and non-use of policy appraisal tools in public policy making: an analysis of three European countries and the European Union’, Policy Sciences, 41, 4, 335-55

It’s also worth reading this account, which shows that policymakers don’t have the same respect for a ‘hierarchy’ of evidence/ methods as many scientists:

Bédard, P. and Ouimet, M. (2012) ‘Cognizance and consultation of randomized controlled trials among ministerial policy analysts’, Review of Policy Research, 29, 5, 625-644

 

 

For more information, start with my EBPM page


Filed under Evidence Based Policymaking (EBPM), Uncategorized

Why doesn’t evidence win the day in policy and policymaking?

Politics has a profound influence on the use of evidence in policy, but we need to look ‘beyond the headlines’ for a sense of perspective on its impact.

It is tempting for scientists to identify the pathological effect of politics on policymaking, particularly after high profile events such as the ‘Brexit’ vote in the UK and the election of Donald Trump as US President. We have allegedly entered an era of ‘post-truth politics’ in which ideology and emotion trump evidence and expertise (a story told many times at events like this), particularly when issues are salient.

Yet, most policy is processed out of this public spotlight, because the flip side of high attention to one issue is minimal attention to most others. Science has a crucial role in the more humdrum, day-to-day business of policymaking, which is far more important than it is visible. Indeed, this lack of public visibility can help many actors secure a privileged position in the policy process (and further exclude citizens).

In some cases, experts are consulted routinely. There is often a ‘logic’ of consultation with the ‘usual suspects’, including the actors most able to provide evidence-informed advice. In other cases, scientific evidence is so taken for granted that it is part of the language in which policymakers identify problems and solutions.

In that context, we need better explanations of an ‘evidence-policy’ gap than the pathologies of politics and egregious biases of politicians.

To understand this process, and this apparent contradiction between excluded and privileged experts, consider the role of evidence in politics and policymaking from three different perspectives.

The perspective of scientists involved primarily in the supply of evidence

Scientists produce high quality evidence, only for politicians to ignore it or, even worse, distort its message to support their ideologically-driven policies. If they expect ‘evidence-based policymaking’ they soon become disenchanted and conclude that ‘policy-based evidence’ is more likely. This perspective has long been expressed in scientific journals and commentaries, but has taken on new significance following ‘Brexit’ and Trump.

The perspective of elected politicians

Elected politicians are involved primarily in managing government and maximising public and organisational support for policies. So, scientific evidence is one piece of a large puzzle. They may begin with a manifesto for government and, if elected, feel an obligation to carry it out. Evidence may play a part in that process but the search for evidence on policy solutions is not necessarily prompted by evidence of policy problems.

Further, ‘evidence based policy’ is one of many governance principles that politicians feel the need to juggle. For example, in Westminster systems, ministers may try to delegate policymaking to foster ‘localism’ and/or pragmatic policymaking, but also intervene to appear to be in control of policy, to foster a sense of accountability built on an electoral imperative. The likely mix of delegation and intervention seems almost impossible to predict, and this dynamic has a knock-on effect for evidence-informed policy. In some cases, central governments roll out the same basic policy intervention and limit local discretion; in others, they identify broad outcomes and invite other bodies to gather evidence on how best to meet them. These differences in approach can have profound consequences for the models of evidence-informed policy available to us (see the example of Scottish policymaking).

Political science and policy studies provide a third perspective

Policy theories help us identify the relationship between evidence and policy by showing that a modern focus on ‘evidence-based policymaking’ (EBPM) is one of many versions of the same fairy tale – about ‘rational’ policymaking – that have developed in the post-war period. We talk about ‘bounded rationality’ to identify key ways in which policymakers or organisations could not achieve ‘comprehensive rationality’:

  1. They cannot separate values and facts.
  2. They have multiple, often unclear, objectives which are difficult to rank in any meaningful way.
  3. They have to use major shortcuts to gather a limited amount of information in a limited time.
  4. They can’t make policy from the ‘top down’ in a cycle of ordered and linear stages.

Limits to ‘rational’ policymaking: two shortcuts to make decisions

We can sum up the first three bullet points with one statement: policymakers have to try to evaluate and solve many problems without the ability to understand what they are, how they feel about them as a whole, and what effect their actions will have.

To do so, they use two shortcuts: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, habits, and the familiar to make decisions quickly.

Consequently, the focus of policy theories is on the links between evidence, persuasion, and framing issues to produce or reinforce a dominant way to define policy problems. Successful actors combine evidence and emotional appeals or simple stories to capture policymaker attention, and/ or help policymakers interpret information through the lens of their strongly-held beliefs.

Scientific evidence plays its part, but scientists often make the mistake of trying to bombard policymakers with evidence when they should be trying to (a) understand how policymakers understand problems, so that they can anticipate their demand for evidence, and (b) frame their evidence according to the cognitive biases of their audience.

Policymaking in ‘complex systems’ or multi-level policymaking environments

Policymaking takes place in a less ordered, less hierarchical, and less predictable environment than suggested by the image of the policy cycle. Such environments are made up of:

  1. a wide range of actors (individuals and organisations) influencing policy at many levels of government
  2. a proliferation of rules and norms followed by different levels or types of government
  3. close relationships (‘networks’) between policymakers and powerful actors
  4. a tendency for certain beliefs or ‘paradigms’ to dominate discussion
  5. shifting policy conditions and events that can prompt policymaker attention to lurch at short notice.

These five properties – plus a ‘model of the individual’ built on a discussion of ‘bounded rationality’ – make up the building blocks of policy theories (many of which I summarise in 1000 Word posts). I say this partly to aid interdisciplinary conversation: of course, each theory has its own literature and jargon, and it is difficult to compare and combine their insights, but if you are trained in a different discipline it’s unfair to ask you to devote years of your life to studying policy theory to end up at this point.

To show that policy theories have a lot to offer, I have been trying to distil their collective insights into a handy guide – using this same basic format – that you can apply to a variety of different situations, from explaining painfully slow policy change in some areas but dramatic change in others, to highlighting ways in which you can respond effectively.

We can use this approach to help answer many kinds of questions. With my Southampton gig in mind, let’s use some examples from public health and prevention.

Why doesn’t evidence win the day in tobacco policy?

My colleagues and I try to explain why it takes so long for the evidence on smoking and health to have a proportionate impact on policy. Usually, at the back of my mind, is a public health professional audience trying to work out why policymakers don’t act quickly or effectively enough when presented with unequivocal scientific evidence. More recently, they wonder why there is such uneven implementation of a global agreement – the WHO Framework Convention on Tobacco Control – that almost every country in the world has signed.

We identify three conditions under which evidence will ‘win the day’:

  1. Actors are able to use scientific evidence to persuade policymakers to pay attention to, and shift their understanding of, policy problems. In leading countries, it took decades to command attention to the health effects of smoking, reframe tobacco primarily as a public health epidemic (not an economic good), and generate support for the most effective evidence-based solutions.
  2. The policy environment becomes conducive to policy change. A new and dominant frame helps give health departments (often in multiple venues) a greater role; health departments foster networks with public health and medical groups at the expense of the tobacco industry; and they emphasise the socioeconomic conditions supportive of tobacco control – falling smoking prevalence, declining opposition to tobacco control, and diminishing economic benefits of tobacco.
  3. Actors exploit ‘windows of opportunity’ successfully. A supportive frame and policy environment maximises the chances of high attention to a public health epidemic and provides the motive and opportunity for policymakers to select relatively restrictive policy instruments.

So, scientific evidence is a necessary but insufficient condition for major policy change. Key actors do not simply respond to new evidence: they use it as a resource to further their aims, to frame policy problems in ways that will generate policymaker attention, and underpin technically and politically feasible solutions that policymakers will have the motive and opportunity to select. This remains true even when the evidence seems unequivocal and when countries have signed up to an international agreement which commits them to major policy change. Such commitments can only be fulfilled over the long term, when actors help change the policy environment in which these decisions are made and implemented. So far, this change has not occurred in most countries (or, in other aspects of public health in the UK, such as alcohol policy).

Why doesn’t evidence win the day in prevention and early intervention policy?

UK and devolved governments draw on health and economic evidence to make a strong and highly visible commitment to preventive policymaking, in which the aim is to intervene earlier in people’s lives to improve wellbeing and reduce socioeconomic inequalities and/ or public sector costs. This agenda has existed in one form or another for decades without the same signs of progress we now associate with areas like tobacco control. Indeed, the comparison is instructive, since prevention policy rarely meets the three conditions outlined above:

  1. Prevention is a highly ambiguous term and many actors make sense of it in many different ways. There is no equivalent to a major shift in problem definition for prevention policy as a whole, and little agreement on how to determine the most effective or cost-effective solutions.
  2. A supportive policy environment is far harder to identify. Prevention policy cross-cuts many policymaking venues at many levels of government, with little evidence of ‘ownership’ by key venues. Consequently, there are many overlapping rules on how and from whom to seek evidence. Networks are diffuse and hard to manage. There is no dominant way of thinking across government (although the Treasury’s ‘value for money’ focus is key currency across departments). There are many socioeconomic indicators of policy problems but little agreement on how to measure them or which measures to privilege (particularly when predicting future outcomes).
  3. The ‘window of opportunity’ was to adopt a vague solution to an ambiguous policy problem, providing a limited sense of policy direction. There have been several ‘windows’ for more specific initiatives, but their links to an overarching policy agenda are unclear.

These limitations help explain slow progress in key areas. The absence of an unequivocal frame, backed strongly by key actors, leaves policy change vulnerable to successful opposition, especially in areas where early intervention has major implications for redistribution (taking from existing services to invest in others) and personal freedom (encouraging or obliging behavioural change). The vagueness and long-term nature of policy aims – to solve problems that often seem intractable – make them uncompetitive, and they are often undermined by more specific short-term aims with a measurable pay-off (as when, for example, funding for public health loses out to funding to shore up hospital management). It is too easy to reframe existing policy solutions as preventive if the definition of prevention remains slippery, and too difficult to demonstrate the population-wide success of measures generally applied to high-risk groups.

What happens when attitudes to two key principles – evidence based policy and localism – play out at the same time?

A lot of discussion of the politics of EBPM assumes that there is something akin to a scientific consensus on which policymakers do not act proportionately. Yet, in many areas – such as social policy and social work – there is great disagreement on how to generate and evaluate the best evidence. Broadly speaking, a hierarchy of evidence built on ‘evidence based medicine’ – which has randomised control trials and their systematic review at the top, and practitioner knowledge and service user feedback at the bottom – may be completely subverted by other academics and practitioners. This disagreement helps produce a spectrum of ways in which we might roll-out evidence based interventions, from an RCT-driven roll-out of the same basic intervention to a storytelling driven pursuit of tailored responses built primarily on governance principles (such as to co-produce policy with users).

At the same time, governments may be wrestling with their own governance principles, including EBPM but also regarding the most appropriate balance between centralism and localism.

If you put both concerns together, you have a variety of possible outcomes (and a temptation to ‘let a thousand flowers bloom’) and a set of competing options (outlined in table 1), all under the banner of ‘evidence based’ policymaking.

Table 1 Three ideal types EBBP

What happens when a small amount of evidence goes a very long way?

So, even if you imagine a perfectly sincere policymaker committed to EBPM, you’d still not be quite sure what they took it to mean in practice. If you assume this commitment is a bit less sincere, and you add in the need to act quickly to use the available evidence and satisfy your electoral audience, you get all sorts of responses based in some part on a reference to evidence.

One fascinating case is the UK Government’s ‘troubled families’ programme, which combined bits and pieces of evidence with ideology and a Westminster-style accountability imperative, to produce:

  • The argument that the London riots were caused by family breakdown and bad parenting.
  • The use of proxy measures to identify the most troubled families
  • The use of superficial performance management to justify notionally extra expenditure for local authorities
  • The use of evidence in a problematic way, from exaggerating the success of existing ‘family intervention projects’ to sensationalising neuroscientific images related to brain development in deprived children …


In other words, some governments feel the need to dress up their evidence-informed policies in a language appropriate to Westminster politics. Unless we understand this language, and the incentives for elected policymakers to use it, we will fail to understand how to act effectively to influence those policymakers.

What can you do to maximise the use of evidence?

When you ask the generic question you can generate a set of transferable strategies to engage in policymaking:

[images: ‘How to be heard’ and ‘EBPM: 5 things to do’]

Yet, as these case studies of public health and social policy suggest, the question lacks sufficient meaning when applied to real world settings. Would you expect the advice that I give to (primarily) natural scientists (primarily in the US) to be identical to advice for social scientists in specific fields (in, say, the UK)?

No, you’d expect me to end with a call for more research! See for example this special issue in which many scholars from many disciplines suggest insights on how to maximise the use of evidence in policy.



Filed under Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, tobacco, tobacco policy

The Science of Evidence-based Policymaking: How to Be Heard

I was interviewed in Science, on the topic of evidence-based policymaking, and we discussed some top tips for people seeking to maximise the use of evidence in a complex policy process (or, perhaps, to feel less dispirited about the lack of EBPM in many cases). If it sparks your interest, I have some other work on this topic:

I am editing a series of forthcoming articles on maximising the use of scientific evidence in policy, and the idea is that health and environmental scientists can learn from many other disciplines about how to, for example, anticipate policymaker psychology, find the right policymaking venue, understand its rules and ‘currency’ (the language people use, to reflect dominant ways of thinking about problems), and tell effective stories to the right people.


I have also completed a book, some journal articles (PAR, E&P), and some blog posts on the ‘politics of evidence-based policymaking’.


Two posts appear in the Guardian political science blog (me, me and Kathryn Oliver).

One post, for practitioners, has ‘5 things you need to know’, and it links to presentations on the same theme to different audiences (Scotland, US, EU).


In this post, I’m trying to think through in more detail what we do with such insights.

The insights I describe come from policy theory, and I have produced 25 posts, each of which introduces a key policy theory in 1000 words (or, if you are super busy, 500 words). For example, the Science interview mentions a spirograph of many cycles, which is a reference to the idea of a policy cycle. Also look out for the 1000-word posts on framing and narrative and think about how they relate to the use of storytelling in policy.

If you like what you see, and want to see more, have a look at my general list of offerings (home page) or list of books and articles with links to their PDFs (CV).



Filed under Evidence Based Policymaking (EBPM), public policy, Storytelling

Combine Good Evidence and Emotional Stories to Change the World


This is my 2-page pitch for the Political Studies Association’s Total Exposure 2017 event on Thursday:

People are too quick to criticise the negative role of ideology, emotion, and manipulation in politics, especially after ‘Brexit’ and the rise of Donald Trump. Yet, a good positive and emotional story with a hero or convincing theme is just as important as ‘the evidence’ to social and policy change. This programme gives examples, shows you how to do it, and identifies what stories work. Through first person narrative, it describes the experiences of people telling their own stories, or of their heroes, to generate political attention and support for their cause. It provides additional narrative by experts on storytelling as a craft, and on the science of storytelling effectiveness, to connect powerful stories with the evidence on their role in politics. The end result is a programme which is entertaining, socially relevant, and informative. It will be backed up by further (accessible) reading for people inspired by its message and keen to learn and act accordingly.

Background: telling positive stories for political change

The Brexit vote, and the election of Donald Trump as US President, really knocked the stuffing out of people who believe in the primacy of science. Many scientists seem shocked by what they perceive to be ‘post-truth politics’ in which ideology beats evidence.

This post-truth theme has begun to dominate academic discussions on social media and academic conferences. It is a timely issue in which a clear theme has emerged among scientific circles. Its message is dangerous, with the potential to further alienate scientists from politicians and members of the public. It could undermine the prospect of pragmatic debates, in which there is meaningful conversation between people with different points of view, and instead reinforce a tendency for people to speak only with the people whose beliefs they already share.

Too much ‘post-truth politics’ discussion is self-indulgent. Too many academics are quick to demonise the cynical world of politics and politicians and to romanticise their own causes or objectivity. They need to acknowledge that ideological and emotional thinking is a natural part of life, and a part of life to which they are not immune. ‘Experts’ are storytellers for their own cause, and they tell each other the same story about post-truth politics. What separates them from their competitors is that the latter are better at telling effective stories which manipulate the beliefs and emotions of their audience.

So, what can they do about it? Tell good, positive stories, combining scientific evidence with emotional hooks, to help people understand and care about important political issues.

Tell Good Stories to Get What You Want in Politics

So, this programme portrays storytelling in a more positive light, demonstrating how to tell a story with political impact. Its main themes will be ‘hope’ and ‘fear’, to contrast two strategies:

  1. Examples like Brexit and Donald Trump’s campaign are associated with fear, to identify villains (such as immigrants and terrorists) and describe political or policy change as a way to punish them. As short term strategies they are difficult to counter with reference to ‘the evidence’. Some opponents of Brexit and Trump are coming to terms with this strategic problem, but often do not have the knowledge or skills on which to base an effective response.
  2. Examples from other fields are associated with hope, often to identify heroes as symbols of positive political change, or to identify the broad political themes that these heroes represent. The programme can draw on a long list of well-established and promising stories – in fields such as sex work, LGBT rights, immigrant rights, HIV, mental health, criminal justice, disaster relief, and climate change – in which there are skilled advocates for major social and political change.

The discussions would be interspersed with expert commentary, on how to tell stories well and on the evidence of storytelling effectiveness, to provide key ‘take home’ messages on what makes an emotionally engaging story that people talk about and act on.

The Project’s Incredible Timeliness

The project is timely as a political issue. It is also timely in organisational terms. I have been discussing this theme of storytelling, and how it relates to ‘evidence based policymaking’ (as part of a special collection of academic articles on evidence and policy), with Brett Davidson of the Open Society Foundations (New York). Davidson is an experienced radio journalist and convenor of a recent 2-day workshop on storytelling. My confidence in the theme of storytelling, and in identifying key stories and commentary, resulted from attending that workshop and hearing their stories and evidence.

The Large International Market

Brexit and Trump provide UK and US hooks that receive high global attention. The stories on which we can draw include personal experiences from people in South-East Asia, South Africa, the US, Latin America, and Eastern Europe (and indirect experiences in places like Syria and Palestine). As a whole, they provide global appeal and a range of salient topics. The immediate market for our academic work is academic, but the themes are wider and should appeal to audiences of radio shows/ podcasts like This American Life. Indeed, given the theme of the programme, it would be useful to follow a similar storytelling format. It could find, for example, a BBC Radio 4 audience but also be marketed as a podcast with international appeal.

The Major Long Term Impact: producing a new generation of storytelling scientists

My ‘high bar’ aim is to prompt a process in this order:

  1. We produce a programme which captures the attention of (the public and) academics and scientists.
  2. They listen to the show and recognise the impact of stories on them: their piqued interest is followed by their emotional engagement.
  3. They recognise the role that storytelling might have in their own research.
  4. We address their initial scepticism by providing some detail on the evidence of the impact of good stories on social and policy change.
  5. Many get in touch with us, and work with us to help them become scientist storytellers.

Bullet 5 is the main legacy. I have begun to work with Jerome Deroy, CEO of Narativ: The Listening and Storytelling Company, to explore the idea of training scientists to be effective storytellers. It could provide a follow-up show in which scientists explain the relevance of their evidence to pressing social and political problems.


Filed under Evidence Based Policymaking (EBPM), Storytelling

What do you do when 20% of the population causes 80% of its problems? Possibly nothing.


Avshalom Caspi and colleagues have used the 45-year ‘Dunedin’ study in New Zealand to identify the ‘large economic burden’ associated with ‘a small segment of the population’. They don’t quite achieve the 20%-causes-80% mark, but suggest that 22% of the population account disproportionately for the problems that most policymakers would like to solve, including unhealthy, economically inactive, and criminal behaviour. Most importantly, they discuss some success in predicting such outcomes from a 45-minute diagnostic test of 3 year olds.

Of course, any such publication will prompt major debates about how we report, interpret, and deal with such information, and these debates tend to get away from the original authors as soon as they publish and others report (follow the tweet thread).

This is true even though the authors have gone to unusual lengths to show the many ways in which you could interpret their figures. Theirs is a politically aware report, using some of the language of elected politicians but challenging simple responses. You can see this in their discussion which has a lengthy list of points about the study’s limitations.

The ambiguity dilemma: more evidence does not produce more agreement

‘The most costly adults in our cohort started the race of life from a starting block somewhere behind the rest, and while carrying a heavy handicap in brain health’.

The first limitation is that evidence does not help us adjudicate between competing attempts to define the problem. For some, it reinforces the idea of an ‘underclass’ or small collection of problem/ troubled families that should be blamed for society’s ills (it’s the fault of families and individuals). For others, it reinforces the idea that socio-economic inequalities harm the life chances of people as soon as they are born (it is out of the control of individuals).

The intervention dilemma: we know more about the problem than its solution

The second limitation is that this study tells us a lot about a problem but not its solution. Perhaps there is some common ground on the need to act, and to invest in similar interventions, but:

  1. The evidence on the effectiveness of solutions is not as strong or systematic as this new evidence on the problem.
  2. There are major dilemmas involved in ‘scaling up’ such solutions and transferring them from one area to another.
  3. The overall ‘tone’ of debate still matters to policy delivery, to determine for example if any intervention should be punitive and compulsory (you will cause the problem, so you have to engage with the solution) or supportive and voluntary (you face disadvantages, so we’ll try to help you if you let us).

The moral dilemma: we may only pay attention to the problem if there is a feasible solution

Prevention and early intervention policy agendas often seem to fail because the issues they raise seem too difficult to solve. Governments make the commitment to ‘prevention’ in the abstract but ‘do not know what it means or appreciate the scale of their task’.

A classic policymaker heuristic described by Kingdon is that policymakers only pay attention to problems they think they can solve. So, they might initially show enthusiasm, only to lose interest when problems seem intractable or there is high opposition to specific solutions.

This may be true of most policies, but prevention and early intervention also seem to magnify the big moral question that can stop policy in its tracks: to what extent is it appropriate to intervene in people’s lives to change their behaviour?

Some may vocally oppose interventions based on their concern about the controlling nature of the state, particularly when it intervenes to prevent (say, criminal) behaviour that will not necessarily occur. It may be easier to make the case for intervening to help children, but difficult to look like you are not second guessing their parents.

Others may quietly oppose interventions based on an unresolved economic question: does it really save money to intervene early? Put bluntly, a key ‘economic burden’ relates to population longevity; the ‘20%’ may cause economic problems in their working years but die far earlier than the 80%. Put less bluntly by the authors:

‘This is an important question because the health-care burden of developed societies concentrates in older age groups. To the extent that factors such as smoking, excess weight and health problems during midlife foretell health-care burden and social dependency, findings here should extend to later life (keeping in mind that midlife smoking, weight problems and health problems also forecast premature mortality)’.

So, policymakers initially find that ‘early intervention’ is a valence issue only in the abstract – who wouldn’t want to intervene as early as possible in a child’s life to protect them or improve their life chances? – but not when they try to deliver concrete policies.

The evidence-based policymaking dilemma

Overall, we are left with the sense that even the best available evidence of a problem may not help us solve it. Choosing to do nothing may be just as ‘evidence based’ as choosing a solution with minimal effects. Choosing to do something requires us to use far more limited evidence of solution effectiveness and to act in the face of high uncertainty. Add into the mix that prevention policy does not seem to be particularly popular and you might wonder why any policymaker would want to do anything with the best evidence of a profound societal problem.

 


Filed under Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy

Using psychological insights in politics: can we do it without calling our opponents mental, hysterical, or stupid?

One of the most dispiriting parts of fierce political debate is the casual use of mental illness or old and new psychiatric terms to undermine an opponent: she is mad, he is crazy, she is a nutter, they are wearing tin foil hats, get this guy a straitjacket and the men in white coats because he needs to lie down in a dark room, she is hysterical, his position is bipolar, and so on. This kind of statement reflects badly on the campaigner rather than their opponent.

I say this because, while doing some research for a paper on the psychology of politics and policymaking (this time with Richard Kwiatkowski, as part of this special collection), I have come across potentially useful concepts that seem difficult to insulate from such political posturing. There is great potential to use them cynically against opponents rather than benefit from their insights.

The obvious ‘live’ examples relate to ‘rational’ versus ‘irrational’ policymaking. For example, one might argue that, while scientists develop facts and evidence rationally, using tried and trusted and systematic methods, politicians act irrationally, based on their emotions, ideologies, and groupthink. So, we as scientists are the arbiters of good sense and they are part of a pathological political process that contributes to ‘post truth’ politics.

The obvious problem with such accounts is that we all combine cognitive and emotional processes to think and act. We are all subject to bias in the gathering and interpretation of evidence. So, the more positive, but less tempting, option is to consider how this process works – when both competing sides act ‘rationally’ and emotionally – and what we can realistically do to mitigate the worst excesses of such exchanges. Otherwise, we will not get beyond demonising our opponents and romanticising our own cause. It gives us the warm and fuzzies on twitter and in academic conferences but contributes little to political conversations.

A less obvious example comes from modern work on the links between genes and attitudes. There is now a research agenda which uses surveys of adult twins to compare the effect of genes and environment on political attitudes. For example, Oskarsson et al (2015: 650) argue that existing studies ‘report that genetic factors account for 30–50% of the variation in issue orientations, ideology, and party identification’. One potential mechanism is cognitive ability: put simply, and rather cautiously and speculatively, with a million caveats, people with lower cognitive ability are more likely to see ‘complexity, novelty, and ambiguity’ as threatening and to respond with fear, risk aversion, and conservatism (2015: 652).

My immediate thought, when reading this stuff, is about how people would use it cynically, even at this relatively speculative stage in testing and evidence gathering: my opponent’s genes make him stupid, which makes him fearful of uncertainty and ambiguity, and therefore anxious about change and conservative in politics (in other words, the Yoda hypothesis applied only to stupid people). It’s not his fault, but his stupidity is an obstacle to progressive politics. If you add in some psychological biases, in which people inflate their own sense of intelligence and underestimate that of their opponents, you have evidence-informed, really shit political debate! ‘My opponent is stupid’ seems a bit better than ‘my opponent is mental’ but only in the sense that eating a cup of cold sick is preferable to eating shit.

I say this as we try to produce some practical recommendations (for scientists and advocates of EBPM) to engage with politicians to improve the use of evidence in policy. I’ll let you know if it goes beyond a simple maxim: adapt to their emotional and cognitive biases, but don’t simply assume they’re stupid.

See also: the many commentaries on how stupid it is to treat your political opponents as stupid

Stop Calling People “Low Information Voters”


Filed under Evidence Based Policymaking (EBPM), Uncategorized

We all want ‘evidence based policy making’ but how do we do it?

Here are some notes for my talk to the Scottish Government on Thursday as part of its inaugural ‘evidence in policy week’. The advertised abstract is as follows:

A key aim in government is to produce ‘evidence based’ (or ‘informed’) policy and policymaking, but it is easier said than done. It involves two key choices about (1) what evidence counts and how you should gather it, and (2) the extent to which central governments should encourage subnational policymakers to act on that evidence. Ideally, the principles we use to decide on the best evidence should be consistent with the governance principles we adopt to use evidence to make policy, but what happens when they seem to collide? Cairney provides three main ways in which to combine evidence and governance-based principles to help clarify those choices.

I plan to use the same basic structure as the talks I gave to the OSF (New York) and EUI-EP (Florence), in which I argue that every aspect of ‘evidence based policy making’ is riddled with the necessity to make political choices (even when we define EBPM):

[image: ‘EBPM: 5 things to do’]

I’ll then ‘zoom in’ on points 4 and 5 regarding the relationship between EBPM and governance principles. They are going to videotape the whole discussion to use for internal discussions, but I can post the initial talk here when it becomes available. Please don’t expect a TED talk (especially the E part of TED).

EBPM and good governance principles

The Scottish Government has a reputation for taking certain governance principles seriously, to promote high stakeholder ‘ownership’ and ‘localism’ in policy, and to produce the image of a:

  1. Consensual consultation style in which it works closely with interest groups, public bodies, local government organisations, voluntary sector and professional bodies, and unions when making policy.
  2. Trust-based implementation style indicating a relative ability or willingness to devolve the delivery of policy to public bodies, including local authorities, in a meaningful way.

Many aspects of this image were cultivated by former Permanent Secretaries: Sir John Elvidge described a ‘Scottish Model’ focused on joined-up government and outcomes-based approaches to policymaking and delivery, and Sir Peter Housden labelled the ‘Scottish Approach to Policymaking’ (SATP) as an alternative to the UK’s command-and-control model of government, focusing on the ‘co-production’ of policy with local communities and citizens.

The ‘Scottish Approach’ has implications for evidence based policy making

Note the major implication for our definition of EBPM. One possible definition, derived from ‘evidence based medicine’, refers to a hierarchy of evidence in which randomised control trials and their systematic review are at the top, while expertise, professional experience and service user feedback are close to the bottom. An uncompromising use of RCTs in policy requires that we maintain a uniform model, with the same basic intervention adopted and rolled out within many areas. The focus is on identifying an intervention’s ‘active ingredient’, applying the correct dosage, and evaluating its success continuously.

This approach seems to challenge the commitment to localism and ‘co-production’.

At the other end of the spectrum is a storytelling approach to the use of evidence in policy. In this case, we begin with key governance principles – such as valuing the ‘assets’ of individuals and communities – and invite people to help make and deliver policy. Practitioners and service users share stories of their experiences and invite others to learn from them. There is no model of delivery and no ‘active ingredient’.

This approach seems to challenge the commitment to ‘evidence based policy’.

The Goldilocks approach to evidence based policy making: the improvement method

We can understand the Scottish Government’s often-preferred method in that context. It has made a commitment to:

Service performance and improvement underpinned by data, evidence and the application of improvement methodologies

So, policymakers use many sources of evidence to identify promising solutions, make broad recommendations to practitioners about the outcomes they seek, and train practitioners in the improvement method (a form of continuous learning summed up by a ‘Plan-Do-Study-Act’ cycle).

Table 1 Three ideal types EBBP

This approach appears to offer the best of both worlds: just the right mix of central direction and local discretion, with the promise of combining well-established evidence from sources including RCTs with evidence from local experimentation and experience.

Four unresolved issues in decentralised evidence-based policy making

Not surprisingly, our story does not end there. I think there are four unresolved issues in this process:

  1. The Scottish Government often indicates a preference for improvement methods but actually supports all three of the methods I describe. This might reflect an explicit decision to ‘let a thousand flowers bloom’ or the inability to establish a favoured approach.
  2. There is not a single way of understanding ‘improvement methodology’. I describe something akin to a localist model here, but other people describe a far more research-led and centrally coordinated process.
  3. Anecdotally, I hear regularly that key stakeholders do not like the improvement method. One could interpret this as a temporary problem, before people really get it and it starts to work, or a fundamental difference between some people in government and many of the local stakeholders so important to the ‘Scottish approach’.

4. The spectre of democratic accountability and the politics of EBPM

The fourth unresolved issue is the biggest: it’s difficult to know how this approach connects with the most important reference in Scottish politics: the need to maintain Westminster-style democratic accountability, through periodic elections and more regular reports by ministers to the Scottish Parliament. This requires a strong sense of central government and ministerial control – if you know who is in charge, you know who to hold to account or reward or punish in the next election.

In principle, the ‘Scottish approach’ provides a way to bring together key aims into a single narrative. An open and accessible consultation style maximises the gathering of information and advice and fosters group ownership. A national strategic framework, with cross-cutting aims, reduces departmental silos and balances an image of democratic accountability with the pursuit of administrative devolution, through partnership agreements with local authorities, the formation of community planning partnerships, and the encouragement of community and user-driven design of public services. The formation of relationships with public bodies and other organisations delivering services, based on trust, fosters the production of common aims across the public sector, and reduces the need for top-down policymaking. An outcomes-focus provides space for evidence-based and continuous learning about what works.

In practice, a government often needs to appear to take quick and decisive action from the centre, demonstrate policy progress and its role in that progress, and intervene when things go wrong. So, alongside localism it maintains a legislative, financial, and performance management framework which limits localism.

How far do you go to ensure EBPM?

So, when I describe the ‘5 things to do’, usually the fifth element is about how far scientists may want to go, to insist on one model of EBPM when it has the potential to contradict important governance principles relating to consultation and localism. For a central government, the question is starker:

Do you have much choice about your model of EBPM when the democratic imperative is so striking?

I’ll leave it there on a cliffhanger, since these are largely questions to prompt discussion in specific workshops. If you can’t attend, there is further reading on the EBPM and EVIDENCE tabs on this blog, and specific papers on the Scottish dimension:

The ‘Scottish Approach to Policy Making’: Implications for Public Service Delivery

Paul Cairney, Siabhainn Russell and Emily St Denny (2016) “The ‘Scottish approach’ to policy and policymaking: what issues are territorial and what are universal?” Policy and Politics, 44, 3, 333-50

The politics of evidence-based best practice: 4 messages

 

 


Filed under ESRC Scottish Centre for Constitutional Change, Evidence Based Policymaking (EBPM), public policy, Scottish politics, Storytelling