Category Archives: UK politics and policy

Beware the well-intentioned advice of unusually successful academics

This post – by Dr Kathryn Oliver and me – originally appeared on the LSE Impact Blog. I have replaced the picture of a thumbs-up with a cat hanging in there.

Many academics want to see their research have an impact on policy and practice, and there is a lot of advice on how to seek it. It can be helpful to take advice from experienced and successful people. However, is this always the best advice? Guidance based on best practice and success stories, in particular, often reflects unequal access to policymakers, institutional support, and the credibility attached to certain personal characteristics.

To take stock of the vast amount of advice being offered to academics, we decided to compare it with the more systematic analyses available in the peer-reviewed literature on the ‘barriers’ between evidence and policy, and in policy studies. This allowed us to situate this advice in a wider context, see whether it was generalisable across settings and career stages, and think through the inconsistencies and dilemmas which underlie these suggestions.

The advice: Top tips on influencing policy

The key themes and individual recommendations we identified from the 86 most-relevant publications are:

  1. Do high quality research: Use well-established research designs, methods, or metrics.
  2. Make your research relevant and readable: Provide easily understandable, clear, relevant and high-quality research. Aim for the general reader. Produce good stories based on emotional appeals or humour.
  3. Understand the policymaking context. Note the busy and constrained lives of policy actors. Maximise established ways to engage, such as in advisory committees. Be pragmatic, accepting that research rarely translates directly into policy.
  4. Be ‘accessible’ to policymakers. This may involve discussing topics beyond your narrow expertise. Be humble, courteous, professional, and recognise the limits to your skills.
  5. Decide if you want to be an ‘issue advocate’. Decide whether to simply explain the evidence, remain an ‘honest broker’, or recommend specific policy options. Negative consequences may include peer criticism, being seen as an academic lightweight, being used to add legitimacy to a policy position, and burnout.
  6. Build relationships (and ground rules) with policymakers: Relationship-building requires investment and skills, but working collaboratively is often necessary. Academics could identify policy actors to provide insights into policy problems, act as champions for their research, and identify the most helpful policy actors.
  7. Be ‘entrepreneurial’ or find someone who is. Be a daring, persuasive scientist, comfortable in policy environments and available when needed. Or, seek brokers to act on your behalf.
  8. Reflect continuously: should you engage, do you want to, and is it working? Academics may enjoy the work or be passionate about the issue. Even so, keep track of when and how you have had impact, and revise your practices continuously.

[Image: ‘hang in there, baby’ cat]

Inconsistencies and dilemmas

This advice tends not to address wider issues. For example, there is no consensus over what counts as good evidence for policy, or therefore how best to communicate good evidence. We know little about how to gain the wide range of skills that researchers and policymakers need to act collectively, including to: produce evidence syntheses, manage expert communities, ‘co-produce’ research and policy with a wide range of stakeholders, and be prepared to offer policy recommendations as well as scientific advice. Further, a one-size-fits-all model won’t help researchers navigate a policymaking environment where different venues have different cultures and networks. Researchers therefore need to decide what policy engagement is for (to frame problems, or simply to measure them according to an existing frame) and how far they should go to be useful and influential. If academics need to go ‘all in’ to secure meaningful impact, we need to reflect on the extent to which they have the resources and support to do so. This means navigating profound dilemmas:

Source: The dos and don’ts of influencing policy: a systematic review of advice to academics

 

Can academics try to influence policy? The financial costs of seeking impact are prohibitive for junior or untenured researchers, while women and people of colour may be more subject to personal abuse. Such factors undermine the diversity of voices available.

How should academics influence policy? Many of these new required skills – such as storytelling – are not a routine part of academic training, and may be looked down on by our colleagues.  

What is the purpose of academics’ engagement in policymaking? To go beyond tokenistic and instrumental engagement is to build genuine rapport with policymakers, which may require us to co-produce knowledge and cede some control over the research process. It involves a fundamentally different way of doing public engagement: one with no clear aim in mind other than to listen and learn, with the potential to transform research practices and outputs.

Where is the evidence that this advice helps us improve impact?

The existing advice offered to academics on how to create impact is – although often well-meaning – not based on systematic research or comprehensive analysis of empirical evidence. Few advice-givers draw clearly on key literatures on policymaking or evidence use. This leads to significant misunderstandings, which can have potentially costly repercussions for research, researchers and policy.  These limitations matter, as they lead to advice which fails to address core dilemmas for academics—whether to engage, how to engage, and why—which have profound implications for how scientists and universities should respond to the calls for increased impact.

Most tips focus on individual experience, whereas engagement between research and policy is driven by systemic factors. Many of the tips may be sensible and effective, but often only within particular settings. The advice is likely to be useful mostly to a relatively similar group of people who are confident and comfortable in policy environments, and have access and credibility within policy arenas. Thus, the current advice and structures may help reproduce and reinforce existing power dynamics and an underrepresentation of people who do not fit a very narrow mould.

The overall result may be that each generation of scientists has to fight the same battles, and learn the same lessons over again. Our best response as a profession is to interrogate current advice, shape and frame it, and help us all find ways to navigate the complex practical, political, moral and ethical challenges associated with being researchers today. The ‘how to’ literature can help, but only if authors are cognisant of their wider role in society and complex policymaking systems.

This blog post is based on the authors’ co-written articles, The dos and don’ts of influencing policy: a systematic review of advice to academics, published in Palgrave Communications, and ‘How should academics engage in policymaking to achieve impact?’, published in Political Studies Review.

About the authors

Kathryn Oliver is Associate Professor of Sociology and Public Health, London School of Hygiene and Tropical Medicine (@oliver_kathryn). Her interest is in how knowledge is produced, mobilized and used in policy and practice, and how this affects the practice of research. She co-runs the research collaborative Transforming Evidence (https://transformure.wordpress.com) with Annette Boaz, and her writings can be found at https://kathrynoliver.wordpress.com

Paul Cairney is Professor of Politics and Public Policy, University of Stirling, UK (@Cairneypaul).  His research interests are in comparative public policy and policy theories, which he uses to explain the use of evidence in policy and policymaking, in one book (The Politics of Evidence-Based Policy Making, 2016), several articles, and many, many blog posts: https://paulcairney.wordpress.com/ebpm/

See also:

  1. Adam Wellstead, Paul Cairney, and Kathryn Oliver (2018) ‘Reducing ambiguity to close the science-policy gap’, Policy Design and Practice, 1, 2, 115-25 PDF
  2. Paul Cairney and Kathryn Oliver (2017) ‘Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy?’ Health Research Policy and Systems (HARPS), DOI: 10.1186/s12961-017-0192-x PDF AM
  3. Paul Cairney, Kathryn Oliver, and Adam Wellstead (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review, 76, 3, 399–402 DOI:10.1111/puar.12555 PDF


Filed under Evidence Based Policymaking (EBPM), Folksy wisdom, public policy, Storytelling, UK politics and policy

Can Westminster take back control after Brexit?

All going well, this discussion will be in a box in Chapter 8 of Understanding Public Policy 2nd ed.

“The ‘Brexit’ referendum was dominated by a narrative of taking back control of policy and policy making. Control of policy would allow the UK government to make profound changes to immigration and spending. Control of policymaking would allow Parliament and the public to hold the UK government directly to account, in contrast to a more complex and distant EU policy process less subject to direct British scrutiny.

Such high-level political debate is built on the false image of a small number of elected policymakers – and the Prime Minister in particular – being responsible for the outcomes of the policy process.

There is a strange disconnect in the ways in which elected politicians describe UK policymaking. Ministers have mostly given up the language of control; modern manifestos no longer make claims – such as to secure ‘full employment’ or eradicate health inequalities – that suggest they control the economy or can solve problems by providing public services. Yet, much Brexit rhetoric suggests that a vote to leave the EU would put control back in the hands of ministers to solve major problems.

The main problem with the latter way of thinking is that it is rejected continuously in the modern literature on policymaking. Policymaking is multi-centric: responsibility for outcomes is spread across many levels and types of government, to the extent that it is not possible simply to know who is in charge and who to blame.

Some multi-level governance (MLG) relates to the choice to share power with EU, devolved, and local policymaking organisations.

However, most MLG is necessary because ministers do not have the cognitive or coordinative capacity to control policy outcomes.

They can only pay attention to a tiny proportion of their responsibilities, and have to delegate the rest. Most decisions are taken in their name but without their intervention. They occur within a policymaking environment over which ministers have limited knowledge and control.

The problem with using Brexit as a lens through which to understand British politics is that it emphasises the choice to no longer spread power across a political system, without acknowledging the necessity of doing so.

Our understanding of the future of UK policy and policymaking is incomplete without a focus on the concepts and evidence that help us understand why UK ministers must accept their limitations and act accordingly.

Yet, clearly the Westminster model archetype remains important even if it does not exist (Duggett, 2009). Policy studies have successfully challenged its image of central control, but the model’s importance resides in its rhetorical power in wider politics, when people maintain a simple argument during general election and referendum debates: we know who is – or should be – in charge. This perspective has a profound effect on the ways in which policymakers defend their actions, and political actors compete for votes, even when it is ridiculously misleading (Rhodes, 2013; Bevir, 2013)”

See also Policy Concepts in 1000 Words: the Westminster Model and Multi-level Governance


Filed under POLU9UK, public policy, UK politics and policy

Managing expectations about the use of evidence in policy

Notes for the #transformURE event hosted by Nuffield, 25th September 2018

I like to think that I can talk with authority on two topics that, much like a bottle of Pepsi and a pack of Mentos, you should generally keep separate:

  1. When talking at events on the use of evidence in policy, I say that you need to understand the nature of policy and policymaking to understand the role of evidence in it.
  2. When talking with students, we begin with the classic questions ‘what is policy?’ and ‘what is the policy process?’, and I declare that we don’t know the answer. We define policy to show the problems with all definitions of policy, and we discuss many models and theories that only capture one part of the process. There is no ‘general theory’ of policymaking.

The problem, when you put together those statements, is that you need to understand the role of evidence within a policy process that we don’t really understand.

It’s an OK conclusion if you just want to declare that the world is complicated, but not if you seek ways to change it or operate more effectively within it.

Put less gloomily:

  • We have ways to understand key parts of the policy process. They are not ready-made to help us understand evidence use, but we can use them intelligently.
  • Most policy theories exist to explain policy dynamics, not to help us adapt effectively to them, but we can derive general lessons with often-profound implications.

Put even less gloomily, it is not too difficult to extract/ synthesise key insights from policy theories, explain their relevance, and use them to inform discussions about how to promote your preferred form of evidence use.

The only remaining problem is that, although the resultant advice looks quite straightforward, it is far easier said than done. The proposed actions are more akin to the Labours of Hercules than [PAC: insert reference to something easier].

They include:

  1. Find out where the ‘action’ is, so that you can find the right audience for your evidence. Why? There are many policymakers and influencers spread across many levels and types of government.
  2. Learn and follow the ‘rules of the game’. Why? Each policymaking venue has its own rules of engagement and evidence gathering, and the rules are often informal and unwritten.
  3. Gain access to ‘policy networks’. Why? Most policy is processed at a low level of government, beyond the public spotlight, between relatively small groups of policymakers and influencers. They build up trust as they work together, learning who is reliable and authoritative, and converging on how to use evidence to understand the nature and solution to policy problems.
  4. Learn the language. Why? Each venue has its own language to reflect dominant ideas, beliefs, or ways to understand a policy problem. In some arenas, there is a strong respect for a ‘hierarchy’ of evidence. In others, the key reference point may be value for money. In some cases, the language reflects the closing-off of some policy solutions (such as redistributing resources from one activity to another).
  5. Exploit windows of opportunity. Why? Events, and changes in socioeconomic conditions, often prompt shifts of attention to policy issues. ‘Policy entrepreneurs’ lie in wait for the right time to exploit a shift in the motive and opportunity of a policymaker to pay attention to and try to solve a problem.

So far so good, until you consider the effort it would take to achieve any of these things: you may need to devote the best part of your career to these tasks with no guarantee of success.

Put more positively, it is better to be equipped with these insights, and to appreciate the limits to our actions, than to think we can use top tips to achieve ‘research impact’ in a more straightforward way.

Kathryn Oliver and I describe these ‘how to’ tips in this post and, in this article in Political Studies Review, use a wider focus on policymaking environments to produce a more realistic sense of what individual researchers – and research-producing organisations – could achieve.

There is some sensible-enough advice out there for individuals – produce good evidence, communicate it well, form relationships with policymakers, be available, and so on – but I would exercise caution when it begins to recommend being ‘entrepreneurial’. The opportunities to be entrepreneurial are not shared equally, most entrepreneurs fail, and we can likely better explain their success with reference to their environment than their skill.

[Image: ‘hang in there, baby’ cat]


Filed under agenda setting, Evidence Based Policymaking (EBPM), public policy, UK politics and policy

The UK government’s imaginative use of evidence to make policy

This post describes a new article published in British Politics (Open Access). Please find:

(1) A super-exciting video/audio powerpoint I use for a talk based on the article

(2) The audio alone (link)

(3) The powerpoint to download, so that the weblinks work (link) or the ppsx/ presentation file in case you are having a party (link)

(4) A written/ tweeted discussion of the main points

In retrospect, I think the title was too subtle and clever-clever. I wanted to convey two meanings: imaginative as a euphemism for ridiculous and often cynical, and imaginative in the sense that a government has to be imaginative with evidence. The latter itself has two meanings: imaginative (1) in the presentation and framing of an evidence-informed agenda, and (2) when facing pressure to go beyond the evidence and envisage policy outcomes.

So I describe two cases in which the government’s use of evidence seems cynical, when:

  1. Declaring complete success in turning around the lives of ‘troubled families’
  2. Exploiting vivid neuroscientific images to support ‘early intervention’

Then I describe more difficult cases in which supportive evidence is not clear:

  1. Family intervention project evaluations are of limited value and only tentatively positive
  2. Successful projects like FNP and Incredible Years have limited applicability or ‘scalability’

As scientists, we can shrug our shoulders about the uncertainty, but elected policymakers in government have to do something. So what do they do?

At this point of the article it will look like I have become an apologist for David Cameron’s government. Instead, I’m trying to demonstrate the value of comparing sympathetic/ unsympathetic interpretations and highlight the policy problem from a policymaker’s perspective:

[Image: discussion section from Cairney (2018) in British Politics]

I suggest that they use evidence in a mix of ways to: describe an urgent problem, present an image of success and governing competence, and provide cover for more evidence-informed long term action.

The result is the appearance of top-down ‘muscular’ government and ‘a tendency for policy to change as it is implemented, such as when mediated by local authority choices and social workers maintaining a commitment to their professional values when delivering policy’.

I conclude by arguing that ‘evidence-based policy’ and ‘policy-based evidence’ are political slogans with minimal academic value. The binary divide between EBP/ PBE distracts us from more useful categories which show us the trade-offs policymakers have to make when faced with the need to act despite uncertainty.

[Image: Table 1 from Cairney (2018) in British Politics]

As such, it forms part of a far wider body of work …

In both cases, the common theme is that, although (1) the world of top-down central government gets most attention, (2) central governments don’t even know what problem they are trying to solve, far less (3) how to control policymaking and outcomes.

In that wider context, it is worth comparing this talk with the one I gave at the IDS (which, I reckon, is a good primer for – or prequel to – the UK talk):

See also:

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

Why doesn’t evidence win the day in policy and policymaking?

(found by searching for early intervention)

See also:

Here’s why there is always an expectations gap in prevention policy

Social investment, prevention and early intervention: a ‘window of opportunity’ for new ideas?

(found by searching for prevention)

Powerpoint for guest lecture: Paul Cairney UK Government Evidence Policy


Filed under Evidence Based Policymaking (EBPM), POLU9UK, Prevention policy, UK politics and policy

A 5-step strategy to make evidence count

Let’s imagine a heroic researcher, producing the best evidence and fearlessly ‘speaking truth to power’. Then, let’s place this person in four scenarios, each of which combines a discussion of evidence, policy, and politics in different ways.

  1. Imagine your hero presents to HM Treasury an evidence-based report concluding that a unitary UK state would be far more efficient than a union state guaranteeing Scottish devolution. The evidence is top quality and the reasoning is sound, but the research question is ridiculous. The result of political deliberation and electoral choice suggests that your hero is asking a research question that does not deserve to be funded in the current political climate. Your hero is a clown.
  2. Imagine your hero presents to the Department of Health a report based on the systematic review of multiple randomised control trials. It recommends that you roll out an almost-identical early years or public health intervention across the whole country. We need high ‘fidelity’ to the model to ensure the correct ‘dosage’ and to measure its effect scientifically. The evidence is of the highest quality, but the research question is not quite right. The government has decided to devolve this responsibility to local public bodies and/ or encourage the co-production of public service design by local public bodies, communities, and service users. So, to focus narrowly on fidelity would be to ignore political choices (perhaps backed by different evidence) about how best to govern. If you don’t know the politics involved, you will ask the wrong questions or provide evidence with unclear relevance. Your hero is either a fool, naïve to the dynamics of governance, or a villain willing to ignore governance principles.        
  3. Imagine two fundamentally different – but equally heroic – professions with their own ideas about evidence. One favours a hierarchy of evidence in which RCTs and their systematic review sit at the top, and service user and practitioner feedback sits near the bottom. The other rejects this hierarchy completely, identifying the unique, complex relationship between practitioner and service user, which requires high discretion to make choices in situations that will differ each time. Trying to resolve a debate between them with reference to ‘the evidence’ makes no sense. This is about a conflict between two heroes with opposing beliefs and preferences that can only be resolved through compromise or political choice. This is, oh I don’t know, Batman v Superman, saved by Wonder Woman.
  4. Imagine you want the evidence on hydraulic fracturing for shale oil and gas. We know that ‘the evidence’ follows the question: how much can we extract? How much revenue will it produce? Is it safe, from an engineering point of view? Is it safe, from a public health point of view? What will be its impact on climate change? What proportion of the public supports it? What proportion of the electorate supports it? Who will win and lose from the decision? It would be naïve to think that there is some kind of neutral way to produce an evidence-based analysis of such issues. The commissioning and integration of evidence has to be political. To pretend otherwise is a political strategy. Your hero may be another person’s villain.

Now, let’s use these scenarios to produce a 5-step way to ‘make evidence count’.

Step 1. Respect the positive role of politics

A narrow focus on making the supply of evidence count, via ‘evidence-based policymaking’, will always be dispiriting because it ignores politics or treats political choice as an inconvenience. If we:

  • begin with a focus on why we need political systems to make authoritative choices between conflicting preferences, and take governance principles seriously, we can
  • identify the demand for evidence in that context, then be more strategic and pragmatic about making evidence count, and
  • be less dispirited about the outcome.

In other words, think about the positive and necessary role of democratic politics before bemoaning post-truth politics and policy-based-evidence-making.

Step 2. Reject simple models of evidence-based policymaking

Policy is not made in a cycle containing a linear series of separate stages, and we won’t ‘make evidence count’ by using that image to inform our practices.

[Image: the policy cycle]

You might not want to give up the cycle image because it presents a simple account of how you should make policy. It suggests that we elect policymakers who then: identify their aims, identify policies to achieve those aims, select a policy measure, ensure that the selection is legitimised by the population or its legislature, identify the necessary resources, implement, and then evaluate. Or, policymakers aided by expert policy analysts make and legitimise choices, skilful public servants carry them out, and policy analysts assess the results using evidence.

One compromise is to keep the cycle then show how messy it is in practice:

However, there comes a point when there is too much mess, and the image no longer helps you explain (a) to the public what you are doing, or (b) to providers of evidence how they should engage in political systems. By this point, simple messages from more complicated policy theories may be more useful.

Or, we may no longer want a cycle to symbolise a single source of policymaking authority. In a multi-level system, with many ‘centres’ possessing their own sources of legitimate authority, a single and simple policy cycle seems too artificial to be useful.

Step 3. Tell a simple story about your evidence

People are ‘cognitive misers’ seeking ‘rational’ and ‘irrational’ shortcuts to gather information for action, so you won’t get far if you bombard them with too much evidence. Policymakers already have too much evidence and they seek ways to reduce their cognitive load, relying on: (a) trusted sources of concise evidence relevant to their aims, and (b) their own experience, gut instinct, beliefs, and emotions.

The implication of both shortcuts is that we need to tell simple and persuasive stories about the substance and implications of the evidence we present. To say that ‘the evidence does not speak for itself’ may seem trite, but I’ve met too many people who assume naively that it will somehow ‘win the day’. In contrast, civil servants know that the evidence-informed advice they give to ministers needs to relate to the story that government ministers tell to the public.


Step 4.  Tailor your story to many audiences

In a complex or multi-level environment, one story to one audience (such as a minister) is not enough. If there are many key sources of policymaking authority – including public bodies with high autonomy, organisations and practitioners with the discretion to deliver services, and service users involved in designing services – there are many stories being told about what we should be doing and why. We may convince one audience and alienate (or fail to inspire) another with the same story.

Step 5. Clarify and address key dilemmas with political choice, not evidence

Let me give you one example of the dilemmas that must arise when you combine evidence and politics to produce policy: how do you produce a model of ‘evidence-based best practice’ which combines evidence and governance principles in a consistent way? Here are 3 ideal-type models which answer the question in very different ways:

[Table 1: three ideal types of ‘evidence-based best practice’]

The table helps us think through the tensions between models, built on very different principles of good evidence and governance.

In practice, you may want to combine different elements, perhaps while arguing that the loss of consistency is lower than the gain from flexibility. Or, the dynamics of political systems limit such choice or prompt ad hoc and inconsistent choices.

I built a lot of this analysis on the experiences of the Scottish Government, which juggles all three models, including a key focus on improvement method in its Early Years Collaborative.

However, Kathryn Oliver and I show that the UK government faces the same basic dilemma and addresses it in similar ways.

The example freshest in my mind is Sure Start. Its rationale was built on RCT evidence and systematic review. However, its roll-out was built more on local flexibility and service design than insistence on fidelity to a model. More recently, the Troubled Families programme initially set the policy agenda and criteria for inclusion, but increasingly invites local public bodies to select the most appropriate interventions, aided by the Early Intervention Foundation which reviews the evidence but does not insist on one-best-way. Emily St Denny and I explore these issues further in our forthcoming book on prevention policy, an exemplar case study of a field in which it is difficult to know how to ‘make evidence count’.

If you prefer a 3-step take home message:

  1. I think we use phrases like ‘impact’ and ‘make evidence count’ to reflect a vague and general worry about a decline in respect for evidence and experts. Certainly, when I go to large conferences of scientists, they usually tell a story about ‘post-truth’ politics.
  2. Usually, these stories do not acknowledge the difference between two different explanations for an evidence-policy gap: (a) pathological policymaking and corrupt politicians, versus (b) complex policymaking and politicians having to make choices despite uncertainty.
  3. To produce evidence with ‘impact’, and know how to ‘make evidence count’, we need to understand the policy process and the demand for evidence within it.

*Background. This is a post for my talk at the Government Economic Service and Government Social Research Service Annual Training Conference (15th September 2017). This year’s theme is ‘Impact and Future-Proofing: Making Evidence Count’. My brief is to discuss evidence use in the Scottish Government, but it faces the same basic question as the UK Government: how do you combine principles of evidence quality and governance principles? In other words, if you were in a position to design an (a) evidence-gathering system and (b) a political system, you’d soon find major points of tension between them. Resolving those tensions involves political choice, not more evidence. Of course, you are not in a position to design both systems, so the more complicated question is: how do you satisfy principles of evidence and governance in a complex policy process, often driven by policymaker psychology, over which you have little control?  Here are 7 different ‘answers’.

Powerpoint Paul Cairney @ GES GSRS 2017


Filed under Evidence Based Policymaking (EBPM), public policy, Scottish politics, UK politics and policy

Here’s why there is always an expectations gap in prevention policy

Prevention is the most important social policy agenda of our time. Many governments make a sincere commitment to it, backed up by new policy strategies and resources. Yet, they also make limited progress before giving up or changing tack. Then, a new government arrives, producing the same cycle of enthusiasm and despair. This fundamental agenda never seems to get off the ground. We aim to explain this ‘prevention puzzle’, or the continuous gap between policymaker expectations and actual outcomes.

What is prevention policy and policymaking?

When engaged in ‘prevention’, governments seek to:

  1. Reform policy. To move from reactive to preventive public services, intervening earlier in people’s lives to ward off social problems and their costs when they seem avoidable.
  2. Reform policymaking. To (a) ‘join up’ government departments and services to solve ‘wicked problems’ that transcend one area, (b) give more responsibility for service design to local public bodies, stakeholders, ‘communities’ and service users, and (c) produce long term aims for outcomes, and reduce short term performance targets.
  3. Ensure that policy is ‘evidence based’.

Three reasons why they never seem to succeed

We use well established policy theories/ studies to explain the prevention puzzle.

  1. They don’t know what prevention means. They express a commitment to something before defining it. When they start to make sense of it, they find out how difficult it is to pursue, and how many controversial choices it involves.
  2. They engage in a policy process that is too complex to control. They try to share responsibility with many actors and coordinate action to direct policy outcomes, without the ability to design those relationships and control policy outcomes. Yet, they need to demonstrate to the electorate that they are in control. When they make sense of policymaking, they find out how difficult it is to localise and centralise.
  3. They are unable and unwilling to produce ‘evidence based policymaking’. Policymakers seek ‘rational’ and ‘irrational’ shortcuts to gather enough information to make ‘good enough’ decisions. When they seek evidence on preventing problems before they arise, they find that it is patchy, inconclusive, often counter to their beliefs, and unable to provide a ‘magic bullet’ to help make and justify choices.

Who knows what happens when they address these problems at the same time?

We draw on empirical and comparative analysis of the UK and devolved governments to show in detail how policymaking differs according to (a) the type of government, (b) the issue, and (c) the era in which policymakers operate.

Although it is reasonable to expect policymaking to be very different in, for example, the UK versus Scottish, or Labour versus Conservative governments, and in eras of boom versus austerity, a key part of our research is to show that the same basic ‘prevention puzzle’ exists at all times. You can’t simply solve it with a change of venue or government.

Our UK book will be out in 2018, with new draft chapters appearing here soon. Our longer term agenda – via IMAJINE – is to examine how policymakers try to reduce territorial inequalities across Europe partly by pursuing prevention and reforming public services.

 


Filed under Evidence Based Policymaking (EBPM), Prevention policy, public policy, UK politics and policy

Evidence based policymaking: 7 key themes


I looked back at my blog posts on the politics of ‘evidence based policymaking’ and found that I wrote quite a lot (particularly from 2016). Here is a list based on 7 key themes.

1. Use psychological insights to influence the use of evidence

My most current concern. The same basic theme is that (a) people (including policymakers) are ‘cognitive misers’ seeking ‘rational’ and ‘irrational’ shortcuts to gather information for action, so you won’t get far if you (b) bombard them with information, or (c) call them idiots.

Three ways to communicate more effectively with policymakers (shows how to use psychological insights to promote evidence in policymaking)

Using psychological insights in politics: can we do it without calling our opponents mental, hysterical, or stupid? (yes)

The Psychology of Evidence Based Policymaking: Who Will Speak For the Evidence if it Doesn’t Speak for Itself? (older paper, linking studies of psychology with studies of EBPM)

Older posts on the same theme:

Is there any hope for evidence in emotional debates and chaotic government? (yes)

We are in danger of repeating the same mistakes if we bemoan low attention to ‘facts’

These complaints about ignoring science seem biased and naïve – and too easy to dismiss

How can we close the ‘cultural’ gap between the policymakers and scientists who ‘just don’t get it’?

2. How to use policy process insights to influence the use of evidence

I try to simplify key insights about the policy process to show how to use evidence in it. One key message is to give up on the idea of an orderly policy process described by the policy cycle model. What should you do if a far more complicated process exists?

Why don’t policymakers listen to your evidence?

The Politics of Evidence Based Policymaking: 3 messages (3 ways to say that you should engage with the policy process that exists, not a mythical process that will never exist)

Three habits of successful policy entrepreneurs (shows how entrepreneurs are influential in politics)

Why doesn’t evidence win the day in policy and policymaking? and What does it take to turn scientific evidence into policy? Lessons for illegal drugs from tobacco and There is no blueprint for evidence-based policy, so what do you do? (3 posts describing the conditions that must be met for evidence to ‘win the day’)

Writing for Impact: what you need to know, and 5 ways to know it (explains how our knowledge of the policy process helps communicate to policymakers)

How can political actors take into account the limitations of evidence-based policy-making? 5 key points (presentation to European Parliament-European University Institute ‘Policy Roundtable’ 2016)

Evidence Based Policy Making: 5 things you need to know and do (presentation to Open Society Foundations New York 2016)

What 10 questions should we put to evidence for policy experts? (part of a series of videos produced by the European Commission)

3. How to combine principles on ‘good evidence’, ‘good governance’, and ‘good practice’

My argument here is that EBPM is about deciding at the same time what is: (1) good evidence, and (2) a good way to make and deliver policy. If you just focus on one at a time – or consider one while ignoring the other – you cannot produce a defendable way to promote evidence-informed policy delivery.

Kathryn Oliver and I have just published an article on the relationship between evidence and policy (summary of and link to our article on this very topic)

We all want ‘evidence based policy making’ but how do we do it? (presentation to the Scottish Government in 2016)

The ‘Scottish Approach to Policy Making’: Implications for Public Service Delivery

The politics of evidence-based best practice: 4 messages

The politics of implementing evidence-based policies

Policy Concepts in 1000 Words: the intersection between evidence and policy transfer

Key issues in evidence-based policymaking: comparability, control, and centralisation

The politics of evidence and randomised control trials: the symbolic importance of family nurse partnerships

What Works (in a complex policymaking system)?

How Far Should You Go to Make Sure a Policy is Delivered?

4. Face up to your need to make profound choices to pursue EBPM

These posts have arisen largely from my attendance at academic-practitioner conferences on evidence and policy. Many participants tell the same story about the primacy of scientific evidence challenged by post-truth politics and emotional policymakers. I don’t find this argument convincing or useful. So, in many posts, I challenge these participants to think about more pragmatic ways to sum up and do something effective about their predicament.

Political science improves our understanding of evidence-based policymaking, but does it produce better advice? (shows how our knowledge of policymaking clarifies dilemmas about engagement)

The role of ‘standards for evidence’ in ‘evidence informed policymaking’ (argues that a strict adherence to scientific principles may help you become a good researcher but not an effective policy influencer)

How far should you go to secure academic ‘impact’ in policymaking? From ‘honest brokers’ to ‘research purists’ and Machiavellian manipulators (you have to make profound ethical and strategic choices when seeking to maximise the use of evidence in policy)

Principles of science advice to government: key problems and feasible solutions (calling yourself an ‘honest broker’ while complaining about ‘post-truth politics’ is a cop out)

What sciences count in government science advice? (political science, obvs)

I know my audience, but does my other audience know I know my audience? (compares the often profoundly different ways in which scientists and political scientists understand and evaluate EBPM – this matters because, for example, we rarely discuss power in scientist-led debates)

Is Evidence-Based Policymaking the same as good policymaking? (no)

Idealism versus pragmatism in politics and policymaking: … evidence-based policymaking (how to decide between idealism and pragmatism when engaging in politics)

Realistic ‘realist’ reviews: why do you need them and what might they look like? (if you privilege impact you need to build policy relevance into systematic reviews)

‘Co-producing’ comparative policy research: how far should we go to secure policy impact? (describes ways to build evidence advocacy into research design)

The Politics of Evidence (review of – and link to – Justin Parkhurst’s book on the ‘good governance’ of evidence production and use)


5. For students and researchers wanting to read/ hear more

These posts are relatively theory-heavy, linking quite clearly to the academic study of public policy. Hopefully they provide a simple way into the policy literature which can, at times, be dense and jargony.

‘Evidence-based Policymaking’ and the Study of Public Policy

Policy Concepts in 1000 Words: ‘Evidence Based Policymaking’

Practical Lessons from Policy Theories (series of posts on the policy process, offering potential lessons for advocates of evidence use in policy)

Writing a policy paper and blog post 

12 things to know about studying public policy

Can you want evidence based policymaking if you don’t really know what it is? (defines each word in EBPM)

Can you separate the facts from your beliefs when making policy? (no, very no)

Policy Concepts in 1000 Words: Success and Failure (Evaluation) (using evidence to evaluate policy is inevitably political)

Policy Concepts in 1000 Words: Policy Transfer and Learning (so is learning from the experience of others)

Four obstacles to evidence based policymaking (EBPM)

What is ‘Complex Government’ and what can we do about it? (read about it)

How Can Policy Theory Have an Impact on Policy Making? (on translating policy theories into useful advice)

The role of evidence in UK policymaking after Brexit (argues that many challenges/ opportunities for evidence advocates will not change after Brexit)

Why is there more tobacco control policy than alcohol control policy in the UK? (it’s not just because there is more evidence of harm)

Evidence Based Policy Making: If You Want to Inject More Science into Policymaking You Need to Know the Science of Policymaking and The politics of evidence-based policymaking: focus on ambiguity as much as uncertainty and Revisiting the main ‘barriers’ between evidence and policy: focus on ambiguity, not uncertainty and The barriers to evidence based policymaking in environmental policy (early versions of what became the chapters of the book)

6. Using storytelling to promote evidence use

This is increasingly a big interest for me. Storytelling is key to the effective conduct and communication of scientific research. Let’s not pretend we’re objective people just stating the facts (which is the least convincing story of all). So far, so good, except to say that the evidence on the impact of stories (for policy change advocacy) is limited. The major complication is that (a) the story you want to tell and have people hear interacts with (b) the story that your audience members tell themselves.

Combine Good Evidence and Emotional Stories to Change the World

Storytelling for Policy Change: promise and problems

Is politics and policymaking about sharing evidence and facts or telling good stories? Two very silly examples from #SP16

7. The major difficulties in using evidence for policy to reduce inequalities

These posts show how policymakers think about how to combine (a) often-patchy evidence with (b) their beliefs and (c) an electoral imperative to produce policies on inequalities, prevention, and early intervention. I suggest that it’s better to understand and engage with this process than complain about policy-based-evidence from the side-lines. If you do the latter, policymakers will ignore you.

The UK government’s imaginative use of evidence to make policy 

What do you do when 20% of the population causes 80% of its problems? Possibly nothing.

The theory and practice of evidence-based policy transfer: can we learn how to reduce territorial inequalities?

We need better descriptions than ‘evidence-based policy’ and ‘policy-based evidence’: the case of UK government ‘troubled families’ policy

How can you tell the difference between policy-based-evidence and evidence-based-policymaking?

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

Key issues in evidence-based policymaking: comparability, control, and centralisation

The politics of evidence and randomised control trials: the symbolic importance of family nurse partnerships

Two myths about the politics of inequality in Scotland

Social investment, prevention and early intervention: a ‘window of opportunity’ for new ideas?

A ‘decisive shift to prevention’: how do we turn an idea into evidence based policy?

Can the Scottish Government pursue ‘prevention policy’ without independence?

Note: these issues are discussed in similar ways in many countries. One example that caught my eye today:

 

All of this discussion can be found under the EBPM category: https://paulcairney.wordpress.com/category/evidence-based-policymaking-ebpm/

See also the special issue on maximizing the use of evidence in policy

[Image: Palgrave Communications special issue]


Filed under agenda setting, Evidence Based Policymaking (EBPM), Prevention policy, public policy, Storytelling, UK politics and policy