Monthly Archives: October 2016

Evidence Based Policy Making: 5 things you need to know and do

These are some opening remarks for my talk on EBPM at Open Society Foundations (New York), 24th October 2016. The OSF recorded the talk, so you can listen below, externally, or by right clicking and saving. Please note that it was a lunchtime talk, so the background noises are plates and glasses.

‘Evidence based policy making’ is a good political slogan, but not a good description of the policy process. If you expect to see it, you will be disappointed. If you seek more thoughtful ways to understand and act within political systems, you need to understand five key points, then decide how to respond.

  1. Decide what it means.

EBPM looks like a valence issue in which most of us agree that policy and policymaking should be ‘evidence based’ (perhaps like ‘evidence based medicine’). Yet, valence issues only command broad agreement on vague proposals. By defining each term we highlight ambiguity and the need to make political choices to make sense of key terms:

  • Should you use restrictive criteria to determine what counts as ‘evidence’, and privilege scientific evidence in particular?
  • Which metaphor, evidence based or informed, describes how pragmatic you will be?
  • The unclear meaning of ‘policy’ prompts you to consider how far you’d go to pursue EBPM, from a one-off statement of intent by a key actor, to delivery by many actors, to the sense of continuous policymaking requiring us to be always engaged.
  • Policymaking is done by policymakers, but many are unelected and the division between policy maker/ influencer is often unclear. So, should you seek to influence policy by influencing influencers?
  2. Respond to ‘rational’ and ‘irrational’ thought.

‘Comprehensive rationality’ describes the absence of ambiguity and uncertainty when policymakers know what problem they want to solve and how to solve it, partly because they can gather and understand all information required to measure the problem and determine the effectiveness of solutions.

Instead, we talk of ‘bounded rationality’ and how policymakers deal with it. They employ two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, habits, and familiarity to make decisions quickly.

I say ‘irrational’ provocatively, to raise a key strategic question: do you criticise emotional policymaking (describing it as ‘policy based evidence’) and try somehow to minimise it, adapt pragmatically to it, or see ‘fast thinking’ more positively in terms of ‘fast and frugal heuristics’? Regardless, policymakers will think that their heuristics make sense to them, and it can be counterproductive to simply criticise their alleged irrationality.

  3. Think about how to engage in complex systems or policy environments.

‘Policy cycle’ describes the idea that there is a core group of policymakers at the ‘centre’, making policy from the ‘top down’, and pursuing their goals in a series of clearly defined and well-ordered stages, such as: agenda setting, policy formulation, legitimation, implementation, and evaluation. In this context, one might identify how to influence a singular point of central government decision.

However, a cycle model does not describe policymaking well. Instead, we tend to identify the role of less ordered and more unpredictable complex systems, or policy environments containing:

  • A wide range of actors (individuals and organisations) influencing policy at many levels of government. Scientists and practitioners are competing with many actors to present evidence in a particular way to secure a policymaker audience.
  • A proliferation of rules and norms maintained by different levels or types of government. Support for particular ‘evidence based’ solutions varies according to which organisation takes the lead and how it understands the problem.
  • Important relationships (‘networks’) between policymakers and powerful actors. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour particular sources of evidence and some participants over others, and there is a language – indicating what ways of thinking are in good ‘currency’ – that takes time to learn.
  • A tendency for certain ‘core beliefs’ or ‘paradigms’ to dominate discussion. Well-established beliefs provide the context for policymaking: new evidence on the effectiveness of a policy solution has to be accompanied by a shift of attention and successful persuasion.
  • Policy conditions and events that can reinforce stability or prompt policymaker attention to lurch at short notice. In some cases, social or economic ‘crises’ can prompt lurches of attention from one issue to another, and some forms of evidence can be used to encourage that shift, but major policy change is rare.

These factors suggest that an effective engagement strategy is not straightforward: our instinct may be to influence elected policymakers at the ‘centre’ making authoritative choices, but the ‘return on investment’ is not clear. So, you need to decide how and where to engage, but it takes time to know ‘where the action is’ and with whom to form coalitions.

  4. Recognise that EBPM is only one of many legitimate ‘good governance’ principles.

There are several principles of ‘good’ policymaking and only one is EBPM. Others relate to the value of pragmatism and consensus building, combining science advice with public values, improving policy delivery by generating ‘ownership’ of policy among key stakeholders, and sharing responsibility with elected local policymakers.

Our choices of which principles, and which forms of evidence, to privilege are inextricably linked. For example, some forms of evidence gathering seem to require uniform models and limited local or stakeholder discretion to modify policy delivery. The classic example is a programme whose value is established using randomised control trials (RCTs). Others begin with local discretion, seeking evidence from service user and local practitioner experience. This principle seems to rule out the use of RCTs. Of course, one can try to pursue both approaches and a compromise between them, but the outcome may not satisfy advocates of either approach or help produce the evidence that they favour.

  5. Decide how far you’ll go to achieve EBPM.

These insights should prompt us to ask how far we are willing to go, and how far we should go, to promote the use of certain forms of evidence in policymaking. For example, if policymakers and the public are emotional decision-makers, should we seek to manipulate their thought processes by using simple stories with heroes, villains, and clear but rather simplistic morals? If policymaking systems are so complex, should we devote huge amounts of resources to make sure we’re effective? Kathryn Oliver and I also explore the implications for proponents of scientific evidence, and there is a live debate on science advice to government on the extent to which scientists should be more than ‘honest brokers’.

Where we go from there is up to you

The value of policy theory to this topic is to help us reject simplistic models of EBPM and think through the implications of more sophisticated and complicated processes. It does not provide a blueprint for action (how could it?), but instead a series of questions that you should answer when you seek to use evidence to get what you want. They are political choices based on value judgements, not issues that can be resolved by producing more evidence.



Filed under Evidence Based Policymaking (EBPM)

The theory and practice of evidence-based policy transfer: can we learn how to reduce territorial inequalities?

I am now part of a large EU-funded Horizon2020 project called IMAJINE (Integrative Mechanisms for Addressing Spatial Justice and Territorial Inequalities in Europe), which begins in January 2017. It is led by Professor Michael Woods at Aberystwyth University and has a dozen partners across the EU. I’ll be leading one work package in partnership with Professor Michael Keating.


The aim in our ‘work package’ is deceptively simple: generate evidence to identify how EU countries try to reduce territorial inequalities, see who is the most successful, and recommend the transfer of that success to other countries.

Life is not that simple, though, is it?! If it were, we’d know for sure what ‘territorial inequalities’ are, what causes them, what governments are willing to do to reduce them, and if they’ll succeed if they really try.

Instead, here are some of the problems you encounter along the way, including an inability to identify:

  • What policies are designed explicitly to reduce inequalities. Instead, we piece together many intentions, actions, instruments, and outputs, across many levels and types of government, and call it ‘policy’.
  • The link between ‘policy’ and policy outcomes, because many factors interact to produce those outcomes.
  • Success. Even if we could solve the methodological problems, to separate cause and effect, we face a political problem about choosing measures to evaluate and report success.
  • Good ways to transfer successful policies. A policy is not like a #gbbo cake, in which you can produce a great product and give out the recipe. In that scenario, you can assume that we all have the same aims (we all want cake, and of course chocolate is the best), starting point (basically the same shops and kitchens), and language to describe the task (use loads of sugar and cocoa). In policy, governments describe and seek to solve similar-looking problems in very different ways and, if they look elsewhere for lessons, those insights have to be relevant to their context (and the evidence-gathering process has to fit their idea of good governance). They also ‘transfer’ some policies while maintaining their own, and a key finding from our previous work is that governments simultaneously pursue policies to reduce inequalities and undermine their inequality-reducing policies.

So, academics like me tend to spend their time highlighting problems, explaining why such processes are not ‘evidence-based’, and identifying all the things that will go wrong from your perspective if you think policymaking and policy transfer can ever be straightforward.

Yet, policymakers do not have this luxury to identify problems, find them interesting, then go home. Instead, they have to make decisions in the face of ambiguity (what problem are they trying to solve?), uncertainty (evidence will help, but will always be limited), and limited time.

So, academics like me are now focused increasingly on trying to help address the problems we raise. On the plus side, it prompts us to speak with policymakers from start to finish, to try to understand what evidence they’re interested in and how they’ll use it. On the less positive side (at least if you are a purist about research), it might prompt all sorts of compromises about how to combine research and policy advice if you want policymakers to use your evidence (on, for example, the line between science and advice, and the blurry boundaries between evidence and advice). If you are interested, please let me know, or follow the IMAJINE category on this site (and #IMAJINE).

See also:

New EU study looks at gap between rich and poor

New research project examines regional inequalities in Europe

Understanding the transfer of policy failure: bricolage, experimentalism and translation by Diane Stone

 


Filed under Evidence Based Policymaking (EBPM), IMAJINE, public policy

#indyref2

A version of this post appears in The Conversation.

Nicola Sturgeon has announced a consultation on a new Bill on Scottish Independence. Clearly, it made the audience at the SNP’s conference very happy, but what should the rest of us make of it? My gut tells me there will be a second referendum, but I wouldn’t yet bet my house on it, because that decision is wrapped up in three unresolved issues.

First, there won’t be a referendum unless the SNP thinks it will win, but the polls won’t tell us the answer before Sturgeon has to ask the question! It sounds simple to hold back a referendum until enough people tell you they’ll vote Yes. The complication is that many people don’t know what their choice will be until they can make sense of recent events. ‘Brexit’ might be a ‘game changer’ in a year or two, but it isn’t right now, and Sturgeon might have to choose to pursue a referendum before those polls change in her favour.

Second, the polls don’t tell us much because it is too soon to know what Brexit will look like. The idea of Brexit is still too abstract and not yet related to the arguments that might win the day for a Yes vote.

In each case, I don’t think we can expect to see the full effect of such arguments because (a) they don’t yet form part of a coherent argument linked directly to Brexit, since (b) we still don’t know what Brexit looks like. If you don’t really know what something is, how it relates to your life, and who you should blame for that outcome, how can you express a view on its effect on your political preferences?

Third, it is therefore too soon to know how different the second Scottish independence referendum would be. The SNP would like it to relate to the constitutional crisis caused by Brexit, basing its case on a combination of simple statements: England is pulling Scotland out of the EU against our will; the Tories caused this problem; we want to clear up the mess that they caused; it’s a bit rich for the Tories to warn us about the disastrous economic consequences of Scottish independence after the havoc they just caused; and, we want to be a cosmopolitan Scotland, not little England.

Instead, what if people see the Leave vote as a cautionary tale? It is not easy to argue that our response to the catastrophic effect of a withdrawal from a major political, economic, and social union should be to withdraw from another major political, economic, and social union! This is particularly true now that Brexit has opened up the possibility of more devolution (a possibility that had been closed off before now). A feasible alternative is to push for more autonomy in the areas that are devolved and ‘Europeanised’ – including agriculture, fishing, and environmental policies – as a way to have the UK deal with the Scottish Government ‘as equals on a range of areas’.

So, I’d describe Sturgeon’s announcement as a short term win: why not give your most active audience something to cheer about while you wait for events to unfold? Predicting the timing of a referendum is more difficult because it relates more to a concept than a date: it will be the point at which (a) we know enough about the meaning of Brexit to judge its likely impact, and (b) we have to decide before it feels too late (in other words, in time to respond to the timetable of the UK’s exit from the EU).

Some people are worrying that the UK Government might scupper the SNP’s chances directly, by withholding consent for a second referendum. Maybe it would be better to be tricksy indirectly, by remaining vague about the impact of Brexit and having people in Scotland worry about making a choice before they know its effect.

 

 


Filed under Scottish independence, Scottish politics

‘Hard Brexit’ is not yet a game changer for Scottish Independence

The Herald reports that ‘Hard Brexit is not a game changer for SNP’. Based on its latest BMG poll, it describes an even split between those who want/ don’t want a second referendum on Scottish independence, and between those who want an early or late referendum.

These results don’t seem too surprising because the idea of Brexit is still too abstract and not yet related to the arguments that might win the day for a Yes vote. I think the basic story would relate to a combination of simple statements such as:

  • England is pulling Scotland out of the EU against our will
  • The Tories caused this problem
  • We want to clear up the mess that they caused
  • It’s a bit rich for the Tories to warn us about the disastrous economic consequences of Scottish independence after the havoc they just caused
  • We want to be a cosmopolitan Scotland, not little England

In each case, I don’t think we can expect to see the widespread effect of such arguments because (a) they don’t yet form part of a coherent argument linked directly to Brexit, since (b) we still don’t know what Brexit looks like.

If you don’t really know what something is, how it relates to your life, and who you should blame for that outcome, how can you express a view on its effect on your political preferences?



Filed under Scottish independence, Scottish politics

Writing a policy paper and blog post #POLU9UK

It can be quite daunting to produce a policy analysis paper or blog post for the first time. You learn about the constraints of political communication by being obliged to explain your ideas in an unusually small number of words. The short word length seems good at first, but then you realise that it makes your life harder: how can you fit all your evidence and key points in? The answer is that you can’t. You have to choose what to say and what to leave out.

You also have to make this presentation ‘not about you’. In a long essay or research report you have time to show how great you are, to a captive audience. In a policy paper, imagine that you are trying to get the attention and support from someone that may not know or care about the issue you raise. In a blog post, your audience might stop reading at any point, so every sentence counts.

There are many guides out there to help you with the practical side, including the broad guidance I give you in the module guide, and Bardach’s 8-steps. In each case, the basic advice is to (a) identify a policy problem and at least one feasible solution, and (b) tailor the analysis to your audience.


Be concise, be smart

So, for example, I ask you to keep your analysis and presentations super-short on the assumption that you have to make your case quickly to people with 99 other things to do. What can you tell someone in a half-page (to get them to read all 2 pages)? Could you explain and solve a problem if you suddenly bumped into a government minister in a lift/ elevator?

It is tempting to try to tell someone everything you know, because everything is connected and to simplify is to describe a problem simplistically. Instead, be smart enough to know that such self-indulgence won’t impress your audience. They might smile politely, but their eyes are looking at the elevator lights.

Your aim is not to give a full account of a problem – it’s to get someone important to care about it.

Your aim is not to give a painstaking account of all possible solutions – it’s to give a sense that at least one solution is feasible and worth pursuing.

Your guiding statement should be: policymakers will only pay attention to your problem if they think they can solve it without the solution being too costly.

Be creative

I don’t like to give you too much advice because I want you to be creative about your presentation; to be confident enough to take chances and feel that I’ll reward you for making the leap. At the very least, you have three key choices to make about how far you’ll go to make a point:

  1. Who is your audience? Our discussion of the limits to centralised policymaking suggests that your most influential audience will not necessarily be a UK government minister – but who else would it be?
  2. How manipulative should you be? Our discussions of ‘bounded rationality’ and ‘evidence-based policymaking’ suggest that policymakers combine ‘rational’ and ‘irrational’ shortcuts to gather information and make choices. So, do you appeal to their desire to set goals and gather a lot of scientific information and/or make an emotional and manipulative appeal?
  3. Are you an advocate or an ‘honest broker’? Contemporary discussions of science advice to government highlight unresolved debates about the role of unelected advisors: should you simply lay out some possible solutions or advocate one solution strongly?

Be reflective

For our purposes, there are no wrong answers to these questions. Instead, I want you to make and defend your decisions. That is the aim of your policy paper ‘reflection’: to ‘show your work’.

You still have some room to be creative: tell me what you know about policy theory and British politics and how it informed your decisions. Here are some examples, but it is up to you to decide what to highlight:

  • Show how your understanding of policymaker psychology helped you decide how to present information on problems and solutions.
  • Extract insights from policy theories, such as from punctuated equilibrium theory on policymaker attention, multiple streams analysis on timing and feasibility, or the NPF on how to tell persuasive stories.
  • Explore the implications of the lack of ‘comprehensive rationality’ and absence of a ‘policy cycle’: feasibility is partly about identifying the extent to which a solution is ‘doable’ when central governments have limited powers. What ‘policy style’ or policy instruments would be appropriate for the solution you favour?

Be a blogger

With a blog post, your audience is wider. You are trying to make an argument that will capture the attention of a more general audience (interested in politics and policy, but not a specialist) that might access your post from Twitter/ Facebook or via a search engine. This produces new requirements: present a ‘punchy’ title which sums up the whole argument in under 140 characters (a statement is often better than a vague question); summarise the whole argument in (say) 100 words in the first paragraph (what is the problem and solution?); and provide more information up to a maximum of 500 words. The reader can then be invited to read the whole policy analysis.

The style of blog posts varies markedly, so you should consult many examples before attempting your own (compare the LSE with The Conversation and newspaper columns to get a sense of variations in style). When you read other posts, take note of their strengths and weaknesses. For example, many posts associated with newspapers introduce a personal or case study element to ground the discussion in an emotional appeal. Sometimes this works, but sometimes it causes the reader to scroll down quickly to find the main argument. Consider if it is as, or more, effective to make your argument more direct and easy to find as soon as someone clicks the link on their phone. Many academic posts are too long (well beyond your 500-word limit), take too long to get to the point, and do not make explicit recommendations, so you should not merely emulate them. You should also not just chop down your policy paper – this is about a new kind of communication.

Be reflective once again

Hopefully, by the end, you will appreciate the transferable life skills. I have generated some uncertainty about your task to reflect the sense among many actors that they don’t really know how to make a persuasive case and who to make it to. We can follow some basic Bardach-style guidance, but a lot of this kind of work relies on trial-and-error. I maintain a short word count to encourage you to get to the point, and I bang on about ‘stories’ in our module to encourage you to make a short and persuasive story to policymakers.

This process seems weird at first, but isn’t it also intuitive? For example, next time you’re in my seminar, measure how long it takes you to get bored and look forward to the weekend. Then imagine that policymakers have the same attention span as you. That’s how long you have to make your case!

See also: Professionalism online with social media

Here is the advice that my former lecturer, Professor Brian Hogwood, gave in 1992. Has the advice changed much since then?

[Scanned images of Brian Hogwood’s 1992 advice]


Filed under Evidence Based Policymaking (EBPM), Folksy wisdom, POLU9UK

Policy Concepts in 1000 Words: the Westminster Model and Multi-level Governance

This post is handy for week 4 of POLU9UK https://paulcairney.wordpress.com/policymaking-in-the-uk/

Paul Cairney: Politics & Public Policy


(podcast download)

A stark comparison between the ‘Westminster Model’ (WM) and Multi-level Governance (MLG) allows us to consider the difference between accountable government and the messy real world of policymaking. The WM may be used as an ideal-type to describe how power is centralized in the hands of a small number of elites:

  • We rely on representative, not participatory, democracy.
  • The plurality electoral system exaggerates the parliamentary majority of the biggest party and allows it to control Parliament.
  • A politically neutral civil service acts according to ministerial wishes.
  • The prime minister controls cabinet and ministers.

We may also identify an adversarial style of politics and a ‘winner takes all’ mentality which tends to exclude opposition parties. The government is responsible for the vast majority of public policy and it uses its governing majority, combined with a strong party ‘whip’ to make sure that its legislation is passed by Parliament…



Filed under Uncategorized

Realistic ‘realist’ reviews: why do you need them and what might they look like?

This discussion is based on my impressions so far of realist reviews and the potential for policy studies to play a role in their effectiveness. The objectives section formed one part of a recent team bid for external funding (so, I acknowledge the influence of colleagues on this discussion, but not enough to blame them personally). We didn’t get the funding, but at least I got a lengthy blog post and a dozen hits out of it.

I like the idea of a ‘realistic’ review of evidence to inform policy, alongside a promising uptake in the use of ‘realist review’. The latter doesn’t mean realistic: it refers to a specific method or approach – realist evaluation, realist synthesis.

The agenda of the realist review already takes us along a useful path towards policy relevance, driven partly by the idea that many policy and practice ‘interventions’ are too complex to be subject to meaningful ‘systematic review’.

The systematic review’s aim – which we should be careful not to caricature – may be to identify something as close as possible to a general law: if you do X, the result will generally be Y, and you can be reasonably sure because the studies (such as randomised control trials) meet the ‘gold standard’ of research.

The realist review’s aim, by contrast, is to focus extensively on the context in which interventions take place: if you do X, the result will be Y under these conditions. So, for example, you identify the outcome that you want, the mechanism that causes it, and the context in which the mechanism causes the outcome. Maybe you’ll even include a few more studies, not meeting the ‘gold standard’, if they meet other criteria of high quality research (I declare that I am a qualitative researcher, so you can tell who I’m rooting for).

Realist reviews come increasingly with guide books and discussions on how to do them systematically. However, my impression is that when people do them, they find that there is an art to applying discretion to identify what exactly is going on. It is often difficult to identify or describe the mechanism fully (often because source reports are not clear on that point), say for sure it caused the outcome even in particular circumstances, and separate the mechanism from the context.

I italicised the last point because it is super-important. I think that it is often difficult to separate mechanism from context because (a) the context is often associated with a particular country’s political system and governing arrangements, and (b) it might be better to treat governing context as another mechanism in a notional chain of causality.

In other words, my impression is that realist reviews focus on the mechanism at the point of delivery; the last link in the chain in which the delivery of an intervention causes an outcome. It may be wise to also identify the governance mechanism that causes the final mechanism to work.

Why would you complicate an already complicated review?

I aim to complicate things then simplify them heroically at the end.

Here are five objectives that I maybe think we should pursue in an evidence review for policymakers (I can’t say for sure until we all agree on the principles of science advice):

  1. Focus on ways to turn evidence into feasible political action, identifying a clear set of policy conditions and mechanisms necessary to produce intended outcomes.
  2. Produce a manageable number of simple lessons and heuristics for policymakers, practitioners, and communities.
  3. Review a wider range of evidence sources than in traditional systematic reviews, to recognise the potential trade-offs between measures of high quality and high impact evidence.
  4. Identify a complex policymaking environment in which there is a need to connect the disparate evidence on each part of the ‘causal chain’.
  5. Recognise the need to understand individual countries and their political systems in depth, to know how the same evidence will be interpreted and used very differently by actors in different contexts.

Objective 1: evidence into action by addressing the politics of evidence-based policymaking

There is no shortage of scientific evidence of policy problems. Yet, we lack a way to use evidence to produce politically feasible action. The ‘politics of evidence-based policymaking’ produces scientists frustrated with the gap between their evidence and a proportionate policy response, and politicians frustrated that evidence is not available in a usable form when they pay attention to a problem and need to solve it quickly. The most common responses in key fields, such as environmental and health studies, do not solve this problem. The literature on ‘barriers’ between evidence and policy recommends initiatives such as: clearer scientific messages, knowledge brokerage and academic-practitioner workshops, timely engagement in politics, scientific training for politicians, and participation to combine evidence and community engagement.

This literature makes limited reference to policy theory and has two limitations. First, studies focus on reducing empirical uncertainty, not ‘framing’ issues to reduce ambiguity. Too many scientific publications go unread in the absence of a process of persuasion to influence policymaker demand for that information (particularly when more politically relevant and paywall-free evidence is available elsewhere). Second, few studies appreciate the multi-level nature of political systems or understand the strategies actors use to influence policy. This involves experience and cultural awareness to help learn: where key decisions are made, including in networks between policymakers and influential actors; the ‘rules of the game’ of networks; how to form coalitions with key actors; and, that these processes unfold over years or decades.

The solution is to produce knowledge that will be used by policymakers, community leaders, and ‘street level’ actors. It requires a (23%) shift in focus from the quality of scientific evidence to (a) who is involved in policymaking and the extent to which there is a ‘delivery chain’ from national to local, and (b) how actors demand, interpret, and use evidence to make decisions. For example, simple qualitative stories with a clear moral may be more effective than highly sophisticated decision-making models or quantitative evidence presented without enough translation.

Objective 2: produce simple lessons and heuristics

We know that the world is too complex to fully comprehend, yet people need to act despite uncertainty. They rely on ‘rational’ methods to gather evidence from sources they trust, and ‘irrational’ means to draw on gut feeling, emotion, and beliefs as short cuts to action (system 2 and system 1 thinking, respectively). Scientific evidence can help reduce some uncertainty, but it cannot tell people how to behave. Scientific information strategies can be ineffective when they expect audiences to appreciate the detail and scale of evidence, understand the methods used to gather it, and possess the skills to interpret and act on it. The unintended consequence is that key actors fall back on familiar heuristics and pay minimal attention to inaccessible scientific information. The solution is to tailor evidence reviews to audiences: examining their practices and ways of thinking; identifying the heuristics they use; and, describing simple lessons and new heuristics and practices.

Objective 3: produce a pragmatic review of the evidence

To review a wider range of evidence sources than in traditional systematic reviews is to recognise the trade-offs between measures of high quality (based on a hierarchy of methods and journal quality) and high impact (based on familiarity and availability). If scientists reject and refuse to analyse evidence that policymakers routinely take more seriously (such as the ‘grey’ literature), they have little influence on key parts of policy analysis. Instead, provide a framework that recognises complexity but produces research that is manageable at scale and translatable into key messages:

  • Context. Identify the role of factors described routinely by policy theories as the key parts of policy environments: the actors involved in multiple policymaking venues at many levels of government; the role of informal and formal rules of each venue; networks between policymakers and influential actors; socio-economic conditions; and, the ‘paradigms’ or ways of thinking that underpin the consideration of policy problems and solutions.
  • Mechanisms. Focus on the connection between three mechanisms: the cause of outcomes at the point of policy delivery (intervention); the cause of ‘community’ or individual ‘ownership’ of effective interventions; and, the governance arrangements that support high levels of community ownership and the effective delivery of the most effective interventions. These connections are not linear. For example, community ownership and effective interventions may develop more usefully from the ‘bottom up’, scientists may convince national but not local policymakers of the value of interventions (or vice versa), or political support for long term strategies may only be temporary or conditional on short term measures of success.
  • Outcomes. Identify key indicators of good policy outcomes in partnership with the people you need to make policy work. Work with those audiences to identify a small number of specific positive outcomes, and synthesise the best available evidence to explain which mechanisms produce those outcomes under the conditions associated with your region of study.

This narrow focus is crucial to the development of a research question, limiting analysis to the most relevant studies to produce a rigorous review in a challenging timeframe. Then, the idea from realist reviews is that you ‘test’ your hypotheses and clarify the theories that underpin this analysis. This should involve a test for political as well as technical feasibility: speak regularly with key actors to gauge the likelihood that the mechanisms you recommend will be acted upon, the extent to which the context of policy delivery is stable and predictable, and whether the mechanisms will work consistently under those conditions.

Objective 4: identify key links in the ‘causal chain’ via interdisciplinary study

We all talk about combining perspectives from multiple disciplines, but in this case it is essential. For example, health or environmental scientists can identify the most effective interventions to produce good health or environmental outcomes, but not how to work with and influence key people. Policy scholars can identify how the policy process works and how to maximise the use of scientific evidence within it. Social science scholars can identify mechanisms to encourage community participation and the ownership of policies. Anthropologists can provide insights on the particular cultural practices and beliefs underpinning the ways in which people understand and act according to scientific evidence.

Perhaps more importantly, interdisciplinarity provides political cover: ‘we gathered the best minds from many disciplines and locked them in a room until they produced an answer’.

We need this cover for something I’ll call ‘informed extrapolation’ and justify with reference to pragmatism: if we do not provide well-informed analyses of the links between each mechanism, other less-informed actors will fill the gap without appreciating key aspects of causality. For example, if we identify a mechanism for the delivery of successful interventions – e.g. high levels of understanding and implementation of key procedures – there is still uncertainty: do these mechanisms develop organically through ‘bottom up’ collaboration or can they be introduced quickly from the ‘top’ to address an urgent issue? A simple heuristic for central governments could be to introduce training immediately or to resist the temptation for a quick fix.

Relatively-informed analysis, to recommend one of those choices, may only be used if we can back it up with interdisciplinary weight and produce recommendations that are unequivocal (although, again, other approaches are available).

Objective 5: focus intensively on one region, and one key issue, not ‘one size fits all’

We need to understand individual countries or regions – their political systems, communities, and cultural practices – and specific issues in depth, to know how abstract mechanisms work in concrete contexts, and how the same evidence will be interpreted and used differently by actors in those contexts. We need to avoid politically insensitive approaches based on the assumption that a policy that works in countries like (say) the UK will work in countries that are not (say) the UK, and/ or that actors in each country will understand policy problems in the same way.

But why?

It all looks incredibly complicated, doesn’t it? There’s no time to do all that, is there? It will end up as a too-rushed jumble of high- and low-quality evidence and advice, won’t it?

My argument is that these problems are actually virtues because they provide more insight into how busy policymakers will gather and use evidence. Most policymakers will not know how to do a systematic review or understand why you are so attached to them. Maybe you’ll impress them enough to get them to trust your evidence, but have you put yourself into a position to know what they’ll do with it? Have you thought about the connection between the evidence you’ve gathered, what people need to do, who needs to do it, and who you need to speak to about getting them to do it? Maybe you don’t have to, if you want to be no more than a ‘neutral scientist’ or ‘honest broker’ – but you do if you want to give science advice to policymakers that policymakers can use.

 


Filed under Evidence Based Policymaking (EBPM), public policy

Principles of science advice to government: key problems and feasible solutions

Q: can we design principles of science advice to government to be universal, exhaustive, coherent, clearly defined, and memorable?

If not, we need to choose between these requirements. So, who should get to choose and what should their criteria be?

I provide six scenarios to help us make clear choices between trade-offs. Please enjoy the irony of a 2000-word post calling for a small number of memorable heuristics.


In 2015, the World Science Forum declared the value of scientific advice to government and called for a set of principles to underpin the conduct of people giving that advice, based on principles including transparency, visibility, responsibility, integrity, independence, and accountability. INGSA is taking this recommendation forward, with initial discussions led by Peter Gluckman, James Wilsdon and Daniel Sarewitz and built on many existing documents outlining those principles, followed by consultation and key contributions from people like Heather Douglas and Marc Saner. Here is Marc Saner summing up the pre-conference workshop, and David Mair inviting us to reflect on our aims:


I outline some of those points of tension in this huge table, paraphrasing three days of discussion before and during INGSA’s Science and Policymaking conference in September 2016.


Here is Dan Sarewitz inviting scientists to reject a caricature of science and the idea that scientists can solve problems simply by producing evidence.

Note: the links in the table don’t work! Here they are: frame, honest brokers, new and diverse generation

Two solutions: a mega-document or a small set of heuristics

One solution to this problem is a super-document incorporating all of the points of all key players. The benefit is that we can present it as a policy solution in the knowledge that (a) very few people will read the document, (b) anyone will be able to find their points in it, and (c) it will be too long and complicated for many people to identify serious contradictions between different beliefs about how to supply and demand science advice. It would literally have weight (if you printed it out) but would not be used as a common guide for scientists and government audiences across the globe, except perhaps as a legitimising document (‘I adhered to the principles’).

Another solution is to produce a super-short document, built on a rationale that should be familiar to anyone giving science advice to policymakers: tell people only the information you expect them to remember, from a brief conversation in the elevator or a half-page document. In other words, the world is complex but we need to simplify it to allow us to act or, at least, to get the attention of your audience. We tell this to scientists advising government – keep it brief and accessible to encourage simple but effective heuristics – but the same point applies to scientists themselves. They may have huge brains, but they also make decisions based on ‘rational’ and ‘irrational’ shortcuts to information. So, giving them a small set of simple and memorable rules will trump a long and worthy but forgettable document.

Producing heuristics for science advice is a political exercise

This is no mean feat because the science community will inevitably produce a large number of different and often-contradictory recommendations for science advice principles. Turning them into a small set of rules is an exercise of power to decide which interpretation of the rules counts and whose experiences they most reflect.

Many scientists would like to think that we can produce a solution to this problem by gathering evidence and seeking consensus, but life is not that simple: we have different values, understandings of the world, priorities and incentives, and there comes a point when you have to make choices which produce winners and losers. So, let’s look at a few options and you can tell me which one you’d choose (or suggest your own in the comments section).

To be honest, I’m finding it difficult to know which principle links to which practices, and whether some principles are synonymous enough to lump together. Indeed, in options 3 and 4 the authors have modified the original principles listed by the WSF (from responsibility, integrity, independence, accountability, transparency, visibility). Note how much easier it is to remember option 1 which, I think, is the most naïve and least useful option. As in life, the more nuanced accounts are harder to explain and remember.

Option 1: the neutral scientist

  • Demonstrate independence by collaborating only with scientists
  • Demonstrate transparency and visibility by publishing all your data and showing your calculations
  • Demonstrate integrity by limiting your role to evidence and ‘speaking truth to power’
  • Demonstrate responsibility and accountability through peer review and other professional mechanisms for quality control

Option 2: the ‘honest broker’

  • Demonstrate independence by working with policymakers only when you demarcate your role
  • Demonstrate transparency and visibility by publishing your data and declaring your involvement in science advice
  • Demonstrate integrity by limiting your role to evidence and influencing the search for the right question or providing evidence-based options, not explicit policy advice
  • Demonstrate responsibility and accountability through peer review and other professional mechanisms for quality control

Option 3: my interpretation of the Wilsdon and Sarewitz opening gambit

  • Demonstrate independence by communicating while free of political influence, declaring your interests, and making your role clear
  • Demonstrate transparency and visibility by publishing your evidence as quickly and fully as possible
  • Demonstrate integrity by limiting your role to the role of intellectually free ‘honest broker’, while respecting the limits to your advice in a democratic system
  • Demonstrate diversity by working across scientific disciplines and using knowledge from ‘civil society’
  • Demonstrate responsibility and accountability through mechanisms including peer review and other professional means for quality control, public dialogue, and the development of clear lines of institutional accountability.

Option 4: my interpretation of the Heather Douglas modification

  • Demonstrate integrity by having proper respect for inquiry (including open mindedness to the results, and reflective on how one’s values influence interpretation)
  • Take responsibility for the production of advice which is scientifically accurate, explained well and in a transparent way (clearly, and with an openness about the values underpinning judgements), and responsive to societal concerns
  • Demonstrate accountability to the expert community by encouraging other scientists to ‘call them out’ for misjudgements, and to advisees by encouraging them to probe the values underpinning science advice.
  • Demonstrate independence by rejecting illegitimate political interference (e.g. expert selection, too-specific problem definition, dictating or altering scientific results)
  • Demonstrate legitimacy by upholding these principles and complementary values (such as to encourage diversity of participation)

Here is Heather Douglas explaining the links between each principle at the pre-conference workshop (and here is her blog post):


Option 5: the responsible or hesitant advocate

  • Demonstrate independence by working closely with policymakers but establishing the boundaries between science advice and collective action
  • Demonstrate transparency and visibility by publishing your data, declaring your involvement in science advice, and declaring the substance of your advice
  • Demonstrate integrity by making a sincere attempt to link policy advice to the best available evidence tailored to the legitimate agenda of elected policymakers
  • Demonstrate responsibility and accountability through professional mechanisms for quality control and institutional mechanisms to record the extent of your involvement in policy decisions

Option 6: the openly political advocate for evidence-based policy

  • Demonstrate independence by establishing an analytically distinct role for ‘scientific thinking’ within collective political choice
  • Demonstrate transparency and visibility by publishing relevant scientific data, and declaring the extent of your involvement in policy advice
  • Demonstrate integrity by making a sincere attempt to link policy advice to the best available evidence tailored to the legitimate agenda of elected policymakers
  • Demonstrate responsibility and accountability through professional mechanisms for quality control and institutional mechanisms to reward or punish your judgement calls
  • Demonstrate the effectiveness of evidence-based policy by establishing a privileged position in government for mechanisms obliging policymakers to gather and consider scientific evidence routinely in decisions
  • Select any legitimate strategy necessary – telling stories, framing problems, being entrepreneurial when proposing solutions, leading and brokering discussions – to ensure that policies are based on scientific evidence.

See Also:

Expediting evidence synthesis for healthcare decision-making: exploring attitudes and perceptions towards rapid reviews using Q methodology

The article distinguishes between (for example) a purist position on systematic reviews, privileging scientific criteria, and a pragmatic position on rapid reviews, privileging the needs of policymakers.

See also:

The INGSA website http://www.ingsa.org/

See also ‘The Brussels Declaration’

 


Filed under Evidence Based Policymaking (EBPM), public policy

Policymaking in the UK: do you really know who is in charge and who to blame? #POLU9UK

This week, we continue with the idea of two stories of British politics. In one, the Westminster model-style story, the moral is that the centralisation of power produces clear lines of accountability: you know who is in charge and, therefore, the heroes or villains. In another, the complex government story, the world seems too messy and power too diffuse to know all the main characters.

Although some aspects of these stories are specific to the UK, they relate to some ‘universal’ questions and concepts that we can use to identify the limits to centralised power. Put simply, some rather unrealistic requirements for the Westminster story include:

  1. You know what policy is, and that it is made by a small number of actors at the heart of government.
  2. Those actors possess comprehensive knowledge about the problems and solutions they describe.
  3. They can turn policy intent into policy outcomes in a straightforward way.

If life were that simple, I wouldn’t be asking you to read the following blog posts (underlined) which complicate the hell out of our neat story:

You don’t know what policy is, and it is not only made by a small number of actors at the heart of government.

We don’t really know what government policy is. In fact, we don’t even know how to define ‘public policy’ that well. Instead, a definition like ‘the sum total of government action, from signals of intent to the final outcomes’ raises more issues than it settles: policy is remarkably difficult to identify and measure; it is made by many actors inside, outside, and sort of inside/outside government; the boundary between the people influencing and making policy is unclear; and, the study of policy is often about the things governments don’t do.

Actors don’t possess comprehensive knowledge about the problems and solutions they describe

It’s fairly obvious that no-one possesses all possible information about policy problems and the likely effects of proposed solutions. It’s not obvious what happens next. Classic discussions identified a tendency to produce ‘good enough’ decisions based on limited knowledge and cognitive ability, or to seek other measures of ‘good’ policy such as their ability to command widespread consensus (and no radical movement away from such policy settlements). Modern discussions offer us a wealth of accounts of the implications of ‘bounded rationality’, but three insights stand out:

  1. Policymakers pay disproportionate attention to a tiny proportion of the issues for which they are responsible. There is great potential for punctuations in policy/ policymaking when their attention lurches, but most policy is made in networks in the absence of such attention.
  2. Policymakers combine ‘rational’ and ‘irrational’ ways to make decisions with limited information. The way they frame problems limits their attention to a small number of possible solutions, and that framing can be driven by emotional/ moral choices backed up with a selective use of evidence.
  3. It is always difficult to describe this process as ‘evidence-based policymaking’ even when policymakers have sincere intentions.

Policymakers cannot turn policy intent into policy outcomes in a straightforward way

The classic way to describe straightforward policymaking is with reference to a policy cycle and its stages. This image of a cycle was cooked up by marketing companies trying to sell hula hoops to policymakers and interest groups in the 1960s. It is not an accurate description of policymaking (but spirographs are harder to sell).

Instead, for decades we have tried to explain the ‘gap’ between the high expectations of policymakers and the actual – often dispiriting – outcomes, or wonder if policymakers really have such high expectations for success in the first place (or if they prefer to focus on how to present any of their actions as successful). This was a key topic before the rise of ‘multi-level governance’ and the often-deliberate separation of central government action and expected outcomes.

The upshot: in Westminster systems do you really know who is in charge and who to blame?

These factors combine to generate a sense of complex government in which it is difficult to identify policy, link it to the ‘rational’ processes associated with a small number of powerful actors at the heart of government, and trace a direct line from their choices to outcomes.

Of course, we should not go too far to argue that governments don’t make a difference. Indeed, many ministers have demonstrated the amount of damage (or good) you can do in government. Still, let’s not assume that the policy process in the UK is anything like the story we tell about Westminster.

Seminar questions

In the seminar, I’ll ask you to reflect on these limits and what they tell us about the ‘Westminster model’. We’ll start by me asking you to summarise the main points of this post. Then, we’ll get into some examples in British politics.

Try to think of some relevant examples of what happens when, for example, ministers seem to make quick and emotional (rather than ‘evidence based’) decisions: what happens next? Some obvious examples – based on our discussions so far – include the Iraq War and the ‘troubled families’ agenda, but please bring some examples that interest you.

In group work, I’ll invite you to answer these questions:

  1. What is UK government policy on X? Pick a topic and tell me what government policy is.
  2. How did the government choose policy? When you decide what government policy is, describe how it made its choices.
  3. What were the outcomes? When you identify government policy choices, describe their impact on policy outcomes.

I’ll also ask you to identify at least one blatant lie in this blog post.


Filed under POLU9UK, Uncategorized

Some rubbish cultural references for lecturers

As an ageing lecturer, I often find that my cultural references generally fall flat with late teenage/ early 20s students. Still, I persevere because I forget which ones are completely useless, which ones still work if you explain them a little bit, and which ones work again because there has been a film remake. So, here is a repository to help me remember, followed by some tweets by similarly concerned colleagues:

Things that still just about work

The Matrix seems to work for just about everything, which means that it works for nothing (‘I remember we talked about it, but what was the point again?’)


A JFK scene sort of works as a vague reference to ‘the whole system’ (but undermines regression analysis of discrete variables)


Mr Robot might work – eventually – if you don’t have to subscribe to Amazon to get it


Things that don’t work

Don’t refer to a ‘sliding doors moment‘ in British politics unless you want to look like a lecturer stuck in the 90s or a wellness guru


I have also almost given up on describing ‘universal’ policy concepts in relation to an old Martini advert from my childhood (‘any time, any place, anywhere’).

Things that work again but don’t really

Ghostbusters works again because of the remake, but mostly serves as a reminder that my multiple streams analogy doesn’t work because of the number of streams


Things that work only if you invest too much time

King Canute works if you really go to town with the explanation, but the Emperor’s New Clothes is surprisingly not-memorable


If you like going all meta, I recommend the ‘Darmok’ episode of Star Trek: The Next Generation. On the one hand, the episode is pretty much about two characters getting frustrated for ages about not understanding each other because they have no common cultural, historical, or language-based points to which to refer. So, it is perfect to explain the problem you’ll have with your students. You could even shout ‘DARMOK AND JALAD AT TANAGRA’ a few times. On the other hand, they won’t have seen it, so you’ll have to explain the show, the episode, and your original point, before everyone gets bored. Then you’ll realise that you’ve spent 20 minutes on the whole exercise when you could have just said ‘tell me if this metaphor or cultural reference does not work’ in 3 seconds.

Things that don’t work and should be avoided

Of course, these cultural references generally refer to a very specific culture, which runs the risk of excluding some people in a group if only a select number of people ‘get it’.

I say this partly to clever-up my recent silly attempt to describe a politician’s Road to Damascus moment while giving an Erasmus talk in the Czech Republic. The old Christian tales don’t always travel well, especially in former communist states.


The moral of the story

In each case, I reckon you can (a) begin by thinking that a cultural reference will point people to a shared story that you can use as a shortcut to the memory and (b) help build a memorable scholarly point, but (c) end up by producing an unexpected shared story in which you are the clownish figure, not the hero (‘remember when that guy told us about a scene in Seinfeld and we had no idea what he was talking about?’).

https://twitter.com/jackiantonovich/status/1425200751891124231?s=09

2022 update of the highest quality shows to avoid because no one else saw them:


Filed under Uncategorized