Tag Archives: policy theory

How can political actors take into account the limitations of evidence-based policy-making? 5 key points

These notes are for my brief panel talk at the European Parliament-European University Institute ‘Policy Roundtable’: Evidence and Analysis in EU Policy-Making: Concepts, Practice and Governance. As you can see from the programme description, the broader theme is about how EU institutions demonstrate their legitimacy through initiatives such as stakeholder participation and evidence-based policymaking (EBPM). So, part of my talk is about what happens when EBPM does not exist.

The post is a slightly modified version of my (recorded) talk for Open Society Foundations (New York) but different audiences make sense of these same basic points in very different ways.

  1. Recognise that the phrase ‘evidence-based policy-making’ means everything and nothing

The main limitation to ‘evidence-based policy-making’ is that no-one really knows what it is or what the phrase means. So, each actor makes sense of EBPM in different ways and you can tell a lot about each actor by the way in which they answer these questions:

  • Should you use restrictive criteria to determine what counts as ‘evidence’? Some actors equate evidence with scientific evidence and adhere to specific criteria – such as evidence-based medicine’s hierarchy of evidence – to determine what is scientific. Others have more respect for expertise, professional experience, and stakeholder and service user feedback as sources of evidence.
  • Which metaphor is best: ‘evidence based’ or ‘evidence informed’? Experienced policy participants often reject ‘evidence based’ as unrealistic, preferring ‘informed’ to reflect pragmatism about mixing evidence and political calculations.
  • How far do you go to pursue EBPM? It is unrealistic to treat ‘policy’ as a one-off statement of intent by a single authoritative actor. Instead, it is made and delivered by many actors in a continuous policymaking process within a complicated policy environment (outlined in point 3). This is relevant to EU institutions with limited resources: the Commission often makes key decisions but relies on Member States to make and deliver policy, and the Parliament may only have the ability to monitor ‘key decisions’. It is also relevant to stakeholders trying to ensure the use of evidence throughout the process, from supranational to local action.
  • Which actors count as policymakers? Policymaking is done by ‘policymakers’, but many are unelected and the division between policymaker/ influencer is often unclear. The study of policymaking involves identifying networks of decision-making by elected and unelected policymakers and their stakeholders, while the actual practice is about deciding where to draw the line between influence and action.
  2. Respond to ‘rational’ and ‘irrational’ thought.

‘Comprehensive rationality’ describes the absence of ambiguity and uncertainty when policymakers know what problem they want to solve and how to solve it, partly because they can gather and understand all information required to measure the problem and determine the effectiveness of solutions.

Instead, we talk of ‘bounded rationality’ and how policymakers deal with it. They employ two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, habits, and familiarity to make decisions quickly.

I say ‘irrational’ provocatively, to raise a key strategic question: do you criticise emotional policymaking (describing it as ‘policy based evidence’) and try somehow to minimise it, adapt pragmatically to it, or see ‘fast thinking’ more positively in terms of ‘fast and frugal heuristics’? Regardless, policymakers will think that their heuristics make sense to them, and it can be counterproductive to simply criticise their alleged irrationality.

  3. Think about how to engage in complex systems or policy environments.

‘Policy cycle’ describes the idea that there is a core group of policymakers at the ‘centre’, making policy from the ‘top down’, and pursuing their goals in a series of clearly defined and well-ordered stages, such as: agenda setting, policy formulation, legitimation, implementation, and evaluation. In this context, one might identify how to influence a single point of decision in central government.

However, a cycle model does not describe policymaking well. Instead, we tend to identify the role of less ordered and more unpredictable complex systems, or policy environments containing:

  • A wide range of actors (individuals and organisations) influencing policy at many levels of government. Scientists and practitioners are competing with many actors to present evidence in a particular way to secure a policymaker audience.
  • A proliferation of rules and norms maintained by different levels or types of government. Support for particular ‘evidence based’ solutions varies according to which organisation takes the lead and how it understands the problem.
  • Important relationships (‘networks’) between policymakers and powerful actors. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour particular sources of evidence and some participants over others, and there is a language – indicating what ways of thinking are in good ‘currency’ – that takes time to learn.
  • A tendency for certain ‘core beliefs’ or ‘paradigms’ to dominate discussion. Well-established beliefs provide the context for policymaking: new evidence on the effectiveness of a policy solution has to be accompanied by a shift of attention and successful persuasion.
  • Policy conditions and events that can reinforce stability or prompt policymaker attention to lurch at short notice. In some cases, social or economic ‘crises’ can prompt lurches of attention from one issue to another, and some forms of evidence can be used to encourage that shift, but major policy change is rare.

For stakeholders, an effective engagement strategy is not straightforward: it takes time to know ‘where the action is’, how and where to engage with policymakers, and with whom to form coalitions. For the Commission, it is difficult to know what will happen to policy after it is made (although we know the end point will not resemble the starting point). For the Parliament, it is difficult even to know where to look.

  4. Recognise that EBPM is only one of many legitimate ‘good governance’ principles.

There are several principles of ‘good’ policymaking and only one is EBPM. Others relate to the value of pragmatism and consensus building, combining science advice with public values, improving policy delivery by generating ‘ownership’ of policy among key stakeholders, and sharing responsibility with elected national and local policymakers.

Our choices about which principles and which forms of evidence to privilege are inextricably linked. For example, some forms of evidence gathering seem to require uniform models and limited local or stakeholder discretion to modify policy delivery. The classic example is a programme whose value is established using randomised control trials (RCTs). Others begin with local discretion, seeking evidence from stakeholders, professional groups, and service user and local practitioner experience. This principle seems to rule out the use of RCTs, at least as a source of a uniform model to be rolled out and evaluated. Of course, one can try to pursue both approaches and a compromise between them, but the outcome may not satisfy advocates of either approach to EBPM or help produce the evidence that they favour.

  5. Decide how far you’ll go to achieve EBPM.

These insights should prompt us to ask how far we are willing, and how far we should be willing, to go to promote the use of certain forms of evidence in policymaking:

  • If policymakers and the public are emotional decision-makers, should we seek to manipulate their thought processes by using simple stories with heroes, villains, and clear but rather simplistic morals?
  • If policymaking systems are so complex, should stakeholders devote huge amounts of resources to make sure they’re effective at each stage?
  • Should proponents of scientific evidence go to great lengths to make sure that EBPM is based on a hierarchy of evidence? There is a live debate on science advice to government about the extent to which scientists should be more than ‘honest brokers’.
  • Should policymakers try to direct the use of evidence in policy as well as policy itself?

Where we go from there is up to you

The value of policy theory to this topic is to help us reject simplistic models of EBPM and think through the implications of more sophisticated and complicated processes. It does not provide a blueprint for action (how could it?), but instead a series of questions that you should answer when you seek to use evidence to get what you want. They are political choices based on value judgements, not issues that can be resolved by producing more evidence.

 


Filed under Evidence Based Policymaking (EBPM), public policy

Writing a policy paper and blog post #POLU9UK

It can be quite daunting to produce a policy analysis paper or blog post for the first time. You learn about the constraints of political communication by being obliged to explain your ideas in an unusually small number of words. The short word limit seems good at first, but then you realise that it makes your life harder: how can you fit all your evidence and key points in? The answer is that you can’t. You have to choose what to say and what to leave out.

You also have to make this presentation ‘not about you’. In a long essay or research report you have time to show how great you are, to a captive audience. In a policy paper, imagine that you are trying to get the attention and support of someone who may not know or care about the issue you raise. In a blog post, your audience might stop reading at any point, so every sentence counts.

There are many guides out there to help you with the practical side, including the broad guidance I give you in the module guide, and Bardach’s 8-steps. In each case, the basic advice is to (a) identify a policy problem and at least one feasible solution, and (b) tailor the analysis to your audience.

[Figure: Bardach’s eight steps of policy analysis]

Be concise, be smart

So, for example, I ask you to keep your analysis and presentations super-short on the assumption that you have to make your case quickly to people with 99 other things to do. What can you tell someone in a half-page (to get them to read all 2 pages)? Could you explain and solve a problem if you suddenly bumped into a government minister in a lift/ elevator?

It is tempting to try to tell someone everything you know, because everything is connected and to simplify is to describe a problem simplistically. Instead, be smart enough to know that such self-indulgence won’t impress your audience. They might smile politely, but their eyes are looking at the elevator lights.

Your aim is not to give a full account of a problem – it’s to get someone important to care about it.

Your aim is not to give a painstaking account of all possible solutions – it’s to give a sense that at least one solution is feasible and worth pursuing.

Your guiding statement should be: policymakers will only pay attention to your problem if they think they can solve it without the solution being too costly.

Be creative

I don’t like to give you too much advice because I want you to be creative about your presentation; to be confident enough to take chances and feel that I’ll reward you for making the leap. At the very least, you have three key choices to make about how far you’ll go to make a point:

  1. Who is your audience? Our discussion of the limits to centralised policymaking suggests that your most influential audience will not necessarily be a UK government minister – but who else would it be?
  2. How manipulative should you be? Our discussions of ‘bounded rationality’ and ‘evidence-based policymaking’ suggest that policymakers combine ‘rational’ and ‘irrational’ shortcuts to gather information and make choices. So, do you appeal to their desire to set goals and gather a lot of scientific information and/or make an emotional and manipulative appeal?
  3. Are you an advocate or an ‘honest broker’? Contemporary discussions of science advice to government highlight unresolved debates about the role of unelected advisors: should you simply lay out some possible solutions or advocate one solution strongly?

Be reflective

For our purposes, there are no wrong answers to these questions. Instead, I want you to make and defend your decisions. That is the aim of your policy paper ‘reflection’: to ‘show your work’.

You still have some room to be creative: tell me what you know about policy theory and British politics and how it informed your decisions. Here are some examples, but it is up to you to decide what to highlight:

  • Show how your understanding of policymaker psychology helped you decide how to present information on problems and solutions.
  • Extract insights from policy theories, such as from punctuated equilibrium theory on policymaker attention, multiple streams analysis on timing and feasibility, or the NPF on how to tell persuasive stories.
  • Explore the implications of the lack of ‘comprehensive rationality’ and absence of a ‘policy cycle’: feasibility is partly about identifying the extent to which a solution is ‘doable’ when central governments have limited powers. What ‘policy style’ or policy instruments would be appropriate for the solution you favour?

Be a blogger

With a blog post, your audience is wider. You are trying to make an argument that will capture the attention of a more general audience (interested in politics and policy, but not a specialist) that might access your post from Twitter/ Facebook or via a search engine. This produces new requirements: present a ‘punchy’ title which sums up the whole argument in under 140 characters (a statement is often better than a vague question); summarise the whole argument in (say) 100 words in the first paragraph (what is the problem and solution?); and provide more information up to a maximum of 500 words. The reader can then be invited to read the whole policy analysis.
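
If you like to check these limits mechanically before submitting, here is a minimal sketch in Python. It assumes a draft saved as a plain-text file with the title on the first line; the file name, function name, and default thresholds are my own illustrative choices, not part of the module requirements.

# Illustrative sketch: check a draft blog post against the rough limits above
# (title under 140 characters, roughly 100-word opening paragraph, 500 words overall).
# The file name "draft_post.txt" and the default thresholds are assumptions.

def check_blog_post(path="draft_post.txt", title_chars=140,
                    intro_words=100, total_words=500):
    with open(path, encoding="utf-8") as f:
        text = f.read().strip()

    # Treat the first line as the title, and the first blank-line-separated
    # block after it as the opening paragraph.
    title, _, body = text.partition("\n")
    paragraphs = [p for p in body.split("\n\n") if p.strip()]
    intro = paragraphs[0] if paragraphs else ""

    return {
        "title_characters": len(title),
        "title_ok": len(title) <= title_chars,
        "intro_words": len(intro.split()),
        "intro_ok": len(intro.split()) <= intro_words,
        "total_words": len(body.split()),
        "total_ok": len(body.split()) <= total_words,
    }

if __name__ == "__main__":
    for check, value in check_blog_post().items():
        print(f"{check}: {value}")

None of this replaces editorial judgement: the counts only tell you whether you are in the right ballpark, not whether the argument is punchy.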

The style of blog posts varies markedly, so you should consult many examples before attempting your own (compare the LSE with The Conversation and newspaper columns to get a sense of variations in style). When you read other posts, take note of their strengths and weaknesses. For example, many posts associated with newspapers introduce a personal or case study element to ground the discussion in an emotional appeal. Sometimes this works, but sometimes it causes the reader to scroll down quickly to find the main argument. Consider whether it is as effective, or more effective, to make your argument direct and easy to find as soon as someone clicks the link on their phone. Many academic posts are too long (well beyond your 500-word limit), take too long to get to the point, and do not make explicit recommendations, so you should not merely emulate them. You should also not just chop down your policy paper – this is about a new kind of communication.

Be reflective once again

Hopefully, by the end, you will appreciate the transferable life skills. I have generated some uncertainty about your task to reflect the sense among many actors that they don’t really know how to make a persuasive case or to whom to make it. We can follow some basic Bardach-style guidance, but a lot of this kind of work relies on trial and error. I maintain a short word count to encourage you to get to the point, and I bang on about ‘stories’ in our module to encourage you to tell a short and persuasive story to policymakers.

This process seems weird at first, but isn’t it also intuitive? For example, next time you’re in my seminar, measure how long it takes you to get bored and look forward to the weekend. Then imagine that policymakers have the same attention span as you. That’s how long you have to make your case!

See also: Professionalism online with social media

Here is the advice that my former lecturer, Professor Brian Hogwood, gave in 1992. Has the advice changed much since then?

[Images: scanned pages of Professor Brian Hogwood’s 1992 advice]

 


Filed under Evidence Based Policymaking (EBPM), Folksy wisdom, POLU9UK

We need better descriptions than ‘evidence-based policy’ and ‘policy-based evidence’: the case of UK government ‘troubled families’ policy

Here is the dilemma for ‘evidence-based’ ‘troubled families’ policy: there are many indicators of ‘policy based evidence’ but few (if any) feasible and ‘evidence based’ alternatives.

Viewed from the outside, ‘troubled families’ (TF) policy looks like a cynical attempt to produce a quick fix to the London riots, stigmatise vulnerable populations, and hoodwink the public into thinking that the central government is controlling local outcomes and generating success.

Viewed from the inside, it is a pragmatic policy solution, informed by promising evidence which needs to be sold in the right way. For the UK government there may seem to be little alternative to this policy, given the available evidence, the need to do something for the long term and to account for itself in a Westminster system in the short term.

So, in this draft paper, I outline this disconnect between interpretations of ‘evidence based policy’ and ‘policy based evidence’ to help provide some clarity on the pragmatic use of evidence in politics:

cairney-offshoot-troubled-families-ebpm-5-9-16

See also:

Governments think it’s OK to use bad evidence to make good policy: the case of the UK Government’s ‘troubled families’

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

In each of these posts, I note that it is difficult to know how, for example, social policy scholars should respond to these issues – but that policy studies help us identify a choice between strategies. In general, pragmatic strategies to influence the use of evidence in policy include: framing issues to catch the attention of policymakers or manipulate their biases, identifying where the ‘action’ is in multi-level policymaking systems, and forming coalitions with like-minded and well-connected actors. In other words, to influence rather than just comment on policy, we need to understand how policymakers would respond to external evaluation. So, a greater understanding of the routine motives of policymakers can help produce more effective criticism of their problematic use of evidence. In social policy, there is an acute dilemma about the choice between engagement, to influence and be influenced by policymakers, and detachment, to ensure critical distance. If choosing the latter, we need to think harder about how criticism of ‘policy based evidence’ makes a difference.


Filed under agenda setting, Evidence Based Policymaking (EBPM), Prevention policy, public policy, UK politics and policy

The Politics of Evidence-based Policymaking in 2500 words

Here is a 2500-word draft of an entry to the Oxford Research Encyclopaedia (public administration and policy) on EBPM. It brings together some thoughts from previous posts and articles.

Evidence-based Policymaking (EBPM) has become one of many valence terms that seem difficult to oppose: who would not want policy to be evidence based? It appears to  be the most recent incarnation of a focus on ‘rational’ policymaking, in which we could ask the same question in a more classic way: who would not want policymaking to be based on reason and collecting all of the facts necessary to make good decisions?

Yet, as we know from classic discussions, there are three main issues with such an optimistic starting point. The first is definitional: valence terms only seem so appealing because they are vague. When we define key terms, and produce one definition at the expense of others, we see differences of approach and unresolved issues. The second is descriptive: ‘rational’ policymaking does not exist in the real world. Instead, we treat ‘comprehensive’ or ‘synoptic’ rationality as an ideal-type, to help us think about the consequences of ‘bounded rationality’ (Simon, 1976). Most contemporary policy theories have bounded rationality as a key starting point for explanation (Cairney and Heikkila, 2014). The third is prescriptive. Like EBPM, comprehensive rationality seems – initially – to be unequivocally good. Yet, when we identify its necessary conditions, or what we would have to do to secure this aim, we begin to question EBPM and comprehensive rationality as an ideal scenario.

‘What is evidence-based policymaking?’ is a lot like ‘What is policy?’, but more so!

Trying to define EBPM is like magnifying the problem of defining policy. As the entries in this encyclopaedia suggest, it is difficult to say what policy is and measure how much it has changed. I use the working definition, ‘the sum total of government action, from signals of intent to the final outcomes’ (Cairney, 2012: 5) not to provide something definitive, but to raise important qualifications, including: there is a difference between what people say they will do, what they actually do, and the outcome; and, policymaking is also about the power not to do something.

So, the idea of a ‘sum total’ of policy sounds intuitively appealing, but masks the difficulty of identifying the many policy instruments that make up ‘policy’ (and the absence of others), including: the level of spending; the use of economic incentives/ penalties; regulations and laws; the use of voluntary agreements and codes of conduct; the provision of public services; education campaigns; funding for scientific studies or advocacy; organisational change; and, the levels of resources/ methods dedicated to policy implementation and evaluation (2012: 26). In that context, we are trying to capture a process in which actors make and deliver ‘policy’ continuously, not identify a set-piece event providing a single opportunity to use a piece of scientific evidence to prompt a policymaker response.

Similarly, for the sake of simplicity, we refer to ‘policymakers’, but in the knowledge that it leads to further qualifications and distinctions, such as: (1) between elected and unelected participants, since people such as civil servants also make important decisions; and (2) between people and organisations, with the latter used as a shorthand to refer to a group of people making decisions collectively and subject to rules of collective engagement (see ‘institutions’). There are blurry dividing lines between the people who make policy and the people who influence it, and decisions are made by a collection of people with formal responsibility and informal influence (see ‘networks’). Consequently, we need to make clear what we mean by ‘policymakers’ when we identify how they use evidence.

A reference to EBPM provides two further definitional problems (Cairney, 2016: 3-4). The first is to define evidence beyond the vague idea of an argument backed by information. Advocates of EBPM are often talking about scientific evidence which describes information produced in a particular way. Some describe ‘scientific’ broadly, to refer to information gathered systematically using recognised methods, while others refer to a specific hierarchy of methods. The latter has an important reference point – evidence based medicine (EBM) – in which the aim is to generate the best evidence of the best interventions and exhort clinicians to use it. At the top of the methodological hierarchy are randomized control trials (RCTs) to determine the evidence, and the systematic review of RCTs to demonstrate the replicated success of interventions in multiple contexts, published in the top scientific journals (Oliver et al, 2014a; 2014b).

This reference to EBM is crucial in two main ways. First, it highlights a basic difference in attitude between the scientists proposing a hierarchy and the policymakers using a wider range of sources from a far less exclusive list of publications: ‘The tools and programs of evidence-based medicine … are of little relevance to civil servants trying to incorporate evidence in policy advice’ (Lomas and Brown 2009: 906).  Instead, their focus is on finding as much information as possible in a short space of time – including from the ‘grey’ or unpublished/non-peer reviewed literature, and incorporating evidence on factors such as public opinion – to generate policy analysis and make policy quickly. Therefore, second, EBM provides an ideal that is difficult to match in politics, proposing: “that policymakers adhere to the same hierarchy of scientific evidence; that ‘the evidence’ has a direct effect on policy and practice; and that the scientific profession, which identifies problems, is in the best place to identify the most appropriate solutions, based on scientific and professionally driven criteria” (Cairney, 2016: 52; Stoker 2010: 53).

These differences are summed up in the metaphor ‘evidence-based’ which, for proponents of EBM suggests that scientific evidence comes first and acts as the primary reference point for a decision: how do we translate this evidence of a problem into a proportionate response, or how do we make sure that the evidence of an intervention’s success is reflected in policy? The more pragmatic phrase ‘evidence-informed’ sums up a more rounded view of scientific evidence, in which policymakers know that they have to take into account a wider range of factors (Nutley et al, 2007).

Overall, the phrases ‘evidence-based policy’ and ‘evidence-based policymaking’ are less clear than ‘policy’. This problem puts an onus on advocates of EBPM to state what they mean, and to clarify if they are referring to an ideal-type to aid description of the real world, or advocating a process that, to all intents and purposes, would be devoid of politics (see below). The latter tends to accompany often fruitless discussions about ‘policy based evidence’, which seems to describe a range of mistakes by policymakers – including ignoring evidence, using the wrong kinds, ‘cherry picking’ evidence to suit their agendas, and/ or producing a disproportionate response to evidence – without describing a realistic standard to which to hold them.

For example, Haskins and Margolis (2015) provide a pie chart of ‘factors that influence legislation’ in the US, to suggest that research contributes 1% to a final decision compared to, for example, ‘the public’ (16%), the ‘administration’ (11%), political parties (8%) and the budget (8%). Theirs is a ‘whimsical’ exercise to lampoon the lack of EBPM in government (compare with Prewitt et al’s 2012 account built more on social science studies), but it sums up a sense in some scientific circles about their frustrations with the inability of the policymaking world to keep up with science.

Indeed, there is an extensive literature in health science (Oliver, 2014a; 2014b), emulated largely in environmental studies (Cairney, 2016: 85; Cairney et al, 2016), which bemoans the ‘barriers’ between evidence and policy. Some identify problems with the supply of evidence, recommending the need to simplify reports and key messages. Others note the difficulties in providing timely evidence in a chaotic-looking process in which the demand for information is unpredictable and fleeting. A final main category relates to a sense of different ‘cultures’ in science and policymaking, which can be addressed in academic-practitioner workshops (to learn about each other’s perspectives) and more scientific training for policymakers. The latter recommendation is often based on practitioner experiences and a superficial analysis of policy studies (Oliver et al, 2014b; Embrett and Randall, 2014).

EBPM as a misleading description

Consequently, such analysis tends to introduce reference points that policy scholars would describe as ideal-types. Many accounts refer to the notion of a policy cycle, in which there is a core group of policymakers at the ‘centre’, making policy from the ‘top down’, breaking down their task into clearly defined and well-ordered stages (Cairney, 2016: 16-18). The hope may be that scientists can help policymakers make good decisions by getting them as close as possible to ‘comprehensive rationality’ in which they have the best information available to inform all options and consequences. In that context, policy studies provides two key insights (2016; Cairney et al, 2016).

  1. The role of multi-level policymaking environments, not cycles

Policymaking takes place in less ordered and predictable policy environments, exhibiting:

  • a wide range of actors (individuals and organisations) influencing policy in many levels and types of government
  • a proliferation of rules and norms followed in different venues
  • close relationships (‘networks’) between policymakers and powerful actors
  • a tendency for certain beliefs or ‘paradigms’ to dominate discussion
  • policy conditions and events that can prompt policymaker attention to lurch at short notice.

A focus on this bigger picture shifts our attention from the use of scientific evidence by an elite group of elected policymakers at the ‘top’ to its use by a wide range of influential actors in a multilevel policy process. It shows scientists that they are competing with many actors to present evidence in a particular way to secure a policymaker audience. Support for particular solutions varies according to which organisation takes the lead and how it understands the problem. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour particular sources of evidence and some participants over others, and there is a language – indicating what ways of thinking are in good ‘currency’ – that takes time to learn. Well-established beliefs provide the context for policymaking: new evidence on the effectiveness of a policy solution has to be accompanied by a shift of attention and successful persuasion. In some cases, social or economic ‘crises’ can prompt lurches of attention from one issue to another, and some forms of evidence can be used to encourage that shift – but major policy change is rare.

  2. Policymakers use two ‘shortcuts’ to deal with bounded rationality and make decisions

Policymakers deal with ‘bounded rationality’ by employing two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, beliefs, habits, and familiar reference points to make decisions quickly. Consequently, the focus of policy theories is on the links between evidence, persuasion, and framing.

Framing refers to the ways in which we understand, portray, and categorise issues. Problems are multi-faceted, but bounded rationality limits the attention of policymakers, and actors compete to highlight one image at the expense of others. The outcome of this process determines who is involved in and responsible for policy (for example, portraying an issue as technical limits involvement to experts), how much attention they pay, and what kind of solution they favour. Scientific evidence plays a part in this process, but we should not exaggerate the ability of scientists to win the day with evidence. Rather, policy theories signal the strategies that actors use to increase demand for their evidence:

  • to combine facts with emotional appeals, to prompt lurches of policymaker attention from one policy image to another (True, Jones, and Baumgartner 2007)
  • to tell simple stories which are easy to understand, help manipulate people’s biases, apportion praise and blame, and highlight the moral and political value of solutions (Jones, Shanahan, and McBeth 2014)
  • to interpret new evidence through the lens of the pre-existing beliefs of actors within coalitions, some of which dominate policy networks (Weible, Heikkila, and Sabatier 2012)
  • to produce a policy solution that is feasible and exploit a time when policymakers have the opportunity to adopt it (Kingdon 1984).

Further, the impact of a framing strategy may not be immediate, even if it appears to be successful. Scientific evidence may prompt a lurch of attention to a policy problem, prompting a shift of views in one venue or the new involvement of actors from other venues. However, it can take years to produce support for an ‘evidence-based’ policy solution, built on its technical and political feasibility (will it work as intended, and do policymakers have the motive and opportunity to select it?).

EBPM as a problematic prescription

A pragmatic solution to the policy process would involve: identifying the key venues in which the ‘action’ takes place; learning the ‘rules of the game’ within key networks and institutions; developing framing and persuasion techniques; forming coalitions with allies; and engaging for the long term (Cairney, 2016: 124; Weible et al, 2012: 9-15). The alternative is to seek reforms to make EBPM in practice more like the EBM ideal.

Yet, EBM is defendable because the actors involved agree to make primary reference to scientific evidence and be guided by what works (combined with their clinical expertise and judgement). In politics, there are other – and generally more defendable – principles of ‘good’ policymaking (Cairney, 2016: 125-6). They include the need to legitimise policy: to be accountable to the public in free and fair elections, consult far and wide to generate evidence from multiple perspectives, and negotiate policy across political parties and multiple venues with a legitimate role in policymaking. In that context, we may want scientific evidence to play a major role in policy and policymaking, but pause to reflect on how far we would go to secure a primary role for unelected experts and evidence that few can understand.

Conclusion: the inescapable and desirable politics of evidence-informed policymaking

Many contemporary discussions of policymaking begin with the naïve belief in the possibility and desirability of an evidence-based policy process free from the pathologies of politics. The buzz phrase for any complaint about politicians not living up to this ideal is ‘policy based evidence’: biased politicians decide first what they want to do, then cherry pick any evidence that backs up their case. Yet, without additional thought, they put in its place a technocratic process in which unelected experts are in charge, deciding on the best evidence of a problem and its best solution.

In other words, new discussions of EBPM raise old discussions of rationality that have occupied policy scholars for many decades. The difference since the days of Simon and Lindblom (1959) is that we now have the scientific technology and methods to gather information in ways beyond the dreams of our predecessors. Yet, such advances in technology and knowledge have only increased our ability to reduce, but not eradicate, uncertainty about the details of a problem. They do not remove ambiguity, which describes the ways in which people understand problems in the first place, then seek information to help them understand them further and seek to solve them. Nor do they reduce the need to meet important principles in politics, such as to sell or justify policies to the public (to respond to democratic elections) and address the fact that there are many venues of policymaking at multiple levels (partly to uphold a principled commitment, in many political systems, to devolve or share power). Policy theories do not tell us what to do about these limits to EBPM, but they help us to separate pragmatism from often-misplaced idealism.

References

Cairney, Paul (2012) Understanding Public Policy (Basingstoke: Palgrave)

Cairney, Paul (2016) The Politics of Evidence-based Policy Making (Basingstoke: Palgrave)

Cairney, Paul and Heikkila, Tanya (2014) ‘A Comparison of Theories of the Policy Process’ in Sabatier, P. and Weible, C. (eds.) Theories of the Policy Process 3rd edition (Chicago: Westview Press)

Cairney, Paul, Oliver, Kathryn and Wellstead, Adam (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review, Early view, DOI:10.1111/puar.12555

Embrett, M. and Randall, G. (2014) ‘Social determinants of health and health equity policy research: Exploring the use, misuse, and nonuse of policy analysis theory’, Social Science and Medicine, 108, 147-55

Haskins, Ron and Margolis, Greg (2015) Show Me the Evidence: Obama’s fight for rigor and results in social policy (Washington DC: Brookings Institution Press)

Kingdon, J. (1984) Agendas, Alternatives and Public Policies 1st ed. (New York, NY: Harper Collins)

Lindblom, C. (1959) ‘The Science of Muddling Through’, Public Administration Review, 19: 79–88

Lomas J. and Brown A. (2009) ‘Research and advice giving: a functional view of evidence-informed policy advice in a Canadian ministry of health’, Milbank Quarterly, 87, 4, 903–926

McBeth, M., Jones, M. and Shanahan, E. (2014) ‘The Narrative Policy Framework’ in Sabatier, P. and Weible, C. (eds.) Theories of the Policy Process 3rd edition (Chicago: Westview Press)

Nutley, S., Walter, I. and Davies, H. (2007) Using evidence: how research can inform public services (Bristol: The Policy Press)

Oliver, K., Innvar, S., Lorenc, T., Woodman, J. and Thomas, J. (2014a) ‘A systematic review of barriers to and facilitators of the use of evidence by policymakers’ BMC health services research, 14 (1), 2. http://www.biomedcentral.com/1472-6963/14/2

Oliver, K., Lorenc, T., & Innvær, S. (2014b) ‘New directions in evidence-based policy research: a critical analysis of the literature’, Health Research Policy and Systems, 12, 34 http://www.biomedcentral.com/content/pdf/1478-4505-12-34.pdf

Prewitt, Kenneth, Schwandt, Thomas A. and Straf, Miron L. (eds.) (2012) Using Science as Evidence in Public Policy http://www.nap.edu/catalog.php?record_id=13460

Simon, H. (1976) Administrative Behavior, 3rd ed. (London: Macmillan)

Stoker, G. (2010) ‘Translating experiments into policy’, The ANNALS of the American Academy of Political and Social Science, 628, 1, 47-58

True, J. L., Jones, B. D. and Baumgartner, F. R. (2007) ‘Punctuated Equilibrium Theory’ in P. Sabatier (ed.) Theories of the Policy Process, 2nd ed (Cambridge, MA: Westview Press)

Weible, C., Heikkila, T., deLeon, P. and Sabatier, P. (2012) ‘Understanding and influencing the policy process’, Policy Sciences, 45, 1, 1–21

 


Filed under Evidence Based Policymaking (EBPM)

Policy bubbles and emotional policymaking

I am at a workshop today on policy ‘bubbles’, or (real and perceived) disproportionate policy responses. For Moshe Maor, a bubble describes an over-reaction to a problem, and a negative policy bubble describes under-reaction.

For Maor, this focus on bubbles is one way into our increasing focus on the role of emotion in policymaking: we pay disproportionate attention to problems, and try to solve some but not others, based on the ways in which we engage emotionally with information.

This focus on psychology is, I think, gaining a lot of traction in political science now, and I think it is crucial to explaining, for example, processes associated with ‘evidence-based policymaking’.

In taking this agenda forward, there remain some outstanding issues:

How much of the psychology literature is already reflected in policy studies? For example, see the social construction of target populations (emotion-driven treatment of social groups), ACF (on loss aversion and the devil shift), and the NPF (telling stories to exploit cognitive biases).

What insights remain untapped from key fields such as organisational psychology? I’ll say more about this in a forthcoming post.

How can we study the psychology of policymaking? Most policy theory begins with some reference to bounded rationality, including PET and the identification of disproportionate information processing (policymakers pay disproportionate attention to some issues and ignore the rest). It is largely deductive then empirical: we make some logical steps about the implications of bounded rationality, then study the process in that light.

Similarly, I think most studies of emotion/ policymaking take insights from psychology (e.g. people value losses more than gains, or they make moral judgements then seek evidence to justify them) and then apply them indirectly to policymaking (asking, for example, what is the effect of prospect theory on the behaviour of coalitions).

Can we do more, by studying more directly the actions of policymakers rather than merely interpreting their actions? The problem, of course, is that few policymakers may be keen on engaging in the types of study (e.g. experiments with control groups) that psychologists have used to establish things like fluency effects.

How does policymaker psychology fit into broader explanations of policymaking? The psychology of policymakers is one part of the story. The other is the system or environment in which they operate. So, we have some choices to make about future studies. Some might ‘zoom in’ to focus on emotionally-driven policymaking in key actors, perhaps at the centre of government.

Others may ‘zoom out’. The latter may involve ascribing the same basic thought processes to a large number of actors, examining that process at a relatively abstract level. This is the necessary consequence of trying to account for the effects of a very large number of actors, and to take into account the role of a policymaking environment, only some of which is in the control of policymakers.

Can we really demonstrate disproportionate policy action? The idea of a proportionate policy response interests me, because I think it is always in the eye of the beholder. We make moral and other personal evaluative statements when we describe a proportionate solution in relation to the size of the problem.

For example, in tobacco policy, a well-established argument in public health is that a proportionate policy response to the health effects of smoking and passive smoking (a) has been 20-30 years behind the evidence in ‘leading countries’, and (b) has yet to happen in ‘laggard’ countries. The counterargument is that the identification of a problem does not necessitate the favoured public health solution (comprehensive tobacco control, towards the ‘endgame’ of zero smoking) because it involves major limits to personal liberties and choice.

Is emotion-driven policymaking necessarily a bad thing?

[excerpt from my 2014 PSA paper] This is partly the focus of Alter and Oppenheimer (2008) when they argue that policymakers spend disproportionate amounts of money on risks with which they are familiar, at the expense of spending money on things with more negative effects, producing a ‘dramatic misallocation of funds’. They draw on Sunstein (2002), who suggests that emotional bases for attention to environmental problems from the 1970s prompted many regulations to be disproportionate to the risk involved. Further, Slovic’s work suggests that people’s feelings towards risk may even be influenced by the way in which it is described, for example as a percentage versus a 1 in X probability (Slovic, 2010: xxii).

Haidt (2001: 815) argues that a focus on psychology can be used to improve policymaking: the identification of the ‘intuitive basis of moral judgment’ can be used to help policymakers ‘avoid mistakes’ or allow people to develop ‘programs’ or an ‘environment’ to ‘improve the quality of moral judgment and behavior’. Similarly, Alter and Oppenheimer (2009: 232) worry about medical and legal judgements swayed by fluid diagnoses and stories.

These studies compare with arguments focusing on the positive role of emotions in decision-making, either individually (see Constantinescu, 2012, drawing on Frank, 1988 and Elster, 2000 on the decisions of judges) or as part of social groups, with emotional responses providing useful information in the form of social cues (Van Kleef et al, 2010).

Policy theory does not shy away from these issues. For example, Schneider and Ingram (2014) argue that the outcomes of social construction are often dysfunctional and not based on a well-reasoned, goal-oriented strategy: ‘Studies have shown that rules, tools, rationales and implementation structures inspired by social constructions send dysfunctional messages and poor choices may hamper the effectiveness of policy’. However, part of the value of policy theory is to show that policy results from the interaction of large numbers of people and institutions. So, the poor actions of one policymaker would not be the issue; we need to know more about the cumulative effect of individual emotional decision making in collective decision-making – not only in discrete organisations, but also networks and systems.

And finally: if it is a bad thing, should we do something about it?

Our choice is to find it interesting then go home (this might appeal to the academics) or try to limit the damage/ maximise the benefits of policymaker psychology to policy and society (this might appeal to practitioners). There is no obvious way to do something, though, is there?


Filed under agenda setting, Evidence Based Policymaking (EBPM), public policy

Policy in 500 Words: if the policy cycle does not exist, what do we do?

It is easy to reject the empirical value of the policy cycle, but difficult to replace it as a practical tool. I identify the implications for students, policymakers, and the actors seeking influence in the policy process.

[Figure: the policy cycle]

A policy cycle divides the policy process into a series of stages:

  • Agenda setting. Identifying problems that require government attention, deciding which issues deserve the most attention and defining the nature of the problem.
  • Policy formulation. Setting objectives, identifying the cost and estimating the effect of solutions, choosing from a list of solutions and selecting policy instruments.
  • Legitimation. Ensuring that the chosen policy instruments have support. It can involve one or a combination of: legislative approval, executive approval, seeking consent through consultation with interest groups, and referenda.
  • Implementation. Establishing or employing an organization to take responsibility for implementation, ensuring that the organization has the resources (such as staffing, money and legal authority) to do so, and making sure that policy decisions are carried out as planned.
  • Evaluation. Assessing the extent to which the policy was successful or the policy decision was the correct one; if it was implemented correctly and, if so, had the desired effect.
  • Policy maintenance, succession or termination. Considering if the policy should be continued, modified or discontinued.

Most academics (and many practitioners) reject it because it oversimplifies, and does not explain, a complex policymaking system in which these stages may not occur (or may not occur in this order), and in which we are better to imagine thousands of policy cycles interacting with each other to produce less orderly behaviour and less predictable outputs.

But what do we do about it?

The implications for students are relatively simple: we have dozens of concepts and theories which serve as better ways to understand policymaking. In the 1000 Words series, I give you 25 to get you started.

The implications for policymakers are less simple, because the cycle may be unrealistic yet useful. Stages can be used to organise policymaking in a simple way: identify policymaker aims, identify policies to achieve those aims, select a policy measure, ensure that the selection is legitimised by the population or its legislature, identify the necessary resources, implement, and then evaluate the policy. The idea is simple and the consequent advice to policy practitioners is straightforward. A major unresolved challenge for scholars and practitioners is to describe a more meaningful, more realistic analytical model to policymakers and give advice on how to act and justify action in the same straightforward way. So, in this article, I discuss how to reconcile policy advice based on complexity and pragmatism with public and policymaker expectations.

The implications for actors trying to influence policymaking can be dispiriting: how can we engage effectively in the policy process if we struggle to understand it? So, in this page (scroll down – it’s long!), I discuss how to present evidence in complex policymaking systems.

Take home message for students. It is easy to describe then assess the policy cycle as an empirical tool, but don’t stop there. Consider how to turn this insight into action. First, examine the many ways in which we use concepts to provide better descriptions and explanations. Then, think about the practical implications. What useful advice could you give an elected policymaker, trying to juggle pragmatism with accountability? What strategies would you recommend to actors trying to influence the policy process?


Filed under 500 words, public policy

Policy in 500 Words: how much does policy change?

You should get the impression from the 1000 Words series that most policy changes are small or not radically different from the past: Lindblom identifies incrementalism; punctuated equilibrium theory highlights a huge number of small changes and a small number of huge changes; the ACF compares routine learning by a dominant coalition to a ‘shock’ which prompts new subsystem and policy dynamics; and multiple streams analysis identifies the conditions (rarely met) for major change.

Yet, I just gave you the impression that we don’t know how to define policy. If we can’t define it well, how can we measure it well enough to come to this conclusion so consistently?

Why is the measurement of policy change important?

We miss a lot if we equate policy with statements rather than outputs/ outcomes. We also miss a lot if we equate policy change with the most visible outputs such as legislation. I list 16 different policy instruments, although they tend to be grouped into smaller categories: focusing on regulation (including legislation) and resources (money and staffing) to accentuate the power at policymakers’ disposal; or regulatory/ distributive/ redistributive to suggest that some policy measures are more difficult to ‘sell’ than others.

We also give a limited picture if we equate change with outputs rather than outcomes, since a key insight from policy studies is that there is generally a gap between policymaker expectations and the actual result.

What are the key issues in measurement?

So, as in defining policy change, we need to make choices about what counts as policy in this instance to measure how much it has changed. For example, I have (a) written on one output as a key exemplar of policy change –  legislation to ban smoking in public places for Scotland, England/ Wales, the UK, and (almost) EU – to show that a government is signalling major changes to come, but also (b) situated that policy instrument within a much broader discussion – of many tobacco policies in the UK and across the globe – to examine the extent to which it is already consistent with a well-established direction of travel.

To make such choices we need to consider:

  • Breadth (to give the ‘big picture’) versus depth (to note important details forensically)
  • How much we expect policy to change, given the size of the problem (a big feature in public health studies, which criticise government inaction)
  • How radical policy change looks from the ‘top’ (at the point of central government choice) or the ‘bottom’ (longer-term delivery of policy by other bodies)
  • What policies mean (what problem were policymakers trying to solve?)
  • How consistent ‘policy’ seems when made of often-contradictory instruments

How do we solve the problem?

The problem is that we can produce very different accounts of policy change from the same pool of evidence, by accentuating some measures and ignoring others, or putting more faith in some data than others (e.g. during interviews).

[Figure: from Understanding Public Policy, p. 30]

Sometimes, my preferred solution is to compare more than one narrative of policy change. Another is simply to ‘show your work’.

Take home message for students: ‘show your work’ means explaining your logical process and step-by-step choices. Don’t just write that it is difficult to define policy and measure change. Instead, explain how you assess policy change in one important way, why you chose this way, and shine a light on the payoffs to your approach. Read up on how other scholars do it, to learn good practice and how to make your results comparable to theirs. Indeed, part of the benefit of using an established theory, to guide our analysis, is that we can engage in research systematically as a group.

 


Filed under 500 words, public policy, Research design