
Policy Analysis in 750 Words: Defining policy problems and choosing solutions

This post forms one part of the Policy Analysis in 750 words series overview.

When describing ‘the policy sciences’, Lasswell distinguishes between:

  1. ‘knowledge of the policy process’, to foster policy studies (the analysis of policy)
  2. ‘knowledge in the process’, to foster policy analysis (analysis for policy)

The idea is that both elements are analytically separable but mutually informative: policy analysis is crucial to solving real policy problems, policy studies inform the feasibility of analysis, the study of policy analysts informs policy studies, and so on.

Both elements focus on similar questions – such as What is policy? – and explore their descriptive (what do policy actors do?) and prescriptive (what should they do?) implications.

  1. What is the policy problem?

Policy studies tend to describe problem definition in relation to framing, narrative, social construction, power, and agenda setting.

Actors exercise power to generate attention for their preferred interpretation, and minimise attention to alternative frames (to help foster or undermine policy change, or translate their beliefs into policy).

Policy studies incorporate insights from psychology to understand (a) how policymakers might combine cognition and emotion to understand problems, and therefore (b) how to communicate effectively when presenting policy analysis.

Policy studies focus on the power to reduce ambiguity rather than simply the provision of information to reduce uncertainty. In other words, the power to decide whose interpretation of policy problems counts, and therefore to decide what information is policy-relevant.

This (unequal) competition takes place within a policy process over which no actor has full knowledge or control.

The classic 5-8 step policy analysis texts focus on how to define policy problems well, but they vary somewhat in their definition of doing it well (see also C.Smith):

  • Bardach recommends using rhetoric and eye-catching data to generate attention
  • Weimer and Vining and Mintrom recommend beginning with your client’s ‘diagnosis’, placing it in a wider perspective to help analyse it critically, and asking yourself how else you might define it (see also Bacchi, Stone)
  • Meltzer and Schwartz and Dunn identify additional ways to contextualise your client’s definition, such as by generating a timeline to help ‘map’ causation or using ‘problem-structuring methods’ to compare definitions and avoid making too many assumptions on a problem’s cause.
  • Thissen and Walker compare ‘rational’ and ‘argumentative’ approaches, treating problem definition as something to be measured scientifically or established rhetorically (see also Riker).

These approaches compare with more critical accounts that emphasise the role of power and politics to determine whose knowledge is relevant (L.T.Smith) and whose problem definition counts (Bacchi, Stone). Indeed, Bacchi and Stone provide a crucial bridge between policy analysis and policy studies by reflecting on what policy analysts do and why.

  2. What is the policy solution?

In policy studies, it is common to identify counterintuitive or confusing aspects of policy processes, including:

  • Few studies suggest that policy responses actually solve problems (and many highlight their potential to exacerbate them). Rather, ‘policy solutions’ is shorthand for proposed or alleged solutions.
  • Problem definition often sets the agenda for the production of ‘solutions’, but note the phrase solutions chasing problems (when actors have their ‘pet’ solutions ready, and they seek opportunities to promote them).

Policy studies: problem definition informs the feasibility and success of solutions

Generally speaking, to define the problem is to influence assessments of the feasibility of solutions:

  • Technical feasibility. Will they work as intended, given the alleged severity and cause of the problem?
  • Political feasibility. Will they receive sufficient support, given the ways in which key policy actors weigh up the costs and benefits of action?

Policy studies highlight the inextricable connection between technical and political feasibility. Put simply, (a) a ‘technocratic’ choice about the ‘optimality’ of a solution is useless without considering who will support its adoption, and (b) some types of solution will always be a hard sell, no matter their alleged effectiveness (Box 2.3 below).

In that context, policy studies ask: what types of policy tools or instruments are actually used, and how does their use contribute to policy change? Measures include the size, substance, speed, and direction of policy change.

[Image: Box 2.3, Understanding Public Policy, 2nd edition]

In turn, problem definition informs the ways in which actors will frame any evaluation of policy success, and the policy-relevance of the evidence used to evaluate solutions. Simple examples include:

  • If you define tobacco in relation to: (a) its economic benefits, or (b) a global public health epidemic, evaluations relate to (a) export and taxation revenues, or (b) reductions in smoking in the population.
  • If you define ‘fracking’ in relation to: (a) seeking more benefits than costs, or (b) minimising environmental damage and climate change, evaluations relate to (a) factors such as revenue and effective regulation, or simply (b) how little it takes place.

Policy analysis: recognising and pushing boundaries

Policy analysis texts tend to accommodate these insights when giving advice:

  • Bardach recommends identifying solutions that your audience might consider, perhaps providing a range of options on a notional spectrum of acceptability.
  • Smith highlights the value of ‘precedent’, or relating potential solutions to previous strategies.
  • Weimer and Vining identify the importance of ‘a professional mind-set’ that may be more important than perfecting ‘technical skills’
  • Mintrom notes that some solutions are easier to sell than others
  • Meltzer and Schwartz describe the benefits of making a preliminary recommendation to inform an iterative process, drawing feedback from clients and stakeholder groups
  • Dunn warns against too-narrow forms of ‘evidence based’ analysis which undermine a researcher’s ability to adapt well to the evidence-demands of policymakers
  • Thissen and Walker relate solution feasibility to a wide range of policy analysis ‘styles’

Still, note the difference in emphasis.

Policy analysis education/ training may be about developing the technical skills to widen definitions and apply many criteria to compare solutions.

Policy studies suggest that problem definition and the search for solutions take place in an environment where many actors apply a much narrower lens and are not interested in debates on many possibilities (particularly if they begin with a solution).

I have exaggerated this distinction between each element, but it is worth considering the repeated interaction between them in practice: politics and policymaking provide boundaries for policy analysis, analysis could change those boundaries, and policy studies help us reflect on the impact of analysts.

I’ll take a quick break, then discuss how this conclusion relates to the idea of ‘entrepreneurial’ policy analysis.

Further reading

Understanding Public Policy (2020: 28) describes the difference between governments paying for and actually using the ‘tools of policy formulation’. To explore this point, see ‘The use and non-use of policy appraisal tools in public policy making‘ and The Tools of Policy Formulation.

[Image: Understanding Public Policy, 2nd edition, p. 28, on policy tools]



Policy Analysis in 750 words: Rachel Meltzer and Alex Schwartz (2019) Policy Analysis as Problem Solving

Please see the Policy Analysis in 750 words series overview before reading the summary. This post might well represent the largest breach of the ‘750 words’ limit, so please get comfortable. I have inserted a picture of a cat hanging in there baby after the main (*coughs*) 1400-word summary. The rest is bonus material, reflecting on the links between this book and the others in the series.

[Image: cover of Meltzer and Schwartz (2019)]

Rachel Meltzer and Alex Schwartz (2019) Policy Analysis as Problem Solving (Routledge)

‘We define policy analysis as evidence-based advice giving, as the process by which one arrives at a policy recommendation to address a problem of public concern. Policy analysis almost always involves advice for a client’ (Meltzer and Schwartz, 2019: 15).

Meltzer and Schwartz (2019: 231-2) describe policy analysis as applied research: conducted quickly, drawing on many sources of evidence, with limited time, limited access to scientific research, and limited funding to conduct much new research. It requires:

  • careful analysis of a wide range of policy-relevant documents (including the ‘grey’ literature often produced by governments, NGOs, and think tanks) and available datasets
  • perhaps combined with expert interviews, focus groups, site visits, or an online survey (see 2019: 232-64 on methods).

Meltzer and Schwartz (2019: 21) outline a ‘five-step framework’ for client-oriented policy analysis. During each step, they contrast their ‘flexible’ and ‘iterative’ approach with a too-rigid ‘rationalistic approach’ (to reflect bounded, not comprehensive, rationality):

  1. ‘Define the problem’.

Problem definition is a political act of framing, not an exercise in objectivity (2019: 52-3). It is part of a narrative to evaluate the nature, cause, size, and urgency of an issue (see Stone), or perhaps to attach to an existing solution (2019: 38-40; compare with Mintrom).

In that context, ask yourself ‘Who is defining the problem? And for whom?’ and do enough research to be able to define it clearly and avoid misunderstanding between you and your client (2019: 37-8; 279-82):

  • Identify your client’s resources and motivation, such as how they seek to use your analysis, the format of analysis they favour, their deadline, and their ability to make or influence the policies you might suggest (2019: 49; compare with Weimer and Vining).
  • Tailor your narrative to your audience, albeit while recognising the need to learn from ‘multiple perspectives’ (2019: 40-5).
  • Make it ‘concise’ and ‘digestible’, not too narrowly defined, and not in a way that already closes off discussion by implying a clear cause and solution (2019: 51-2).

In doing so:

  • Ask yourself if you can generate a timeline, identify key stakeholders, and place a ‘boundary’ on the problem.
  • Establish if the problem is urgent, who cares about it, and who else might care (or not) (2019: 46).
  • Focus on the ‘central’ problem that your solution will address, rather than the ‘related’ and ‘underlying’ problems that are ‘too large and endemic to be solved by the current analysis’ (2019: 47).
  • Avoid misdiagnosing a problem with reference to one cause. Instead, ‘map’ causation with reference to (say) individual and structural causes, intended and unintended consequences, simple and complex causation, market or government failure, and/ or the ability to blame an individual or organisation (2019: 48-9).
  • Combine quantitative and qualitative data to frame problems in relation to: severity, trends in severity, novelty, proximity to your audience, and urgency or crisis (2019: 53-4).

During this process, interrogate your own biases or assumptions and how they might affect your analysis (2019: 50).

2. ‘Identify potential policy options (alternatives) to address the problem’.

Common sources of ideas include incremental changes from current policy, ‘client suggestions’, comparable solutions (from another time, place, or policy area), reference to common policy instruments, and ‘brainstorming’ or ‘design thinking’ (2019: 67-9; see box 2.3 and 7.1, below, from Understanding Public Policy).

[Image: Box 2.3, Understanding Public Policy, 2nd edition]

Identify a ‘wide range’ of possible solutions, then select the (usually 3-5) ‘most promising’ for further analysis (2019: 65). In doing so:

  • be careful not to frame alternatives negatively (e.g. ‘death tax’ – 2019: 66)
  • compare alternatives in ‘good faith’ rather than keeping some ‘off the table’ to ensure that your preferred solution looks good (2019: 66)
  • beware ‘best practice’ ideas that are limited in terms of (a) applicability (if made at a smaller scale, or in a very different jurisdiction), and (b) evidence of success (2019: 70; see studies of policy learning and transfer)
  • think about how to modify existing policies according to scale or geographical coverage, who to include (and based on what criteria), for how long, using voluntary versus mandatory provisions, and ensuring oversight (2019: 71-3)
  • consider combinations of common policy instruments, such as regulations and economic penalties/ subsidies (2019: 73-7)
  • consider established ways to ‘brainstorm’ ideas (2019: 77-8)
  • note the rise of instruments derived from the study of psychology and behavioural public policy (2019: 79-90)
  • learn from design principles, including ‘empathy’, ‘co-creating’ policy with service users or people affected, ‘prototyping’ (2019: 90-1)

[Image: Box 7.1, Understanding Public Policy, 2nd edition]

3. ‘Specify the objectives to be attained in addressing the problem and the criteria to evaluate the attainment of these objectives as well as the satisfaction of other key considerations (e.g., equity, cost, feasibility)’.

Your objectives relate to your problem definition and aims: what is the problem, what do you want to happen when you address it, and why?

  • For example, questions to your client may include: what is your organization’s ‘mission’, what is feasible (in terms of resources and politics), which stakeholders do you want to include, and how will you define success (2019: 105; 108-12)?

In that values-based context, your criteria relate to ways to evaluate each policy’s likely impact (2019: 106-7). They should ensure:

  • Comprehensiveness. E.g. how many people, and how much of their behaviour, can you influence while minimizing the ‘burden’ on people, businesses, or government? (2019: 113-4)
  • Mutual Exclusiveness. In other words, don’t have two objectives doing the same thing (2019: 114).

Common criteria include (2019: 116):

  1. Effectiveness. The size of its intended impact on the problem (2019: 117).
  2. Equity (fairness). The impact in terms of ‘vertical equity’ (e.g. the better off should pay more), ‘horizontal equity’ (e.g. you should not pay more if unmarried), fair process, fair outcomes, and ‘intergenerational’ equity (e.g. don’t impose higher costs on future populations) (2019: 118-19).
  3. Feasibility (administrative, political, and technical). The likelihood of this policy being adopted and implemented well (2019: 119-21)
  4. Cost (or financial feasibility). Who would bear the cost, and their willingness and ability to pay (2019: 122).
  5. Efficiency. To maximise the benefit while minimizing costs (2019: 122-3).

 

4. ‘Assess the outcomes of the policy options in light of the criteria and weigh trade-offs between the advantages and disadvantages of the options’.

When explaining objectives and criteria,

  • ‘label’ your criteria in relation to your policy objectives (e.g. to ‘maximize debt reduction’) rather than using generic terms (2019: 123-7)
  • produce a table – with alternatives in rows, and criteria in columns – to compare each option
  • quantify your policies’ likely outcomes, such as in relation to numbers of people affected and levels of income transfer, or a percentage drop in the size of the problem, but also
  • communicate the degree of uncertainty related to your estimates (2019: 128-32; see Spiegelhalter)
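The options-by-criteria table described above can be sketched in a few lines of Python. Everything here (the options, the criteria labels, the scores) is invented purely for illustration; a real analysis would derive the criteria from the client's objectives as discussed in step 3.

```python
# A minimal options-by-criteria matrix: alternatives in rows, criteria in
# columns. All options, criteria, and scores are invented for illustration.

criteria = ["Effectiveness", "Equity", "Feasibility", "Cost"]
options = {
    "Status quo":        [2, 3, 5, 5],
    "Subsidy programme": [4, 4, 3, 2],
    "Regulation":        [4, 2, 2, 4],
}

# Print a simple comparison table (higher score = better on that criterion).
header = f"{'Option':<20}" + "".join(f"{c:>15}" for c in criteria)
print(header)
for name, scores in options.items():
    print(f"{name:<20}" + "".join(f"{s:>15}" for s in scores))
```

A table like this supports the debate on trade-offs described above; it does not resolve them, since the scores themselves embody contested judgements.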

Consider using cost-benefit analysis to identify (a) the financial and opportunity cost of your plans (what would you achieve if you spent the money elsewhere?), compared to (b) the positive impact of your funded policy (2019: 141-55).

  • The principle of CBA may be intuitive, but a thorough CBA process is resource-intensive, vulnerable to bias and error, and no substitute for choice. It requires you to make a collection of assumptions about human behaviour and likely costs and benefits, decide whose costs and benefits should count, turn all costs and benefits into a single measure, and imagine how to maximise winners and compensate losers (2019: 155-81; compare Weimer and Vining with Stone).
  • One alternative is cost-effectiveness analysis, which quantifies costs and relates them to outputs (e.g. number of people affected, and how) without trying to translate them into a single measure of benefit (2019: 181-3).
  • These measures can be combined with other thought processes, such as with reference to ‘moral imperatives’, a ‘precautionary approach’, and ethical questions on power/ powerlessness (2019: 183-4).
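The contrast drawn above between cost-benefit analysis (everything reduced to one monetary measure) and cost-effectiveness analysis (costs related to outputs without monetising benefits) can be illustrated with a small sketch. All figures, the discount rate, and the output measure are invented; as the text stresses, a real CBA rests on many contested assumptions.

```python
# Illustrative contrast between cost-benefit and cost-effectiveness analysis.
# All figures are invented; real analyses rest on many contested assumptions.

def npv(flows, rate):
    """Net present value of yearly net flows (year 0 first)."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

# Cost-benefit analysis: monetise all costs and benefits into one measure.
yearly_net_benefits = [-100_000, 30_000, 40_000, 50_000]  # costs up front
print(f"NPV at 3% discount rate: {npv(yearly_net_benefits, 0.03):,.0f}")

# Cost-effectiveness analysis: relate costs to outputs without monetising
# the benefits (here, an invented 'people reached' output measure).
total_cost = 100_000
people_reached = 2_500
print(f"Cost per person reached: {total_cost / people_reached:,.0f}")
```

Note how much the NPV figure depends on choices hidden inside the sketch, such as the discount rate and whose costs and benefits were counted in the first place.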

 

5. ‘Arrive at a recommendation’.

Predict the most likely outcomes of each alternative, while recognising high uncertainty (2019: 189-92). If possible,

  • draw on existing, comparable, programmes to predict the effectiveness of yours (2019: 192-4)
  • combine such analysis with relevant theories to predict human behaviour (e.g. consider price ‘elasticity’ if you seek to raise the price of a good to discourage its use) (2019: 193-4)
  • apply statistical methods to calculate the probability of each outcome (2019: 195-6), and modify your assumptions to produce a range of possibilities, but
  • note Spiegelhalter’s cautionary tales and anticipate the inevitable ‘unintended consequences’ (when people do not respond to policy in the way you would like) (2019: 201-2)
  • use these estimates to inform a discussion on your criteria (equity, efficiency, feasibility) (2019: 196-200)
  • present the results visually – such as in a ‘matrix’ – to encourage debate on the trade-offs between options
  • simplify choices by omitting irrelevant criteria and options that do not compete well with others (2019: 203-10)
  • make sure that your recommendation (a) flows from the analysis, and (b) is in the form expected by your client (2019: 211-12)
  • consider making a preliminary recommendation to inform an iterative process, drawing feedback from clients and stakeholder groups (2019: 212).
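The price-elasticity point in the list above can be sketched as a minimal prediction function. The elasticity value, baseline demand, and tax example are invented for illustration only, and the constant-elasticity assumption is itself one of the modifiable assumptions the list describes.

```python
# Minimal illustration of using price elasticity to predict behaviour.
# The elasticity and baseline figures are invented for illustration.

def predicted_demand(baseline_demand, price_change_pct, elasticity):
    """Predict demand after a price change, assuming constant elasticity.

    elasticity: % change in demand per 1% change in price (negative for
    ordinary goods, e.g. -0.4 means a 10% price rise cuts demand by 4%).
    """
    demand_change_pct = elasticity * price_change_pct
    return baseline_demand * (1 + demand_change_pct / 100)

# E.g. a 10% price rise with an assumed elasticity of -0.4:
print(predicted_demand(1_000_000, 10, -0.4))  # → 960000.0
```

Varying the elasticity input is one way to produce the ‘range of possibilities’ mentioned above, before Spiegelhalter-style caution about the uncertainty of any single estimate.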

 

[Image: ‘hang in there, baby’ cat poster]

 

Policy analysis in a wider context

Meltzer and Schwartz’s approach makes extra sense if you have already read some of the other texts in the series, including:

  1. Weimer and Vining, which represents an exemplar of an X-step approach informed heavily by the study of economics and application of economic models such as cost-benefit-analysis (compare with Radin’s checklist).
  2. Geva-May on the existence of a policy analysis profession with common skills, heuristics, and (perhaps) ethics (compare with Meltzer and Schwartz, 2019: 282-93)
  3. Radin, on:
  • the proliferation of analysts across multiple levels of government, NGOs, and the private sector (compare with Meltzer and Schwartz, 2019: 269-77)
  • the historic shift of analysis from formulation to all notional stages (contrast with Meltzer and Schwartz, 2019: 16-7 on policy analysis not including implementation or evaluation)
  • the difficulty in distinguishing between policy analysis and advocacy in practice (compare with Meltzer and Schwartz, 2019: 276-8, who suggest that actors can choose to perform these different roles)
  • the emerging sense that it is difficult to identify a single client in a multi-centric policymaking system. Put another way, we might be working for a specific client but accept that their individual influence is low.
  4. Stone’s challenge to
  • a historic tendency for economics to dominate policy analysis,
  • the applicability of economic assumptions (focusing primarily on individualist behaviour and markets), and
  • the pervasiveness of ‘rationalist’ policy analysis built on X-steps.

Meltzer and Schwartz (2019: 1-3) agree that economic models are too dominant (identifying the value of insights from ‘other disciplines – including design, psychology, political science, and sociology’).

However, they argue that critiques of rational models exaggerate their limitations (2019: 23-6). For example:

  • these models need not rely solely on economic techniques or quantification, a narrow discussion or definition of the problem, or the sense that policy analysis should be comprehensive, and
  • it is not problematic for analyses to reflect their client’s values or for analysts to present ambiguous solutions to maintain wide support, partly because
  • we would expect the policy analysis to form only one part of a client’s information or strategy.

Further, they suggest that these critiques provide no useful alternative to help guide new policy analysts. Yet such guides are essential:

‘to be persuasive, and credible, analysts must situate the problem, defend their evaluative criteria, and be able to demonstrate that their policy recommendation is superior, on balance, to other alternative options in addressing the problem, as defined by the analyst. At a minimum, the analyst needs to present a clear and defensible ranking of options to guide the decisions of the policy makers’ (Meltzer and Schwartz, 2019: 4).

Meltzer and Schwartz (2019: 27-8) then explore ways to improve a 5-step model with insights from approaches such as ‘design thinking’, in which actors use a similar process – ‘empathize, define the problem, ideate, prototype, test and get feedback from others’ – to experiment with policy solutions without providing a narrow view on problem definition or how to evaluate responses.

Policy analysis and policy theory

One benefit to Meltzer and Schwartz’s approach is that it seeks to incorporate insights from policy theories and respond with pragmatism and hope. However, I think you also need to read the source material to get a better sense of those theories, key debates, and their implications. For example:

  1. Meltzer and Schwartz (2019: 32) note correctly that ‘incremental’ does not sum up policy change well. Indeed, Punctuated Equilibrium Theory shows that policy change is characterised by a huge number of small and a small number of huge changes.
  • However, the direct implications of PET are not as clear as they suggest. Baumgartner and Jones have both noted that they can measure these outcomes and identify the same basic distribution across a political system, but not explain or predict why particular policies change dramatically.
  • It is useful to recommend to policy analysts that they invest some hope in major policy change, but also sensible to note that – in the vast majority of cases – it does not happen.
  • On this point, see Mintrom on policy analysis for the long term, Weiss on the ‘enlightenment’ function of research and analysis, and Box 6.3 (from Understanding Public Policy), on the sense that (a) we can give advice to ‘budding policy entrepreneurs’ on how to be effective analysts, but (b) should note that all their efforts could be for nothing.

[Image: Box 6.3, Understanding Public Policy, 2nd edition]

  2. Meltzer and Schwartz (2019: 32-3) tap briefly into the old debate on whether it is preferable to seek radical or incremental change. For more on that debate, see chapter 5 in the 1st ed of Understanding Public Policy in which Lindblom notes that proposals for radical/ incremental changes are not mutually exclusive.
  3. Perhaps explore the possible tension between Meltzer and Schwartz’s (2019: 33-4) recommendation that (a) policy analysis should be ‘evidence-based advice giving’, and (b) ‘flexible and open-ended’.
  • I think that Stone’s response would be that phrases such as ‘evidence based’ are not ‘flexible and open-ended’. Rather, they tend to symbolise a narrow view of what counts as evidence (see also Smith, and Hindess).
  • Further, note that the phrase ‘evidence based policymaking’ is a remarkably vague term (see the EBPM page), perhaps better seen as a political slogan than a useful description or prescription of policymaking.

 

Finally, if you read enough of these policy analysis texts, you get a sense that many are bunched together even if they describe their approach as new or distinctive.

  • Indeed, Meltzer and Schwartz (2019: 22-3) provide a table (containing Bardach and Patashnik, Patton et al, Stokey and Zeckhauser, Hammond et al, and Weimer & Vining) of ‘quite similar’ X-step approaches.
  • Weimer and Vining also discuss the implications of policy theories and present the sense that X-step policy analysis should be flexible and adaptive.
  • Many texts – including Radin, and Smith (2016) – focus on the value of case studies to think through policy analysis in particular contexts, rather than suggesting that we can produce a universal blueprint.

However, as Geva-May might suggest, this is not a bad thing if our aim is to generate the sense that policy analysis is a profession with its own practices and heuristics.

 

 



Policy in 500 Words: Punctuated Equilibrium Theory

See also the original – and now 6 years old – 1000 Words post.

This 500 Words version is a modified version of the introduction to chapter 9 in the 2nd edition of Understanding Public Policy.  

[Image: PET box, Understanding Public Policy, 2nd edition, p. 147]

 Punctuated equilibrium theory (PET) tells a story of complex systems that are stable and dynamic:

  • Most policymaking exhibits long periods of stability, but with the ever-present potential for sudden instability.
  • Most policies stay the same for long periods. Some change very quickly and dramatically.

We can explain this dynamic with reference to bounded rationality: since policymakers cannot consider all issues at all times, they ignore most and promote relatively few to the top of their agenda.

This lack of attention to most issues helps explain why most policies may not change, while intense periods of attention to some issues prompt new ways to frame and solve policy problems.

Some explanation comes from the power of participants, to (a) minimize attention and maintain an established framing, or (b) expand attention in the hope of attracting new audiences more sympathetic to new ways of thinking.

Further explanation comes from policymaking complexity, in which the scale of conflict is too large to understand, let alone control.

The original PET story

The original PET story – described in more detail in the 1000 Words version – applies two approaches – policy communities and agenda setting – to demonstrate stable relationships between interest groups and policymakers:

  • They endure when participants have built up trust and agreement – about the nature of a policy problem and how to address it – and ensure that few other actors have a legitimate role or interest in the issue.
  • They come under pressure when issues attract high policymaker attention, such as following a ‘focusing event’ or a successful attempt by some groups to ‘venue shop’ (seek influential audiences in another policymaking venue). When an issue reaches the ‘top’ of this wider political agenda it is processed in a different way: more participants become involved, and they generate more ways to look at (and seek to solve) the policy.

The key focus is the competition to frame or define a policy problem (to exercise power to reduce ambiguity). The successful definition of a policy problem as technical or humdrum ensures that issues are monopolized and considered quietly in one venue. The reframing of that issue as crucial to other institutions, or the big political issues of the day, ensures that it will be considered by many audiences and processed in more than one venue (see also Schattschneider).

The modern PET story

The modern PET story is about complex systems and attention.

Its analysis of bounded rationality and policymaker psychology remains crucial, since PET measures the consequences of the limited attention of individuals and organisations.

However, note the much greater quantification of policy change across entire political systems (see the Comparative Agendas Project).

PET shows how policy actors and organisations contribute to ‘disproportionate information processing’, in which attention to information fluctuates out of proportion to (a) the size of policy problems and (b) the information on problems available to policymakers.

It also shows that the same basic distribution of policy change – ‘hyperincremental’ in most cases, but huge in some – is present in every political system studied by the CAP (summed up by the image below).

[Image: True et al., Figure 6.2]

See also:

5 images of the policy process



Policy Analysis in 750 words: Deborah Stone (2012) Policy Paradox

Please see the Policy Analysis in 750 words series overview before reading the summary. This post is 750 words plus a bonus 750 words plus some further reading that doesn’t count in the word count even though it does.

[Image: cover of Stone, Policy Paradox, 3rd edition]

Deborah Stone (2012) Policy Paradox: The Art of Political Decision Making 3rd edition (Norton)

‘Whether you are a policy analyst, a policy researcher, a policy advocate, a policy maker, or an engaged citizen, my hope for Policy Paradox is that it helps you to go beyond your job description and the tasks you are given – to think hard about your own core values, to deliberate with others, and to make the world a better place’ (Stone, 2012: 15)

Stone (2012: 379-85) rejects the image of policy analysis as a ‘rationalist’ project, driven by scientific and technical rules, and separable from politics. Rather, every policy analyst’s choice is a political choice – to define a problem and solution, and, in doing so, to choose how to categorise people and behaviour – backed by strategic persuasion and storytelling.

The Policy Paradox: people entertain multiple, contradictory, beliefs and aims

Stone (2012: 2-3) describes the ways in which policy actors compete to define policy problems and public policy responses. The ‘paradox’ is that it is possible to define the same policies in contradictory ways.

‘Paradoxes are nothing but trouble. They violate the most elementary principle of logic: something can’t be two different things at once. Two contradictory interpretations can’t both be true. A paradox is just such an impossible situation, and political life is full of them’ (Stone, 2012: 2).

This paradox does not refer simply to a competition between different actors to define policy problems and the success or failure of solutions. Rather:

  • The same actor can entertain very different ways to understand problems, and can juggle many criteria to decide that a policy outcome was a success and a failure (2012: 3).
  • Surveys of the same population can report contradictory views – encouraging a specific policy response and its complete opposite – when asked different questions in the same poll (2012: 4; compare with Riker)

Policy analysts: you don’t solve the Policy Paradox with a ‘rationality project’

Like many posts in this series (Smith, Bacchi, Hindess), Stone (2012: 9-11) rejects the misguided notion of objective scientists using scientific methods to produce one correct answer (compare with Spiegelhalter and Weimer & Vining). A policy paradox cannot be solved by ‘rational, analytical, and scientific methods’.

Further, Stone (2012: 10-11) rejects the over-reliance, in policy analysis, on misleading claims that:

  • policymakers are engaging primarily with markets rather than communities (see 2012: 35 on the comparison between a ‘market model’ and ‘polis model’),
  • economic models can sum up political life, and
  • cost-benefit-analysis can reduce a complex problem into the sum of individual preferences using a single unambiguous measure.

Rather, many factors undermine such simplicity:

  1. People do not simply act in their own individual interest. Nor can they rank-order their preferences in a straightforward manner according to their values and self-interest.
  • Instead, they maintain a contradictory mix of objectives, which can change according to context and their way of thinking – combining cognition and emotion – when processing information (2012: 12; 30-4).
  2. People are social actors. Politics is characterised by ‘a model of community where individuals live in a dense web of relationships, dependencies, and loyalties’ and exercise power with reference to ideas as much as material interests (2012: 10; 20-36; compare with Ostrom, more Ostrom, and Lubell; and see Sousa on contestation).
  3. Morals and emotions matter. If people juggle contradictory aims and measures of success, then a story infused with ‘metaphor and analogy’, and appealing to values and emotions, prompts people ‘to see a situation as one thing rather than another’ and therefore draws attention to one aim at the expense of the others (2012: 11; compare with Gigerenzer).

Policy analysis reconsidered: the ambiguity of values and policy goals

Stone (2012: 14) identifies the ambiguity of the criteria for success used in 5-step policy analyses. They do not form part of a solely technical or apolitical process to identify trade-offs between well-defined goals (compare Bardach, Weimer and Vining, and Mintrom). Rather, ‘behind every policy issue lurks a contest over conflicting, though equally plausible, conceptions of the same abstract goal or value’ (2012: 14). Examples of competing interpretations of valence issues include definitions of:

  1. Equity, according to: (a) which groups should be included, how to assess merit, how to identify key social groups, if we should rank populations within social groups, how to define need and account for different people placing different values on a good or service, (b) which method of distribution to use (competition, lottery, election), and (c) how to balance individual, communal, and state-based interventions (2012: 39-62).
  2. Efficiency, to use the least resources to produce the same objective, according to: (a) who determines the main goal and how to balance multiple objectives, (b) who benefits from such actions, and (c) how to define resources while balancing equity and efficiency – for example, do a public sector job and a social security payment represent a sunk cost to the state or a social investment in people? (2012: 63-84).
  3. Welfare or Need, according to factors including (a) the material and symbolic value of goods, (b) short term support versus a long term investment in people, (c) measures of absolute poverty or relative inequality, and (d) debates on ‘moral hazard’ or the effect of social security on individual motivation (2012: 85-106).
  4. Liberty, according to (a) a general balancing of freedom from coercion and freedom from the harm caused by others, (b) debates on individual and state responsibilities, and (c) decisions on whose behaviour to change to reduce harm to what populations (2012: 107-28).
  5. Security, according to (a) our ability to measure risk scientifically (see Spiegelhalter and Gigerenzer), (b) perceptions of threat and experiences of harm, (c) debates on how much risk to safety to tolerate before intervening, (d) who to target and imprison, and (e) the effect of surveillance on perceptions of democracy (2012: 129-53).

Policy analysis as storytelling for collective action

Actors use policy-relevant stories to influence the ways in which their audience understands (a) the nature of policy problems and feasibility of solutions, within (b) a wider context of policymaking in which people contest the proper balance between state, community, and market action. Stories can influence key aspects of collective action, including:

  1. Defining interests and mobilising actors, by drawing attention to – and framing – issues with reference to an imagined social group and its competition (e.g. the people versus the elite; the strivers versus the skivers) (2012: 229-47)
  2. Making decisions, by framing problems and solutions (2012: 248-68). Stone (2012: 260) contrasts the ‘rational-analytic model’ with real-world processes in which actors deliberately frame issues ambiguously, shift goals, keep feasible solutions off the agenda, and manipulate analyses to make their preferred solution seem the most efficient and popular.
  3. Defining the role and intended impact of policies, such as when balancing punishments versus incentives to change behaviour, or individual versus collective behaviour (2012: 271-88).
  4. Setting and enforcing rules (see institutions), in a complex policymaking system where a multiplicity of rules interact to produce uncertain outcomes, and a powerful narrative can draw attention to the need to enforce some rules at the expense of others (2012: 289-310).
  5. Persuasion, drawing on reason, facts, and indoctrination. Stone (2012: 311-30) highlights the context in which actors construct stories to persuade: people engage emotionally with information, people take certain situations for granted even though they produce unequal outcomes, facts are socially constructed, and there is unequal access to resources – held in particular by government and business – to gather and disseminate evidence.
  6. Defining human and legal rights, when (a) there are multiple, ambiguous, and intersecting rights (in relation to their source, enforcement, and the populations they serve), (b) actors compete to make sure that theirs are enforced, (c) inevitably at the expense of others, because the enforcement of rights requires a disproportionate share of limited resources (such as policymaker attention and court time) (2012: 331-53).
  7. Influencing debate on the powers of each potential policymaking venue – in relation to factors including (a) the legitimate role of the state in market, community, family, and individual life, (b) how to select leaders, (c) the distribution of power between levels and types of government – and who to hold to account for policy outcomes (2012: 354-77).

Key elements of storytelling include:

  1. Symbols, which sum up an issue or an action in a single picture or word (2012: 157-8)
  2. Characters, such as heroes or villains, who symbolise the cause of a problem or the source of a solution (2012: 159)
  3. Narrative arcs, such as a battle by your hero to overcome adversity (2012: 160-8)
  4. Synecdoche, to highlight one example of an alleged problem to sum up its whole (2012: 168-71; compare the ‘welfare queen’ example with SCPD)
  5. Metaphor, to create an association between a problem and something relatable, such as a virus or disease, a natural occurrence (e.g. earthquake), something broken, something about to burst if overburdened, or war (2012: 171-78; e.g. is crime a virus or a beast?)
  6. Ambiguity, to give people different reasons to support the same thing (2012: 178-82)
  7. Using numbers to tell a story, based on political choices about how to: categorise people and practices, select the measures to use, interpret the figures to evaluate or predict the results, project the sense that complex problems can be reduced to numbers, and assign authority to the counters (2012: 183-205; compare with Spiegelhalter)
  8. Assigning Causation, in relation to categories including accidental or natural, ‘mechanical’ or automatic (or in relation to institutions or systems), and human-guided causes that have intended or unintended consequences (such as malicious intent versus recklessness).
  • ‘Causal strategies’ include: emphasising a natural rather than human cause, relating the problem to ‘bad apples’ rather than systemic failure, and suggesting that the problem was too complex to anticipate or influence
  • Actors use these arguments to influence rules, assign blame, identify ‘fixers’, and generate alliances among victims or potential supporters of change (2012: 206-28).

Wider Context and Further Reading: 1. Policy analysis

This post connects to several other 750 Words posts, which suggest that facts don’t speak for themselves. Rather, effective analysis requires you to ‘tell your story’, in a concise way, tailored to your audience.

For example, consider two ways to establish cause and effect in policy analysis:

One is to conduct and review multiple randomised control trials.

Another is to use a story of a hero or a villain (perhaps to mobilise actors in an advocacy coalition).

  2. Evidence-based policymaking

Stone (2012: 10) argues that analysts who try to impose one worldview on policymaking will find that ‘politics looks messy, foolish, erratic, and inexplicable’. For analysts who are more open-minded, politics opens up possibilities for creativity and cooperation (2012: 10).

This point is directly applicable to the ‘politics of evidence based policymaking’. A common question to arise from this worldview is ‘why don’t policymakers listen to my evidence?’ and one answer is ‘you are asking the wrong question’.

  3. Policy theories highlight the value of stories (to policy analysts and academics)

Policy problems and solutions necessarily involve ambiguity:

  1. There are many ways to interpret problems, and we resolve such ambiguity by exercising power to attract attention to one way to frame a policy problem at the expense of others (in other words, not with reference to one superior way to establish knowledge).
  2. Policy is actually a collection of – often contradictory – policy instruments and institutions, interacting in complex systems or environments, to produce unclear messages and outcomes. As such, what we call ‘public policy’ (for the sake of simplicity) is subject to interpretation and manipulation as it is made and delivered, and we struggle to conceptualise and measure policy change. Indeed, it makes more sense to describe competing narratives of policy change.

[Box 13.1, Understanding Public Policy, 2nd edition]

  4. Policy theories and storytelling

People communicate meaning via stories. Stories help us turn (a) a complex world, which provides a potentially overwhelming amount of information, into (b) something manageable, by identifying its most relevant elements and guiding action (compare with Gigerenzer on heuristics).

The Narrative Policy Framework identifies the storytelling strategies of actors seeking to exploit other actors’ cognitive shortcuts, using a particular format – containing the setting, characters, plot, and moral – to focus on some beliefs over others, and reinforce someone’s beliefs enough to encourage them to act.

Compare with Tuckett and Nicolic on the stories that people tell to themselves.



Filed under 750 word policy analysis, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling

Understanding Public Policy 2nd edition

All going well, it will be out in November 2019. We are now at the proofing stage.

I have included below the summaries of the chapters (and each chapter should also have its own entry (or multiple entries) in the 1000 Words and 500 Words series).

[Images: 2nd edition cover and chapter summaries, chapters 1–13]


Filed under 1000 words, 500 words, agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, public policy

Policy in 500 Words: Power and Knowledge

Classic studies suggest that the most profound and worrying kinds of power are the hardest to observe. We often witness highly visible political battles and can use pluralist methods to identify who has material resources, how they use them, and who wins. However, key forms of power ensure that many such battles do not take place. Actors often use their resources to reinforce social attitudes and policymakers’ beliefs, to establish which issues are policy problems worthy of attention and which populations deserve government support or punishment. Key battles may not arise because not enough people think they are worthy of debate. Attention and support for debate may rise, only to be crowded out of a political agenda in which policymakers can only debate a small number of issues.

Studies of power relate these processes to the manipulation of ideas or shared beliefs under conditions of bounded rationality (see for example the NPF). Manipulation might describe some people getting other people to do things they would not otherwise do. They exploit the beliefs of people who do not know enough about the world, or themselves, to know how to identify and pursue their best interests. Or, they encourage social norms – in which we describe some behaviour as acceptable and some as deviant – which are enforced by the state (for example, via criminal justice and mental health policy), but also social groups and individuals who govern their own behaviour with reference to what they feel is expected of them (and the consequences of not living up to expectations).

Such beliefs, norms, and rules are profoundly important because they often remain unspoken and taken for granted. Indeed, some studies equate them with the social structures that appear to close off some action. If so, we may not need to identify manipulation to find unequal power relationships: strong and enduring social practices help some people win at the expense of others, by luck or design.

In practice, these more-or-less-observable forms of power co-exist and often reinforce each other:

Example 1. The control of elected office is highly skewed towards men. Male incumbency, combined with social norms about who should engage in politics and public life, signals to women that their efforts may be relatively unrewarded and routinely punished – for example, in electoral campaigns in which women face verbal and physical misogyny – and the oversupply of men in powerful positions tends to limit debates on feminist issues.

Example 2. ‘Epistemic violence’ describes the act of dismissing an individual, social group, or population by undermining the value of their knowledge or claim to knowledge. Specific discussions include: (a) the colonial West’s subjugation of colonized populations, diminishing the voice of the subaltern; (b) privileging scientific knowledge and dismissing knowledge claims via personal or shared experience; and (c) erasing the voices of women of colour from the history of women’s activism and intellectual history.

It is in this context that we can understand ‘critical’ research designed to ‘produce social change that will empower, enlighten, and emancipate’ (p51). Powerlessness can relate to the visible lack of economic material resources and factors such as the lack of opportunity to mobilise and be heard.

See also:

Policy Concepts in 1000 Words: Power and Ideas

Evidence-informed policymaking: context is everything


Filed under 500 words, agenda setting, public policy, Storytelling

Policy in 500 words: uncertainty versus ambiguity

In policy studies, there is a profound difference between uncertainty and ambiguity:

  • Uncertainty describes a lack of knowledge or a worrying lack of confidence in one’s knowledge.
  • Ambiguity describes the ability to entertain more than one interpretation of a policy problem.

Both concepts relate to ‘bounded rationality’: policymakers do not have the ability to process all information relevant to policy problems. Instead, they employ two kinds of shortcut:

  • ‘Rational’. Pursuing clear goals and prioritizing certain sources of information.
  • ‘Irrational’. Drawing on emotions, gut feelings, deeply held beliefs, and habits.

I make an artificially binary distinction, uncertain versus ambiguous, and relate it to another binary, rational versus irrational, to point out the pitfalls of focusing too much on one aspect of the policy process:

  1. Policy actors seek to resolve uncertainty by generating more information or drawing greater attention to the available information.

Actors can try to resolve uncertainty by: (a) improving the quality of evidence, and (b) making sure that there are no major gaps between the supply of and demand for evidence. Relevant debates include what counts as good evidence? – focusing on the criteria used to define scientific evidence and its relationship with other forms of knowledge (such as practitioner experience and service user feedback) – and what are the barriers between supply and demand? – focusing on the need for better ways to communicate.

  2. Policy actors seek to resolve ambiguity by focusing on one interpretation of a policy problem at the expense of another.

Actors try to resolve ambiguity by exercising power to increase attention to, and support for, their favoured interpretation of a policy problem. You will find many examples of such activity spread across the 500 and 1000 Words series.

A focus on reducing uncertainty gives the impression that policymaking is a technical process in which people need to produce the best evidence and deliver it to the right people at the right time.

In contrast, a focus on reducing ambiguity gives the impression of a more complicated and political process in which actors are exercising power to compete for attention and dominance of the policy agenda. Uncertainty matters, but primarily to describe the role of a complex policymaking system in which no actor truly understands where they are or how they should exercise power to maximise their success.

Further reading:

For a longer discussion, see Fostering Evidence-informed Policy Making: Uncertainty Versus Ambiguity (PDF)

Or, if you fancy it in French: Favoriser l’élaboration de politiques publiques fondées sur des données probantes : incertitude versus ambiguïté (PDF)

Framing

The politics of evidence-based policymaking

To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty

How to communicate effectively with policymakers: combine insights from psychology and policy studies

Here is the relevant opening section in UPP:

[Image: Understanding Public Policy, p. 234, on ambiguity]


Filed under 500 words, agenda setting, Evidence Based Policymaking (EBPM), public policy, Storytelling

A 5-step strategy to make evidence count

Let’s imagine a heroic researcher, producing the best evidence and fearlessly ‘speaking truth to power’. Then, let’s place this person in four scenarios, each of which combines a discussion of evidence, policy, and politics in different ways.

  1. Imagine your hero presents to HM Treasury an evidence-based report concluding that a unitary UK state would be far more efficient than a union state guaranteeing Scottish devolution. The evidence is top quality and the reasoning is sound, but the research question is ridiculous. The result of political deliberation and electoral choice suggests that your hero is asking a research question that does not deserve to be funded in the current political climate. Your hero is a clown.
  2. Imagine your hero presents to the Department of Health a report based on the systematic review of multiple randomised control trials. It recommends that you roll out an almost-identical early years or public health intervention across the whole country. We need high ‘fidelity’ to the model to ensure the correct ‘dosage’ and to measure its effect scientifically. The evidence is of the highest quality, but the research question is not quite right. The government has decided to devolve this responsibility to local public bodies and/ or encourage the co-production of public service design by local public bodies, communities, and service users. So, to focus narrowly on fidelity would be to ignore political choices (perhaps backed by different evidence) about how best to govern. If you don’t know the politics involved, you will ask the wrong questions or provide evidence with unclear relevance. Your hero is either a fool, naïve to the dynamics of governance, or a villain willing to ignore governance principles.        
  3. Imagine two fundamentally different – but equally heroic – professions with their own ideas about evidence. One favours a hierarchy of evidence in which RCTs and their systematic review are at the top, and service user and practitioner feedback is near the bottom. The other rejects this hierarchy completely, identifying the unique, complex relationship between practitioner and service user, which requires high discretion to make choices in situations that will differ each time. Trying to resolve a debate between them with reference to ‘the evidence’ makes no sense. This is about a conflict between two heroes with opposing beliefs and preferences that can only be resolved through compromise or political choice. This is, oh I don’t know, Batman v Superman, saved by Wonder Woman.
  4. Imagine you want the evidence on hydraulic fracturing for shale oil and gas. We know that ‘the evidence’ follows the question: how much can we extract? How much revenue will it produce? Is it safe, from an engineering point of view? Is it safe, from a public health point of view? What will be its impact on climate change? What proportion of the public supports it? What proportion of the electorate supports it? Who will win and lose from the decision? It would be naïve to think that there is some kind of neutral way to produce an evidence-based analysis of such issues. The commissioning and integration of evidence has to be political. To pretend otherwise is a political strategy. Your hero may be another person’s villain.

Now, let’s use these scenarios to produce a 5-step way to ‘make evidence count’.

Step 1. Respect the positive role of politics

A narrow focus on making the supply of evidence count, via ‘evidence-based policymaking’, will always be dispiriting because it ignores politics or treats political choice as an inconvenience. If we:

  • begin with a focus on why we need political systems to make authoritative choices between conflicting preferences, and take governance principles seriously, we can
  • identify the demand for evidence in that context, then be more strategic and pragmatic about making evidence count, and
  • be less dispirited about the outcome.

In other words, think about the positive and necessary role of democratic politics before bemoaning post-truth politics and policy-based-evidence-making.

Step 2. Reject simple models of evidence-based policymaking

Policy is not made in a cycle containing a linear series of separate stages, and we won’t ‘make evidence count’ by using the cycle image to inform our practices.

[Image: the policy cycle]

You might not want to give up the cycle image because it presents a simple account of how you should make policy. It suggests that we elect policymakers then: identify their aims, identify policies to achieve those aims, select a policy measure, ensure that the selection is legitimised by the population or its legislature, identify the necessary resources, implement and then evaluate. Or, policymakers aided by expert policy analysts make and legitimise choices, skilful public servants carry them out, and policy analysts assess the results using evidence.

One compromise is to keep the cycle, then show how messy it is in practice.

However, there comes a point when there is too much mess, and the image no longer helps you explain (a) to the public what you are doing, or (b) to providers of evidence how they should engage in political systems. By this point, simple messages from more complicated policy theories may be more useful.

Or, we may no longer want a cycle to symbolise a single source of policymaking authority. In a multi-level system, with many ‘centres’ possessing their own sources of legitimate authority, a single and simple policy cycle seems too artificial to be useful.

Step 3. Tell a simple story about your evidence

People are ‘cognitive misers’ seeking ‘rational’ and ‘irrational’ shortcuts to gather information for action, so you won’t get far if you bombard them with too much evidence. Policymakers already have too much evidence and they seek ways to reduce their cognitive load, relying on: (a) trusted sources of concise evidence relevant to their aims, and (b) their own experience, gut instinct, beliefs, and emotions.

The implication of both shortcuts is that we need to tell simple and persuasive stories about the substance and implications of the evidence we present. To say that ‘the evidence does not speak for itself’ may seem trite, but I’ve met too many people who assume naively that it will somehow ‘win the day’. In contrast, civil servants know that the evidence-informed advice they give to ministers needs to relate to the story that government ministers tell to the public.


Step 4. Tailor your story to many audiences

In a complex or multi-level environment, one story to one audience (such as a minister) is not enough. If there are many key sources of policymaking authority – including public bodies with high autonomy, organisations and practitioners with the discretion to deliver services, and service users involved in designing services – there are many stories being told about what we should be doing and why. We may convince one audience and alienate (or fail to inspire) another with the same story.

Step 5. Clarify and address key dilemmas with political choice, not evidence

Let me give you one example of the dilemmas that must arise when you combine evidence and politics to produce policy: how do you produce a model of ‘evidence-based best practice’ which combines evidence and governance principles in a consistent way? Here are three ideal-type models which answer the question in very different ways:

[Table 1: Three ideal-type models of evidence-based best practice]

The table helps us think through the tensions between models, built on very different principles of good evidence and governance.

In practice, you may want to combine different elements, perhaps while arguing that the loss of consistency is lower than the gain from flexibility. Or, the dynamics of political systems limit such choice or prompt ad hoc and inconsistent choices.

I built a lot of this analysis on the experiences of the Scottish Government, which juggles all three models, including a key focus on the improvement method in its Early Years Collaborative.

However, Kathryn Oliver and I show that the UK government faces the same basic dilemma and addresses it in similar ways.

The example freshest in my mind is Sure Start. Its rationale was built on RCT evidence and systematic review. However, its roll-out was built more on local flexibility and service design than insistence on fidelity to a model. More recently, the Troubled Families programme initially set the policy agenda and criteria for inclusion, but increasingly invites local public bodies to select the most appropriate interventions, aided by the Early Intervention Foundation which reviews the evidence but does not insist on one-best-way. Emily St Denny and I explore these issues further in our forthcoming book on prevention policy, an exemplar case study of a field in which it is difficult to know how to ‘make evidence count’.

If you prefer a 3-step take home message:

  1. I think we use phrases like ‘impact’ and ‘make evidence count’ to reflect a vague and general worry about a decline in respect for evidence and experts. Certainly, when I go to large conferences of scientists, they usually tell a story about ‘post-truth’ politics.
  2. Usually, these stories do not acknowledge the difference between two different explanations for an evidence-policy gap: (a) pathological policymaking and corrupt politicians, versus (b) complex policymaking and politicians having to make choices despite uncertainty.
  3. To produce evidence with ‘impact’, and know how to ‘make evidence count’, we need to understand the policy process and the demand for evidence within it.

*Background. This is a post for my talk at the Government Economic Service and Government Social Research Service Annual Training Conference (15th September 2017). This year’s theme is ‘Impact and Future-Proofing: Making Evidence Count’. My brief is to discuss evidence use in the Scottish Government, but it faces the same basic question as the UK Government: how do you combine principles of evidence quality and governance principles? In other words, if you were in a position to design an (a) evidence-gathering system and (b) a political system, you’d soon find major points of tension between them. Resolving those tensions involves political choice, not more evidence. Of course, you are not in a position to design both systems, so the more complicated question is: how do you satisfy principles of evidence and governance in a complex policy process, often driven by policymaker psychology, over which you have little control?  Here are 7 different ‘answers’.

Powerpoint Paul Cairney @ GES GSRS 2017


Filed under Evidence Based Policymaking (EBPM), public policy, Scottish politics, UK politics and policy

Telling Stories that Shape Public Policy

This is a guest post by Michael D. Jones (left) and Deserai Anderson Crow (right), discussing how to use insights from the Narrative Policy Framework to think about how to tell effective stories to achieve policy goals. The full paper has been submitted to the series for Policy and Politics called Practical Lessons from Policy Theories.

Imagine. You are an ecologist. You recently discovered that a chemical that is discharged from a local manufacturing plant is threatening a bird that locals love to watch every spring. Now, imagine that you desperately want your research to be relevant and make a difference to help save these birds. All of your training gives you depth of expertise that few others possess. Your training also gives you the ability to communicate and navigate things such as probabilities, uncertainty, and p-values with ease.

But as NPR’s Robert Krulwich argues, focusing on this very specialized training when you communicate policy problems could lead you in the wrong direction. While being true to the science and best practices of your training, one must also be able to tell a compelling story.  Perhaps combine your scientific findings with the story about the little old ladies who feed the birds in their backyards on spring mornings, emphasizing the beauty and majesty of these avian creatures, their role in the community, and how the toxic chemicals are not just a threat to the birds, but are also a threat to the community’s understanding of itself and its sense of place.  The latest social science is showing that if you tell a good story, your policy communications are likely to be more effective.

Why focus on stories?

The world is complex. We are bombarded with information as we move through our lives and we seek patterns within that information to simplify complexity and reduce ambiguity, so that we can make sense of the world and act within it.

The primary means by which human beings render complexity understandable and reduce ambiguity is through the telling of stories. We “fit” the world around us and the myriad of objects and people therein, into story patterns. We are by nature storytelling creatures. And if it is true of us as individuals, then we can also safely assume that storytelling matters for public policy where complexity and ambiguity abound.

Based on our (hopefully) forthcoming article (which has a heavy debt to Jones and Peterson, 2017 and Catherine Smith’s popular textbook), here we offer some abridged advice synthesizing some of the most current social science findings about how best to engage in public policy storytelling. We break it down into five easy steps and offer a short discussion of likely intervention points within the policy process.

The 5 Steps of Good Policy Narrating

  1. Tell a Story: Remember, facts never speak for themselves. If you are presenting best practices, relaying scientific information, or detailing cost/benefit analyses, you are telling or contributing to a story.  Engage your storytelling deliberately.
  2. Set the Stage: Policy narratives have a setting and in this setting you will find specific evidence, geography, legal parameters, and other policy consequential items and information.  Think of these setting items as props.  Not all stages can hold every relevant prop.  Be true to science; be true to your craft, but set your stage with props that maximize the potency of your story, which always includes making your setting amenable to your audience.
  3. Establish the Plot: In public policy, plots usually define the problem (and policies do not exist without at least a potential problem). Define your problem. Doing so determines the causes, which establishes blame.
  4. Cast the Characters:  Having established a plot and defined your problem, the roles you will need your characters to play become apparent. Determine who the victim is (who is harmed by the problem), who is responsible (the villain) and who can bring relief (the hero). Cast characters your audience will appreciate in their roles.
  5. Clearly Specify the Moral: Postmodern films might get away without having a point.  Policy narratives usually do not. Let your audience know what the solution is.

Public Policy Intervention Points

There are crucial points in the policy process where actors can use narratives to achieve their goals. We call these “intervention points” and all intervention points should be viewed as opportunities to tell a good policy story, although each will have its own constraints.

These intervention points include the most formal types of policy communication such as crafting of legislation or regulation, expert testimony or statements, and evaluation of policies. They also include less formal communications through the media and by citizens to government.

These interventions can often be dry and jargon-laden, but remember that employing effective narratives within any of them makes you much more likely to see your policy goals met.

When considering how to construct your story within one or more of the various intervention points, we urge you to first consider several aspects of your role as a narrator.

  1. Who are you and what are your goals? Are you an outsider trying to effect change to solve a problem, or push an agency to do something it might not be inclined to do?  Are you an insider trying to evaluate and improve policy making and implementation? Understanding your role and your goals is essential to both selecting an appropriate intervention point and optimizing your narrative therein.
  2. Carefully consider your audience. Who are they and what is their posture towards your overall goal? Understanding your audience’s values and beliefs is essential for avoiding invoking defensiveness.
  3. Consider the intervention point itself: what is the best way to reach your audience? What are the rules for the type of communication you plan to use? For example, media communications can take the form of lengthy press releases, interviews with the press, or a simple tweet.  All of these methods have both formal and informal constraints that will determine what you can and can’t do.

Without deliberate consideration of your role, audience, the intervention point, and how your narrative links all of these pieces together, you are relying on chance to tell a compelling policy story.

On the other hand, thoughtful and purposeful storytelling that remains true to you, your values, your craft, and your best understanding of the facts, can allow you to be both the ecologist and the bird lover.

 


Filed under public policy, Storytelling

Three ways to communicate more effectively with policymakers

By Paul Cairney and Richard Kwiatkowski

Use psychological insights to inform communication strategies

Policymakers cannot pay attention to all of the things for which they are responsible, or understand all of the information they use to make decisions. Like all people, they face limits on the information they can process (Baddeley, 2003; Cowan, 2001, 2010; Miller, 1956; Rock, 2008).

They must use short cuts to gather enough information to make decisions quickly: the ‘rational’, by pursuing clear goals and prioritizing certain kinds of information, and the ‘irrational’, by drawing on emotions, gut feelings, values, beliefs, habits, schemata, scripts, and what is familiar, to make decisions quickly. Unlike most people, they face unusually strong pressures on their cognition and emotion.

Policymakers need to gather information quickly and effectively, often in highly charged political atmospheres, so they develop heuristics to allow them to make what they believe to be good choices. Perhaps their solutions seem to be driven more by their values and emotions than a ‘rational’ analysis of the evidence, often because we hold them to a standard that no human can reach.

If so, and if they have high confidence in their heuristics, they will dismiss criticism from researchers as biased and naïve. Under those circumstances, we suggest that restating the need for ‘rational’ and ‘evidence-based policymaking’ is futile, naively ‘speaking truth to power’ counterproductive, and declaring ‘policy based evidence’ defeatist.

We use psychological insights to recommend a shift in strategy for advocates of the greater use of evidence in policy. The simple recommendation, to adapt to policymakers’ ‘fast thinking’ (Kahneman, 2011) rather than bombard them with evidence in the hope that they will get round to ‘slow thinking’, is already becoming established in evidence-policy studies. However, we provide a more sophisticated understanding of policymaker psychology, to help understand how people think and make decisions as individuals and as part of collective processes. It allows us to (a) combine many relevant psychological principles with policy studies to (b) provide several recommendations for actors seeking to maximise the impact of their evidence.

To ‘show our work’, we first summarise insights from policy studies already drawing on psychology to explain policy process dynamics, and identify key aspects of the psychology literature which show promising areas for future development.

Then, we emphasise the benefit of pragmatic strategies, to develop ways to respond positively to ‘irrational’ policymaking while recognising that the biases we ascribe to policymakers are present in ourselves and our own groups. Instead of bemoaning the irrationality of policymakers, let’s marvel at the heuristics they develop to make quick decisions despite uncertainty. Then, let’s think about how to respond effectively. Instead of identifying only the biases in our competitors, and masking academic examples of group-think, let’s reject our own imagined standards of high-information-led action. This more self-aware and humble approach will help us work more successfully with other actors.

On that basis, we provide three recommendations for actors trying to engage skilfully in the policy process:

  1. Tailor framing strategies to policymaker bias. If people are cognitive misers, minimise the cognitive burden of your presentation. If policymakers combine cognitive and emotive processes, combine facts with emotional appeals. If policymakers make quick choices based on their values and simple moral judgements, tell simple stories with a hero and moral. If policymakers reflect a ‘group emotion’, based on their membership of a coalition with firmly-held beliefs, frame new evidence to be consistent with those beliefs.
  2. Identify ‘windows of opportunity’ to influence individuals and processes. ‘Timing’ can refer to the right time to influence an individual, depending on their current way of thinking, or to act while political conditions are aligned.
  3. Adapt to real-world ‘dysfunctional’ organisations rather than waiting for an orderly process to appear. Form relationships in networks, coalitions, or organisations first, then supply challenging information second. To challenge without establishing trust may be counterproductive.

These tips are designed to produce effective, not manipulative, communicators. They help foster the clearer communication of important policy-relevant evidence, rather than imply that we should bend evidence to manipulate or trick politicians. We argue that it is pragmatic to work on the assumption that people’s beliefs are honestly held, and policymakers believe that their role is to serve a cause greater than themselves. To persuade them to change course requires showing simple respect and seeking ways to secure their trust, rather than simply ‘speaking truth to power’. Effective engagement requires skilful communication and good judgement as much as good evidence.


This is the introduction to our revised and resubmitted paper for the special issue of Palgrave Communications, ‘The politics of evidence-based policymaking: how can we maximise the use of evidence in policy?’ Please get in touch if you are interested in submitting a paper to the series.

Full paper: Cairney Kwiatkowski Palgrave Comms resubmission CLEAN 14.7.17


Filed under agenda setting, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy

How do we get governments to make better decisions?

This is a guest post by Chris Koski (left) and Sam Workman (right), discussing how to use insights from punctuated equilibrium theory to reform government policy making. The full paper has been submitted to the series for Policy and Politics called Practical Lessons from Policy Theories.


Many people assume that the main problem faced by governments is an information deficit. In fact, the opposite is true: a surfeit of information exists, and institutions have a hard time managing it. At the same time, even all the information that exists may be insufficient to define problems well. Institutions also need to develop the capacity to seek out better quality information.

Institutions – from the national government to state legislatures to city councils – try to solve the information processing dilemma by delegating authority to smaller subgroups. Delegation increases the information processing capacity of governments by involving more actors to attend to narrower issues.

The delegation of authority is ultimately a delegation of attention. It solves the ‘flow’ problem, but also introduces new ‘filters’: the subgroups’ preferences, interests, and modes of information search all influence what gets processed. Even narrowly focused smaller organizations face limitations in their capacity to search, and they are subject to similar forces as the governments which created them – filters for the deluge of information and capacity limitations for information seeking.

Organizational design predisposes institutions to filter information for ideas that support status quo problem definitions – that is, definitions that existed at the time of delegation – and to seek out information based on these status quo understandings.  As a result, despite a desire to expand attention and information processing to adapt to changes in problem characteristics, most institutions look for information that supports their identity.  Institutional problem definitions stay the same even as the problems change.

Governments eventually face trade-offs between the gains made from delegating decision-making to smaller subgroups and the losses associated with coordinating the information generated by those subgroups.

Governments get stuck in the same ruts as when the delegation process started: a status quo bias that doesn’t adjust with changing problem conditions.  There is a sense among citizens and academics that governments make bad decisions in part because they respond to the problems of today with the policies of 10 years ago.  Government solutions look like hammers in search of nails when they ought to look more like contractors or even urban planners.

Governments should not respond simply by centralizing

When institutions become stultified in their problem definitions, policymakers and citizens often misdiagnose the problem as entirely a coordination problem.  The logic here is that a small group of actors have captured policymaking and are using such capture for their own gain.  This understanding may or may not be true, but it leads to the “centralization as savior” fallacy: the idea that organizations with broader latitude will be better able to receive a wider variety of information from a broader range of sources.

There are two problems with this strategy.  First, centralization might guarantee an outcome, but at the expense of an honest problem search and, likely, at the expense of what we might call policy stability.  Second, centralization may offer the opportunity for a broader array of information to bear on policy decisions, but in practice it will rely on even narrower information filters, given the number of issues to which the newly centralized policymaking forum must attend.

More delegation produces fragmentation

The alternative, more delegation, has significant coordination challenges as we find bottlenecks of attention when multiple subsystems bear on decision-points.  Also, simply delegating authority can predispose subsystems to a particular solution, which we want to avoid.

We’d propose: Adaptive governance

  • Design institutions not just to attend to problems, but to be specifically information seeking. For example, NEPA requires that all US federal decision-making regarding the environment undergo some kind of environmental assessment – this can be as simple as saying “the environment will not be harmed” or as complex as an environmental impact statement.  At the same time, we’d suggest greater coordination of institutional actions – enhance communication across delegated units, but also build better feedback mechanisms to overarching institutions.
  • Institutions need to listen to the signals that their delegated units give them. When delegated institutions come to similar conclusions regarding similar problems, these are key signals to broader policymaking bodies.  Listening to signals from multiple delegated units allows for expertise to shine.  At the same time, disharmony across delegated units on the same problems is a good indicator of disharmony in information search.  Sometimes institutions respond to this disharmony by attempting to reduce participation in the policy process or cast outliers as simply outliers.  We think this is a bad idea as it exaggerates the acceptability of the status quo.
  • We propose ‘issue bundling’, which allows issues to be less tied up by monolithic problem definitions. Policymaking institutions ought to formally direct delegated institutions to look at the same problem relying upon different expertise.  Examples here are climate change or critical infrastructure protection.  Creating institutions to deal with these issues is a challenge, given the wide range of information necessary to address each.  Institutions can solve the attention problems that emerge from these multiple sources by creating specific channels of information.  This allows multiple subsystems – e.g. Agriculture, Transportation, or Environmental Protection – to assist institutional decision-making by sorting issue-specific (e.g. climate change) information.

Our solutions address fundamental problems of information processing – sorting and seeking information – that are inherent to humans and human-created organizations.  And while governments may be predisposed to prioritize decisions over information, we are optimistic that our recommendations can facilitate better informed policy in the future.


Filed under agenda setting, public policy

Practical Lessons from Policy Theories

These links to blog posts (the underlined headings) and tweets (with links to their full article) describe a new special issue of Policy and Politics, published in April 2018 and free to access until the end of May.


Three habits of successful policy entrepreneurs

Telling stories that shape public policy

How to design ‘maps’ for policymakers relying on their ‘internal compass’

Three ways to encourage policy learning

How can governments better collaborate to address complex problems?

How do we get governments to make better decisions?

How to navigate complex policy designs

Why advocacy coalitions matter and how to think about them

None of these abstract theories provide a ‘blueprint’ for action (they were designed primarily to examine the policy process scientifically). Instead, they offer one simple insight: you’ll save a lot of energy if you engage with the policy process that exists, not the one you want to see.

Then, they describe variations on the same themes, including:

  1. There are profound limits to the power of individual policymakers: they can only process so much information, have to ignore almost all issues, and therefore tend to share policymaking with many other actors.
  2. You can increase your chances of success if you work with that insight: identify the right policymakers, the ‘venues’ in which they operate, and the ‘rules of the game’ in each venue; build networks and form coalitions to engage in those venues; shape agendas by framing problems and telling good stories, design politically feasible solutions, and learn how to exploit ‘windows of opportunity’ for their selection.

Background to the special issue

Chris Weible and I asked a group of policy theory experts to describe the ‘state of the art’ in their field and the practical lessons that they offer.

Our next presentation was at the ECPR in Oslo.

The final articles in this series are now complete, but our introduction discusses the potential for more useful contributions.



Filed under agenda setting, Evidence Based Policymaking (EBPM), public policy

We need better descriptions than ‘evidence-based policy’ and ‘policy-based evidence’: the case of UK government ‘troubled families’ policy

Here is the dilemma for ‘evidence-based’ ‘troubled families’ policy: there are many indicators of ‘policy based evidence’ but few (if any) feasible and ‘evidence based’ alternatives.

Viewed from the outside, the ‘troubled families’ (TF) programme looks like a cynical attempt to produce a quick fix to the London riots, stigmatise vulnerable populations, and hoodwink the public into thinking that the central government is controlling local outcomes and generating success.

Viewed from the inside, it is a pragmatic policy solution, informed by promising evidence which needs to be sold in the right way. For the UK government there may seem to be little alternative to this policy, given the available evidence, the need to do something for the long term and to account for itself in a Westminster system in the short term.

So, in this draft paper, I outline this disconnect between interpretations of ‘evidence based policy’ and ‘policy based evidence’ to help provide some clarity on the pragmatic use of evidence in politics:

cairney-offshoot-troubled-families-ebpm-5-9-16

See also:

Governments think it’s OK to use bad evidence to make good policy: the case of the UK Government’s ‘troubled families’

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

In each of these posts, I note that it is difficult to know how, for example, social policy scholars should respond to these issues – but that policy studies help us identify a choice between strategies. In general, pragmatic strategies to influence the use of evidence in policy include: framing issues to catch the attention of policymakers or appeal to their biases, identifying where the ‘action’ is in multi-level policymaking systems, and forming coalitions with like-minded and well-connected actors. In other words, to influence rather than just comment on policy, we need to understand how policymakers would respond to external evaluation. A greater understanding of the routine motives of policymakers can therefore help produce more effective criticism of their problematic use of evidence. In social policy, there is an acute dilemma about the choice between engagement, to influence and be influenced by policymakers, and detachment, to ensure critical distance. If choosing the latter, we need to think harder about how criticism of ‘policy based evidence’ makes a difference.


Filed under agenda setting, Evidence Based Policymaking (EBPM), Prevention policy, public policy, UK politics and policy

Governments think it’s OK to use bad evidence to make good policy: the case of the UK Government’s ‘troubled families’

The UK Government’s ‘troubled families’ policy appears to be a classic top-down, evidence-free, and quick emotional reaction to crisis. It developed after riots in England (primarily in London) in August 2011. Within one week, and before announcing an inquiry into them, then Prime Minister David Cameron made a speech linking behaviour directly to ‘thugs’ and immorality – ‘people showing indifference to right and wrong…people with a twisted moral code…people with a complete absence of self-restraint’ – before identifying a breakdown in family life as a major factor (Cameron, 2011a).

Although the development of parenting programmes was already government policy, Cameron used the riots to raise parenting to the top of the agenda:

We are working on ways to help improve parenting – well now I want that work accelerated, expanded and implemented as quickly as possible. This has got to be right at the top of our priority list. And we need more urgent action, too, on the families that some people call ‘problem’, others call ‘troubled’. The ones that everyone in their neighbourhood knows and often avoids …Now that the riots have happened I will make sure that we clear away the red tape and the bureaucratic wrangling, and put rocket boosters under this programme …with a clear ambition that within the lifetime of this Parliament we will turn around the lives of the 120,000 most troubled families in the country.

Cameron reinforced this agenda in December 2011 by stressing the need for individuals and families to take moral responsibility for their actions, and for the state to intervene earlier in their lives to reduce public spending in the long term:

Officialdom might call them ‘families with multiple disadvantages’. Some in the press might call them ‘neighbours from hell’. Whatever you call them, we’ve known for years that a relatively small number of families are the source of a large proportion of the problems in society. Drug addiction. Alcohol abuse. Crime. A culture of disruption and irresponsibility that cascades through generations. We’ve always known that these families cost an extraordinary amount of money…but now we’ve come up with the actual figures. Last year the state spent an estimated £9 billion on just 120,000 families…that is around £75,000 per family.

The policy – primarily of expanding the provision of ‘family intervention’ approaches – is often described as a ‘classic case of policy based evidence’: policymakers cherry pick or tell tall tales about evidence to justify action. It is a great case study for two reasons:

  1. Within this one programme are many different kinds of evidence-use which attract the ire of academic commentators, from an obviously dodgy estimate and performance management system to a more-sincere-but-still-criticised use of evaluations and neuroscience.
  2. It is easy to criticise the UK government’s actions but more difficult to say – when viewing the policy problem from its perspective – what the government should do instead.

In other words, it is useful to note that the UK government is not winning awards for ‘evidence-based policymaking’ (EBPM) in this area, but less useful to deny the politics of EBPM and hold it up to a standard that no government can meet.

The UK Government’s problematic use of evidence

Take your pick from the following ways in which the UK Government has been criticised for its use of evidence to make and defend ‘troubled families’ policy.

Its identification of the most troubled families: cherry picking or inventing evidence

At the heart of the programme is the assertion that we know who the ‘troubled families’ are, what causes their behaviour, and how to stop it. Yet much of the programme is built on value judgements about feckless parents, a tipping of the balance from support to sanctions, and unsubstantiated anecdotes about key aspects such as the tendency of ‘worklessness’ or ‘welfare dependency’ to pass from one generation to another.

The UK government’s target of almost 120,000 families was based speculatively on previous Cabinet Office estimates in 2006 that about ‘2% of families in England experience multiple and complex difficulties’. This estimate was based on limited survey data and modelling to identify families who met five of seven criteria relating to unemployment, poor housing, parental education, the mental health of the mother, the chronic illness or disability of either parent, an income below 60% of the median, and an inability to buy certain items of food or clothing.

It then gave locally specific estimates to each local authority and asked them to find that number of families, identifying households with: (1) at least one under-18-year-old who has committed an offence in the last year, or is subject to an ASBO; and/or (2) a child who has been excluded from school permanently, or suspended on three consecutive terms, in a Pupil Referral Unit, off the school roll, or with over 15% unauthorised absences over three consecutive terms; and (3) an adult on out-of-work benefits.

If the household met all three criteria, it would automatically be included. Otherwise, local authorities had the discretion to identify further troubled families meeting two of the criteria plus other indicators of concern about the ‘high costs’ of late intervention, such as ‘a child who is on a Child Protection Plan’, ‘Families subject to frequent police call-outs or arrests’, and ‘Families with health problems’ linked to mental health, addiction, chronic conditions, domestic abuse, and teenage pregnancy.
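As a rough illustration of how mechanical this identification rule is, the criteria above can be sketched as a simple eligibility check. This is a hedged sketch only: the field names and data structure are invented for illustration, and the real DCLG guidance involved further indicators and considerable local authority discretion.

```python
# Hypothetical sketch of the 'troubled families' identification rule.
# All field names are invented; the real process was far messier.

def meets_crime_criterion(family):
    # (1) An under-18 who committed an offence in the last year
    # or is subject to an ASBO.
    return any(child["offence_last_year"] or child["asbo"]
               for child in family["children"])

def meets_education_criterion(family):
    # (2) A child excluded, persistently suspended, in a Pupil Referral
    # Unit, off the school roll, or with >15% unauthorised absences.
    return any(child["excluded_or_persistent_absence"]
               for child in family["children"])

def meets_work_criterion(family):
    # (3) An adult on out-of-work benefits.
    return any(adult["out_of_work_benefits"] for adult in family["adults"])

def identify_troubled_family(family, local_concern_indicators=0):
    criteria = [meets_crime_criterion(family),
                meets_education_criterion(family),
                meets_work_criterion(family)]
    if all(criteria):
        return "automatic"          # meets all three criteria
    if sum(criteria) >= 2 and local_concern_indicators > 0:
        return "local discretion"   # two criteria plus 'high cost' concerns
    return "not identified"
```

Even in this stylised form, the rule shows how much depends on the discretionary branch: any household meeting two criteria can be counted if a local authority finds one further indicator of concern.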

Its measure of success: ‘turning around’ troubled families

The UK government declared almost-complete success without convincing evidence. Success ‘in the last 6 months’, used to identify a ‘turned around family’, is measured in two main ways: (1) the child no longer having three exclusions in a row, a reduction in the child offending rate of 33% or in the anti-social behaviour rate of 60%, and/or the adult entering a relevant ‘progress to work’ programme; or (2) at least one adult moving from out-of-work benefits to continuous employment. Success was self-declared by local authorities, and both parties had a high incentive to declare it: local authorities received payments of £4,000 per family, and the UK government gained a temporary way to declare progress without long-term evidence.
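To make the payment-by-results logic concrete, the ‘turned around’ test described above can be sketched as a boolean check. All field names are hypothetical, and the grouping of the ‘and/or’ clauses is one plausible reading of the criteria rather than the official definition.

```python
# Hypothetical sketch of the 'turned around family' success measure.
# Thresholds follow the text; field names and clause grouping are
# illustrative assumptions, not the official DCLG definition.

def turned_around(family):
    # Route 1: crime/education progress and/or adult on a
    # 'progress to work' programme, over the last 6 months.
    education_ok = not family["three_exclusions_in_a_row"]
    crime_ok = (family["offending_reduction"] >= 0.33
                or family["asb_reduction"] >= 0.60)
    progress = (education_ok and crime_ok) or family["adult_on_progress_to_work"]
    # Route 2: an adult moving off out-of-work benefits into
    # continuous employment.
    employment = family["adult_moved_into_continuous_employment"]
    return progress or employment
```

Written out this way, the incentive problem is visible: a single adult entering employment, for whatever reason, is enough to trigger the £4,000 payment, with no counterfactual about what the intervention itself achieved.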

The declaration is in stark contrast to an allegedly suppressed report to the government which stated that the programme had ‘no discernible effect on unemployment, truancy or criminality’. This lack of impact was partly confirmed by FOI requests by The Guardian – demonstrating that at least 8,000 families received no intervention, but showed improvement anyway – and by analysis by Levitas and Crossley which suggests that local authorities could only identify families by departing from the DCLG’s initial criteria.

Its investment in programmes with limited evidence of success

The UK government’s massive expansion of ‘family intervention projects’, and related initiatives, is based on limited evidence of success from a small sample of people from a small number of pilots. The ‘evidence for the effectiveness of family intervention projects is weak’, and a government-commissioned systematic review suggests that there are no good quality evaluations to demonstrate (well) the effectiveness or value-for-money of key processes such as coordinated service provision. The impact of other interventions, previously with good reputations, has been unclear, such as the Family Nurse Partnership imported from the US, which so far has produced ‘no additional short-term benefit’. Overall, Crossley and Lambert suggest that “the weight of evidence surrounding ‘family intervention’ and similar approaches, over the longue durée, actually suggests that the approach doesn’t work”. There is also no evidence to support the heroic claim that spending £10,000 per family will save £65,000.

Its faith in sketchy neuroscientific evidence on the benefits of early intervention

The government is driven partly by a belief in the benefits of early intervention in the lives of children (from 0-3, or even before birth), which is based partly on the ‘now or never’ argument found in key reviews by Munro and Allen (one and two).


Policymakers take liberties with neuroscientific evidence to emphasise the profound effect of stress on early brain development (measured, for example, by levels of cortisol found in hair samples). These accounts underpinning the urgency of early intervention are received far more critically in fields such as social science, neuroscience, and psychology. For example, Wastell and White find no good quality scientific evidence behind the comparison of child brain development reproduced in Allen’s reports.

Now let’s try to interpret and explain these points partly from a government perspective

Westminster politics necessitates this presentation of ‘prevention’ policies

If you strip away the rhetoric, the troubled families programme is a classic attempt at early intervention to prevent poor outcomes. In this general field, it is difficult to know what government policy is – what it stands for and how you measure its success. ‘Prevention’ is vague, and governments combine it with a commitment to meaningful local discretion and the sense that local actors should be guided by a combination of the evidence of ‘what works’ and its applicability to local circumstances.

This approach is not tolerated in Westminster politics, built on the simple idea of accountability in which you know who is in charge and therefore whom to blame! UK central governments have to maintain some semblance of control because they know that people will try to hold them to account in elections and general debate. This ‘top down’ perspective has an enduring effect. Although prevention policy is vague, individual programmes such as ‘troubled families’ contain enough detail to generate intense debate on central government policy and performance, and they contain elements which emphasise high central direction – sustained ministerial commitment, a determination to demonstrate early success to justify a further rollout of policy, and performance management geared towards specific measurable outcomes – even if the broader aim is to encourage local discretion.

This context helps explain why governments appear to exploit crises to sell existing policies, and pursue ridiculous processes of estimation and performance measurement. They need to display strength, show certainty that they have correctly diagnosed a problem and its solution, and claim success using the ‘currency’ of Westminster politics – and they have to do these things very quickly.

Consequently, for example, they will not worry about some academics complaining about policy based evidence – they are more concerned about their media and public reception and the ability of the opposition to exploit their failures – and few people in politics have the time (that many academics take for granted) to wait for research. This is the lens through which we should view all discussions of the use of evidence in politics and policy.

Unequivocal evidence is impossible to produce and we can’t wait forever

The argument for evidence-based-policy rather than policy-based-evidence suggests that we know what the evidence is. Yet, in this field in particular, there is potential for major disagreement about the ‘bar’ we set for evidence.

Table 1: Three ideal types of EBBP

For some, it relates to a hierarchy of evidence in which randomised control trials (RCTs) and their systematic review are at the top: the aim is to demonstrate that an intervention’s effect was positive, and more positive than another intervention or non-intervention. This requires experiments: to compare the effects of interventions in controlled settings, in ways that are directly comparable with other experiments.

As table 1 suggests, some other academics do not adhere to – and some reject – this hierarchy. This context highlights three major issues for policymakers:

  1. In general, when they seek evidence, they find this debate about how to gather and analyse it (and the implications for policy delivery).
  2. When seeking evidence on interventions, they find some academics using the hierarchy to argue that the ‘evidence for the effectiveness of family intervention projects is weak’. This adherence to a hierarchy to determine research value also doomed a government-commissioned systematic review to failure: the review applied a hierarchy of evidence to its analysis of reports by authors who did not adhere to the same model. The latter tend to be more pragmatic in their research design (and often more positive about their findings), and their government audience rarely adheres to the same evidential standard built on a hierarchy. Unless someone gives ground, some researchers will never be satisfied with the available evidence, and elected policymakers are unlikely to listen to them.
  3. The evidence generated from RCTs is often disappointing. The so-far-discouraging experience of the Family Nurse Partnership has a particularly symbolic impact, and policymakers can easily pick up a general sense of uncertainty about the best policies in which to invest.

So, if your main viewpoint is academic, you can easily conclude that the available evidence does not yet justify massive expansion in the troubled families programme (perhaps you might prefer the Scottish approach of smaller scale piloting, or for the government to abandon certain interventions altogether).

However, if you are a UK government policymaker feeling the need to act – and knowing that you always have to make decisions despite uncertainty – you may also feel that there will never be enough evidence on which to draw. Given the problems outlined above, you may as well act now rather than wait for years for little to change.

The ends justify the means

Policymakers may feel that the ends of such policies – investment in early intervention by shifting funds from late intervention – may justify the means, which can include a ridiculous oversimplification of evidence. It may seem almost impossible for governments to find other ways to secure the shift, given the multiple factors which undermine its progress.

Governments sometimes hint at this approach when simplifying key figures – effectively to argue that late intervention costs £9bn while early intervention will only cost £448m – to reinforce policy change: ‘the critical point for the Government was not necessarily the precise figure, but whether a sufficiently compelling case for a new approach was made’.

Similarly the vivid comparison of healthy versus neglected brains provides shocking reference points to justify early intervention. Their rhetorical value far outweighs their evidential value. As in all EBPM, the choice for policymakers is to play the game, to generate some influence in not-ideal circumstances, or hope that science and reason will save the day (and the latter tends to be based on hope rather than evidence). So, the UK appeared to follow the US’ example in which neuroscience ‘was chosen as the scientific vehicle for the public relations campaign to promote early childhood programs more for rhetorical, than scientific reasons’, partly because a focus on, for example, permanent damage to brain circuitry is less abstract than a focus on behaviour.

Overall, policymakers seem willing to build their case on major simplifications and partial truths to secure what they believe to be a worthy programme (although it would be interesting to find out which policymakers actually believe the things they say). If so, pointing out their mistakes or alleging lies can often have a minimal impact (or worse, if policymakers ‘double down’ in the face of criticism).

Implications for academics, practitioners, and ‘policy based evidence’

I have been writing on ‘troubled families’ while encouraging academics and practitioners to describe pragmatic strategies to increase the use of evidence in policy.


Our starting point is relevant to this discussion – since it asks what we should do if policymakers don’t think like academics:

  • They worry more about Westminster politics – their media and public reception and the ability of the opposition party to exploit their failures – than what academics think of their actions.
  • They do not follow the same rules of evidence generation and analysis.
  • They do not have the luxury of uncertainty and time.

Generally, this is a useful lens through which we should view discussions of the realistic use of evidence in politics and policy. Without being pragmatic – recognising that policymakers will never think like scientists, and always face different pressures – we might simply declare ‘policy based evidence’ in all cases. Although a commitment to pragmatism does not solve these problems, at least it prompts us to be more specific about categories of PBE, the criteria we use to identify it, whether our colleagues share a commitment to those criteria, what we can reasonably expect of policymakers, and how we might respond.

In disciplines like social policy we might identify a further issue, linked to:

  1. A tradition of providing critical accounts of government policy to help hold elected policymakers to account. If so, the primary aim may be to publicise key flaws without engaging directly with policymakers to help fix them – and perhaps even to criticise other scholars for doing so – because effective criticism requires critical distance.
  2. A tendency of many other social policy scholars to engage directly in evaluations of government policy, with the potential to influence and be influenced by policymakers.

This dynamic highlights the difficulty of separating empirical and normative evaluations, when critics point to the inappropriate nature of the programmes as they interrogate the evidence for their effectiveness. This difficulty is often more hidden in other fields, but it is always a factor.

For example, Parr noted in 2009 that ‘despite ostensibly favourable evidence … it has been argued that the apparent benign-welfarism of family and parenting-based antisocial behaviour interventions hide a growing punitive authoritarianism’. The most extreme telling of that argument comes from Garrett in 2007, who compares residential FIPs (‘sin bins’) to post-war Dutch programmes resembling Nazi social engineering, and who criticises social policy scholars for giving them favourable evaluations – an argument criticised in turn by Nixon and Bennister et al.

For present purposes, note Nixon’s identification of ‘an unusual case of policy being directly informed by independent research’, referring to the possible impact of favourable evaluations of FIPs on the UK Government’s move away from (a) an intense focus on anti-social behaviour and sanctions towards (b) greater support. While it would be a stretch to suggest that academics can set government agendas, they can at least enhance their impact by framing their analysis in a way that secures policymaker interest. If academics seek influence, rather than critical distance, they may need to get their hands dirty: seeking to understand policymakers well enough to find alternative policies that still give them what they want.


Filed under Prevention policy, public policy, UK politics and policy

Heresthetics and referendums

Heresthetic(s) describes the effect of the order of choices on political outcomes. The Scottish referendum process could become a brilliant example…

William Riker invented the term heresthetics (or heresthetic) to describe the importance of a particular kind of manipulation:

one can help produce a particular choice if one can determine the context of, or order in which people make, choices.

Put simply, if you want to make something happen, it may be better to influence the institutions in which people make decisions, or to frame issues to determine which aspect of a problem people pay attention to, than to change their minds about their preferences.

The prospect of a second referendum on Scottish independence could provide a nice, simple, example of this process.

Ideally, you would want to know about people’s preferences in considerable detail. After all, life is more complicated than binary choices suggest, and people are open to compromise. Yet, we tend to produce very simple binary referendums, because more complex ballots would be very difficult for most of the public to understand or for policymakers to interpret.

So, the way in which we simplify that choice matters (for example, in Scotland, it led to the rejection of a third option – super dee duper mega max devolution – on the ballot paper, and therefore limited the choices of people who might have that third option as their first preference).

So too does the way in which we make several simple choices in a particular order.

Imagine a group of people – crucial to the outcome – whose main preference is that Scotland stays inside the UK in the EU:

  1. In a referendum in which Scotland votes first, this group votes No to Scottish independence on the assumption that the result will best reflect their preferences (helping produce 55% No).
  2. In a referendum in which Scotland votes after the UK (and the UK votes to leave the EU), many people will change their choice even if they have not changed their preferences (they would still prefer to be in the UK and EU, but that is no longer an option). So, some will choose to be in the UK out of the EU, but others will choose out of the UK and in the EU.

So, the order of choice, and the conditions under which we make choices, matters even when people have the same basic preferences. The people who voted No in the first referendum may vote Yes in the second, but still say that their initial choice was correct under the circumstances (and quite right too). Or, there may not be a second opportunity to choose.
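The two scenarios above can be sketched as a toy simulation, in which a single voter’s fixed preference ranking produces different votes depending on which choice comes first. The outcome labels and the ranking are hypothetical, purely for illustration:

```python
# A toy illustration of heresthetics: identical preferences, different
# votes, depending on the order in which binary choices are posed.

# One voter's preference ranking, best first (hypothetical).
preferences = ["in_UK_in_EU", "out_UK_in_EU", "in_UK_out_EU", "out_UK_out_EU"]

def vote(yes_outcome, no_outcome, prefs):
    """Vote Yes or No according to whichever outcome ranks higher."""
    return "Yes" if prefs.index(yes_outcome) < prefs.index(no_outcome) else "No"

# 1. Independence referendum held while the UK is still in the EU:
#    Yes -> out of the UK (assume in the EU); No -> in the UK, in the EU.
first = vote("out_UK_in_EU", "in_UK_in_EU", preferences)

# 2. Independence referendum held after the UK votes to leave the EU:
#    Yes -> out of the UK (assume in the EU); No -> in the UK, out of the EU.
second = vote("out_UK_in_EU", "in_UK_out_EU", preferences)

print(first, second)  # prints "No Yes"
```

The same ranking produces No in the first scenario and Yes in the second: the vote changes because the menu of achievable outcomes changes, not because the preferences do.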

This dynamic of choice is true even before we get into the more emotional side (some people will feel let down by the argument that a No vote was to stay in the EU).

Further reading:

If you want the Scottish argument in a less dispassionate form, read this by Alan Massie. If you want something more concise, see this tweet.

If you want more on heresthetic, google William Riker and take it from there.

Or, have a look at my series on policymaking. In two-dozen different ways, these posts identify these issues of framing, rules, and the order of choice. Search, for example, for ‘path dependence’ which describes the often profound long term effects of events and decisions made in a particular order in the past.

Note, of course, that only some choice situations are open to direct manipulation. In our case, I don’t think anyone managed to produce a Leave vote in the EU referendum to get a second crack at Scottish independence 😉


Filed under agenda setting, Scottish independence, Scottish politics, UK politics and policy

Policy in 500 Words: if the policy cycle does not exist, what do we do?

It is easy to reject the empirical value of the policy cycle, but difficult to replace it as a practical tool. I identify the implications for students, policymakers, and the actors seeking influence in the policy process.


A policy cycle divides the policy process into a series of stages:

  • Agenda setting. Identifying problems that require government attention, deciding which issues deserve the most attention and defining the nature of the problem.
  • Policy formulation. Setting objectives, identifying the cost and estimating the effect of solutions, choosing from a list of solutions and selecting policy instruments.
  • Legitimation. Ensuring that the chosen policy instruments have support. It can involve one or a combination of: legislative approval, executive approval, seeking consent through consultation with interest groups, and referenda.
  • Implementation. Establishing or employing an organization to take responsibility for implementation, ensuring that the organization has the resources (such as staffing, money and legal authority) to do so, and making sure that policy decisions are carried out as planned.
  • Evaluation. Assessing the extent to which the policy was successful or the policy decision was correct: whether it was implemented correctly and, if so, whether it had the desired effect.
  • Policy maintenance, succession or termination. Considering if the policy should be continued, modified or discontinued.

Most academics (and many practitioners) reject it because it oversimplifies, and does not explain, a complex policymaking system in which these stages may not occur (or may not occur in this order), and in which we are better off imagining thousands of policy cycles interacting with each other to produce less orderly behaviour and less predictable outputs.

But what do we do about it?

The implications for students are relatively simple: we have dozens of concepts and theories which serve as better ways to understand policymaking. In the 1000 Words series, I give you 25 to get you started.

The implications for policymakers are less simple, because the cycle may be unrealistic yet useful. Stages can be used to organise policymaking in a simple way: identify policymaker aims, identify policies to achieve those aims, select a policy measure, ensure that the selection is legitimised by the population or its legislature, identify the necessary resources, implement, and then evaluate the policy. The idea is simple and the consequent advice to policy practitioners is straightforward. A major unresolved challenge for scholars and practitioners is to describe a more meaningful, more realistic analytical model to policymakers, and to give advice on how to act and justify action in the same straightforward way. So, in this article, I discuss how to reconcile policy advice based on complexity and pragmatism with public and policymaker expectations.

The implications for actors trying to influence policymaking can be dispiriting: how can we engage effectively in the policy process if we struggle to understand it? So, in this page (scroll down – it’s long!), I discuss how to present evidence in complex policymaking systems.

Take home message for students. It is easy to describe then assess the policy cycle as an empirical tool, but don’t stop there. Consider how to turn this insight into action. First, examine the many ways in which we use concepts to provide better descriptions and explanations. Then, think about the practical implications. What useful advice could you give an elected policymaker, trying to juggle pragmatism with accountability? What strategies would you recommend to actors trying to influence the policy process?


Filed under 500 words, public policy

Hydraulic fracturing policy in comparative perspective: how typical is the UK experience?

This paper – Cairney PSA 2016 UK fracking 15.3.16 – collects insights from the comparative study of ‘fracking’ policy, including a forthcoming book using the ‘Advocacy Coalition Framework’ to compare policy and policymaking in the US, Canada and five European countries (Weible, Heikkila, Ingold and Fischer, 2016), the UK chapter, and offshoot article submissions comparing the UK with Switzerland. It is deliberately brief to reflect the likelihood that, in a 90-minute panel with 5 papers, we will need to keep our initial presentations short and sweet. I am also a member of the no-powerpoint-collective.

See also Three lessons from a comparison of fracking policy in the UK and Switzerland

Category: Fracking


Filed under agenda setting, Fracking, public policy, UK politics and policy

Evidence-based policymaking: lecture and Q&A

Here is my talk (2 parts) on EBPM at the School of Public Affairs, University of Colorado Denver 24.2.16 (or download the main talk and Q and A):

You can find more on this topic here: https://paulcairney.wordpress.com/ebpm/

ebpm notes Denver 2016


Filed under Evidence Based Policymaking (EBPM)

Policy Concepts in 1000 Words: Framing


(podcast download)

‘Framing’ is a metaphor to describe the ways in which we understand, and use language selectively to portray, policy problems. There are many ways to describe this process across disciplines, including communications, psychology, and sociology. There is also more than one way to understand the metaphor.

For example, I think that most scholars describe this image (from litemind) of someone deciding which part of the world to focus on.

framing with hands

However, I have also seen colleagues use this image, of a timber frame, to highlight the structure of a discussion which is crucial but often unseen and taken for granted:

timber frame

  1. Intentional framing and cognition.

The first kind of framing relates to bounded rationality or the effect of our cognitive processes on the ways in which we process information (and influence how others process information):

  • We use major cognitive shortcuts to turn an infinite amount of information into the ‘signals’ we perceive or pay attention to.
  • These cognitive processes often produce interesting conclusions, such as when (a) we place higher value on the things we own/ might lose rather than the things we don’t own/ might gain (‘prospect theory’) or (b) we value, or pay more attention to, the things with which we are most familiar and can process more easily (‘fluency’).
  • We often rely on other people to process and select information on our behalf.
  • We are susceptible to simple manipulation based on the order (or other ways) in which we process information, and the form it takes.
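The ‘prospect theory’ point above can be illustrated with a quick sketch of the Tversky–Kahneman value function; the parameter values (0.88 and 2.25) are their 1992 median estimates, and reducing loss aversion to this one function is my simplification:

```python
# A sketch of loss aversion: losses loom larger than equivalent gains,
# using the Tversky-Kahneman (1992) value function.
def value(x, alpha=0.88, lam=2.25):
    """Subjective value of gaining (x > 0) or losing (x < 0) an amount x."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

gain = value(100)   # subjective value of gaining 100 (about +57.5)
loss = value(-100)  # subjective value of losing 100 (about -129.5)
print(abs(loss) / gain)  # the loss 'weighs' 2.25 times the equivalent gain
```

The asymmetry helps explain why framing a policy as preventing losses often generates more support than framing the same policy as delivering gains.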

In that context, you can see one meaning of framing: other actors portray information selectively to influence the ways in which we see the world, or which parts of the world capture our attention (here is a simple example of wind farms).

In policy theory, framing studies focus on ambiguity: there are many ways in which we can understand and define the same policy problem (note terms such as ‘problem definition’ and a ‘policy image’). Therefore, actors exercise power to draw attention to, and generate support for, one particular understanding at the expense of others. They do this with simple stories or the selective presentation of facts, often coupled with emotional appeals, to manipulate the ways in which we process information.

  2. Frames as structures

Think about the extent to which we take for granted certain ways to understand or frame issues. We don’t begin each new discussion with reference to ‘first principles’. Instead, we discuss issues with reference to:

(a) debates that have been won and may not seem worth revisiting (imagine, for example, the ways in which ‘socialist’ policies are treated in the US)

(b) other well-established ways to understand the world which, when they seem to dominate our ways of thinking, are often described as ‘hegemonic’ or with reference to paradigms.

In such cases, the timber frame metaphor serves two purposes:

(a) we can conclude that it is difficult but not impossible to change.

(b) if it is hidden by walls, we do not see it; we often take it for granted even though we should know it exists.

Framing the social, not physical, world

These metaphors can only take us so far, because the social world does not have such easily identifiable physical structures. Instead, when we frame issues, we don’t just choose where to look; we also influence how people describe what we are looking at. Or, ‘structural’ frames relate to regular patterns of behaviour or ways of thinking which are more difficult to identify than in a building. Consequently, we do not all describe structural constraints in the same way even though, ostensibly, we are looking at the same thing.

In this respect, for example, the well-known ‘Overton window’ is a sort-of helpful but also problematic concept, since it suggests that policymakers are bound to stay within the limits of what Kingdon calls the ‘national mood’. The public will only accept so much before it punishes you in events such as elections. Yet, of course, there is no single, objectively identifiable public mood. Rather, some actors (policymakers) make decisions with reference to their perception of such social constraints (how will the public react?), but they also know that they can influence how we interpret those constraints with reference to one or more proxies, including opinion polls, public consultations, media coverage, and direct action:

JEPP public opinion

They might get it wrong, and suffer the consequences, but it still makes sense to say that they have a choice to interpret and adapt to such ‘structural’ constraints.

Framing, power and the role of ideas

We can bring these two ideas about framing together to suggest that some actors exercise power to reinforce dominant ways to think about the world. Power is not simply about visible conflicts in which one group with greater material resources wins and another loses. It also relates to agenda setting. First, actors may exercise power to reinforce social attitudes. If the weight of public opinion is against government action, maybe governments will not intervene. The classic example is poverty – if most people believe that it is caused by fecklessness, what is the role of government? In such cases, power and powerlessness may relate to the (in)ability of groups to persuade the public, media and/ or government that there is a reason to make policy; a problem to be solved.  In other examples, the battle may be about the extent to which issues are private (with no legitimate role for government) or public (and open to legitimate government action), including: should governments intervene in disputes between businesses and workers? Should they intervene in disputes between husbands and wives? Should they try to stop people smoking in private or public places?

Second, policymakers can only pay attention to a tiny amount of issues for which they are responsible. So, actors exercise power to keep some issues on their agenda at the expense of others.  Issues on the agenda are sometimes described as ‘safe’: more attention to these issues means less attention to the imbalances of power within society.


Filed under 1000 words, agenda setting, PhD, public policy

12 things to know about studying public policy

Here is a blog post on 12 things to know about studying public policy. Please see the end of the post if you would like to listen to or watch my lecture on this topic.

1. There is more to politics than parties and elections.

Think of policy theory as an antidote to our fixation on elections, as a focus on what happens in between. We often point out that elections can produce a change in the governing party without prompting major changes in policy and policymaking, partly because most policy is processed at a level of government that receives very little attention from elected policymakers. Elections matter but, in policy studies, they do not represent the centre of the universe.

2. Public policy is difficult to define.

Imagine a simple definition: ‘the sum total of government action, from signals of intent to the final outcomes’. Then consider these questions. Does policy include what policymakers say they will do (e.g. in manifestos) as well as what they actually do? Does it include the policy outcome if it does not match the original aim? What is ‘the government’ and does it include elected and unelected policymakers? Does public policy include what policymakers decide to not do? Is it still ‘public policy’ when neither the public nor elected policymakers have the ability to pay attention to what goes on in their name?

3. Policy change is difficult to see and measure.

Usually we know that something has changed because the government has passed legislation, but policy is so much more: spending, economic penalties or incentives (taxes and subsidies), social security payments and sanctions, formal and informal regulations, public education, organisations and staffing, and so on. So, we need to sum up this mix of policies, asking: is there an overall and coherent aim, or a jumble of policy instruments? Can we agree on the motives of policymakers when making these policies? Does policy impact seem different when viewed from the ‘top’ or the ‘bottom’? Does our conclusion change when we change statistical measures?

4. There is no objective way to identify policy success.

We know that policy evaluation is political because left/right wing political parties and commentators argue as much about a government’s success as its choices. Yet, it cannot be solved by scientists identifying objective or technical measures of success, because there is political choice in the measures we use and much debate about the best measures. Measurement also frequently involves a highly imperfect proxy, such as using waiting times to measure the effectiveness of a health service. We should also note the importance of perspective: should we measure success in terms of the aims of elected policymakers, the organisations carrying out policy, or the people who are most affected? What if many policymakers were involved, or their aims were not clear? What if their aim was to remain popular, or to have an easy time in the legislature, not to improve people’s lives? What if it improved the lives of some, but hurt others?

5. There is no ‘policy cycle’ with well-ordered stages.

Imagine this simple advice to policymakers: identify your aims, identify policies to achieve those aims, select a policy measure, ensure that the selection is ‘legitimised’ by the population or its legislature, identify the necessary resources, implement, and then evaluate the policy. If only life were so simple. Instead, think of policymaking as a collection of thousands of policy cycles, which interact with each other to produce much less predictable outcomes. Then note that it is often impossible in practice to know when one stage begins and another ends. Finally, imagine that the order of stages is completely messed up, such as when we have a solution long before a problem arises.

6. Policymakers are ‘rational’ and ‘irrational’.

A classic reference point is the ‘ideal-type’ of comprehensive (or synoptic) rationality which helps elected policymakers translate their values into policy in a straightforward manner. They have a clear, coherent and rank-ordered set of policy preferences which neutral organizations carry out on their behalf. We can separate policymaker values from organizational facts. There are clear-cut and ordered stages to the process and analysis of the policymaking context is comprehensive. This allows policymakers to maximize the benefits of policy to society in much the same way that an individual maximizes her own utility. In the real world, we identify ‘bounded rationality’, challenge all of the assumptions of comprehensive rationality, and wonder what happens next. The classic debate focused on the links between bounded rationality and incrementalism. Our current focus is on ‘rational’ and ‘irrational’ responses to the need to make decisions quickly without comprehensive information: limiting their options, and restricting information searches to sources they trust, to make their task manageable; but also making quick decisions by relying on instinct, gut, emotion, beliefs, ideology, and habits.

7. We talk of actors, but not on stage.

Most policy theories use the word ‘actor’ simply to describe the ability of people and organisations to deliberate and act to make choices. Many talk about the large number of actors involved in policymaking, at each level and across many levels of policymaking. Some discuss a shift, in many countries since the early post-war period, from centralized and exclusive policymaking towards a fragmented multi-level system involving a much larger number of actors.

8. We talk of institutions, but not buildings.

In political science, ‘institution’ refers to the rules, ‘norms’, and other practices that influence policymaking behaviour.  Some rules are visible or widely understood, such as constitutions. Others are less visible, such as the ‘rules of the game’ in politics, or organisational ‘cultures’. So, for example, ‘majoritarian’ and ‘consensus’ democracies could have very different formal rules but operate in very similar ways in practice. These rules develop in different ways in many parts of government, prompting us to consider what happens when many different actors develop different expectations of politics and policymaking.  For example, it might help explain a gap between policies made in one organisation and implemented by another. It might cause government policy to be contradictory, when many different organisations produce their own policies without coordinating with others. Or, governments may contribute to a convoluted statute book by adding to laws and regulations without thinking how they all fit together.

9. We have 100 ways to describe policy networks.

Put simply, ‘policy network’ describes the relationships between policymakers, in formal positions of power, and the actors who seek to influence them. It can also describe a notional venue – a ‘subsystem’ – in which this interaction takes place. Although the network concept is crucial to most policy theories, it can be described using very different concepts, and with reference to different political systems. For example, in the UK, we might describe networks as a consequence of bounded rationality: elected policymakers delegate responsibility to civil servants who, in turn, rely on specialist organisations for information and advice. Those organisations trade information for access to government. This process often becomes routine: civil servants begin to trust and rely on certain organisations and they form meaningful relationships. If so, most public policy is conducted primarily through small and specialist ‘policy communities’ that process issues at a level of government not particularly visible to the public, and with minimal senior policymaker involvement. Network theories tend to consider the key implications, including a tendency for governments to contain ‘silos’ and to struggle to ‘join up’ government when policy is made in so many different places.

10. We struggle to separate power from ideas.

Policy theory is about the relationship between power and ideas (or shared beliefs). These terms are difficult to disentangle, even analytically, because people often exercise power by influencing the beliefs of others. Classic power debates inform current discussions of ‘agenda setting’ and ‘framing’. Debates began with the idea that we could identify the powerful by examining ‘key political choices’: the powerful would win and benefit from the outcomes at the expense of other actors. The debate developed into discussions of major barriers to the ‘key choices’ stage: actors may exercise power to persuade/ reinforce the popular belief that the government should not get involved, or to keep an issue off a government agenda by drawing attention to other issues. This ability to persuade depends on the resources of actors, but also the beliefs of the actors they seek to influence.

11. We talk a lot about ‘context’ and events, and sometimes about ‘complexity’ and ‘emergence’.

‘Context’ describes the policy conditions that policymakers take into account when identifying problems, such as a country’s geography, demographic profile, economy, and social attitudes. This wider context is in addition to the ‘institutional’ context, in which governments inherit the laws and organisations of their predecessors. Important ‘game changing’ events can be routine, such as when elections produce new governments with new ideas, or unanticipated, such as when crises or major technological changes prompt policymakers to reconsider existing policies. In each case, we should consider the extent to which policymaking is in the control of policymakers. In some cases, the role of context seems irresistible – think, for example, of a ‘demographic timebomb’ – but governments show that they can ignore such issues for long periods of time or, at least, decide how and why they are important. This question of policymaker control is also explored in discussions of ‘complexity theory’, which highlights the unpredictability of policymaking, limited central government control, and a tendency for policy outcomes to ‘emerge’ from activity at local levels.

12. It can inform real world policymaking, but you might not like the advice.

For example, policymakers often recognise that they make decisions within an unpredictable and messy, not ‘linear’, process. Many might even accept the implications of complexity theory, which suggests that they should seek new ways to act when they recognise their limitations: use trial and error; keep changing policies to suit new conditions; devolve and share power with the local actors able to respond to local areas; and so on. Yet, such pragmatic advice goes against the idea of Westminster-style democratic accountability, in which ministers remain accountable to Parliament and the public because you know who is in charge and, therefore, who to blame. Or, for example, we might use policy theory to inform current discussions of evidence-based policymaking, saying to scientists that they will only be influential if they go beyond the evidence to make manipulative emotional appeals.

For more information, see Key policy theories and concepts in 1000 words

To listen to the lecture (about 50 minutes plus Q&A), you can download here or stream:

You can also download the video here or stream:

To be honest, there is little to gain from watching the lecture, unless you want to laugh at my posture & shuffle and wonder if I have been handcuffed.
