
Policy Analysis in 750 Words: Defining policy problems and choosing solutions

This post forms one part of the Policy Analysis in 750 words series overview.

When describing ‘the policy sciences’, Lasswell distinguishes between:

  1. ‘knowledge of the policy process’, to foster policy studies (the analysis of policy)
  2. ‘knowledge in the process’, to foster policy analysis (analysis for policy)

The idea is that both elements are analytically separable but mutually informative: policy analysis is crucial to solving real policy problems, policy studies inform the feasibility of analysis, the study of policy analysts informs policy studies, and so on.

Both elements focus on similar questions – such as What is policy? – and explore their descriptive (what do policy actors do?) and prescriptive (what should they do?) implications.

  1. What is the policy problem?

Policy studies tend to describe problem definition in relation to framing, narrative, social construction, power, and agenda setting.

Actors exercise power to generate attention for their preferred interpretation, and minimise attention to alternative frames (to help foster or undermine policy change, or translate their beliefs into policy).

Policy studies incorporate insights from psychology to understand (a) how policymakers might combine cognition and emotion to understand problems, and therefore (b) how to communicate effectively when presenting policy analysis.

Policy studies focus on the power to reduce ambiguity rather than simply the provision of information to reduce uncertainty. In other words, the power to decide whose interpretation of policy problems counts, and therefore to decide what information is policy-relevant.

This (unequal) competition takes place within a policy process over which no actor has full knowledge or control.

The classic 5-8 step policy analysis texts focus on how to define policy problems well, but they vary somewhat in their definition of doing it well (see also C. Smith):

  • Bardach recommends using rhetoric and eye-catching data to generate attention
  • Weimer and Vining and Mintrom recommend beginning with your client’s ‘diagnosis’, placing it in a wider perspective to help analyse it critically, and asking yourself how else you might define it (see also Bacchi, Stone)
  • Meltzer and Schwartz and Dunn identify additional ways to contextualise your client’s definition, such as by generating a timeline to help ‘map’ causation or using ‘problem-structuring methods’ to compare definitions and avoid making too many assumptions about a problem’s cause.
  • Thissen and Walker compare ‘rational’ and ‘argumentative’ approaches, treating problem definition as something to be measured scientifically or established rhetorically (see also Riker).

These approaches compare with more critical accounts that emphasise the role of power and politics to determine whose knowledge is relevant (L.T.Smith) and whose problem definition counts (Bacchi, Stone). Indeed, Bacchi and Stone provide a crucial bridge between policy analysis and policy studies by reflecting on what policy analysts do and why.

  2. What is the policy solution?

In policy studies, it is common to identify counterintuitive or confusing aspects of policy processes, including:

  • Few studies suggest that policy responses actually solve problems (and many highlight their potential to exacerbate them). Rather, ‘policy solutions’ is shorthand for proposed or alleged solutions.
  • Problem definition often sets the agenda for the production of ‘solutions’, but note the phrase solutions chasing problems (when actors have their ‘pet’ solutions ready, and they seek opportunities to promote them).

Policy studies: problem definition informs the feasibility and success of solutions

Generally speaking, to define the problem is to influence assessments of the feasibility of solutions:

  • Technical feasibility. Will they work as intended, given the alleged severity and cause of the problem?
  • Political feasibility. Will they receive sufficient support, given the ways in which key policy actors weigh up the costs and benefits of action?

Policy studies highlight the inextricable connection between technical and political feasibility. Put simply, (a) a ‘technocratic’ choice about the ‘optimality’ of a solution is useless without considering who will support its adoption, and (b) some types of solution will always be a hard sell, no matter their alleged effectiveness (Box 2.3 below).

In that context, policy studies ask: what types of policy tools or instruments are actually used, and how does their use contribute to policy change? Measures include the size, substance, speed, and direction of policy change.

[Box 2.3, Understanding Public Policy, 2nd ed.]

In turn, problem definition informs: the ways in which actors will frame any evaluation of policy success, and the policy-relevance of the evidence to evaluate solutions. Simple examples include:

  • If you define tobacco in relation to: (a) its economic benefits, or (b) a global public health epidemic, evaluations relate to (a) export and taxation revenues, or (b) reductions in smoking in the population.
  • If you define ‘fracking’ in relation to: (a) seeking more benefits than costs, or (b) minimising environmental damage and climate change, evaluations relate to (a) factors such as revenue and effective regulation, or simply (b) how little it takes place.

Policy analysis: recognising and pushing boundaries

Policy analysis texts tend to accommodate these insights when giving advice:

  • Bardach recommends identifying solutions that your audience might consider, perhaps providing a range of options on a notional spectrum of acceptability.
  • Smith highlights the value of ‘precedent’, or relating potential solutions to previous strategies.
  • Weimer and Vining identify the importance of ‘a professional mind-set’ that may be more important than perfecting ‘technical skills’
  • Mintrom notes that some solutions are easier to sell than others
  • Meltzer and Schwartz describe the benefits of making a preliminary recommendation to inform an iterative process, drawing feedback from clients and stakeholder groups
  • Dunn warns against too-narrow forms of ‘evidence based’ analysis which undermine a researcher’s ability to adapt well to the evidence-demands of policymakers
  • Thissen and Walker relate solution feasibility to a wide range of policy analysis ‘styles’

Still, note the difference in emphasis.

Policy analysis education/ training may be about developing the technical skills to widen definitions and apply many criteria to compare solutions.

Policy studies suggest that problem definition and the search for solutions take place in an environment where many actors apply a much narrower lens and are not interested in debates on many possibilities (particularly if they begin with a solution).

I have exaggerated this distinction between each element, but it is worth considering the repeated interaction between them in practice: politics and policymaking provide boundaries for policy analysis, analysis could change those boundaries, and policy studies help us reflect on the impact of analysts.

I’ll take a quick break, then discuss how this conclusion relates to the idea of ‘entrepreneurial’ policy analysis.

Further reading

Understanding Public Policy (2020: 28) describes the difference between governments paying for and actually using the ‘tools of policy formulation’. To explore this point, see ‘The use and non-use of policy appraisal tools in public policy making’ and The Tools of Policy Formulation.

[p. 28, Understanding Public Policy, 2nd ed.: policy formulation tools]



Policy Analysis in 750 words: William Dunn (2017) Public Policy Analysis

Please see the Policy Analysis in 750 words series overview before reading the summary. This book is a whopper, with almost 500 pages and 101 (excellent) discussions of methods, so 800 words over budget seems OK to me. If you disagree, just read every second word.  By the time you reach the cat hanging in there baby you are about 300 (150) words away from the end.

[Cover image: Dunn (2017) Public Policy Analysis, 6th ed.]

William Dunn (2017) Public Policy Analysis 6th Ed. (Routledge)

‘Policy analysis is a process of multidisciplinary inquiry aiming at the creation, critical assessment, and communication of policy-relevant knowledge … to solve practical problems … Its practitioners are free to choose among a range of scientific methods, qualitative as well as quantitative, and philosophies of science, so long as these yield reliable knowledge’ (Dunn, 2017: 2-3).

Dunn (2017: 4) describes policy analysis as pragmatic and eclectic. It involves synthesising policy relevant (‘usable’) knowledge, and combining it with experience and ‘practical wisdom’, to help solve problems with analysis that people can trust.

This exercise is ‘descriptive’, to define problems, and ‘normative’, to decide how the world should be and how solutions get us there (as opposed to policy studies/ research seeking primarily to explain what happens).

Dunn contrasts the ‘art and craft’ of policy analysts with other practices, including:

  1. The idea of ‘best practice’ characterised by 5-step plans.
  • In practice, analysis is influenced by: the cognitive shortcuts that analysts use to gather information; the role they perform in an organisation; the time constraints and incentive structures in organisations and political systems; the expectations and standards of their profession; and, the need to work with teams consisting of many professions/ disciplines (2017: 15-6)
  • The cost (in terms of time and resources) of conducting multiple research and analytical methods is high, and highly constrained in political environments (2017: 17-8; compare with Lindblom)
  2. The too-narrow idea of evidence-based policymaking
  • The naïve attachment to ‘facts speak for themselves’ or ‘knowledge for its own sake’ undermines a researcher’s ability to adapt well to the evidence-demands of policymakers (2017: 68; compare with Why don’t policymakers listen to your evidence?).

To produce ‘policy-relevant knowledge’ requires us to ask five questions before (Qs1-3) and after (Qs4-5) policy intervention (2017: 5-7; 54-6):

  1. What is the policy problem to be solved?
  • For example, identify its severity, urgency, cause, and our ability to solve it.
  • Don’t define the wrong problem, such as by oversimplifying or defining it with insufficient knowledge.
  • Key aspects of problems include ‘interdependency’ (each problem is inseparable from a host of others, and all problems may be greater than the sum of their parts), ‘subjectivity’ and ‘artificiality’ (people define problems), ‘instability’ (problems change rather than being solved), and ‘hierarchy’ (which level or type of government is responsible) (2017: 70; 75).
  • Problems vary in terms of how many relevant policymakers are involved, how many solutions are on the agenda, the level of value conflict, and the unpredictability of outcomes (high levels suggest ‘wicked’ problems, and low levels ‘tame’) (2017: 75)
  • ‘Problem-structuring methods’ are crucial, to: compare ways to define or interpret a problem, and ward against making too many assumptions about its nature and cause; produce models of cause-and-effect; and make a problem seem solve-able, such as by placing boundaries on its coverage. These methods foster creativity, which is useful when issues seem new and ambiguous, or new solutions are in demand (2017: 54; 69; 77; 81-107).
  • Problem definition draws on evidence, but is primarily the exercise of power to reduce ambiguity through argumentation, such as when defining poverty as the fault of the poor, the elite, the government, or social structures (2017: 79; see Stone).
  2. What effect will each potential policy solution have?
  • Many ‘forecasting’ methods can help provide ‘plausible’ predictions about the future effects of current/ alternative policies (Chapter 4 contains a huge number of methods).
  • ‘Creativity, insight, and the use of tacit knowledge’ may also be helpful (2017: 55).
  • However, even the most-effective expert/ theory-based methods to extrapolate from the past are flawed, and it is important to communicate levels of uncertainty (2017: 118-23; see Spiegelhalter).
  3. Which solutions should we choose, and why?
  • ‘Prescription’ methods help provide a consistent way to compare each potential solution, in terms of its feasibility and predicted outcome, rather than decide too quickly that one is superior (2017: 55; 190-2; 220-42).
  • They help to combine (a) an estimate of each policy alternative’s outcome with (b) a normative assessment.
  • Normative assessments are based on values such as ‘equality, efficiency, security, democracy, enlightenment’ and beliefs about the preferable balance between state, communal, and market/ individual solutions (2017: 6; 205 see Weimer & Vining, Meltzer & Schwartz, and Stone on the meaning of these values).
  • For example, cost benefit analysis (CBA) is an established – but problematic – economics method based on finding one metric – such as a $ value – to predict and compare outcomes (2017: 209-17; compare Weimer & Vining, Meltzer & Schwartz, and Stone)
  • Cost effectiveness analysis uses a $ value for costs, but compared with other units of measurement for benefits (such as outputs per $) (2017: 217-9)
  • Although such methods help us combine information and values to compare choices, note the inescapable role of power to decide whose values (and which outcomes, affecting whom) matter (2017: 204)
  4. What were the policy outcomes?
  • ‘Monitoring’ methods help identify (say): levels of compliance with regulations, if resources and services reach ‘target groups’, if money is spent correctly (such as on clearly defined ‘inputs’ such as public sector wages), and if we can make a causal link between the policy inputs/ activities/ outputs and outcomes (2017: 56; 251-5)
  • Monitoring is crucial because it is so difficult to predict policy success, and unintended consequences are almost inevitable (2017: 250).
  • However, the data gathered are usually no more than proxy indicators of outcomes. Further, the choice of indicators reflects what is available, ‘particular social values’, and ‘the political biases of analysts’ (2017: 262)
  • The idea of ‘evidence based policy’ is linked strongly to the use of experiments and systematic review to identify causality (2017: 273-6; compare with trial-and-error learning in Gigerenzer, complexity theory, and Lindblom).
  5. Did the policy solution work as intended? Did it improve policy outcomes?
  • Although we frame policy interventions as ‘solutions’, few problems are ‘solved’. Instead, try to measure the outcomes and the contribution of your solution, and note that evaluations of success and ‘improvement’ are contested (2017: 57; 332-41).  
  • Policy evaluation is not an objective process in which we can separate facts from values.
  • Rather, values and beliefs are part of the criteria we use to gauge success (and even their meaning is contested – 2017: 322-32).
  • We can gather facts about the policy process, and the impacts of policy on people, but this information has little meaning until we decide whose experiences matter.

Overall, the idea of ‘ex ante’ (forecasting) policy analysis is a little misleading, since policymaking is continuous, and evaluations of past choices inform current choices.

Policy analysis methods are ‘interdependent’, and ‘knowledge transformations’ describes the impact of knowledge regarding one question on the other four (2017: 7-13; contrast with Meltzer & Schwartz, Thissen & Walker).

Developing arguments and communicating effectively

Dunn (2017: 19-21; 348-54; 392) argues that ‘policy argumentation’ and the ‘communication of policy-relevant knowledge’ are central to policymaking (see Chapter 9 and Appendices 1-4 for advice on how to write briefs, memos, and executive summaries and prepare oral testimony).

He identifies seven elements of a ‘policy argument’ (2017: 19-21; 348-54), including:

  • The claim itself, such as a description (size, cause) or evaluation (importance, urgency) of a problem, and prescription of a solution
  • The things that support it (including reasoning, knowledge, authority)
  • Incorporating the things that could undermine it (including any ‘qualifier’, the communication of uncertainty about current knowledge, and counter-arguments).

The key stages of communication (2017: 392-7; 405; 432) include:

  1. ‘Analysis’, focusing on ‘technical quality’ (of the information and methods used to gather it), meeting client expectations, challenging the ‘status quo’, albeit while dealing with ‘political and organizational constraints’ and suggesting something that can actually be done.
  2. ‘Documentation’, focusing on synthesising information from many sources, organising it into a coherent argument, translating from jargon or a technical language, simplifying, summarising, and producing user-friendly visuals.
  3. ‘Utilization’, by making sure that (a) communications are tailored to the audience (its size, existing knowledge of policy and methods, attitude to analysts, and openness to challenge), and (b) the process is ‘interactive’ to help analysts and their audiences learn from each other.

 

[Image: ‘hang in there, baby’ cat poster]

 

Policy analysis and policy theory: systems thinking, evidence based policymaking, and policy cycles

Dunn (2017: 31-40) situates this discussion within a brief history of policy analysis, which culminated in new ways to express old ambitions, such as to:

  1. Use ‘systems thinking’, to understand the interdependence between many elements in complex policymaking systems (see also socio-technical and socio-ecological systems).
  • Note the huge difference between (a) policy analysis discussions of ‘systems thinking’ built on the hope that if we can understand them we can direct them, and (b) policy theory discussions that emphasise ‘emergence’ in the absence of central control (and presence of multi-centric policymaking).
  • Also note that Dunn (2017: 73) describes policy problems – rather than policymaking – as complex systems. I’ll write another post (short, I promise) on the many different (and confusing) ways to use the language of complexity.
  2. Promote ‘evidence-based policy’, as the new way to describe an old desire for ‘technocratic’ policymaking that accentuates scientific evidence and downplays politics and values (see also 2017: 60-4).

In that context, see Dunn’s (47-52) discussion of comprehensive versus bounded rationality:

  • Note the idea of ‘erotetic rationality’ in which people deal with their lack of knowledge of a complex world by giving up on the idea of certainty (accepting their ‘ignorance’), in favour of a continuous process of ‘questioning and answering’.
  • This approach is a pragmatic response to the lack of order and predictability of policymaking systems, which limits the effectiveness of a rigid attachment to ‘rational’ 5 step policy analyses (compare with Meltzer & Schwartz).

Dunn (2017: 41-7) also provides an unusually useful discussion of the policy cycle. Rather than seeing it as a mythical series of orderly stages, Dunn highlights:

  1. Lasswell’s original discussion of policymaking functions (or functional requirements of policy analysis, not actual stages to observe), including: ‘intelligence’ (gathering knowledge), ‘promotion’ (persuasion and argumentation while defining problems), ‘prescription’, ‘invocation’ and ‘application’ (to use authority to make sure that policy is made and carried out), and ‘appraisal’ (2017: 42-3).
  2. The constant interaction between all notional ‘stages’ rather than a linear process: attention to a policy problem fluctuates, actors propose and adopt solutions continuously, actors are making policy (and feeding back on its success) as they implement, evaluation (of policy success) is not a single-shot document, and previous policies set the agenda for new policy (2017: 44-5).

In that context, it is no surprise that the impact of a single policy analyst is usually minimal (2017: 57). Sorry to break it to you. Hang in there, baby.

[Image: ‘hang in there, baby’ cat poster]

 



Policy Analysis in 750 words: Rachel Meltzer and Alex Schwartz (2019) Policy Analysis as Problem Solving

Please see the Policy Analysis in 750 words series overview before reading the summary. This post might well represent the largest breach of the ‘750 words’ limit, so please get comfortable. I have inserted a picture of a cat hanging in there baby after the main (*coughs*) 1400-word summary. The rest is bonus material, reflecting on the links between this book and the others in the series.

Meltzer Schwartz 2019 cover

Rachel Meltzer and Alex Schwartz (2019) Policy Analysis as Problem Solving (Routledge)

‘We define policy analysis as evidence-based advice giving, as the process by which one arrives at a policy recommendation to address a problem of public concern. Policy analysis almost always involves advice for a client’ (Meltzer and Schwartz, 2019: 15).

Meltzer and Schwartz (2019: 231-2) describe policy analysis as applied research, drawing on many sources of evidence, quickly, with limited time, access to scientific research, or funding to conduct a lot of new research. It requires:

  • careful analysis of a wide range of policy-relevant documents (including the ‘grey’ literature often produced by governments, NGOs, and think tanks) and available datasets
  • perhaps combined with expert interviews, focus groups, site visits, or an online survey (see 2019: 232-64 on methods).

Meltzer and Schwartz (2019: 21) outline a ‘five-step framework’ for client-oriented policy analysis. During each step, they contrast their ‘flexible’ and ‘iterative’ approach with a too-rigid ‘rationalistic approach’ (to reflect bounded, not comprehensive, rationality):

  1. ‘Define the problem’.

Problem definition is a political act of framing, not an exercise in objectivity (2019: 52-3). It is part of a narrative to evaluate the nature, cause, size, and urgency of an issue (see Stone), or perhaps to attach to an existing solution (2019: 38-40; compare with Mintrom).

In that context, ask yourself ‘Who is defining the problem? And for whom?’ and do enough research to be able to define it clearly and avoid misunderstanding among you and your client (2019: 37-8; 279-82):

  • Identify your client’s resources and motivation, such as how they seek to use your analysis, the format of analysis they favour, their deadline, and their ability to make or influence the policies you might suggest (2019: 49; compare with Weimer and Vining).
  • Tailor your narrative to your audience, albeit while recognising the need to learn from ‘multiple perspectives’ (2019: 40-5).
  • Make it ‘concise’ and ‘digestible’, not too narrowly defined, and not in a way that already closes off discussion by implying a clear cause and solution (2019: 51-2).

In doing so:

  • Ask yourself if you can generate a timeline, identify key stakeholders, and place a ‘boundary’ on the problem.
  • Establish if the problem is urgent, who cares about it, and who else might care (or not) (2019: 46).
  • Focus on the ‘central’ problem that your solution will address, rather than the ‘related’ and ‘underlying’ problems that are ‘too large and endemic to be solved by the current analysis’ (2019: 47).
  • Avoid misdiagnosing a problem with reference to one cause. Instead, ‘map’ causation with reference to (say) individual and structural causes, intended and unintended consequences, simple and complex causation, market or government failure, and/ or the ability to blame an individual or organisation (2019: 48-9).
  • Combine quantitative and qualitative data to frame problems in relation to: severity, trends in severity, novelty, proximity to your audience, and urgency or crisis (2019: 53-4).

During this process, interrogate your own biases or assumptions and how they might affect your analysis (2019: 50).

2. ‘Identify potential policy options (alternatives) to address the problem’.

Common sources of ideas include incremental changes from current policy, ‘client suggestions’, comparable solutions (from another time, place, or policy area), reference to common policy instruments, and ‘brainstorming’ or ‘design thinking’ (2019: 67-9; see box 2.3 and 7.1, below, from Understanding Public Policy).

[Box 2.3, Understanding Public Policy, 2nd ed.]

Identify a ‘wide range’ of possible solutions, then select the (usually 3-5) ‘most promising’ for further analysis (2019: 65). In doing so:

  • be careful not to frame alternatives negatively (e.g. ‘death tax’ – 2019: 66)
  • compare alternatives in ‘good faith’ rather than keeping some ‘off the table’ to ensure that your preferred solution looks good (2019: 66)
  • beware ‘best practice’ ideas that are limited in terms of (a) applicability (if made at a smaller scale, or in a very different jurisdiction), and (b) evidence of success (2019: 70; see studies of policy learning and transfer)
  • think about how to modify existing policies according to scale or geographical coverage, who to include (and based on what criteria), for how long, using voluntary versus mandatory provisions, and ensuring oversight (2019: 71-3)
  • consider combinations of common policy instruments, such as regulations and economic penalties/ subsidies (2019: 73-7)
  • consider established ways to ‘brainstorm’ ideas (2019: 77-8)
  • note the rise of instruments derived from the study of psychology and behavioural public policy (2019: 79-90)
  • learn from design principles, including ‘empathy’, ‘co-creating’ policy with service users or people affected, ‘prototyping’ (2019: 90-1)

[Box 7.1, Understanding Public Policy, 2nd ed.]

3. ‘Specify the objectives to be attained in addressing the problem and the criteria to evaluate the attainment of these objectives as well as the satisfaction of other key considerations (e.g., equity, cost, feasibility)’.

Your objectives relate to your problem definition and aims: what is the problem, what do you want to happen when you address it, and why?

  • For example, questions to your client may include: what is your organization’s ‘mission’, what is feasible (in terms of resources and politics), which stakeholders do you want to include, and how will you define success (2019: 105; 108-12)?

In that values-based context, your criteria relate to ways to evaluate each policy’s likely impact (2019: 106-7). They should ensure:

  • Comprehensiveness. E.g. how many people, and how much of their behaviour, can you influence while minimizing the ‘burden’ on people, businesses, or government? (2019: 113-4)
  • Mutual Exclusiveness. In other words, don’t have two objectives doing the same thing (2019: 114).

Common criteria include (2019: 116):

  1. Effectiveness. The size of its intended impact on the problem (2019: 117).
  2. Equity (fairness). The impact in terms of ‘vertical equity’ (e.g. the better off should pay more), ‘horizontal equity’ (e.g. you should not pay more if unmarried), fair process, fair outcomes, and ‘intergenerational’ equity (e.g. don’t impose higher costs on future populations) (2019: 118-19).
  3. Feasibility (administrative, political, and technical). The likelihood of this policy being adopted and implemented well (2019: 119-21)
  4. Cost (or financial feasibility). Who would bear the cost, and their willingness and ability to pay (2019: 122).
  5. Efficiency. To maximise the benefit while minimizing costs (2019: 122-3).

 

4. ‘Assess the outcomes of the policy options in light of the criteria and weigh trade-offs between the advantages and disadvantages of the options’.

When explaining objectives and criteria,

  • ‘label’ your criteria in relation to your policy objectives (e.g. to ‘maximize debt reduction’) rather than using generic terms (2019: 123-7)
  • produce a table – with alternatives in rows, and criteria in columns – to compare each option
  • quantify your policies’ likely outcomes, such as in relation to numbers of people affected and levels of income transfer, or a percentage drop in the size of the problem, but also
  • communicate the degree of uncertainty related to your estimates (2019: 128-32; see Spiegelhalter)

Consider using cost-benefit analysis to identify (a) the financial and opportunity cost of your plans (what would you achieve if you spent the money elsewhere?), compared to (b) the positive impact of your funded policy (2019: 141-55).

  • The principle of CBA may be intuitive, but a thorough CBA process is resource-intensive, vulnerable to bias and error, and no substitute for choice. It requires you to make a collection of assumptions about human behaviour and likely costs and benefits, decide whose costs and benefits should count, turn all costs and benefits into a single measure, and imagine how to maximise winners and compensate losers (2019: 155-81; compare Weimer and Vining with Stone).
  • One alternative is cost-effectiveness analysis, which quantifies costs and relates them to outputs (e.g. number of people affected, and how) without trying to translate them into a single measure of benefit (2019: 181-3).
  • These measures can be combined with other thought processes, such as with reference to ‘moral imperatives’, a ‘precautionary approach’, and ethical questions on power/ powerlessness (2019: 183-4).

 

5. ‘Arrive at a recommendation’.

Predict the most likely outcomes of each alternative, while recognising high uncertainty (2019: 189-92). If possible,

  • draw on existing, comparable, programmes to predict the effectiveness of yours (2019: 192-4)
  • combine such analysis with relevant theories to predict human behaviour (e.g. consider price ‘elasticity’ if you seek to raise the price of a good to discourage its use) (2019: 193-4)
  • apply statistical methods to calculate the probability of each outcome (2019: 195-6), and modify your assumptions to produce a range of possibilities, but
  • note Spiegelhalter’s cautionary tales and anticipate the inevitable ‘unintended consequences’ (when people do not respond to policy in the way you would like) (2019: 201-2)
  • use these estimates to inform a discussion on your criteria (equity, efficiency, feasibility) (2019: 196-200)
  • present the results visually – such as in a ‘matrix’ – to encourage debate on the trade-offs between options
  • simplify choices by omitting irrelevant criteria and options that do not compete well with others (2019: 203-10)
  • make sure that your recommendation (a) flows from the analysis, and (b) is in the form expected by your client (2019: 211-12)
  • consider making a preliminary recommendation to inform an iterative process, drawing feedback from clients and stakeholder groups (2019: 212).
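The 'modify your assumptions to produce a range of possibilities' step can be sketched as a simple scenario analysis. The elasticity-based model and all numbers below are invented for illustration (they are not from the book): the idea is just to re-run one estimate under optimistic, central, and pessimistic assumptions instead of reporting a single point prediction.

```python
# Scenario sketch: vary an assumed price elasticity of demand to produce
# a range of predicted outcomes. Model and figures are hypothetical.

def predicted_users(price_increase, elasticity, baseline_users=10_000):
    """Crude elasticity-based estimate of users remaining after a price rise."""
    proportional_change = elasticity * price_increase
    return round(baseline_users * (1 + proportional_change))

# If the policy aims to discourage use, a bigger demand response is 'optimistic'.
scenarios = {
    "optimistic":  -0.8,
    "central":     -0.5,
    "pessimistic": -0.2,
}

# Predicted users after a 10% price increase, under each assumed elasticity:
estimates = {name: predicted_users(0.10, e) for name, e in scenarios.items()}
print(estimates)   # a range of outcomes, not a single point prediction
```

Presenting the spread between scenarios is one concrete way to communicate the uncertainty that Spiegelhalter's cautionary tales warn about.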

 

[Image: ‘hang in there, baby’ cat poster]

 

Policy analysis in a wider context

Meltzer and Schwartz’s approach makes extra sense if you have already read some of the other texts in the series, including:

  1. Weimer and Vining, which represents an exemplar of an X-step approach informed heavily by the study of economics and the application of economic models such as cost-benefit analysis (compare with Radin’s checklist).
  2. Geva-May on the existence of a policy analysis profession with common skills, heuristics, and (perhaps) ethics (compare with Meltzer and Schwartz, 2019: 282-93)
  3. Radin, on:
  • the proliferation of analysts across multiple levels of government, NGOs, and the private sector (compare with Meltzer and Schwartz, 2019: 269-77)
  • the historic shift of analysis from formulation to all notional stages (contrast with Meltzer and Schwartz, 2019: 16-7 on policy analysis not including implementation or evaluation)
  • the difficulty in distinguishing between policy analysis and advocacy in practice (compare with Meltzer and Schwartz, 2019: 276-8, who suggest that actors can choose to perform these different roles)
  • the emerging sense that it is difficult to identify a single client in a multi-centric policymaking system. Put another way, we might be working for a specific client but accept that their individual influence is low.
  4. Stone’s challenge to
  • a historic tendency for economics to dominate policy analysis,
  • the applicability of economic assumptions (focusing primarily on individualist behaviour and markets), and
  • the pervasiveness of ‘rationalist’ policy analysis built on X-steps.

Meltzer and Schwartz (2019: 1-3) agree that economic models are too dominant (identifying the value of insights from ‘other disciplines – including design, psychology, political science, and sociology’).

However, they argue that critiques of rational models exaggerate their limitations (2019: 23-6). For example:

  • these models need not rely solely on economic techniques or quantification, a narrow discussion or definition of the problem, or the sense that policy analysis should be comprehensive, and
  • it is not problematic for analyses to reflect their client’s values or for analysts to present ambiguous solutions to maintain wide support, partly because
  • we would expect the policy analysis to form only one part of a client’s information or strategy.

Further, they suggest that these critiques provide no useful alternative to help guide new policy analysts. Yet, these guides are essential:

‘to be persuasive, and credible, analysts must situate the problem, defend their evaluative criteria, and be able to demonstrate that their policy recommendation is superior, on balance, to other alternative options in addressing the problem, as defined by the analyst. At a minimum, the analyst needs to present a clear and defensible ranking of options to guide the decisions of the policy makers’ (Meltzer and Schwartz, 2019: 4).

Meltzer and Schwartz (2019: 27-8) then explore ways to improve a 5-step model with insights from approaches such as ‘design thinking’, in which actors use a similar process – ‘empathize, define the problem, ideate, prototype, test and get feedback from others’ – to experiment with policy solutions without providing a narrow view on problem definition or how to evaluate responses.

Policy analysis and policy theory

One benefit to Meltzer and Schwartz’s approach is that it seeks to incorporate insights from policy theories and respond with pragmatism and hope. However, I think you also need to read the source material to get a better sense of those theories, key debates, and their implications. For example:

  1. Meltzer and Schwartz (2019: 32) note correctly that ‘incremental’ does not sum up policy change well. Indeed, Punctuated Equilibrium Theory shows that policy change is characterised by a huge number of small and a small number of huge changes.
  • However, the direct implications of PET are not as clear as they suggest. Baumgartner and Jones have both noted that they can measure these outcomes and identify the same basic distribution across a political system, but not explain or predict why particular policies change dramatically.
  • It is useful to recommend to policy analysts that they invest some hope in major policy change, but also sensible to note that – in the vast majority of cases – it does not happen.
  • On this point, see Mintrom on policy analysis for the long term, Weiss on the ‘enlightenment’ function of research and analysis, and Box 6.3 (from Understanding Public Policy), on the sense that (a) we can give advice to ‘budding policy entrepreneurs’ on how to be effective analysts, but (b) should note that all their efforts could be for nothing.

box 6.3

  2. Meltzer and Schwartz (2019: 32-3) tap briefly into the old debate on whether it is preferable to seek radical or incremental change. For more on that debate, see chapter 5 in the 1st ed of Understanding Public Policy in which Lindblom notes that proposals for radical/ incremental changes are not mutually exclusive.
  3. Perhaps explore the possible tension between Meltzer and Schwartz’s (2019: 33-4) recommendation that (a) policy analysis should be ‘evidence-based advice giving’, and (b) ‘flexible and open-ended’.
  • I think that Stone’s response would be that phrases such as ‘evidence based’ are not ‘flexible and open-ended’. Rather, they tend to symbolise a narrow view of what counts as evidence (see also Smith, and Hindess).
  • Further, note that the phrase ‘evidence based policymaking’ is a remarkably vague term (see the EBPM page), perhaps better seen as a political slogan than a useful description or prescription of policymaking.

 

Finally, if you read enough of these policy analysis texts, you get a sense that many are bunched together even if they describe their approach as new or distinctive.

  • Indeed, Meltzer and Schwartz (2019: 22-3) provide a table (containing Bardach and Patashnik, Patton et al, Stokey and Zeckhauser, Hammond et al, and Weimer & Vining) of ‘quite similar’ X-step approaches.
  • Weimer and Vining also discuss the implications of policy theories and present the sense that X-step policy analysis should be flexible and adaptive.
  • Many texts – including Radin, and Smith (2016) – focus on the value of case studies to think through policy analysis in particular contexts, rather than suggesting that we can produce a universal blueprint.

However, as Geva-May might suggest, this is not a bad thing if our aim is to generate the sense that policy analysis is a profession with its own practices and heuristics.

 

 


Filed under 750 word policy analysis, agenda setting, Evidence Based Policymaking (EBPM), public policy

Policy Analysis in 750 words: Deborah Stone (2012) Policy Paradox

Please see the Policy Analysis in 750 words series overview before reading the summary. This post is 750 words plus a bonus 750 words plus some further reading that doesn’t count in the word count even though it does.

Stone policy paradox 3rd ed cover

Deborah Stone (2012) Policy Paradox: The Art of Political Decision Making 3rd edition (Norton)

‘Whether you are a policy analyst, a policy researcher, a policy advocate, a policy maker, or an engaged citizen, my hope for Policy Paradox is that it helps you to go beyond your job description and the tasks you are given – to think hard about your own core values, to deliberate with others, and to make the world a better place’ (Stone, 2012: 15)

Stone (2012: 379-85) rejects the image of policy analysis as a ‘rationalist’ project, driven by scientific and technical rules, and separable from politics. Rather, every policy analyst’s choice is a political choice – to define a problem and solution, and in doing so choosing how to categorise people and behaviour – backed by strategic persuasion and storytelling.

The Policy Paradox: people entertain multiple, contradictory, beliefs and aims

Stone (2012: 2-3) describes the ways in which policy actors compete to define policy problems and public policy responses. The ‘paradox’ is that it is possible to define the same policies in contradictory ways.

‘Paradoxes are nothing but trouble. They violate the most elementary principle of logic: something can’t be two different things at once. Two contradictory interpretations can’t both be true. A paradox is just such an impossible situation, and political life is full of them’ (Stone, 2012: 2).

This paradox does not refer simply to a competition between different actors to define policy problems and the success or failure of solutions. Rather:

  • The same actor can entertain very different ways to understand problems, and can juggle many criteria to decide that a policy outcome was a success and a failure (2012: 3).
  • Surveys of the same population can report contradictory views – encouraging a specific policy response and its complete opposite – when asked different questions in the same poll (2012: 4; compare with Riker).

Policy analysts: you don’t solve the Policy Paradox with a ‘rationality project’

Like many posts in this series (Smith, Bacchi, Hindess), Stone (2012: 9-11) rejects the misguided notion of objective scientists using scientific methods to produce one correct answer (compare with Spiegelhalter and Weimer & Vining). A policy paradox cannot be solved by ‘rational, analytical, and scientific methods’ alone.

Further, Stone (2012: 10-11) rejects the over-reliance, in policy analysis, on the misleading claim that:

  • policymakers are engaging primarily with markets rather than communities (see 2012: 35 on the comparison between a ‘market model’ and ‘polis model’),
  • economic models can sum up political life, and
  • cost-benefit analysis can reduce a complex problem into the sum of individual preferences using a single unambiguous measure.

Rather, many factors undermine such simplicity:

  1. People do not simply act in their own individual interest. Nor can they rank-order their preferences in a straightforward manner according to their values and self-interest.
  • Instead, they maintain a contradictory mix of objectives, which can change according to context and their way of thinking – combining cognition and emotion – when processing information (2012: 12; 30-4).
  2. People are social actors. Politics is characterised by ‘a model of community where individuals live in a dense web of relationships, dependencies, and loyalties’ and exercise power with reference to ideas as much as material interests (2012: 10; 20-36; compare with Ostrom, more Ostrom, and Lubell; and see Sousa on contestation).
  3. Morals and emotions matter. If people juggle contradictory aims and measures of success, then a story infused with ‘metaphor and analogy’, and appealing to values and emotions, prompts people ‘to see a situation as one thing rather than another’ and therefore draw attention to one aim at the expense of the others (2012: 11; compare with Gigerenzer).

Policy analysis reconsidered: the ambiguity of values and policy goals

Stone (2012: 14) identifies the ambiguity of the criteria for success used in 5-step policy analyses. They do not form part of a solely technical or apolitical process to identify trade-offs between well-defined goals (compare Bardach, Weimer and Vining, and Mintrom). Rather, ‘behind every policy issue lurks a contest over conflicting, though equally plausible, conceptions of the same abstract goal or value’ (2012: 14). Examples of competing interpretations of valence issues include definitions of:

  1. Equity, according to: (a) which groups should be included, how to assess merit, how to identify key social groups, if we should rank populations within social groups, how to define need and account for different people placing different values on a good or service, (b) which method of distribution to use (competition, lottery, election), and (c) how to balance individual, communal, and state-based interventions (2012: 39-62).
  2. Efficiency, to use the least resources to produce the same objective, according to: (a) who determines the main goal and how to balance multiple objectives, (b) who benefits from such actions, and (c) how to define resources while balancing equity and efficiency – for example, does a public sector job and a social security payment represent a sunk cost to the state or a social investment in people? (2012: 63-84).
  3. Welfare or Need, according to factors including (a) the material and symbolic value of goods, (b) short term support versus a long term investment in people, (c) measures of absolute poverty or relative inequality, and (d) debates on ‘moral hazard’ or the effect of social security on individual motivation (2012: 85-106)
  4. Liberty, according to (a) a general balancing of freedom from coercion and freedom from the harm caused by others, (b) debates on individual and state responsibilities, and (c) decisions on whose behaviour to change to reduce harm to what populations (2012: 107-28)
  5. Security, according to (a) our ability to measure risk scientifically (see Spiegelhalter and Gigerenzer), (b) perceptions of threat and experiences of harm, (c) debates on how much risk to safety to tolerate before intervening, (d) who to target and imprison, and (e) the effect of surveillance on perceptions of democracy (2012: 129-53).

Policy analysis as storytelling for collective action

Actors use policy-relevant stories to influence the ways in which their audience understands (a) the nature of policy problems and feasibility of solutions, within (b) a wider context of policymaking in which people contest the proper balance between state, community, and market action. Stories can influence key aspects of collective action, including:

  1. Defining interests and mobilising actors, by drawing attention to – and framing – issues with reference to an imagined social group and its competition (e.g. the people versus the elite; the strivers versus the skivers) (2012: 229-47)
  2. Making decisions, by framing problems and solutions (2012: 248-68). Stone (2012: 260) contrasts the ‘rational-analytic model’ with real-world processes in which actors deliberately frame issues ambiguously, shift goals, keep feasible solutions off the agenda, and manipulate analyses to make their preferred solution seem the most efficient and popular.
  3. Defining the role and intended impact of policies, such as when balancing punishments versus incentives to change behaviour, or individual versus collective behaviour (2012: 271-88).
  4. Setting and enforcing rules (see institutions), in a complex policymaking system where a multiplicity of rules interact to produce uncertain outcomes, and a powerful narrative can draw attention to the need to enforce some rules at the expense of others (2012: 289-310).
  5. Persuasion, drawing on reason, facts, and indoctrination. Stone (2012: 311-30) highlights the context in which actors construct stories to persuade: people engage emotionally with information, people take certain situations for granted even though they produce unequal outcomes, facts are socially constructed, and there is unequal access to resources – held in particular by government and business – to gather and disseminate evidence.
  6. Defining human and legal rights, when (a) there are multiple, ambiguous, and intersecting rights (in relation to their source, enforcement, and the populations they serve) (b) actors compete to make sure that theirs are enforced, (c) inevitably at the expense of others, because the enforcement of rights requires a disproportionate share of limited resources (such as policymaker attention and court time) (2012: 331-53)
  7. Influencing debate on the powers of each potential policymaking venue – in relation to factors including (a) the legitimate role of the state in market, community, family, and individual life, (b) how to select leaders, (c) the distribution of power between levels and types of government – and who to hold to account for policy outcomes (2012: 354-77).

Key elements of storytelling include:

  1. Symbols, which sum up an issue or an action in a single picture or word (2012:157-8)
  2. Characters, such as heroes or villains, who symbolise the cause of a problem or source of solution (2012:159)
  3. Narrative arcs, such as a battle by your hero to overcome adversity (2012:160-8)
  4. Synecdoche, to highlight one example of an alleged problem to sum up its whole (2012: 168-71; compare the ‘welfare queen’ example with SCPD)
  5. Metaphor, to create an association between a problem and something relatable, such as a virus or disease, a natural occurrence (e.g. earthquake), something broken, something about to burst if overburdened, or war (2012: 171-78; e.g. is crime a virus or a beast?)
  6. Ambiguity, to give people different reasons to support the same thing (2012: 178-82)
  7. Using numbers to tell a story, based on political choices about how to: categorise people and practices, select the measures to use, interpret the figures to evaluate or predict the results, project the sense that complex problems can be reduced to numbers, and assign authority to the counters (2012:183-205; compare with Spiegelhalter)
  8. Assigning Causation, in relation to categories including accidental or natural, ‘mechanical’ or automatic (or in relation to institutions or systems), and human-guided causes that have intended or unintended consequences (such as malicious intent versus recklessness)
  • ‘Causal strategies’ include to: emphasise a natural versus human cause, relate it to ‘bad apples’ rather than systemic failure, and suggest that the problem was too complex to anticipate or influence
  • Actors use these arguments to influence rules, assign blame, identify ‘fixers’, and generate alliances among victims or potential supporters of change (2012: 206-28).

Wider Context and Further Reading: 1. Policy analysis

This post connects to several other 750 Words posts, which suggest that facts don’t speak for themselves. Rather, effective analysis requires you to ‘tell your story’, in a concise way, tailored to your audience.

For example, consider two ways to establish cause and effect in policy analysis:

One is to conduct and review multiple randomised control trials.

Another is to use a story of a hero or a villain (perhaps to mobilise actors in an advocacy coalition).

  2. Evidence-based policymaking

Stone (2012: 10) argues that analysts who try to impose one worldview on policymaking will find that ‘politics looks messy, foolish, erratic, and inexplicable’. For analysts who are more open-minded, politics opens up possibilities for creativity and cooperation (2012: 10).

This point is directly applicable to the ‘politics of evidence based policymaking’. A common question to arise from this worldview is ‘why don’t policymakers listen to my evidence?’ and one answer is ‘you are asking the wrong question’.

  3. Policy theories highlight the value of stories (to policy analysts and academics)

Policy problems and solutions necessarily involve ambiguity:

  1. There are many ways to interpret problems, and we resolve such ambiguity by exercising power to attract attention to one way to frame a policy problem at the expense of others (in other words, not with reference to one superior way to establish knowledge).
  2. Policy is actually a collection of – often contradictory – policy instruments and institutions, interacting in complex systems or environments, to produce unclear messages and outcomes. As such, what we call ‘public policy’ (for the sake of simplicity) is subject to interpretation and manipulation as it is made and delivered, and we struggle to conceptualise and measure policy change. Indeed, it makes more sense to describe competing narratives of policy change.

box 13.1 2nd ed UPP

  4. Policy theories and storytelling

People communicate meaning via stories. Stories help us turn (a) a complex world, which provides a potentially overwhelming amount of information, into (b) something manageable, by identifying its most relevant elements and guiding action (compare with Gigerenzer on heuristics).

The Narrative Policy Framework identifies the storytelling strategies of actors seeking to exploit other actors’ cognitive shortcuts, using a particular format – containing the setting, characters, plot, and moral – to focus on some beliefs over others, and reinforce someone’s beliefs enough to encourage them to act.

Compare with Tuckett and Nicolic on the stories that people tell to themselves.

 

 


Filed under 750 word policy analysis, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling

What does it take to turn scientific evidence into policy? Lessons for illegal drugs from tobacco

This post contains preliminary notes for my keynote speech ‘The politics of evidence-based policymaking’ for the COPOLAD annual conference, ‘From evidence to practice: challenges in the field of drugs policies’ (14th June). I may amend them in the run up to the speech (and during their translation into Spanish).

COPOLAD (Cooperation Programme on Drugs Policies) is a ‘partnership cooperation programme between the European Union, Latin America and the Caribbean countries aiming at improving the coherence, balance and impact of drugs policies, through the exchange of mutual experiences, bi-regional coordination and the promotion of multisectoral, comprehensive and coordinated responses’. It is financed by the EU.

My aim is to draw on policy studies, and the case study of tobacco/ public health policy, to identify four lessons:

  1. ‘Evidence-based policymaking’ is difficult to describe and understand, but we know it’s a highly political process which differs markedly from ‘evidence based medicine’.
  2. Actors focus as much on persuasion to reduce ambiguity as scientific evidence to reduce uncertainty. They also develop strategies to navigate complex policymaking ‘systems’ or ‘environments’.
  3. Tobacco policy demonstrates three conditions for the proportionate uptake of evidence: it helps ‘reframe’ a policy problem; it is used in an environment conducive to policy change; and, policymakers exploit ‘windows of opportunity’ for change.
  4. Even the ‘best cases’ of tobacco control highlight a gap of 20-30 years between the production of scientific evidence and a proportionate policy response. In many countries it could be 50. I’ll use this final insight to identify some scenarios on how evidence might be used in areas, such as drugs policy, in which many of the ‘best case’ conditions are not met.

‘Evidence-based policymaking’ is highly political and difficult to understand

Evidence-based policymaking (EBPM) is so difficult to understand that we don’t know how to define it or each word in it! People use phrases like ‘policy-based evidence’, to express cynicism about the sincere use of evidence to guide policy, or ‘evidence informed policy’, to highlight its often limited impact. It is more important to try to define each element of EBPM – to identify what counts as evidence, what is policy, who are the policymakers, and what an ‘evidence-based’ policy would look like – but this is easier said than done.

In fact, it is far easier to say what EBPM is not:

It is not ‘comprehensively rational’

‘Comprehensive rationality’ describes, in part, the absence of ambiguity and uncertainty:

  • Policymakers translate their values into policy in a straightforward manner – they know what they want and about the problem they seek to solve.
  • Policymakers and governments can gather and understand all information required to measure the problem and determine the effectiveness of solutions.

Instead, we talk of ‘bounded rationality’ and how policymakers deal with it. They employ two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, habits, and what is familiar to them, to make decisions quickly.

It does not take place in a policy cycle with well-ordered stages

‘Policy cycle’ describes the idea that there is a core group of policymakers at the ‘centre’, making policy from the ‘top down’, and pursuing their goals in a series of clearly defined and well-ordered stages, such as: agenda setting, policy formulation, legitimation, implementation, and evaluation.

It does not describe or explain policymaking well. Instead, we tend to identify the role of environments or systems.

When describing less ordered and predictable policy environments, we describe:

  • a wide range of actors (individuals and organisations) influencing policy at many levels of government
  • a proliferation of rules and norms followed by different levels or types of government
  • important relationships (‘networks’) between policymakers and powerful actors (with material resources, or the ability to represent a profession or social group)
  • a tendency for certain ‘core beliefs’ or ‘paradigms’ to dominate discussion
  • shifting policy conditions and events that can prompt policymaker attention to lurch at short notice.

When describing complex policymaking systems we show that, for example, (a) the same inputs of evidence or policy activity can have no, or a huge, effect, and (b) policy outcomes often ‘emerge’ in the absence of central government control (which makes it difficult to know how, and to whom, to present evidence or try to influence).

It does not resemble ‘evidence based medicine’ or the public health culture

In health policy we can identify an aim, associated with ‘evidence-based medicine’ (EBM), to:

(a) gather the best evidence on the effectiveness of policy interventions, based on a hierarchy of research methods which favours, for example, the systematic review of randomised control trials (RCTs)

(b) ensure that this evidence has a direct impact on healthcare and public health, to exhort practitioners to replace bad interventions with good, as quickly as possible.

Instead, (a) policymakers can ignore the problems raised by scientific evidence for long periods of time, only for (b) their attention to lurch, prompting them to beg, borrow, or steal information quickly from readily available sources. This can involve many sources of evidence (such as the ‘grey literature’) that some scientists would not describe as reliable.

Actors focus as much on persuasion to reduce ambiguity as scientific evidence to reduce uncertainty.

In that context, ‘evidence-based policymaking’ is about framing problems and adapting to complexity.

Framing refers to the ways in which policymakers understand, portray, and categorise issues. Problems are multi-faceted, but bounded rationality limits the attention of policymakers, and actors compete to highlight one ‘image’ at the expense of others. The outcome of this process determines who is involved (for example, portraying an issue as technical limits involvement to experts), who is responsible for policy, how much attention they pay, their demand for evidence on policy solutions, and what kind of solution they favour.

Scientific evidence plays a part in this process, but we should not exaggerate the ability of scientists to win the day with reference to evidence. Rather, policy theories signal the strategies that actors adopt to increase demand for their evidence:

  • to combine facts with emotional appeals, to prompt lurches of policymaker attention from one policy image to another (punctuated equilibrium theory)
  • to tell simple stories which are easy to understand, help manipulate people’s biases, apportion praise and blame, and highlight the moral and political value of solutions (narrative policy framework)
  • to interpret new evidence through the lens of the pre-existing beliefs of actors within coalitions, some of which dominate policy networks (advocacy coalition framework)
  • to produce a policy solution that is feasible and exploit a time when policymakers have the opportunity to adopt it (multiple streams analysis).

This takes place in complex ‘systems’ or ‘environments’

A focus on this bigger picture shifts our attention from the use of evidence by an elite group of elected policymakers at the ‘top’ to its use by a wide range of influential actors in a multi-level policy process. It shows actors that:

  • They are competing with many others to present evidence in a particular way to secure a policymaker audience.
  • Support for particular solutions varies according to which organisation takes the lead and how it understands the problem.
  • Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour particular sources of evidence and some participants over others
  • There is a language – indicating which ideas, beliefs, or ways of thinking are most accepted by policymakers and their stakeholders – that takes time to learn.
  • Well-established beliefs provide the context for policymaking: new evidence on the effectiveness of a policy solution has to be accompanied by a shift of attention and successful persuasion.
  • In some cases, social or economic ‘crises’ can prompt lurches of attention from one issue to another, and some forms of evidence can be used to encourage that shift. However, major policy shifts are rare.

In other words, successful actors develop pragmatic strategies based on the policy process that exists, not the process they’d like to see.

We argue that successful actors: identify where the ‘action is’ (in networks and organisations in several levels of government); learn and follow the ‘rules of the game’ within networks to improve strategies and help build up trust; form coalitions with actors with similar aims and beliefs; and, frame the evidence to appeal to the biases, beliefs, and priorities of policymakers.

Tobacco policy demonstrates three conditions for the proportionate uptake of evidence

Case studies allow us to turn this general argument into insights generated from areas such as public health.

There are some obvious and important differences between tobacco and (illegal) drugs policies, but an initial focus on tobacco allows us to consider the conditions that might have to be met to use the best evidence on a problem to promote (what we consider to be) a proportionate and effective solution.

We can then use the experience of a ‘best case scenario’ to identify the issues that we face in less ideal circumstances (first in tobacco, and second in drugs).

With colleagues, I have been examining these issues.

Our studies help us identify the conditions under which scientific evidence, on the size of the tobacco problem and the effectiveness of solutions, translates into a public policy response that its advocates would consider to be proportionate.

  1. Actors are able to use scientific evidence to persuade policymakers to pay attention to, and shift their understanding of, policy problems.

Although scientific evidence helps reduce uncertainty, it does not reduce ambiguity. Rather, there is high competition to define problems, and the result of this competition helps determine the demand for subsequent evidence.

In tobacco, the evidence on smoking and then passive smoking helped raise attention to public health, but it took decades to translate into a proportionate response, even in ‘leading’ countries such as the UK.

The comparison with ‘laggard’ countries is crucial to show that the same evidence can produce a far more limited response, as policymakers compare the public health imperative with other ‘frames’, relating to their beliefs on personal responsibility, civil liberties, and the economic consequences of tobacco controls.

  2. The policy environment becomes conducive to policy change.

Public health debates take place in environments more or less conducive to policy change. In the UK, actors used scientific evidence to help reframe the problem. Then, this new understanding helped give the Department of Health a greater role, the health department fostered networks with public health and medical groups at the expense of the industry and, while pursuing policy change, policymakers emphasised the reductions in opposition to tobacco control, in smoking prevalence, and in the economic benefits of tobacco.

In many other countries, these conditions are far less apparent: there are multiple tobacco frames (including economic and civil liberties); economic and trade departments are still central to policy; the industry remains a key player; and, policymakers pay more attention to opposition to tobacco controls (such as bans on smoking in public places) and their potential economic consequences.

Further, differences between countries have largely endured despite the fact that most countries are parties to the FCTC. In other words, a commitment to evidence-based ‘policy transfer’ does not necessarily produce actual policy change.

  3. Actors generate and exploit ‘windows of opportunity’ for major policy change.

Even in favourable policy environments, it is not inevitable that major policy changes will occur. Rather, the UK’s experience of key policy instruments – such as legislation to ban smoking in public places (a major commitment of the FCTC) – shows the high level of serendipity involved in the confluence of three necessary but insufficient conditions:

  1. high policymaker attention to tobacco as a policy problem
  2. the production of solutions, introducing partial or comprehensive bans on smoking in public places, that are technically and politically feasible
  3. the willingness and ability of policymakers to choose the more restrictive solution.

In many other countries, there has been no such window of opportunity, or only an opportunity for a far weaker regulation.

So, this condition – the confluence of three ‘streams’ during a ‘window of opportunity’ – shows the major limits to the effect of scientific evidence. The evidence on the health effects of passive smoking has been available since the 1980s, but it only contributed to comprehensive smoking bans in the UK in the mid-2000s, and such bans remain unlikely in many other countries.

Comparing ‘best case’ and ‘worst case’ scenarios for policy change

These discussions help us clarify the kinds of conditions that need to be met to produce major ‘evidence based’ policy change, even when policymakers have made a commitment to it, or are pursuing an international agreement.

I provide a notional spectrum of ‘best’ and ‘worst’ case scenarios in relation to these conditions:

  1. Actors agree on how to gather and interpret scientific evidence.
  • Best case: governments fund effective ways to gather and interpret the most relevant evidence on the size of policy problems and the effectiveness of solutions. Policymakers can translate large amounts of evidence on complex situations into simple and effective stories (that everyone can understand) to guide action. This includes evidence of activity in one’s own country, and of transferable success from others.
  • Worst case: governments do not know the size of the problem or what solutions have the highest impacts. They rely on old stories that reinforce ineffective action, and do not know how to learn from the experience of other regions (note the ‘not invented here’ issue).
  2. Actors ‘frame’ the problem simply and/or unambiguously.
  • Best case: governments maintain a consensus on how best to understand the cause of a policy problem and therefore which evidence to gather and solutions to seek.
  • Worst case: governments juggle many ‘frames’, there is unresolved competition to define the problem, and the best sources of evidence and solutions remain unclear.
  3. A new policy frame is not undermined by the old way of thinking about, and doing, things.
  • Best case: the new frame sets the agenda for actors in existing organisations and networks; there is no inertia linked to the old way of thinking about and doing things.
  • Worst case: there is a new policy, but it is undermined by old beliefs, rules, pre-existing commitments (for example, we talk of ‘path dependence’ and ‘inheritance before choice’), or actors opposed to the new policy.
  4. There is a clear ‘delivery chain’ from policy choice to implementation.
  • Best case: policymakers agree on a solution, they communicate their aims well, and they secure the cooperation of the actors crucial to policy delivery in many levels and types of government.
  • Worst case: policymakers communicate an ambiguous message and/or the actors involved in policy delivery pursue different – and often contradictory – ways to try to solve the same problem.

In international cooperation, it is natural to anticipate and try to minimise at least some of these worst case scenarios. Problems are more difficult to solve when they are transnational. Our general sense of uncertainty and complexity is more apparent when there are many governments involved and we cannot rely on a single authoritative actor to solve problems. Each country (and regions within it) has its own beliefs and ways of doing things, and it is not easy to simply emulate another country (even if we think it is successful and know why). Some countries do not have access to the basic information (for example, on health and mortality, alongside statistics on criminal justice) that others take for granted when they monitor the effectiveness of policies.

Further, these obstacles exist in now-relatively-uncontroversial issues, such as tobacco, in which there is an international consensus on the cause of the problem and the appropriateness and effectiveness of public solutions. It is natural to anticipate further problems when we also apply public health (and, in this case, ‘harm reduction’) measures to more controversial areas such as illegal drugs.


Filed under agenda setting, Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, tobacco, tobacco policy, UK politics and policy

Policy Concepts in 1000 Words: Framing


(podcast download)

‘Framing’ is a metaphor to describe the ways in which we understand, and use language selectively to portray, policy problems. There are many ways to describe this process across many disciplines, including research in communications, psychology, and sociology. There is also more than one way to understand the metaphor.

For example, I think that most scholars describe this image (from litemind) of someone deciding on which part of the world to focus.

[Image: framing with hands]

However, I have also seen colleagues use this image, of a timber frame, to highlight the structure of a discussion which is crucial but often unseen and taken for granted:

[Image: timber frame]

  1. Intentional framing and cognition.

The first kind of framing relates to bounded rationality or the effect of our cognitive processes on the ways in which we process information (and influence how others process information):

  • We use major cognitive shortcuts to turn an infinite amount of information into the ‘signals’ we perceive or pay attention to.
  • These cognitive processes often produce interesting conclusions, such as when (a) we place higher value on the things we own/ might lose rather than the things we don’t own/ might gain (‘prospect theory’) or (b) we value, or pay more attention to, the things with which we are most familiar and can process more easily (‘fluency’).
  • We often rely on other people to process and select information on our behalf.
  • We are susceptible to simple manipulation based on the order (or other ways) in which we process information, and the form it takes.

In that context, you can see one meaning of framing: other actors portray information selectively to influence the ways in which we see the world, or which parts of the world capture our attention (here is a simple example of wind farms).

In policy theory, framing studies focus on ambiguity: there are many ways in which we can understand and define the same policy problem (note terms such as ‘problem definition’ and a ‘policy image’). Therefore, actors exercise power to draw attention to, and generate support for, one particular understanding at the expense of others. They do this with simple stories or the selective presentation of facts, often coupled with emotional appeals, to manipulate the ways in which we process information.

  2. Frames as structures.

Think about the extent to which we take for granted certain ways to understand or frame issues. We don’t begin each new discussion with reference to ‘first principles’. Instead, we discuss issues with reference to:

(a) debates that have been won and may not seem worth revisiting (imagine, for example, the ways in which ‘socialist’ policies are treated in the US)

(b) other well-established ways to understand the world which, when they seem to dominate our ways of thinking, are often described as ‘hegemonic’ or with reference to paradigms.

In such cases, the timber frame metaphor serves two purposes:

(a) we can conclude that the frame is difficult, but not impossible, to change

(b) if the frame is hidden by walls, we do not see it: we often take it for granted even though we should know it exists.

Framing the social, not physical, world

These metaphors can only take us so far, because the social world does not have such easily identifiable physical structures. Instead, when we frame issues, we don’t just choose where to look; we also influence how people describe what we are looking at. Put differently, ‘structural’ frames relate to regular patterns of behaviour or ways of thinking, which are more difficult to identify than the frame of a building. Consequently, we do not all describe structural constraints in the same way even though, ostensibly, we are looking at the same thing.

In this respect, for example, the well-known ‘Overton window’ is a sort-of helpful but also problematic concept, since it suggests that policymakers are bound to stay within the limits of what Kingdon calls the ‘national mood’: the public will only accept so much before it punishes you in events such as elections. Yet, of course, there is no single, objectively measurable public mood. Rather, some actors (policymakers) make decisions with reference to their perception of such social constraints (how will the public react?), but they also know that they can influence how we interpret those constraints with reference to one or more proxies, including opinion polls, public consultations, media coverage, and direct action:

[Image: JEPP public opinion]

They might get it wrong, and suffer the consequences, but it still makes sense to say that they have a choice to interpret and adapt to such ‘structural’ constraints.

Framing, power and the role of ideas

We can bring these two ideas about framing together to suggest that some actors exercise power to reinforce dominant ways to think about the world. Power is not simply about visible conflicts in which one group with greater material resources wins and another loses. It also relates to agenda setting. First, actors may exercise power to reinforce social attitudes. If the weight of public opinion is against government action, maybe governments will not intervene. The classic example is poverty – if most people believe that it is caused by fecklessness, what is the role of government? In such cases, power and powerlessness may relate to the (in)ability of groups to persuade the public, media and/or government that there is a reason to make policy; a problem to be solved. In other examples, the battle may be about the extent to which issues are private (with no legitimate role for government) or public (and open to legitimate government action), including: should governments intervene in disputes between businesses and workers? Should they intervene in disputes between husbands and wives? Should they try to stop people smoking in private or public places?

Second, policymakers can only pay attention to a tiny amount of issues for which they are responsible. So, actors exercise power to keep some issues on their agenda at the expense of others. Issues on the agenda are sometimes described as ‘safe’: more attention to these issues means less attention to the imbalances of power within society.


Filed under 1000 words, agenda setting, PhD, public policy