
Policy Analysis in 750 words: William Dunn (2017) Public Policy Analysis

Please see the Policy Analysis in 750 words series overview before reading the summary. This book is a whopper, with almost 500 pages and 101 (excellent) discussions of methods, so 800 words over budget seems OK to me. If you disagree, just read every second word. By the time you reach the ‘hang in there, baby’ cat you are about 300 (150) words away from the end.

[Image: cover of Dunn (2017) Public Policy Analysis, 6th edition]

William Dunn (2017) Public Policy Analysis 6th Ed. (Routledge)

‘Policy analysis is a process of multidisciplinary inquiry aiming at the creation, critical assessment, and communication of policy-relevant knowledge … to solve practical problems … Its practitioners are free to choose among a range of scientific methods, qualitative as well as quantitative, and philosophies of science, so long as these yield reliable knowledge’ (Dunn, 2017: 2-3).

Dunn (2017: 4) describes policy analysis as pragmatic and eclectic. It involves synthesising policy-relevant (‘usable’) knowledge, and combining it with experience and ‘practical wisdom’, to help solve problems with analysis that people can trust.

This exercise is ‘descriptive’, to define problems, and ‘normative’, to decide how the world should be and how solutions get us there (as opposed to policy studies/ research seeking primarily to explain what happens).

Dunn contrasts the ‘art and craft’ of policy analysts with other practices, including:

  1. The idea of ‘best practice’ characterised by 5-step plans.
  • In practice, analysis is influenced by: the cognitive shortcuts that analysts use to gather information; the role they perform in an organisation; the time constraints and incentive structures in organisations and political systems; the expectations and standards of their profession; and the need to work with teams consisting of many professions/ disciplines (2017: 15-6).
  • The cost (in terms of time and resources) of conducting multiple research and analytical methods is high, and highly constrained in political environments (2017: 17-8; compare with Lindblom).
  2. The too-narrow idea of evidence-based policymaking.
  • The naïve attachment to ‘facts speak for themselves’ or ‘knowledge for its own sake’ undermines a researcher’s ability to adapt well to the evidence-demands of policymakers (2017: 68; compare with Why don’t policymakers listen to your evidence?).

To produce ‘policy-relevant knowledge’ requires us to ask five questions before (Qs1-3) and after (Qs4-5) policy intervention (2017: 5-7; 54-6):

  1. What is the policy problem to be solved?
  • For example, identify its severity, urgency, cause, and our ability to solve it.
  • Don’t define the wrong problem, such as by oversimplifying or defining it with insufficient knowledge.
  • Key aspects of problems include ‘interdependency’ (each problem is inseparable from a host of others, and all problems may be greater than the sum of their parts), ‘subjectivity’ and ‘artificiality’ (people define problems), ‘instability’ (problems change rather than being solved), and ‘hierarchy’ (which level or type of government is responsible) (2017: 70; 75).
  • Problems vary in terms of how many relevant policymakers are involved, how many solutions are on the agenda, the level of value conflict, and the unpredictability of outcomes (high levels suggest ‘wicked’ problems, and low levels ‘tame’) (2017: 75).
  • ‘Problem-structuring methods’ are crucial, to: compare ways to define or interpret a problem, and ward against making too many assumptions about its nature and cause; produce models of cause-and-effect; and make a problem seem solve-able, such as by placing boundaries on its coverage. These methods foster creativity, which is useful when issues seem new and ambiguous, or new solutions are in demand (2017: 54; 69; 77; 81-107).
  • Problem definition draws on evidence, but is primarily the exercise of power to reduce ambiguity through argumentation, such as when defining poverty as the fault of the poor, the elite, the government, or social structures (2017: 79; see Stone).
  2. What effect will each potential policy solution have?
  • Many ‘forecasting’ methods can help provide ‘plausible’ predictions about the future effects of current/ alternative policies (Chapter 4 contains a huge number of methods).
  • ‘Creativity, insight, and the use of tacit knowledge’ may also be helpful (2017: 55).
  • However, even the most-effective expert/ theory-based methods to extrapolate from the past are flawed, and it is important to communicate levels of uncertainty (2017: 118-23; see Spiegelhalter).
  3. Which solutions should we choose, and why?
  • ‘Prescription’ methods help provide a consistent way to compare each potential solution, in terms of its feasibility and predicted outcome, rather than decide too quickly that one is superior (2017: 55; 190-2; 220-42).
  • They help to combine (a) an estimate of each policy alternative’s outcome with (b) a normative assessment.
  • Normative assessments are based on values such as ‘equality, efficiency, security, democracy, enlightenment’ and beliefs about the preferable balance between state, communal, and market/ individual solutions (2017: 6; 205; see Weimer & Vining, Meltzer & Schwartz, and Stone on the meaning of these values).
  • For example, cost-benefit analysis (CBA) is an established – but problematic – economics method based on finding one metric – such as a $ value – to predict and compare outcomes (2017: 209-17; compare Weimer & Vining, Meltzer & Schwartz, and Stone; a worked sketch follows this list).
  • Cost-effectiveness analysis uses a $ value for costs, but compares them with other units of measurement for benefits (such as outputs per $) (2017: 217-9).
  • Although such methods help us combine information and values to compare choices, note the inescapable role of power to decide whose values (and which outcomes, affecting whom) matter (2017: 204).
  4. What were the policy outcomes?
  • ‘Monitoring’ methods help identify (say): levels of compliance with regulations, if resources and services reach ‘target groups’, if money is spent correctly (such as on clearly defined ‘inputs’ such as public sector wages), and if we can make a causal link between the policy inputs/ activities/ outputs and outcomes (2017: 56; 251-5).
  • Monitoring is crucial because it is so difficult to predict policy success, and unintended consequences are almost inevitable (2017: 250).
  • However, the data gathered are usually no more than proxy indicators of outcomes. Further, the choice of indicators reflects what is available, ‘particular social values’, and ‘the political biases of analysts’ (2017: 262).
  • The idea of ‘evidence based policy’ is linked strongly to the use of experiments and systematic review to identify causality (2017: 273-6; compare with trial-and-error learning in Gigerenzer, complexity theory, and Lindblom).
  5. Did the policy solution work as intended? Did it improve policy outcomes?
  • Although we frame policy interventions as ‘solutions’, few problems are ‘solved’. Instead, try to measure the outcomes and the contribution of your solution, and note that evaluations of success and ‘improvement’ are contested (2017: 57; 332-41).  
  • Policy evaluation is not an objective process in which we can separate facts from values.
  • Rather, values and beliefs are part of the criteria we use to gauge success (and even their meaning is contested – 2017: 322-32).
  • We can gather facts about the policy process, and the impacts of policy on people, but this information has little meaning until we decide whose experiences matter.
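
To illustrate the CBA logic flagged under question 3 (a minimal sketch of the standard discounting calculation, not an example from Dunn’s text): each option’s costs and benefits are monetised, projected over time, and reduced to a single net present value:

\[ \text{NPV} = \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^t} \]

where B_t and C_t are the benefits and costs in year t, and r is the discount rate. The contested choices – whose costs and benefits count, how to monetise them, and which discount rate to use – are exactly where Dunn (and Stone) locate the politics of the method.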

Overall, the idea of ‘ex ante’ (forecasting) policy analysis is a little misleading, since policymaking is continuous, and evaluations of past choices inform current choices.

Policy analysis methods are ‘interdependent’, and ‘knowledge transformations’ describes the impact of knowledge regarding one question on the other four (2017: 7-13; contrast with Meltzer & Schwartz, Thissen & Walker).

Developing arguments and communicating effectively

Dunn (2017: 19-21; 348-54; 392) argues that ‘policy argumentation’ and the ‘communication of policy-relevant knowledge’ are central to policymaking (see Chapter 9 and Appendices 1-4 for advice on how to write briefs, memos, and executive summaries and prepare oral testimony).

He identifies seven elements of a ‘policy argument’ (2017: 19-21; 348-54), including:

  • The claim itself, such as a description (size, cause) or evaluation (importance, urgency) of a problem, and prescription of a solution
  • The things that support it (including reasoning, knowledge, authority)
  • The things that could undermine it, and how to incorporate them (including any ‘qualifier’, the communication of uncertainty about current knowledge, and counter-arguments).

The key stages of communication (2017: 392-7; 405; 432) include:

  1. ‘Analysis’, focusing on ‘technical quality’ (of the information and methods used to gather it), meeting client expectations, challenging the ‘status quo’, albeit while dealing with ‘political and organizational constraints’ and suggesting something that can actually be done.
  2. ‘Documentation’, focusing on synthesising information from many sources, organising it into a coherent argument, translating from jargon or a technical language, simplifying, summarising, and producing user-friendly visuals.
  3. ‘Utilization’, by making sure that (a) communications are tailored to the audience (its size, existing knowledge of policy and methods, attitude to analysts, and openness to challenge), and (b) the process is ‘interactive’ to help analysts and their audiences learn from each other.

 

[Image: ‘hang in there, baby’ cat poster]

 

Policy analysis and policy theory: systems thinking, evidence based policymaking, and policy cycles

Dunn (2017: 31-40) situates this discussion within a brief history of policy analysis, which culminated in new ways to express old ambitions, such as to:

  1. Use ‘systems thinking’, to understand the interdependence between many elements in complex policymaking systems (see also socio-technical and socio-ecological systems).
  • Note the huge difference between (a) policy analysis discussions of ‘systems thinking’ built on the hope that if we can understand them we can direct them, and (b) policy theory discussions that emphasise ‘emergence’ in the absence of central control (and presence of multi-centric policymaking).
  • Also note that Dunn (2017: 73) describes policy problems – rather than policymaking – as complex systems. I’ll write another post (short, I promise) on the many different (and confusing) ways to use the language of complexity.
  2. Promote ‘evidence-based policy’, as the new way to describe an old desire for ‘technocratic’ policymaking that accentuates scientific evidence and downplays politics and values (see also 2017: 60-4).

In that context, see Dunn’s (2017: 47-52) discussion of comprehensive versus bounded rationality:

  • Note the idea of ‘erotetic rationality’ in which people deal with their lack of knowledge of a complex world by giving up on the idea of certainty (accepting their ‘ignorance’), in favour of a continuous process of ‘questioning and answering’.
  • This approach is a pragmatic response to the lack of order and predictability of policymaking systems, which limits the effectiveness of a rigid attachment to ‘rational’ 5-step policy analyses (compare with Meltzer & Schwartz).

Dunn (2017: 41-7) also provides an unusually useful discussion of the policy cycle. Rather than seeing it as a mythical series of orderly stages, Dunn highlights:

  1. Lasswell’s original discussion of policymaking functions (or functional requirements of policy analysis, not actual stages to observe), including: ‘intelligence’ (gathering knowledge), ‘promotion’ (persuasion and argumentation while defining problems), ‘prescription’, ‘invocation’ and ‘application’ (to use authority to make sure that policy is made and carried out), and ‘appraisal’ (2017: 42-3).
  2. The constant interaction between all notional ‘stages’ rather than a linear process: attention to a policy problem fluctuates, actors propose and adopt solutions continuously, actors are making policy (and feeding back on its success) as they implement, evaluation (of policy success) is not a single-shot document, and previous policies set the agenda for new policy (2017: 44-5).

In that context, it is no surprise that the impact of a single policy analyst is usually minimal (2017: 57). Sorry to break it to you. Hang in there, baby.

[Image: ‘hang in there, baby’ cat poster]

 



Policy Analysis in 750 words: Wil Thissen and Warren Walker (2013) Public Policy Analysis

[Image: cover of Thissen and Walker (2013) Public Policy Analysis]

Please see the Policy Analysis in 750 words series overview before reading the summary. Please note that this is an edited book and the full list of authors (PDF) is here. I’m using the previous sentence as today’s excuse for not sticking to 750 words.

Wil Thissen and Warren Walker (editors) (2013) Public Policy Analysis: New Developments (Springer)

‘Our premise is that there is no single, let alone “one best”, way of conducting policy analyses’ (Thissen and Walker, 2013: 2)

Thissen and Walker (2013: 2) begin by identifying the proliferation of (a) policy analysts inside and outside government, (b) the many approaches and methods that could count as policy analysis (see Radin), and therefore (c) a proliferation of concepts to describe it.

Like Vining and Weimer, they distinguish between:

  1. Policy analysis, as the advice given to clients before they make a choice. Thissen and Walker (2013: 4) describe analysts working with a potential range of clients, when employed directly by governments or organisations, or acting more as entrepreneurs with multiple audiences in mind (compare with Bardach, Weimer & Vining, Mintrom).
  2. Policy process research, as the study of such actors within policymaking systems (see 500 and 1000).

Policy theory: implications for policy analysis

Policy process research informs our understanding of policy analysis, identifying what analysts and their clients (a) can and cannot do, which informs (b) what they should do.

As Enserink et al (2012: 12-3) describe, policy analysis (analysis for policy) will differ profoundly if the policy process is ‘chaotic and messy’ rather than ‘neat and rational’.

The range of policy concepts and theories (analysis of policy) at our disposal helps add meaning to policy analysis as a practice. Like Radin, Enserink et al trace historic attempts to seek ‘rational’ policy analysis then conclude that modern theories – describing policymaking complexity – are ‘more in line with political reality’ (2012: 13-6).

As such, policy analysis shifts from:

(a) A centralised process with few actors inside government, to (b) a messy process including many policymakers and influencers, inside and outside government

(a) Translating science into policy, to (b) a competition to frame issues and assess policy-relevant knowledge

(a) An ‘optimal’ solution from one perspective, to (b) a negotiated solution based on many perspectives (in which optimality is contested)

(a) Analysing a policy problem/ solution with a common metric (such as cost-benefit analysis), to (b) developing skills relating to: stakeholder analysis, network management, collaboration, mediation or conflict resolution based on sensitivity to the role of different beliefs, and the analysis of policymaking institutions to help resolve fragmentation (2013: 17-34).

Their Table 2.1 (2012: 35) outlines these potential differences (pop your reading glasses on … now!):

[Image: Table 2.1 from Enserink et al (2012: 35)]

In many cases, the role of an analyst remains uncertain. If we follow the ACF (Advocacy Coalition Framework) story, does an analyst appeal to one coalition or seek to mediate between them? If we follow MSA (Multiple Streams Analysis), do they wait for a ‘window of opportunity’ or seek to influence problem definition and motivation to adopt certain solutions?

Policy Analysis: implications for policy theory

In that context, rather than identify a 5-step plan for policy analysis, Mayer et al (2013: 43-50) suggest that policy analysts tend to perform one or more of six activities:

  1. ‘Research and analyze’, to collect information relevant to policy problems.
  2. ‘Design and recommend’, to produce a range of potential solutions.
  3. ‘Clarify values and arguments’, to identify potential conflicts and facilitate high quality debate.
  4. ‘Advise strategically’, to help a policymaker choose an effective solution within their political context.
  5. ‘Democratize’, to pursue a ‘normative and ethical objective: it should further equal access to, and influence on, the policy process for all stakeholders’ (2013: 47).
  6. ‘Mediate’, to foster many forms of cooperation between governments, stakeholders (including business), researchers, and/ or citizens.

Styles of policy analysis

Policy analysts do not perform these functions sequentially or with equal weight.

Rather, Mayer et al (2013: 50-5) describe ‘six styles of policy analysis’ that vary according to the analyst’s ‘assumptions about science (epistemology), democracy, learning, and change’ (and these assumptions may change during the process):

  1. Rational, based on the idea that we can conduct research in a straightforward way within a well-ordered policy process (or modify the analysis to reflect limits to research and order).
  2. Argumentative, based on a competition to define policy problems and solutions (see Stone).
  3. Client advice, based on the assumption that analysis is part of a ‘political game’, and analysts bring knowledge of political strategy and policymaking complexity.
  4. Participatory, to facilitate a more equal access to information and debate among citizens.
  5. Process, based on the idea that the faithful adherence to good procedures aids high quality analysis (and perhaps mitigates an ‘erratic and volatile’ policy process)
  6. Interactive, based on the idea that the rehearsal of many competing perspectives is useful to policymaker deliberations (compare with reflexive learning).

In turn, these styles prompt different questions to evaluate the activities associated with analysis (2013: 56):

[Image: evaluation questions by style, from Mayer et al (2013: 56)]

In relation to the six policy analysis activities:

  • The criteria for good policy analysis include: the quality of knowledge, usefulness of advice to clients and stakeholders, quality of argumentation, pragmatism of advice, transparency of processes, and ability to secure a mediated settlement (2013: 58).
  • The positive role for analysts includes ‘independent scientist’ or expert, ‘ethicist’, ‘narrator’, ‘counsellor’, ‘entrepreneur’, ‘democratic advocate’, or ‘facilitator’ (2013: 59).

Further, their – rather complicated – visualisations of these roles (e.g. p60; compare with the Appendix) project the (useful) sense that (a) individuals face a trade-off between roles (even if they seek to combine some), and (b) many people making many trade-offs adds up to a complex picture of activity.

Therefore, we should bear in mind that:

(a) there exist some useful 5-step guides for budding analysts, but

(b) even if they adopt a simple strategy, analysts will also need to find ways to understand and engage with complex policymaking systems containing a huge number of analysts, policymakers, and influencers.

Policy Analysis styles: implications for problem definition and policy design

Thissen (2013: 66-9) extends the focus on policymaking context and policy analysis styles to problem definition, including:

  1. A rational approach relies on research knowledge to diagnose problems (the world is knowable, use the best scientific methods to produce knowledge, and subject the results to scientific scrutiny).
  2. A ‘political game model’ emphasises key actors and their perspectives, value conflicts, trust, and interdependence (assess the potential to make deals and use skills of mediation and persuasion to secure them).

These different starting points influence the ways in which analysts might take steps to: identify how people perceive policy problems, whether other definitions are more useful, a problem’s cause and effect, and the likely effect of a proposed solution; communicate uncertainty; and relate the results to a ‘policy arena’ with its own rules on resolving conflict and producing policy instruments (2013: 70-84; 93-4).

Similarly, Bots (2013: 114) suggests that these styles inform a process of policy design, constructed to change people’s minds during repeated interactions with clients (such as by appealing to scientific evidence or argumentation).

Bruijn et al (2013: 134-5) situate such activities in modern discussions of policy analysis:

  1. In multi-centric systems, with analysts focused less on ‘unilateral decisions using command and control’ and more on ‘consultation and negotiation among stakeholders’ in networks.
  • The latter are necessary because there will always be contestation about what the available information tells us about the problem, often without a simple way to negotiate choices on solutions.
  2. In relation to categories of policy problems, including:
  • ‘tamed’ (high knowledge/ technically solvable, with no political conflict)
  • ‘untamed ethical/political’ (technically solvable, with high moral and political conflict)
  • ‘untamed scientific’ (high consensus but low scientific knowledge)
  • ‘untamed’ problems (low consensus, low knowledge).

Put simply, ‘rational’ approaches may help address low knowledge, while other skills are required to manage processes such as conflict resolution and stakeholder engagement (2013: 136-40).

Policy Analysis styles: implications for models

Part 2 of the book relates such styles (and assumptions about how ‘rational’ and comprehensive our analyses can be) to models of policy analysis. For example:

  1. Walker and van Daalen (2013: 157-84) explore models designed to compare the status quo with a future state, often based on the (shaky) assumption that the world is knowable and we can predict with sufficient accuracy the impact of policy solutions.
  2. Hermans and Cunningham (2013: 185-213) describe models to trace agent behaviour in networks and systems, and create multiple possible scenarios, which could help explore the ‘implementability’ of policies.
  3. Walker et al (2013: 215-61) relate policy analysis styles to how analysts might deal with uncertainty.
  • Some models may serve primarily to reduce ‘epistemic’ uncertainty associated with insufficient knowledge about the future (perhaps with a focus on methods and statistical analysis).
  • Others may focus on resolving ambiguity, in which many actors may define/ interpret problems and feasible solutions in different ways.

Overall, this book contains one of the most extensive discussions of 101 different technical models for policy analysis, but the authors emphasise their lack of value without initial clarity about (a) our beliefs regarding the nature of policymaking and (b) the styles of analysis we should use to resolve policy problems. Few of these initial choices can be resolved simply with reference to scientific analysis or evidence.



Policy Analysis in 750 words: Rachel Meltzer and Alex Schwartz (2019) Policy Analysis as Problem Solving

Please see the Policy Analysis in 750 words series overview before reading the summary. This post might well represent the largest breach of the ‘750 words’ limit, so please get comfortable. I have inserted a picture of a cat hanging in there baby after the main (*coughs*) 1400-word summary. The rest is bonus material, reflecting on the links between this book and the others in the series.

[Image: cover of Meltzer and Schwartz (2019) Policy Analysis as Problem Solving]

Rachel Meltzer and Alex Schwartz (2019) Policy Analysis as Problem Solving (Routledge)

‘We define policy analysis as evidence-based advice giving, as the process by which one arrives at a policy recommendation to address a problem of public concern. Policy analysis almost always involves advice for a client’ (Meltzer and Schwartz, 2019: 15).

Meltzer and Schwartz (2019: 231-2) describe policy analysis as applied research, drawing on many sources of evidence, quickly, with limited time, access to scientific research, or funding to conduct a lot of new research. It requires:

  • careful analysis of a wide range of policy-relevant documents (including the ‘grey’ literature often produced by governments, NGOs, and think tanks) and available datasets
  • perhaps combined with expert interviews, focus groups, site visits, or an online survey (see 2019: 232-64 on methods).

Meltzer and Schwartz (2019: 21) outline a ‘five-step framework’ for client-oriented policy analysis. During each step, they contrast their ‘flexible’ and ‘iterative’ approach with a too-rigid ‘rationalistic approach’ (to reflect bounded, not comprehensive, rationality):

  1. ‘Define the problem’.

Problem definition is a political act of framing, not an exercise in objectivity (2019: 52-3). It is part of a narrative to evaluate the nature, cause, size, and urgency of an issue (see Stone), or perhaps to attach to an existing solution (2019: 38-40; compare with Mintrom).

In that context, ask yourself ‘Who is defining the problem? And for whom?’ and do enough research to be able to define it clearly and avoid misunderstanding between you and your client (2019: 37-8; 279-82):

  • Identify your client’s resources and motivation, such as how they seek to use your analysis, the format of analysis they favour, their deadline, and their ability to make or influence the policies you might suggest (2019: 49; compare with Weimer and Vining).
  • Tailor your narrative to your audience, albeit while recognising the need to learn from ‘multiple perspectives’ (2019: 40-5).
  • Make it ‘concise’ and ‘digestible’, not too narrowly defined, and not in a way that already closes off discussion by implying a clear cause and solution (2019: 51-2).

In doing so:

  • Ask yourself if you can generate a timeline, identify key stakeholders, and place a ‘boundary’ on the problem.
  • Establish if the problem is urgent, who cares about it, and who else might care (or not) (2019: 46).
  • Focus on the ‘central’ problem that your solution will address, rather than the ‘related’ and ‘underlying’ problems that are ‘too large and endemic to be solved by the current analysis’ (2019: 47).
  • Avoid misdiagnosing a problem with reference to one cause. Instead, ‘map’ causation with reference to (say) individual and structural causes, intended and unintended consequences, simple and complex causation, market or government failure, and/ or the ability to blame an individual or organisation (2019: 48-9).
  • Combine quantitative and qualitative data to frame problems in relation to: severity, trends in severity, novelty, proximity to your audience, and urgency or crisis (2019: 53-4).

During this process, interrogate your own biases or assumptions and how they might affect your analysis (2019: 50).

2. ‘Identify potential policy options (alternatives) to address the problem’.

Common sources of ideas include incremental changes from current policy, ‘client suggestions’, comparable solutions (from another time, place, or policy area), reference to common policy instruments, and ‘brainstorming’ or ‘design thinking’ (2019: 67-9; see box 2.3 and 7.1, below, from Understanding Public Policy).

[Image: Box 2.3 from Understanding Public Policy (2nd ed)]

Identify a ‘wide range’ of possible solutions, then select the (usually 3-5) ‘most promising’ for further analysis (2019: 65). In doing so:

  • be careful not to frame alternatives negatively (e.g. ‘death tax’ – 2019: 66)
  • compare alternatives in ‘good faith’ rather than keeping some ‘off the table’ to ensure that your preferred solution looks good (2019: 66)
  • beware ‘best practice’ ideas that are limited in terms of (a) applicability (if made at a smaller scale, or in a very different jurisdiction), and (b) evidence of success (2019: 70; see studies of policy learning and transfer)
  • think about how to modify existing policies according to scale or geographical coverage, who to include (and based on what criteria), for how long, using voluntary versus mandatory provisions, and ensuring oversight (2019: 71-3)
  • consider combinations of common policy instruments, such as regulations and economic penalties/ subsidies (2019: 73-7)
  • consider established ways to ‘brainstorm’ ideas (2019: 77-8)
  • note the rise of instruments derived from the study of psychology and behavioural public policy (2019: 79-90)
  • learn from design principles, including ‘empathy’, ‘co-creating’ policy with service users or people affected, and ‘prototyping’ (2019: 90-1)

[Image: Box 7.1 from Understanding Public Policy]

3. ‘Specify the objectives to be attained in addressing the problem and the criteria to evaluate the attainment of these objectives as well as the satisfaction of other key considerations (e.g., equity, cost, feasibility)’.

Your objectives relate to your problem definition and aims: what is the problem, what do you want to happen when you address it, and why?

  • For example, questions to your client may include: what is your organization’s ‘mission’, what is feasible (in terms of resources and politics), which stakeholders do you want to include, and how will you define success (2019: 105; 108-12)?

In that values-based context, your criteria relate to ways to evaluate each policy’s likely impact (2019: 106-7). They should ensure:

  • Comprehensiveness. E.g. how many people, and how much of their behaviour, can you influence while minimizing the ‘burden’ on people, businesses, or government? (2019: 113-4)
  • Mutual Exclusiveness. In other words, don’t have two objectives doing the same thing (2019: 114).

Common criteria include (2019: 116; a sketch of one way to combine them follows this list):

  1. Effectiveness. The size of its intended impact on the problem (2019: 117).
  2. Equity (fairness). The impact in terms of ‘vertical equity’ (e.g. the better off should pay more), ‘horizontal equity’ (e.g. you should not pay more if unmarried), fair process, fair outcomes, and ‘intergenerational’ equity (e.g. don’t impose higher costs on future populations) (2019: 118-19).
  3. Feasibility (administrative, political, and technical). The likelihood of this policy being adopted and implemented well (2019: 119-21).
  4. Cost (or financial feasibility). Who would bear the cost, and their willingness and ability to pay (2019: 122).
  5. Efficiency. To maximise the benefit while minimizing costs (2019: 122-3).
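
To show how such criteria can be combined (a generic multi-criteria scoring sketch of mine, not a formula from the book): score each option j on each criterion i, then weight the criteria to reflect your client’s values:

\[ S_j = \sum_i w_i x_{ij}, \qquad \sum_i w_i = 1 \]

where x_{ij} is option j’s (normalised) score on criterion i and w_i is the weight given to that criterion. Note that the weights are value judgements, not technical facts.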

 

4. ‘Assess the outcomes of the policy options in light of the criteria and weigh trade-offs between the advantages and disadvantages of the options’.

When explaining objectives and criteria,

  • ‘label’ your criteria in relation to your policy objectives (e.g. to ‘maximize debt reduction’) rather than using generic terms (2019: 123-7)
  • produce a table – with alternatives in rows, and criteria in columns – to compare each option (a hypothetical example follows this list)
  • quantify your policies’ likely outcomes, such as in relation to numbers of people affected and levels of income transfer, or a percentage drop in the size of the problem, but also
  • communicate the degree of uncertainty related to your estimates (2019: 128-32; see Spiegelhalter)
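
For illustration, a hypothetical matrix (my example, not the book’s) for three debt-reduction options might look like this:

Option             Maximize debt reduction   Equity   Feasibility   Cost
Status quo         Low                       Low      High          Low
Targeted subsidy   Medium                    High     Medium        High
Mandatory cap      High                      Medium   Low           Medium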

Consider using cost-benefit analysis to identify (a) the financial and opportunity cost of your plans (what would you achieve if you spent the money elsewhere?), compared to (b) the positive impact of your funded policy (2019: 141-55).

  • The principle of CBA may be intuitive, but a thorough CBA process is resource-intensive, vulnerable to bias and error, and no substitute for choice. It requires you to make a collection of assumptions about human behaviour and likely costs and benefits, decide whose costs and benefits should count, turn all costs and benefits into a single measure, and imagine how to maximise winners and compensate losers (2019: 155-81; compare Weimer and Vining with Stone).
  • One alternative is cost-effectiveness analysis, which quantifies costs and relates them to outputs (e.g. number of people affected, and how) without trying to translate them into a single measure of benefit (2019: 181-3; a worked ratio follows this list).
  • These measures can be combined with other thought processes, such as with reference to ‘moral imperatives’, a ‘precautionary approach’, and ethical questions on power/ powerlessness (2019: 183-4).
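
As a minimal worked example of the cost-effectiveness logic (my numbers, purely illustrative):

\[ \text{CER} = \frac{\text{total cost}}{\text{units of effect}}, \quad \text{e.g.} \quad \frac{\$2{,}000{,}000}{10{,}000 \text{ people reached}} = \$200 \text{ per person} \]

This avoids monetising the benefit itself, but the choice of ‘effect’ unit is just as value-laden as the $ metric in CBA.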

 

5. ‘Arrive at a recommendation’.

Predict the most likely outcomes of each alternative, while recognising high uncertainty (2019: 189-92). If possible,

  • draw on existing, comparable, programmes to predict the effectiveness of yours (2019: 192-4)
  • combine such analysis with relevant theories to predict human behaviour (e.g. consider price ‘elasticity’ if you seek to raise the price of a good to discourage its use; a worked sketch follows this list) (2019: 193-4)
  • apply statistical methods to calculate the probability of each outcome (2019: 195-6), and modify your assumptions to produce a range of possibilities, but
  • note Spiegelhalter’s cautionary tales and anticipate the inevitable ‘unintended consequences’ (when people do not respond to policy in the way you would like) (2019: 201-2)
  • use these estimates to inform a discussion on your criteria (equity, efficiency, feasibility) (2019: 196-200)
  • present the results visually – such as in a ‘matrix’ – to encourage debate on the trade-offs between options
  • simplify choices by omitting irrelevant criteria and options that do not compete well with others (2019: 203-10)
  • make sure that your recommendation (a) flows from the analysis, and (b) is in the form expected by your client (2019: 211-12)
  • consider making a preliminary recommendation to inform an iterative process, drawing feedback from clients and stakeholder groups (2019: 212).
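
To make the elasticity point concrete (an illustrative calculation of mine, not from the book): the price elasticity of demand is

\[ \varepsilon = \frac{\%\Delta Q}{\%\Delta P} \]

so if the evidence suggests ε ≈ −0.4 for the good in question, a 10% price rise predicts roughly a 4% fall in consumption – an estimate that carries exactly the uncertainty the authors warn about.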

 

[Image: ‘hang in there, baby’ cat poster]

 

Policy analysis in a wider context

Meltzer and Schwartz’s approach makes extra sense if you have already read some of the other texts in the series, including:

  1. Weimer and Vining, which represents an exemplar of an X-step approach informed heavily by the study of economics and application of economic models such as cost-benefit analysis (compare with Radin’s checklist).
  2. Geva-May on the existence of a policy analysis profession with common skills, heuristics, and (perhaps) ethics (compare with Meltzer and Schwartz, 2019: 282-93).
  3. Radin, on:
  • the proliferation of analysts across multiple levels of government, NGOs, and the private sector (compare with Meltzer and Schwartz, 2019: 269-77)
  • the historic shift of analysis from formulation to all notional stages (contrast with Meltzer and Schwartz, 2019: 16-7 on policy analysis not including implementation or evaluation)
  • the difficulty in distinguishing between policy analysis and advocacy in practice (compare with Meltzer and Schwartz, 2019: 276-8, who suggest that actors can choose to perform these different roles)
  • the emerging sense that it is difficult to identify a single client in a multi-centric policymaking system. Put another way, we might be working for a specific client but accept that their individual influence is low.
  4. Stone’s challenge to:
  • a historic tendency for economics to dominate policy analysis,
  • the applicability of economic assumptions (focusing primarily on individualist behaviour and markets), and
  • the pervasiveness of ‘rationalist’ policy analysis built on X-steps.

Meltzer and Schwartz (2019: 1-3) agree that economic models are too dominant (identifying the value of insights from ‘other disciplines – including design, psychology, political science, and sociology’).

However, they argue that critiques of rational models exaggerate their limitations (2019: 23-6). For example:

  • these models need not rely solely on economic techniques or quantification, a narrow discussion or definition of the problem, or the sense that policy analysis should be comprehensive, and
  • it is not problematic for analyses to reflect their client’s values or for analysts to present ambiguous solutions to maintain wide support, partly because
  • we would expect the policy analysis to form only one part of a client’s information or strategy.

Further, they suggest that these critiques provide no useful alternative to help guide new policy analysts. Yet such guides are essential:

‘to be persuasive, and credible, analysts must situate the problem, defend their evaluative criteria, and be able to demonstrate that their policy recommendation is superior, on balance, to other alternative options in addressing the problem, as defined by the analyst. At a minimum, the analyst needs to present a clear and defensible ranking of options to guide the decisions of the policy makers’ (Meltzer and Schwartz, 2019: 4).

Meltzer and Schwartz (2019: 27-8) then explore ways to improve a 5-step model with insights from approaches such as ‘design thinking’, in which actors use a similar process – ‘empathize, define the problem, ideate, prototype, test and get feedback from others’ – to experiment with policy solutions without providing a narrow view on problem definition or how to evaluate responses.

Policy analysis and policy theory

One benefit to Meltzer and Schwartz’s approach is that it seeks to incorporate insights from policy theories and respond with pragmatism and hope. However, I think you also need to read the source material to get a better sense of those theories, key debates, and their implications. For example:

  1. Meltzer and Schwartz (2019: 32) note correctly that ‘incremental’ does not sum up policy change well. Indeed, Punctuated Equilibrium Theory shows that policy change is characterised by a huge number of small and a small number of huge changes.
  • However, the direct implications of PET are not as clear as they suggest. Baumgartner and Jones have both noted that they can measure these outcomes and identify the same basic distribution across a political system, but not explain or predict why particular policies change dramatically.
  • It is useful to recommend to policy analysts that they invest some hope in major policy change, but also sensible to note that – in the vast majority of cases – it does not happen.
  • On this point, see Mintrom on policy analysis for the long term, Weiss on the ‘enlightenment’ function of research and analysis, and Box 6.3 (from Understanding Public Policy), on the sense that (a) we can give advice to ‘budding policy entrepreneurs’ on how to be effective analysts, but (b) should note that all their efforts could be for nothing.

[Image: Box 6.3 from Understanding Public Policy]

  2. Meltzer and Schwartz (2019: 32-3) tap briefly into the old debate on whether it is preferable to seek radical or incremental change. For more on that debate, see chapter 5 in the 1st ed of Understanding Public Policy, in which Lindblom notes that proposals for radical/ incremental changes are not mutually exclusive.
  3. Perhaps explore the possible tension between Meltzer and Schwartz’s (2019: 33-4) recommendation that (a) policy analysis should be ‘evidence-based advice giving’, and (b) ‘flexible and open-ended’.
  • I think that Stone’s response would be that phrases such as ‘evidence based’ are not ‘flexible and open-ended’. Rather, they tend to symbolise a narrow view of what counts as evidence (see also Smith, and Hindess).
  • Further, note that the phrase ‘evidence based policymaking’ is a remarkably vague term (see the EBPM page), perhaps better seen as a political slogan than a useful description or prescription of policymaking.

 

Finally, if you read enough of these policy analysis texts, you get a sense that many are bunched together even if they describe their approach as new or distinctive.

  • Indeed, Meltzer and Schwartz (2019: 22-3) provide a table (containing Bardach and Patashnik, Patton et al, Stokey and Zeckhauser, Hammond et al, and Weimer & Vining) of ‘quite similar’ X-step approaches.
  • Weimer and Vining also discuss the implications of policy theories and present the sense that X-step policy analysis should be flexible and adaptive.
  • Many texts – including Radin, and Smith (2016) – focus on the value of case studies to think through policy analysis in particular contexts, rather than suggesting that we can produce a universal blueprint.

However, as Geva-May might suggest, this is not a bad thing if our aim is to generate the sense that policy analysis is a profession with its own practices and heuristics.

 

 



Policy Analysis in 750 words: Beryl Radin (2019) Policy Analysis in the Twenty-First Century

Please see the Policy Analysis in 750 words series overview before reading the summary. As usual, the 750-word description is more for branding than accuracy.

Beryl Radin (2019) Policy Analysis in the Twenty-First Century (Routledge)

[Image: cover of Radin (2019) Policy Analysis in the Twenty-First Century]

‘The basic relationship between a decision-maker (the client) and an analyst has moved from a two-person encounter to an extremely complex and diverse set of interactions’ (Radin, 2019: 2).

Many texts in this series continue to highlight the client-oriented nature of policy analysis (Weimer and Vining), but within a changing policy process that has altered the nature of that relationship profoundly.

This new policymaking environment requires new policy analysis skills and training (see Mintrom), and limits the applicability of classic 8-step (or 5-step) policy analysis techniques (2019: 82).

We can use Radin’s work to present two main stories of policy analysis:

  1. The old ways of making policy resembled a club, or reflected a clear government hierarchy, involving:
  • a small number of analysts, generally inside government (such as senior bureaucrats, scientific experts, and – in particular – economists),
  • giving technical or factual advice,
  • about policy formulation,
  • to policymakers at the heart of government,
  • on the assumption that policy problems would be solved via analysis and action.
  2. Modern policy analysis is characterised by a more open and politicised process in which:
  • many analysts, inside and outside government,
  • compete to interpret facts, and give advice,
  • about setting the agenda, and making, delivering, and evaluating policy,
  • across many policymaking venues,
  • often on the assumption that governments have a limited ability to understand and solve complex policy problems.

As a result, the client-analyst relationship is increasingly fluid:

In previous eras, the analyst’s client was a senior policymaker, the main focus was on the analyst-client relationship, and ‘both analysts and clients did not spend much time or energy thinking about the dimensions of the policy environment in which they worked’ (2019: 59). Now, in a multi-centric policymaking environment:

  1. It is tricky to identify the client.
  • We could imagine the client to be someone paying for the analysis, someone affected by its recommendations, or all policy actors with the ability to act on the advice (2019: 10).
  • If there is ‘shared authority’ for policymaking within one political system, a ‘client’ (or audience) may be a collection of policymakers and influencers spread across a network containing multiple types of government, non-governmental actors, and actors responsible for policy delivery (2019: 33).
  • The growth in international cooperation also complicates the idea of a single client for policy advice (2019: 33-4)
  • This shift may limit the ‘face-to-face encounters’ that would otherwise provide information for – and perhaps trust in – the analyst (2019: 2-3).
  2. It is tricky to identify the analyst.
  • Radin (2019: 9-25) traces, from the post-war period in the US, a major expansion of policy analysts, from the notional centre of policymaking in federal government towards analysts spread across many venues, inside government (across multiple levels, ‘policy units’, and government agencies) and congressional committees, and outside government (such as in influential think tanks).
  • Policy analysts can also be specialist external companies contracted by organisations to provide advice (2019: 37-8).
  • This expansion shifted the image of many analysts, from a small number of trusted insiders towards many being treated as akin to interest groups selling their pet policies (2019: 25-6).
  • The nature – and impact – of policy analysis has always been a little vague, but now it seems more common to suggest that ‘policy analysts’ may really be ‘policy advocates’ (2019: 44-6).
  • As such, they may now have to work harder to demonstrate their usefulness (2019: 80-1) and accept that their analysis will have a limited impact (2019: 82, drawing on Weiss’ discussion of ‘enlightenment’).

Consequently, the necessary skills of policy analysis have changed:

Although many people value systematic policy analysis (and many rely on economists), an effective analyst does not simply apply economic or scientific techniques to analyse a problem or solution, or rely on one source of expertise or method, as if it were possible to provide ‘neutral information’ (2019: 26).

Indeed, Radin (2019: 31; 48) compares the old ‘acceptance that analysts would be governed by the norms of neutrality and objectivity’ with

(a) increasing calls to acknowledge that policy analysis is part of a political project to foster some notion of public good or ‘public interest’, and

(b) Stone’s suggestion that the projection of reason and neutrality is a political strategy.

In other words, the fictional divide between political policymakers and neutral analysts is difficult to maintain.

Rather, think of analysts as developing wider skills to operate in a highly political environment in which the nature of the policy issue is contested, responsibility for a policy problem is unclear, and it is not clear how to resolve major debates on values and priorities:

  • Some analysts will be expected to see the problem from the perspective of a specific client with a particular agenda.
  • Other analysts may be valued for their flexibility and pragmatism, such as when they acknowledge the role of their own values, maintain or operate within networks, communicate by many means, and supplement ‘quantitative data’ with ‘hunches’ when required (2019: 2-3; 28-9).

Radin (2019: 21) emphasises a shift in skills and status:

The idea of (a) producing new and relatively abstract ideas, based on high control over available information, at the top of a hierarchical organisation, makes way for (b) developing the ability to:

  • generate a wider understanding of organisational and policy processes, reflecting the diffusion of power across multiple policymaking venues
  • identify a map of stakeholders,
  • manage networks of policymakers and influencers,
  • incorporate ‘multiple and often conflicting perspectives’,
  • make and deliver more concrete proposals (2019: 59-74), while recognising
  • the contested nature of information, and the practices used to gather it, even during multiple attempts to establish the superiority of scientific evidence (2019: 89-103),
  • the limits to a government’s ability to understand and solve problems (2019: 95-6),
  • the inescapable conflict over trade-offs between values and goals, which are difficult to resolve simply by weighting each goal (2019: 105-8; see Stone), and
  • do so flexibly, to recognise major variations in problem definition, attention and networks across different policy sectors and notional ‘stages’ of policymaking (2019: 75-9; 84).

Radin’s (2019: 48) overall list of relevant skills includes:

  1. ‘Case study methods, Cost- benefit analysis, Ethical analysis, Evaluation, Futures analysis, Historical analysis, Implementation analysis, Interviewing, Legal analysis, Microeconomics, Negotiation, mediation, Operations research, Organizational analysis, Political feasibility analysis, Public speaking, Small- group facilitation, Specific program knowledge, Statistics, Survey research methods, Systems analysis’

These skills develop alongside analytical experience and status, from the early-career analyst trying to secure or keep a job, to the experienced operator looking forward to retirement (2019: 54-5).

A checklist for policy analysts

Based on these skills requirements, the contested nature of evidence, and the complexity of the policymaking environment, Radin (2019: 128-31) produces a 4-page checklist of – 91! – questions for policy analysts.

For me, it serves two main functions:

  1. It is a major contrast to the idea that we can break policy analysis into a mere 5-8 steps (rather, think of these small numbers as marketing for policy analysis students, akin to 7-minute abs)
  2. It presents policy analysis as an overwhelming task with absolutely no guarantee of policy impact.

To me, this cautious, eyes-wide-open, approach is preferable to the sense that policy analysts can change the world if they just get the evidence and the steps right.

Further Reading:

  1. Iris Geva-May (2005) ‘Thinking Like a Policy Analyst. Policy Analysis as a Clinical Profession’, in Geva-May (ed) Thinking Like a Policy Analyst. Policy Analysis as a Clinical Profession (Basingstoke: Palgrave)

Although the idea of policy analysis may be changing, Geva-May (2005: 15) argues that it remains a profession with its own set of practices and ways of thinking. As with other professions (like medicine), it would be unwise to practice policy analysis without education and training or otherwise learning the ‘craft’ shared by a policy analysis community (2005: 16-17). For example, while not engaging in clinical diagnosis, policy analysts can draw on 5-step process to diagnose a policy problem and potential solutions (2005: 18-21). Analysts may also combine these steps with heuristics to determine the technical and political feasibility of their proposals (2005: 22-5), as they address inevitable uncertainty and their own bounded rationality (2005: 26-34; see Gigerenzer on heuristics). As with medicine, some aspects of the role – such as research methods – can be taught in graduate programmes, while others may be better suited to on the job learning (2005: 36-40). If so, it opens up the possibility that there are many policy analysis professions to reflect different cultures in each political system (and perhaps the venues within each system).

  2. Vining and Weimer’s take on the distinction between policy analysis and policy process research.

 



Policy Analysis in 750 words: Deborah Stone (2012) Policy Paradox

Please see the Policy Analysis in 750 words series overview before reading the summary. This post is 750 words plus a bonus 750 words plus some further reading that doesn’t count in the word count even though it does.

[Image: cover of Stone (2012) Policy Paradox, 3rd edition]

Deborah Stone (2012) Policy Paradox: The Art of Political Decision Making 3rd edition (Norton)

‘Whether you are a policy analyst, a policy researcher, a policy advocate, a policy maker, or an engaged citizen, my hope for Policy Paradox is that it helps you to go beyond your job description and the tasks you are given – to think hard about your own core values, to deliberate with others, and to make the world a better place’ (Stone, 2012: 15)

Stone (2012: 379-85) rejects the image of policy analysis as a ‘rationalist’ project, driven by scientific and technical rules, and separable from politics. Rather, every policy analyst’s choice is a political choice – to define a problem and solution and, in doing so, to choose how to categorise people and behaviour – backed by strategic persuasion and storytelling.

The Policy Paradox: people entertain multiple, contradictory, beliefs and aims

Stone (2012: 2-3) describes the ways in which policy actors compete to define policy problems and public policy responses. The ‘paradox’ is that it is possible to define the same policies in contradictory ways.

‘Paradoxes are nothing but trouble. They violate the most elementary principle of logic: something can’t be two different things at once. Two contradictory interpretations can’t both be true. A paradox is just such an impossible situation, and political life is full of them’ (Stone, 2012: 2).

This paradox does not refer simply to a competition between different actors to define policy problems and the success or failure of solutions. Rather:

  • The same actor can entertain very different ways to understand problems, and can juggle many criteria to decide that a policy outcome was a success and a failure (2012: 3).
  • Surveys of the same population can report contradictory views – encouraging a specific policy response and its complete opposite – when asked different questions in the same poll (2012: 4; compare with Riker)

Policy analysts: you don’t solve the Policy Paradox with a ‘rationality project’

Like many posts in this series (Smith, Bacchi, Hindess), Stone (2012: 9-11) rejects the misguided notion of objective scientists using scientific methods to produce one correct answer (compare with Spiegelhalter and Weimer & Vining). A policy paradox cannot be solved by ‘rational, analytical, and scientific methods’.

Further, Stone (2012: 10-11) rejects the over-reliance, in policy analysis, on the misleading claim that:

  • policymakers are engaging primarily with markets rather than communities (see 2012: 35 on the comparison between a ‘market model’ and ‘polis model’),
  • economic models can sum up political life, and
  • cost-benefit analysis can reduce a complex problem into the sum of individual preferences using a single unambiguous measure.

Rather, many factors undermine such simplicity:

  1. People do not simply act in their own individual interest. Nor can they rank-order their preferences in a straightforward manner according to their values and self-interest.
  • Instead, they maintain a contradictory mix of objectives, which can change according to context and their way of thinking – combining cognition and emotion – when processing information (2012: 12; 30-4).
  2. People are social actors. Politics is characterised by ‘a model of community where individuals live in a dense web of relationships, dependencies, and loyalties’ and exercise power with reference to ideas as much as material interests (2012: 10; 20-36; compare with Ostrom, more Ostrom, and Lubell; and see Sousa on contestation).
  3. Morals and emotions matter. If people juggle contradictory aims and measures of success, then a story infused with ‘metaphor and analogy’, and appealing to values and emotions, prompts people ‘to see a situation as one thing rather than another’ and therefore draw attention to one aim at the expense of the others (2012: 11; compare with Gigerenzer).

Policy analysis reconsidered: the ambiguity of values and policy goals

Stone (2012: 14) identifies the ambiguity of the criteria for success used in 5-step policy analyses. They do not form part of a solely technical or apolitical process to identify trade-offs between well-defined goals (compare Bardach, Weimer and Vining, and Mintrom). Rather, ‘behind every policy issue lurks a contest over conflicting, though equally plausible, conceptions of the same abstract goal or value’ (2012: 14). Examples of competing interpretations of valence issues include definitions of:

  1. Equity, according to: (a) which groups should be included, how to assess merit, how to identify key social groups, if we should rank populations within social groups, how to define need and account for different people placing different values on a good or service, (b) which method of distribution to use (competition, lottery, election), and (c) how to balance individual, communal, and state-based interventions (2012: 39-62).
  2. Efficiency, to use the least resources to produce the same objective, according to: (a) who determines the main goal and how to balance multiple objectives, (b) who benefits from such actions, and (c) how to define resources while balancing equity and efficiency – for example, does a public sector job and a social security payment represent a sunk cost to the state or a social investment in people? (2012: 63-84).
  3. Welfare or Need, according to factors including (a) the material and symbolic value of goods, (b) short term support versus a long term investment in people, (c) measures of absolute poverty or relative inequality, and (d) debates on ‘moral hazard’ or the effect of social security on individual motivation (2012: 85-106)
  4. Liberty, according to (a) a general balancing of freedom from coercion and freedom from the harm caused by others, (b) debates on individual and state responsibilities, and (c) decisions on whose behaviour to change to reduce harm to what populations (2012: 107-28)
  5. Security, according to (a) our ability to measure risk scientifically (see Spiegelhalter and Gigerenzer), (b) perceptions of threat and experiences of harm, (c) debates on how much risk to safety to tolerate before intervening, (d) who to target and imprison, and (e) the effect of surveillance on perceptions of democracy (2012: 129-53).

Policy analysis as storytelling for collective action

Actors use policy-relevant stories to influence the ways in which their audience understands (a) the nature of policy problems and feasibility of solutions, within (b) a wider context of policymaking in which people contest the proper balance between state, community, and market action. Stories can influence key aspects of collective action, including:

  1. Defining interests and mobilising actors, by drawing attention to – and framing – issues with reference to an imagined social group and its competition (e.g. the people versus the elite; the strivers versus the skivers) (2012: 229-47).
  2. Making decisions, by framing problems and solutions (2012: 248-68). Stone (2012: 260) contrasts the ‘rational-analytic model’ with real-world processes in which actors deliberately frame issues ambiguously, shift goals, keep feasible solutions off the agenda, and manipulate analyses to make their preferred solution seem the most efficient and popular.
  3. Defining the role and intended impact of policies, such as when balancing punishments versus incentives to change behaviour, or individual versus collective behaviour (2012: 271-88).
  4. Setting and enforcing rules (see institutions), in a complex policymaking system where a multiplicity of rules interact to produce uncertain outcomes, and a powerful narrative can draw attention to the need to enforce some rules at the expense of others (2012: 289-310).
  5. Persuasion, drawing on reason, facts, and indoctrination. Stone (2012: 311-30) highlights the context in which actors construct stories to persuade: people engage emotionally with information, people take certain situations for granted even though they produce unequal outcomes, facts are socially constructed, and there is unequal access to resources – held in particular by government and business – to gather and disseminate evidence.
  6. Defining human and legal rights, when (a) there are multiple, ambiguous, and intersecting rights (in relation to their source, enforcement, and the populations they serve), (b) actors compete to make sure that theirs are enforced, (c) inevitably at the expense of others, because the enforcement of rights requires a disproportionate share of limited resources (such as policymaker attention and court time) (2012: 331-53).
  7. Influencing debate on the powers of each potential policymaking venue – in relation to factors including (a) the legitimate role of the state in market, community, family, and individual life, (b) how to select leaders, (c) the distribution of power between levels and types of government – and who to hold to account for policy outcomes (2012: 354-77).

Key elements of storytelling include:

  1. Symbols, which sum up an issue or an action in a single picture or word (2012: 157-8)
  2. Characters, such as heroes or villains, who symbolise the cause of a problem or source of solution (2012: 159)
  3. Narrative arcs, such as a battle by your hero to overcome adversity (2012: 160-8)
  4. Synecdoche, to highlight one example of an alleged problem to sum up its whole (2012: 168-71; compare the ‘welfare queen’ example with Social Construction and Policy Design)
  5. Metaphor, to create an association between a problem and something relatable, such as a virus or disease, a natural occurrence (e.g. earthquake), something broken, something about to burst if overburdened, or war (2012: 171-78; e.g. is crime a virus or a beast?)
  6. Ambiguity, to give people different reasons to support the same thing (2012: 178-82)
  7. Using numbers to tell a story (see the sketch after this list), based on political choices about how to: categorise people and practices, select the measures to use, interpret the figures to evaluate or predict the results, project the sense that complex problems can be reduced to numbers, and assign authority to the counters (2012: 183-205; compare with Spiegelhalter)
  8. Assigning causation, in relation to categories including accidental or natural, ‘mechanical’ or automatic (or in relation to institutions or systems), and human-guided causes that have intended or unintended consequences (such as malicious intent versus recklessness)
  • ‘Causal strategies’ include: emphasising a natural versus a human cause, relating the problem to ‘bad apples’ rather than systemic failure, and suggesting that the problem was too complex to anticipate or influence
  • Actors use these arguments to influence rules, assign blame, identify ‘fixers’, and generate alliances among victims or potential supporters of change (2012: 206-28).
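
A minimal sketch may help to make the ‘numbers’ point concrete. In this hypothetical example (all incomes and thresholds invented for illustration), the same data support two different headline stories, depending on where the counter draws the line:

```python
# Hypothetical illustration: the same six household incomes produce two
# different 'poverty' headlines, depending on where the line is drawn.
incomes = [8_000, 11_000, 14_000, 19_000, 23_000, 40_000]

for line in (10_000, 15_000):  # the threshold itself is a political choice
    poor = sum(1 for income in incomes if income < line)
    print(f"Poverty line {line}: {poor} of {len(incomes)} households count as poor")
```

The arithmetic is trivial; the political choices lie in selecting the threshold, the measure, and the counters.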

Wider Context and Further Reading: 1. Policy analysis

This post connects to several other 750 Words posts, which suggest that facts don’t speak for themselves. Rather, effective analysis requires you to ‘tell your story’, in a concise way, tailored to your audience.

For example, consider two ways to establish cause and effect in policy analysis:

One is to conduct and review multiple randomised control trials.

Another is to use a story of a hero or a villain (perhaps to mobilise actors in an advocacy coalition).

  2. Evidence-based policymaking

Stone (2012: 10) argues that analysts who try to impose one worldview on policymaking will find that ‘politics looks messy, foolish, erratic, and inexplicable’. For more open-minded analysts, politics opens up possibilities for creativity and cooperation (2012: 10).

This point is directly applicable to the ‘politics of evidence based policymaking’. A common question to arise from this worldview is ‘why don’t policymakers listen to my evidence?’ and one answer is ‘you are asking the wrong question’.

  3. Policy theories highlight the value of stories (to policy analysts and academics)

Policy problems and solutions necessarily involve ambiguity:

  1. There are many ways to interpret problems, and we resolve such ambiguity by exercising power to attract attention to one way to frame a policy problem at the expense of others (in other words, not with reference to one superior way to establish knowledge).
  2. Policy is actually a collection of – often contradictory – policy instruments and institutions, interacting in complex systems or environments, to produce unclear messages and outcomes. As such, what we call ‘public policy’ (for the sake of simplicity) is subject to interpretation and manipulation as it is made and delivered, and we struggle to conceptualise and measure policy change. Indeed, it makes more sense to describe competing narratives of policy change.

Box 13.1, Understanding Public Policy, 2nd ed.

  4. Policy theories and storytelling

People communicate meaning via stories. Stories help us turn (a) a complex world, which provides a potentially overwhelming amount of information, into (b) something manageable, by identifying its most relevant elements and guiding action (compare with Gigerenzer on heuristics).

The Narrative Policy Framework identifies the storytelling strategies of actors seeking to exploit other actors’ cognitive shortcuts, using a particular format – containing the setting, characters, plot, and moral – to focus on some beliefs over others, and reinforce someone’s beliefs enough to encourage them to act.

Compare with Tuckett and Nikolic on the stories that people tell to themselves.


Filed under 750 word policy analysis, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling

Policy Analysis in 750 words: Barry Hindess (1977) Philosophy and Methodology in the Social Sciences

Please see the Policy Analysis in 750 words series overview before reading the summary. This post started off as 750 words before growing.


Barry Hindess (1977) Philosophy and Methodology in the Social Sciences (Harvester)

‘If the claims of philosophy to a special kind of knowledge can be shown to be without foundation, if they are at best dogmatic or else incoherent, then methodology is an empty and futile pursuit and its prescriptions are vacuous’ (Hindess, 1977: 4).

This book may seem like a weird addition to a series on policy analysis.

However, it follows the path set by Carol Bacchi, asking whose interests we serve when we frame problems for policy analysis, and Linda Tuhiwai Smith, asking whose research counts when we do so.

One important answer is that the status of research and the framing of the problem result from the exercise of power, rather than the objectivity of analysts and natural superiority of some forms of knowledge.

In other posts on ‘the politics of evidence based policymaking’, I describe the frustration, among many scientists, that policymakers do not share their views on a hierarchy of knowledge based on superior methods. These posts can satisfy different audiences: if you have a narrow view of what counts as good evidence, you can focus on the barriers between evidence and policy; if you have a broader view, you can wonder why those barriers seem higher for other forms of knowledge (e.g. Linda Tuhiwai Smith on the marginalisation of indigenous knowledge).

In this post, I encourage you to go a bit further down this path by asking how people accumulate knowledge in the first place.  For example, see introductory accounts by Chalmers, entertaining debates involving Feyerabend, and Hindess’ book to explore your assumptions about how we know what we know.

My take-home point from these texts is that we are only really able to describe convincingly the argument that we are not accumulating knowledge!

The simple insight from Chalmers’ introduction is that inductive (observational) methods to generate knowledge are circular:

  • we engage inductively to produce theory (to generalise from individual cases), but
  • we use theory to engage in any induction, such as to decide what is important to study, and what observations are relevant/irrelevant, and why.

In other words, we need theories of the world to identify the small number of things to observe (to filter out an almost unlimited number of signals from our environments), but we need our observations to generate those theories!

Hindess shows that all claims to knowledge involve such circularity: we employ philosophy to identify the nature of the world (ontology) and how humans can generate valid knowledge of it (epistemology); that philosophy informs a methodology, which states that scientific knowledge is only valid if it lives up to a prescribed method; then the resulting scientific knowledge is used to validate the methodology and its underlying philosophy (1977: 3-22). If so, we are describing something that makes sense according to the rules and practices of its proponents, not an objective scientific method to help us accumulate knowledge.

Further, different social/ professional groups support different forms of working knowledge that they value for different reasons (such as to establish ‘reliability’ or ‘meaning’). To do so, they invent frameworks to help them theorise the world, such as to describe the relationship between concepts (and key concepts such as cause and effect). These frameworks represent a useful language to communicate about our world rather than simply existing independently of it and corresponding to it.

Hindess’ subsequent work explored the context in which we exercise power to establish the status of some forms of knowledge over others, to pursue political ends rather than simply the ‘objective’ goals of science. That analysis is as relevant now as it was then.

How do these ideas inform policy analysis?

Perhaps, by this stage, you are thinking: isn’t this a relativist argument, concluding that we should never assert the relative value of some forms of knowledge over others (like astronomy versus astrology)?

I don’t think so. Rather, it invites us to do two more sensible things:

  1. Accept that different approaches to knowledge may be ‘incommensurable’.
  • They may not share ‘a common set of perceptions’ (or even a set of comparable questions) ‘which would allow scientists to choose between one paradigm and the other . . . there will be disputes between them that cannot all be settled by an appeal to the facts’ (Hindess, 1988: 74)
  • If so, “there is no possibility of an extratheoretical court of appeal which can ‘validate’ the claims of one position against those of another” (Hindess, 1977: 226).
  2. Reject the sense of self-importance, and hubris, which often seems to accompany discussions of superior forms of knowledge. Don’t be dogmatic. Live by the maxim ‘don’t be an arse’. Reflect on the production, purpose, value, and limitations of our knowledge in different contexts (which Spiegelhalter does well).

On that basis, we can have honest discussions about why we should exercise power in a political system to favour some forms of knowledge over others in policy analysis, reflecting on:

  1. The relatively straightforward issue of internal consistency: is an approach coherent, and does it succeed on its own terms?
  • For example, do its users share a clear language, pursue consistent aims with systematic methods, find ways to compare and reinforce the value of each other’s findings, while contributing to a thriving research agenda (as discussed in box 13.3 below)?
  • Or, do they express their aims in other ways, such as to connect research to emancipation, or value respect for a community over the scientific study of that community?
  2. The less straightforward issue of overall consistency: how can we compare different forms of knowledge when they do not follow each other’s rules or standards?
  • e.g. what if one approach is (said to be) more rigorous and the other more coherent?
  • e.g. what if one produces more data but another produces more ownership?

In each case, the choice of criteria for comparison involves political choice (as part of a series of political choices), without the ability – described in relation to ‘cost benefit analysis’ – to translate all relevant factors into a single unit.

  3. The imperative to ‘synthesise’ knowledge.

Spiegelhalter provides a convincing description of the benefits of systematic review and ‘meta-analysis’ within a single, clearly defined, scientific approach containing high agreement on methods and standards for comparison.
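
To illustrate what that within-approach synthesis involves, here is a minimal sketch of a fixed-effect meta-analysis – one common pooling method – using invented trial results, in which each estimate is weighted by the inverse of its variance:

```python
# Minimal fixed-effect meta-analysis with invented effect sizes.
# Each trial's estimate is weighted by 1/SE^2, so more precise
# studies contribute more to the pooled result.
effects = [0.30, 0.12, 0.25, 0.18]  # hypothetical effect estimates
ses = [0.10, 0.08, 0.15, 0.05]      # their standard errors

weights = [1 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect: {pooled:.2f} (SE {pooled_se:.2f})")
```

The calculation only means something because every input already shares the same outcome measure and agreed standards of validity.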

However, this approach is not applicable directly to the review of multiple forms of knowledge.

So, what do people do?

  • E.g. some systematic reviewers apply the standards of their own field to all others, which (a) tends to produce the argument that very little high quality evidence exists because other people are doing it wrongly, and (b) perhaps exacerbates a tendency for policymakers to attach relatively low value to such evaluations.
  • E.g. policy analysts are more likely to apply different criteria: is it available, understandable, ‘usable’, and policy relevant (e.g. see ‘knowledge management for policy’)?

Each approach is a political choice to include/ exclude certain forms of knowledge according to professional norms or policymaking imperatives, not a technical process to identify the most objective information. If you are going to do it, you should at least be aware of what you are doing.

Box 13.3, Understanding Public Policy, 2nd ed.


Filed under 750 word policy analysis

Policy Analysis in 750 words: Linda Tuhiwai Smith (2012) Decolonizing Methodologies

Please see the Policy Analysis in 750 words series overview before reading the summary. The reference to 750 words is increasingly misleading.

Linda Tuhiwai Smith (2012) Decolonizing Methodologies 2nd edition (London: Zed Books)

‘Whose research is it? Who owns it? Whose interests does it serve? Who will benefit from it? Who has designed its questions and framed its scope? Who will carry it out? Who will write it up? How will its results be disseminated?’ (Smith, 2012: 10; see also 174-7)

Many texts in this series highlight the politics of policy analysis, but few (such as Bacchi) identify the politics of the research that underpins policy analysis.

You can find some discussion of these issues in the brief section on ‘co-production’, in wider studies of co-produced research and policy and of ‘evidence based policymaking’, and in posts on power and knowledge and feminist institutionalism. However, the implications rarely feed into standard policy analysis texts. This omission is important, because the production of knowledge – and the exercise of power to define whose knowledge counts – is as political as it gets.

Smith (2012) demonstrates this point initially by identifying multiple, often hidden, aspects of politics and power that relate to ‘research’ and ‘indigenous peoples’:


  1. The term ‘indigenous peoples’ is contested, and its meaning-in-use can range from
  • positive self-identification, to highlight common international experiences and struggles for self-determination but distinctive traditions; other terms include ‘First Nations’ in Canada or, in New Zealand, ‘Maori’ as opposed to ‘Pakeha’ (the colonizing population) (2012: 6)
  • negative external-identification, including – in some cases – equating ‘indigenous’ (or similar terms) with ‘dirtiness, savagery, rebellion and, since 9/11, terrorism’ (2012: xi-xii).


  2. From the perspective of ‘the colonized’, “the term ‘research’ is inextricably linked to European imperialism and colonialism” (2012: 1; 21-6). Western research practices (and the European ‘Enlightenment’) reflect and reinforce political practices associated with colonial rule (2012: 2; 23).

‘To the colonized, the ways in which academic research has been implicated in the throes of imperialism remains a painful memory’ (2012: back cover).

“The word itself, ‘research’, is probably one of the dirtiest words in the indigenous world’s vocabulary” (2012: xi).


  3. People in indigenous communities describe researchers who exploit ‘their culture, their knowledge, their resources’ (and, in some cases, their bodies) to bolster their own income, career or profession (2012: xi; 91-4; 102-7), in the context of a long history of subjugation and slavery that makes such practices possible (2012: 21-6; 28-9; 176-7), and “justified as being for ‘the good of mankind’” (2012: 26).


  4. Western researchers think – hubristically – that they can produce a general understanding of the practices and cultures of indigenous peoples (e.g. using anthropological methods). Instead, they produce – irresponsibly or maliciously – negative and often dehumanizing images that feed into policies ‘employed to deny the validity of indigenous peoples’ claim to existence’ and solve the ‘indigenous problem’ (2012: 1; 8-9; 26-9; 62-5; 71-2; 81-91; 94-6).

For example, research contributes to a tendency for governments to

  • highlight, within indigenous communities, indicators of inequality (in relation to factors such as health, education, crime, and family life), and relate them to
  • indigenous cultures and low intelligence, rather than
  • the ways in which colonial legacy and current policy contribute to poverty and marginalisation (2012: 4; 12; compare with Social Construction and Policy Design).


  5. Western researchers’ views on how to produce high-quality scientific evidence lead them to ‘see indigenous peoples, their values and practices as political hindrances that get in the way of good research’ (2012: xi; 66-71; compare with ‘hierarchy of evidence’). Similarly, the combination of a state’s formal laws and unwritten rules and assumptions can serve to dismiss indigenous community knowledge as not meeting evidential standards (2012: 44-9).


  6. Many indigenous researchers need to negotiate the practices and expectations of different groups, such as if they are portrayed as:
  • ‘insiders’ in relation to an indigenous community (and, for example, expected by that community to recognise the problems with Western research traditions)
  • ‘outsiders’, by (a) an indigenous community, in relation to their ‘Western education’ (2012: 5), or (b) a colonizing state commissioning insider research
  • less technically proficient or less likely to maintain confidentiality than a ‘non-indigenous researcher’ (2012: 12)

Can policy analysis be informed by a new research agenda?

In that context, Smith (2012: xiii; 111-25) outlines a new agenda built on the recognition that research is political and connected explicitly to political and policy aims (compare with Feminism, Postcolonialism, and Critical Policy Studies).

At its heart is a commitment to indigenous community ‘self-determination’, ‘survival’, ‘recovery’, and ‘development’, aided by processes such as social movement mobilization and decolonization (2012: 121). This agenda informs the meaning of ethical conduct, signalling that research:

  • serves explicit political goals and requires researchers to reflect on their role as activists in an emancipatory project, in contrast to the disingenuous argument that science or scientists are objective (2012: 138-42; 166-77; 187-8; 193-5; 198-215; 217-26)
  • is not ‘something done only by white researchers to indigenous peoples’ (2012: 122),
  • is not framed so narrowly, in relation to specific methods or training, that it excludes (by definition) most indigenous researchers, community involvement in research design, and methods such as storytelling (2012: 127-38; 141; for examples of methods, see 144-63; 170-1)
  • requires distinctive methods and practices to produce knowledge, reinforced by mutual support during the nurturing of such practices
  • requires a code of respectful conduct that extends ‘beyond issues of individual consent and confidentiality’ (2012: 124; 179-81).

Wider context: informing the ‘steps’ to policy analysis

This project informs directly the ‘steps’ to policy analysis described in Bardach, Weimer and Vining, and Mintrom, including:

Problem definition

Mintrom describes the moral and practical value of engaging with stakeholders to help frame policy problems and design solutions (as part of a similarly-worded aim to transform and improve the world).

However, Smith (2012: 228-32; 13) describes a gulf in the framing of problems so profound that it cannot be bridged simply via consultation or half-hearted ‘co-production’ exercises.

For example, if a government policy analyst relates poor health to individual and cultural factors in indigenous communities, and people in those communities relate it to colonization, land confiscation, minimal self-determination, and an excessive focus on individuals, what could we realistically expect from set-piece government-led stakeholder analyses built on research that has already set the policy agenda (compare with Bacchi)?

Rather, Smith (2012: 15-16) describes the need, within research practices, for continuous awareness of, and respect for, a community’s ‘cultural protocols, values and behaviours’ as part of ‘an ethical and respectful approach’. Indeed, the latter could have mutual benefits which underpin the long-term development of trust: a community may feel less marginalised by the analysis-to-policy process, and future analysts may be viewed with less suspicion.

Even so, a more respectful policy process is not the same as accepting that some communities may benefit more from writing about their own experiences than contributing to someone else’s story. Writing about the past, present, and future is an exercise of power to provide a dominant perspective with which to represent people and problems (2012: 29-41; 52-9).

Analysing and comparing solutions

Imagine a cost-benefit analysis designed to identify the most efficient outcomes by translating all of the predicted impacts on people into a single unit of analysis (such as a dollar amount, or quality-adjusted-life-years). Assumptions include that we can: (a) assign the same value to a notionally similar experience, and (b) produce winners from policy and compensate losers.

Yet, this calculation hinges on the power to decide how we should understand such experiences and place relative values on outcomes, and to take a calculation of their value to one population and generalise it to others. Smith’s analysis suggests that such processes will not produce outcomes that we can describe honestly as societal improvements. Rather, they feed into a choice to produce winners from policy and fail to compensate losers in an adequate or appropriate manner.
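
As a minimal sketch of the mechanics being criticised here (all figures and valuations are hypothetical), a cost-benefit calculation reduces every predicted impact to one unit and one number:

```python
# Toy cost-benefit analysis: every predicted impact is converted into a
# single monetary unit. All figures and valuations are hypothetical.
impacts = {
    "health gains (monetised QALYs)": 1_200_000,
    "travel time saved": 400_000,
    "community disruption": -900_000,  # someone decided how to price this loss
}
programme_cost = 500_000

net_benefit = sum(impacts.values()) - programme_cost
print(f"Net benefit: {net_benefit}")  # one number; winners and losers are hidden
```

The single total makes the choice look decidable, but each entry embeds a contestable valuation, and the sum says nothing about who gains and who loses.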

See also:

  1. In relation to policy theories

This post – Policy Concepts in 1000 Words: Feminism, Postcolonialism, and Critical Policy Studies – provides a tentative introduction to the ways in which many important approaches can inform policy theories.

The 2nd edition of Understanding Public Policy summarises these themes as follows:

pp. 49-50, Understanding Public Policy, 2nd ed.

  2. In relation to policy analysis

If you look back to the Policy Analysis in 750 words series overview, you will see that a popular way to address policy issues is through the ‘coproduction’ of research and policy, perhaps based on a sincere commitment to widen a definition of useful knowledge/ ways of thinking and avoid simply making policy from the ‘centre’ or ‘top down’.

Yet, the post you are now reading, summarising Decolonizing Methodologies, should prompt us to question the extent to which a process could be described sincerely as ‘coproduction’ if there is such an imbalance of power and incongruence of ideas between participants.

Although many key texts do not discuss ‘policy analysis’ directly, they provide ways to reflect imaginatively on this problem. I hope that I am not distorting their original messages, but please note that the following are my stylized interpretations of key texts.

Audre Lorde (2018*) The Master’s Tools Will Never Dismantle the Master’s House (Penguin) (*written from 1978-82)

Lorde Masters Tools

One issue with very quick client-oriented policy analysis is that it encourages analysts to (a) work with an already-chosen definition of the policy problem, and (b) use well-worn methods to collect information, including (c) engaging with ideas and people with whom they are already familiar.

Some forms of research and policy analysis may be more conducive to challenging existing frames and encouraging wider stakeholder engagement. Still, compare this mild shift from the status quo with a series of issues and possibilities identified by Lorde (2018):

  • Some people are so marginalised and dismissed that they struggle to communicate – about the ways in which they are oppressed, and how they might contribute to imagining a better world – in ways that would be valued (or even noticed) during stakeholder consultation (2018: 1-5 ‘Poetry is not a luxury’).
  • The ‘european-american male tradition’ only allows for narrowly defined (‘rational’) means of communication (2018: 6-15 ‘Uses of the Erotic’).

A forum can be designed ostensibly to foster communication and inclusivity, only to actually produce the opposite, by signalling to some participants that

  • they are a token afterthought, whose views and experiences are – at best – only relevant to a very limited aspect of a wide discussion, and
  • their differences will be feared, not celebrated, becoming a source of conflict, not mutual nurture or cooperation.

It puts marginalised people in the position of having to work hard simply to be heard. They learn that powerful people are only willing to listen if others do the work for them, because (a) they are ignorant of experiences other than their own, and/or (b) they profess ignorance strategically to suck the energy from people whose views they fear and do not understand. No one should feel immune from such criticism even if they profess to be acting with good intentions (2018: 16-21 ‘The Master’s Tools Will Never Dismantle the Master’s House’).

  • The correct response to racism is anger. Therefore, do not prioritise (a) narrow rules of civility, or the sensibilities of the privileged, if (b) your aim is to encourage conversations with people who are trying to express the ways in which they deal with overwhelming and continuous hatred, violence, and oppression (2018: 22-35, ‘Uses of Anger: Women Responding to Racism’)

Boaventura de Sousa Santos (2014) Epistemologies of the South: Justice Against Epistemicide (Routledge)

Sousa cover

Imagine global policy processes and policy analysis, in which some countries and international organisations negotiate agreements, influenced (or not) by critical social movements in pursuit of social justice. Santos (2014) identifies a series of obstacles including:

  • A tendency for Western (as part of the Global North) ways of thinking to dominate analysis, at the expense of insights from the Global South (2014: viii), producing
  • A tendency for ‘Western centric’ ideas to inform the sense that some concepts and collective aims – such as human dignity and human rights – can be understood universally, rather than through the lens of struggles that are specific to some regions (2014: 21; 38)
  • A lack of imagination or willingness to imagine different futures and conceptions of social justice (2014: 24)

Consequently, actors may come together to discuss major policy change on ostensibly the same terms, only for some groups to – intentionally and unintentionally – dominate thought and action and reinforce the global inequalities they propose to reduce.

Sara Ahmed (2017) Living a Feminist Life (Duke University Press)

Ahmed cover

Why might your potential allies in ‘coproduction’ be suspicious of your motives, or sceptical about the likely outcomes of such an exchange? One theme throughout Smith’s (2012) book is that people often co-opt key terms (such as ‘decolonizing’) to perform the sense that they care about social change, to try to look like they are doing something important, while actually designing ineffective or bad faith processes to protect the status of themselves or their own institution or profession.

Ahmed (2017: 103) describes comparable initiatives – such as to foster ‘equality and diversity’ – as a public relations exercise for organisations, rather than a sincere desire to do the work. Consequently, there is a gap ‘between a symbolic commitment and a lived reality’ (2017: 90). Indeed, the aim may be to project a sense of transformation to hinder that transformation (2017: 90), coupled with a tendency to use a ‘safe’ and non-confrontational language (‘diversity’) to project the sense that we can only push people so far, at the expense of terms such as ‘racism’ that would signal challenge, confrontation, and a commitment to high impact (2017: chapter 4).


Putting these insights together suggests that a stated commitment to co-produced research and policy might begin with good intentions. Even so, a commitment to sincere engagement does not guarantee an audience or prevent you from exacerbating the very problems you profess to solve.


Filed under 750 word policy analysis, Evidence Based Policymaking (EBPM), public policy, Research design, Storytelling