
Policy Analysis in 750 words: Deborah Stone (2012) Policy Paradox

Please see the Policy Analysis in 750 words series overview before reading the summary. This post is 750 words plus a bonus 750 words plus some further reading that doesn’t count in the word count even though it does.

[Image: cover of Stone, Policy Paradox, 3rd edition]

Deborah Stone (2012) Policy Paradox: The Art of Political Decision Making 3rd edition (Norton)

‘Whether you are a policy analyst, a policy researcher, a policy advocate, a policy maker, or an engaged citizen, my hope for Policy Paradox is that it helps you to go beyond your job description and the tasks you are given – to think hard about your own core values, to deliberate with others, and to make the world a better place’ (Stone, 2012: 15)

Stone (2012: 379-85) rejects the image of policy analysis as a ‘rationalist’ project, driven by scientific and technical rules, and separable from politics. Rather, every policy analyst’s choice is a political choice – to define a problem and solution and, in doing so, to choose how to categorise people and behaviour – backed by strategic persuasion and storytelling.

The Policy Paradox: people entertain multiple, contradictory, beliefs and aims

Stone (2012: 2-3) describes the ways in which policy actors compete to define policy problems and public policy responses. The ‘paradox’ is that it is possible to define the same policies in contradictory ways.

‘Paradoxes are nothing but trouble. They violate the most elementary principle of logic: something can’t be two different things at once. Two contradictory interpretations can’t both be true. A paradox is just such an impossible situation, and political life is full of them’ (Stone, 2012: 2).

This paradox does not refer simply to a competition between different actors to define policy problems and the success or failure of solutions. Rather:

  • The same actor can entertain very different ways to understand problems, and can juggle many criteria to decide that a policy outcome was a success and a failure (2012: 3).
  • Surveys of the same population can report contradictory views – encouraging a specific policy response and its complete opposite – when respondents are asked different questions in the same poll (2012: 4; compare with Riker).

Policy analysts: you don’t solve the Policy Paradox with a ‘rationality project’

Like many posts in this series (Smith, Bacchi, Hindess), Stone (2012: 9-11) rejects the misguided notion of objective scientists using scientific methods to produce one correct answer (compare with Spiegelhalter and Weimer & Vining). A policy paradox cannot be solved by ‘rational, analytical, and scientific methods’, for the reasons that follow.

Further, Stone (2012: 10-11) rejects the over-reliance, in policy analysis, on the misleading claim that:

  • policymakers are engaging primarily with markets rather than communities (see 2012: 35 on the comparison between a ‘market model’ and ‘polis model’),
  • economic models can sum up political life, and
  • cost-benefit-analysis can reduce a complex problem into the sum of individual preferences using a single unambiguous measure.

Rather, many factors undermine such simplicity:

  1. People do not simply act in their own individual interest. Nor can they rank-order their preferences in a straightforward manner according to their values and self-interest.
  • Instead, they maintain a contradictory mix of objectives, which can change according to context and their way of thinking – combining cognition and emotion – when processing information (2012: 12; 30-4).
  2. People are social actors. Politics is characterised by ‘a model of community where individuals live in a dense web of relationships, dependencies, and loyalties’ and exercise power with reference to ideas as much as material interests (2012: 10; 20-36; compare with Ostrom, more Ostrom, and Lubell; and see Sousa on contestation).
  3. Morals and emotions matter. If people juggle contradictory aims and measures of success, then a story infused with ‘metaphor and analogy’, and appealing to values and emotions, prompts people ‘to see a situation as one thing rather than another’ and therefore draw attention to one aim at the expense of the others (2012: 11; compare with Gigerenzer).

Policy analysis reconsidered: the ambiguity of values and policy goals

Stone (2012: 14) identifies the ambiguity of the criteria for success used in 5-step policy analyses. They do not form part of a solely technical or apolitical process to identify trade-offs between well-defined goals (compare Bardach, Weimer and Vining, and Mintrom). Rather, ‘behind every policy issue lurks a contest over conflicting, though equally plausible, conceptions of the same abstract goal or value’ (2012: 14). Examples of competing interpretations of valence issues include definitions of:

  1. Equity, according to: (a) which groups should be included, how to assess merit, how to identify key social groups, if we should rank populations within social groups, how to define need and account for different people placing different values on a good or service, (b) which method of distribution to use (competition, lottery, election), and (c) how to balance individual, communal, and state-based interventions (2012: 39-62).
  2. Efficiency, to use the least resources to produce the same objective, according to: (a) who determines the main goal and how to balance multiple objectives, (b) who benefits from such actions, and (c) how to define resources while balancing equity and efficiency – for example, do a public sector job and a social security payment represent a sunk cost to the state or a social investment in people? (2012: 63-84).
  3. Welfare or Need, according to factors including (a) the material and symbolic value of goods, (b) short-term support versus a long-term investment in people, (c) measures of absolute poverty or relative inequality, and (d) debates on ‘moral hazard’ or the effect of social security on individual motivation (2012: 85-106).
  4. Liberty, according to (a) a general balancing of freedom from coercion and freedom from the harm caused by others, (b) debates on individual and state responsibilities, and (c) decisions on whose behaviour to change to reduce harm to what populations (2012: 107-28).
  5. Security, according to (a) our ability to measure risk scientifically (see Spiegelhalter and Gigerenzer), (b) perceptions of threat and experiences of harm, (c) debates on how much risk to safety to tolerate before intervening, (d) who to target and imprison, and (e) the effect of surveillance on perceptions of democracy (2012: 129-53).

Policy analysis as storytelling for collective action

Actors use policy-relevant stories to influence the ways in which their audience understands (a) the nature of policy problems and feasibility of solutions, within (b) a wider context of policymaking in which people contest the proper balance between state, community, and market action. Stories can influence key aspects of collective action, including:

  1. Defining interests and mobilising actors, by drawing attention to – and framing – issues with reference to an imagined social group and its competition (e.g. the people versus the elite; the strivers versus the skivers) (2012: 229-47).
  2. Making decisions, by framing problems and solutions (2012: 248-68). Stone (2012: 260) contrasts the ‘rational-analytic model’ with real-world processes in which actors deliberately frame issues ambiguously, shift goals, keep feasible solutions off the agenda, and manipulate analyses to make their preferred solution seem the most efficient and popular.
  3. Defining the role and intended impact of policies, such as when balancing punishments versus incentives to change behaviour, or individual versus collective behaviour (2012: 271-88).
  4. Setting and enforcing rules (see institutions), in a complex policymaking system where a multiplicity of rules interact to produce uncertain outcomes, and a powerful narrative can draw attention to the need to enforce some rules at the expense of others (2012: 289-310).
  5. Persuasion, drawing on reason, facts, and indoctrination. Stone (2012: 311-30) highlights the context in which actors construct stories to persuade: people engage emotionally with information, people take certain situations for granted even though they produce unequal outcomes, facts are socially constructed, and there is unequal access to resources – held in particular by government and business – to gather and disseminate evidence.
  6. Defining human and legal rights, when (a) there are multiple, ambiguous, and intersecting rights (in relation to their source, enforcement, and the populations they serve), (b) actors compete to make sure that theirs are enforced, (c) inevitably at the expense of others, because the enforcement of rights requires a disproportionate share of limited resources (such as policymaker attention and court time) (2012: 331-53).
  7. Influencing debate on the powers of each potential policymaking venue – in relation to factors including (a) the legitimate role of the state in market, community, family, and individual life, (b) how to select leaders, (c) the distribution of power between levels and types of government – and who to hold to account for policy outcomes (2012: 354-77).

Key elements of storytelling include:

  1. Symbols, which sum up an issue or an action in a single picture or word (2012: 157-8)
  2. Characters, such as heroes or villains, who symbolise the cause of a problem or the source of a solution (2012: 159)
  3. Narrative arcs, such as a battle by your hero to overcome adversity (2012: 160-8)
  4. Synecdoche, to highlight one example of an alleged problem to sum up its whole (2012: 168-71; compare the ‘welfare queen’ example with SCPD)
  5. Metaphor, to create an association between a problem and something relatable, such as a virus or disease, a natural occurrence (e.g. earthquake), something broken, something about to burst if overburdened, or war (2012: 171-78; e.g. is crime a virus or a beast?)
  6. Ambiguity, to give people different reasons to support the same thing (2012: 178-82)
  7. Using numbers to tell a story, based on political choices about how to: categorise people and practices, select the measures to use, interpret the figures to evaluate or predict the results, project the sense that complex problems can be reduced to numbers, and assign authority to the counters (2012: 183-205; compare with Spiegelhalter) – see the toy example after this list
  8. Assigning Causation, in relation to categories including accidental or natural, ‘mechanical’ or automatic (or in relation to institutions or systems), and human-guided causes that have intended or unintended consequences (such as malicious intent versus recklessness)
  • ‘Causal strategies’ include: emphasising a natural rather than a human cause, relating the problem to ‘bad apples’ rather than systemic failure, and suggesting that the problem was too complex to anticipate or influence
  • Actors use these arguments to influence rules, assign blame, identify ‘fixers’, and generate alliances among victims or potential supporters of change (2012: 206-28).
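
To make the ‘using numbers’ point concrete, here is a toy sketch of how the choice of definition changes the count. The categories and data are entirely invented, in the spirit of Stone’s argument rather than any official measure:

```python
# Toy illustration of Stone's point about counting: the same invented survey
# data yields different "unemployment rates" depending on a political choice
# about whom to categorise as unemployed.
people = [
    {"working": False, "seeking_work": True,  "hours": 0},   # conventionally unemployed
    {"working": False, "seeking_work": False, "hours": 0},   # "discouraged" worker
    {"working": True,  "seeking_work": True,  "hours": 4},   # under-employed
    {"working": True,  "seeking_work": False, "hours": 40},  # full-time worker
]

# Definition A: unemployed = not working but actively seeking work
rate_a = sum(not p["working"] and p["seeking_work"] for p in people) / len(people)

# Definition B: also count discouraged and under-employed workers (under 10 hours)
rate_b = sum(p["hours"] < 10 for p in people) / len(people)

print(f"Definition A: {rate_a:.0%}")  # 25%
print(f"Definition B: {rate_b:.0%}")  # 75%
```

The data never changes; only the political choice of category does, and each definition tells a different story about the scale of the problem.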

Wider Context and Further Reading: 1. Policy analysis

This post connects to several other 750 Words posts, which suggest that facts don’t speak for themselves. Rather, effective analysis requires you to ‘tell your story’, in a concise way, tailored to your audience.

For example, consider two ways to establish cause and effect in policy analysis:

One is to conduct and review multiple randomised control trials.

Another is to use a story of a hero or a villain (perhaps to mobilise actors in an advocacy coalition).

  2. Evidence-based policymaking

Stone (2012: 10) argues that analysts who try to impose one worldview on policymaking will find that ‘politics looks messy, foolish, erratic, and inexplicable’. For analysts who are more open-minded, politics opens up possibilities for creativity and cooperation (2012: 10).

This point is directly applicable to the ‘politics of evidence based policymaking’. A common question to arise from this worldview is ‘why don’t policymakers listen to my evidence?’ and one answer is ‘you are asking the wrong question’.

  3. Policy theories highlight the value of stories (to policy analysts and academics)

Policy problems and solutions necessarily involve ambiguity:

  1. There are many ways to interpret problems, and we resolve such ambiguity by exercising power to attract attention to one way to frame a policy problem at the expense of others (in other words, not with reference to one superior way to establish knowledge).
  2. Policy is actually a collection of – often contradictory – policy instruments and institutions, interacting in complex systems or environments, to produce unclear messages and outcomes. As such, what we call ‘public policy’ (for the sake of simplicity) is subject to interpretation and manipulation as it is made and delivered, and we struggle to conceptualise and measure policy change. Indeed, it makes more sense to describe competing narratives of policy change.

[Image: Box 13.1 from Understanding Public Policy, 2nd edition]

  4. Policy theories and storytelling

People communicate meaning via stories. Stories help us turn (a) a complex world, which provides a potentially overwhelming amount of information, into (b) something manageable, by identifying its most relevant elements and guiding action (compare with Gigerenzer on heuristics).

The Narrative Policy Framework identifies the storytelling strategies of actors seeking to exploit other actors’ cognitive shortcuts, using a particular format – containing the setting, characters, plot, and moral – to focus on some beliefs over others, and reinforce someone’s beliefs enough to encourage them to act.
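
As a minimal sketch – the field names and example are my own illustrative choices, not the NPF’s coding scheme – here is how the four story elements might be stored when coding narratives for analysis:

```python
from dataclasses import dataclass

@dataclass
class PolicyNarrative:
    """Illustrative container for the four NPF story elements (my labels)."""
    setting: str        # the policy context the story takes as given
    characters: dict    # e.g. {"hero": ..., "villain": ..., "victim": ...}
    plot: str           # the arc connecting the characters to the problem
    moral: str          # the solution the story points its audience towards

# A hypothetical example, not drawn from the NPF literature:
example = PolicyNarrative(
    setting="rising household energy bills",
    characters={"villain": "profiteering suppliers", "victim": "billpayers"},
    plot="suppliers exploit a captive market while regulators look away",
    moral="cap retail prices",
)
```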

Compare with Tuckett and Nikolic on the stories that people tell to themselves.


Policy Analysis in 750 words: Using Statistics and Explaining Risk (Spiegelhalter and Gigerenzer)

Please see the Policy Analysis in 750 words series overview before reading the summary. This post is close to 750 words if you divide it by 2.

David Spiegelhalter (2018) The Art of Statistics: Learning from Data (Pelican, hardback)

Gerd Gigerenzer (2015) Risk Savvy (Penguin)

[Image: cover of Spiegelhalter, The Art of Statistics]

Policy analysis: the skilful consumption and communication of information

Some use the phrase ‘lies, damned lies, and statistics’ to suggest that people can manipulate the presentation of information to reinforce whatever case they want to make. Common examples include the highly selective sharing of data, and the use of misleading images to distort the size of an effect or strength of a relationship between ‘variables’ (when we try to find out if a change in one thing causes a change in another).

In that context, your first aim is to become a skilled consumer of information.

Or, you may be asked to gather and present data as part of your policy analysis, without seeking to mislead people (see Mintrom and compare with Riker).

Your second aim is to become an ethical and skilled communicator of information.

In each case, a good rule of thumb is to assume that the analysts who help policymakers learn how to consume and interpret evidence are more influential than the researchers who produce it.

Research and policy analysis are not so different

Although research is not identical to policy analysis, it highlights similar ambitions and issues. Indeed, Spiegelhalter’s (2018: 6-7) description of ‘using data to help us understand the world and make better judgements’ sounds like Mintrom, and the PPDAC approach – identify a problem, plan how to study it, collect and manage data, analyse, and draw/ communicate conclusions (2018: 14) – is not so different from the ‘steps’ to policy analysis that you will find in Bardach or Weimer and Vining.

PPDAC requires us to understand what people need to do to ‘turn the world into data’, such as to produce precise definitions of things and use observation and modelling to estimate their number or the likelihood of their occurrence (2018: 6-7).

More importantly, consider our inability to define things precisely – e.g. economic activity and unemployment, or wellbeing and happiness – and the need to accept that our estimates (a) come with often high levels of uncertainty, and (b) are ‘only the starting point to real understanding of the world’ (2018: 8-9).

In that context, the technical skill to gather and analyse information is necessary for research, while the skill to communicate findings is necessary to avoid misleading your audience.

The pitfalls of information communication

Spiegelhalter’s initial discussion highlights the great potential to mislead, via:

  1. deliberate manipulation,
  2. a poor grasp of statistics, and/ or
  3. insufficient appreciation of (a) your non-specialist audience’s potential reaction to (b) different ways to frame the same information (2018: 354-62), perhaps based on
  4. the unscientific belief that scientists are objective and can communicate the truth in a neutral way, rather than storytellers with imperfect data (2018: 68-9; 307; 338; 342-53).

Potentially influential communications include (2018: 19-38):

  1. The type of visual, with bar or line-based charts often more useful than pie charts (and dynamic often better than static – 2018: 71)
  2. The point at which you cut off the chart’s axis to downplay or accentuate the difference between results
  3. Framing the results positively (e.g. survival rate) versus negatively (e.g. death rate)
  4. Describing a higher relative risk (e.g. 18%) or absolute risk (e.g. from 6 in 100 to 7 in 100 cases) – see the sketch after this list
  5. Describing risk in relation to decimal places, percentages, or numbers out of 100
  6. Using the wrong way to describe an average (mode, median, or mean – 2018: 46)
  7. Using a language familiar to specialists but confusing to – and subject to misinterpretation by – non-specialists (e.g. odds ratios)
  8. Translating numbers into words (e.g. what does ‘very likely’ mean?) to describe probability (2018: 320).
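
A minimal sketch of points 4 and 5, using the illustrative 6-in-100 and 7-in-100 figures above (note that (7 − 6)/6 rounds to 17%, close to the 18% headline figure):

```python
baseline_risk = 6 / 100   # illustrative: 6 cases per 100 people without the exposure
exposed_risk = 7 / 100    # illustrative: 7 cases per 100 people with the exposure

relative_increase = (exposed_risk - baseline_risk) / baseline_risk  # ~0.17
absolute_increase = exposed_risk - baseline_risk                    # 0.01

print(f"Relative framing:   risk rises by {relative_increase:.0%}")   # sounds alarming
print(f"Absolute framing:   risk rises by {absolute_increase:.1%}")   # 1 extra case per 100
print("Natural frequencies: 6 in 100 people become 7 in 100 people")  # most transparent
```

The same change in risk, framed three ways, invites three very different reactions.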

These problems with the supply of information combine with the ways that citizens and policymakers consume it.

People use cognitive shortcuts, such as emotions and heuristics, to process information (see p60 of Understanding Public Policy, reproduced below).

These shortcuts can make people vulnerable to framing and manipulation, and prompt them to change their behaviour after misinterpreting evidence in relation to risk: e.g. eating certain foods (2018: 33), anticipating the weather, taking medicines, or refusing to fly after a vivid event (Gigerenzer, 2015: 2-13).

[Image: p60 of Understanding Public Policy, 2nd edition, on heuristics]

Dealing with scientific uncertainty

Communication is important, but the underlying problem may be actual scientific uncertainty about the ability of our data to give us accurate knowledge of the world, such as when:

  1. We use a survey of a sample population, in the hope that (a) respondents provide accurate answers, and (b) their responses provide a representative picture of the population we seek to understand. In such cases, professional standards and practices exist to minimise, but not remove biases associated with questions and sampling (2018: 74).
  2. Some people ignore (and other people underestimate) the ‘margin of error’ in surveys, even though it can be larger than the reported change in the data (2018: 189-92; 247) – see the sketch after this list.
  3. Alternatives to surveys have major unintended consequences, such as when government statistics are collected unsystematically or otherwise misrepresent outcomes (2018: 84-5).
  4. ‘Correlation does not equal causation’ (see also The Book of Why).
  • The cause of an association between two things could be either of those things, or another thing (2018: 95-9; 110-15).
  • It is usually prohibitively expensive to conduct and analyse research – such as multiple ‘randomised control trials’ to establish cause and effect in the same ways as medicines trials (2018: 104) – to minimise doubt.
  • Further, our complex and uncontrolled world is not as conducive to the experimental trials of social and economic policies.
  5. The misleading appearance of a short-term trend often relates to ‘chance variation’ rather than a long-term trend (e.g. in PISA education tables or murder rates – 2018: 131; 249).
  6. The algorithms used to process huge amounts of data may contain unhelpful rules and misplaced assumptions that bias the results, and this problem is worse if the rules are kept secret (2018: 178-87).
  7. Calculating the probability of events is difficult to do, to agree how to do, and to understand (2018: 216-20; 226; 239; 304-7).
  8. The likelihood of identifying ‘false positive’ results in research is high (2018: 278-80). Note the comparison to finding someone guilty when innocent, or innocent when guilty (2018: 284 and compare with Gigerenzer, 2015: 33-7; 161-8). However, the professional incentive to minimise these outcomes or admit the research’s limitations is low (2018: 278; 287; 294-302).
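
A minimal sketch of point 2, using the standard formula for a proportion’s 95% margin of error and invented poll numbers:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Invented example: a poll of 1,000 people reports 40% support.
moe = margin_of_error(0.40, 1000)
print(f"Margin of error: +/- {moe:.1%}")  # about +/- 3 percentage points

# So a reported month-on-month 'change' of 2 points sits well inside the noise.
```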

Developing statistical and risk ‘literacy’

In that context, Spiegelhalter (2018: 369-71) summarises key ways to consume data effectively, asking: how rigorous is the study, how much uncertainty remains, whether the measures are chosen and communicated well, whether you can trust the source to do the work well and not spin the results, whether the claim fits with other evidence and has a good explanation, and whether the effect is important and relevant to key populations. However:

  1. Many such texts describe how they would like the world to work, and give advice to people to help foster that world. The logical conclusion is that this world does not exist, and most people do not have the training, or use these tips, to describe or consume statistics and their implications well.
  2. Policymaking is about making choices, often under immense time and political pressure, in the face of uncertainty versus ambiguity, and despite our inability to understand policy problems or the likely impact of solutions.

I’m not suggesting that, as a result, you should go full Riker. Rather, as with most of the posts in this series, reflect on how you would act – and expect others to act – during the (very long/ not very likely) transition from your world to this better one. What if your collective task is to make just enough sense of the available information, and your options, to make good enough choices?

[Image: cover of Gigerenzer, Risk Savvy]

In that context, Gigerenzer (2015) identifies the steps we can take to become ‘risk savvy’.

Begin by rejecting (a) the psychological drive to seek the ‘safety blanket’ of certainty (and avoid the ‘fear of doing something wrong and being blamed’), which causes people to (b) place too much faith in necessarily-flawed technologies or tests to reduce uncertainty, instead of (c) learning some basic tools to assess risk while accepting the inevitability of uncertainty and ambiguity (2015: 18-20; 43; 32-40).

Then, employ simple ‘mind tools’ to assess and communicate risk in each case:

  1. Communicate risk using appropriate visuals, categories, and descriptions (e.g. with reference to absolute risk and ‘natural frequencies’, expressed as a proportion of 100 rather than a % or decimal), and be sceptical if others do not (2015: 25-7; 168)
  • E.g. do not confuse calculations of risk based on (a) known frequencies (such as the coin toss) and (b) unknown frequencies (such as outcomes of complex systems) (2015: 21-6).
  • E.g. be aware of the difference between (a) the accuracy of a test for a problem (its ability to minimise false positive/ negative results), (b) the likelihood that you have the problem it is testing for, and (c) the extent to which you will benefit from an intervention for that problem (2015: 33-7; 161-8; 194-7) – see the worked sketch after this list
  2. Use heuristics that are shown to be efficient and reliable in particular situations.
  • rely frequently on ‘gut feeling’ (‘a judgment 1. that appears quickly in consciousness, 2. whose underlying reasons we are not fully aware of, yet 3. it is strong enough to act upon’)
  • accept the counterintuitive sense that ‘ignoring information can lead to better, faster, and safer decisions’ (2015: 30-1)
  • equate intuition with ‘unconscious intelligence based on personal experience and smart rules of thumb. You need both intuition and reasoning to be rational’ (2015: 123-4)
  • find efficient ways to trust in other people and practices (2015: 99-103)
  • ‘satisfice’ (choose the first option that satisfies an adequate threshold, rather than consider every option) (2015: 148-9)
  3. Value ‘good errors’ that allow us to learn efficiently (via ‘trial and error’) (2015: 47-51)
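
To make the test-accuracy point concrete, here is a natural-frequencies sketch in the spirit of Gigerenzer’s screening examples; the prevalence and error rates are my own illustrative assumptions, not his figures:

```python
population = 1000            # imagine 1,000 people screened
prevalence = 0.01            # 10 of them actually have the condition
sensitivity = 0.90           # the test catches 9 of those 10
false_positive_rate = 0.09   # but ~89 of the 990 healthy people also test positive

have_it = population * prevalence                                 # 10
true_positives = have_it * sensitivity                            # 9
false_positives = (population - have_it) * false_positive_rate    # 89.1

# The question a patient actually cares about: "I tested positive - do I have it?"
p_condition_given_positive = true_positives / (true_positives + false_positives)
print(f"Chance of the condition, given a positive test: {p_condition_given_positive:.0%}")
# ~9%: a test that is '90% accurate' does not imply a 90% chance that you are ill.
```

Expressing the same calculation as ‘9 out of the 98 people who test positive actually have the condition’ is far easier for non-specialists to grasp than conditional probabilities.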

Wait a minute

I like these Gigerenzer-style messages a lot and, on reflection, I seem to make most of my choices using my gut and trial-and-error (and I only electrocuted myself that one time; I’m told that I barked).

Some of his examples – e.g. ask if a hospital uses airline-style checklists (2015: 53; see also Radin’s checklist), or ask your doctor how they would treat their relative, not yours (2015: 63) – are intuitively appealing. The explainers on risk are profoundly important.

However, note that Gigerenzer devotes a lot of his book to describing the defensive nature of sectors such as business, medicine, and government, linked strongly to the absence of the right ‘culture’ to allow learning through error.

Trial and error is a big feature in complexity theory, and in Lindblom’s incrementalism, but also be aware that you are surrounded by people whose heuristic may be ‘make sure you don’t get the blame’ or ‘procedure over performance’ (2015: 65). To recommend trial-and-error policy analysis may be a hard sell.

Further reading

This post is part of the Policy Analysis in 750 words series

The 500 and 1000 words series describe how people act under conditions of bounded rationality and policymaking complexity

Winners and losers: communicating the potential impacts of policies (by Cameron Brick, Alexandra Freeman, Steven Wooding, William Skylark, Theresa Marteau & David Spiegelhalter)

See Policy in 500 Words: Social Construction and Policy Design and ask yourself if Gigerenzer’s (2015: 69) ‘fear whatever your social group fears’ is OK when you are running from a lion, but not if you are cooperating with many target populations.

The study of punctuated equilibrium theory is particularly relevant, since its results reject the sense that policy change follows a ‘normal distribution’. See the chart below (from Theories of the Policy Process 2; also found in 5 Images of the Policy Process) and visit the Comparative Agendas Project to see how they gather the data.

[Image: Figure 6.2 from True et al., in Theories of the Policy Process]


Policy Analysis in 750 words: Barry Hindess (1977) Philosophy and Methodology in the Social Sciences

Please see the Policy Analysis in 750 words series overview before reading the summary. This post started off as 750 words before growing.


Barry Hindess (1977) Philosophy and Methodology in the Social Sciences (Harvester)

‘If the claims of philosophy to a special kind of knowledge can be shown to be without foundation, if they are at best dogmatic or else incoherent, then methodology is an empty and futile pursuit and its prescriptions are vacuous’ (Hindess, 1977: 4).

This book may seem like a weird addition to a series on policy analysis.

However, it follows the path set by Carol Bacchi, asking whose interests we serve when we frame problems for policy analysis, and Linda Tuhiwai Smith, asking whose research counts when we do so.

One important answer is that the status of research and the framing of the problem result from the exercise of power, rather than the objectivity of analysts and natural superiority of some forms of knowledge.

In other posts on ‘the politics of evidence based policymaking’, I describe some frustrations among many scientists that their views on a hierarchy of knowledge based on superior methods are not shared by many policymakers.  These posts can satisfy different audiences: if you have a narrow view of what counts as good evidence, you can focus on the barriers between evidence and policy; if you have a broader view, you can wonder why those barriers seem higher for other forms of knowledge (e.g. Linda Tuhiwai Smith on the marginalisation of indigenous knowledge).

In this post, I encourage you to go a bit further down this path by asking how people accumulate knowledge in the first place.  For example, see introductory accounts by Chalmers, entertaining debates involving Feyerabend, and Hindess’ book to explore your assumptions about how we know what we know.

My take-home point from these texts is that we are only really able to describe convincingly the argument that we are not accumulating knowledge!

The simple insight from Chalmers’ introduction is that inductive (observational) methods to generate knowledge are circular:

  • we engage inductively to produce theory (to generalise from individual cases), but
  • we use theory to engage in any induction, such as to decide what is important to study, and what observations are relevant/irrelevant, and why.

In other words, we need theories of the world to identify the small number of things to observe (to allow us to filter an almost unlimited number of signals from our environments), but we need our observations to generate those theories!

Hindess shows that all claims to knowledge involve such circularity: we employ philosophy to identify the nature of the world (ontology) and how humans can generate valid knowledge of it (epistemology) to inform methodology, to state that scientific knowledge is only valid if it lives up to a prescribed method, then argue that the scientific knowledge validates the methodology and its underlying philosophy (1977: 3-22). If so, we are describing something that makes sense according to the rules and practices of its proponents, not an objective scientific method to help us accumulate knowledge.

Further, different social/ professional groups support different forms of working knowledge that they value for different reasons (such as to establish ‘reliability’ or ‘meaning’). To do so, they invent frameworks to help them theorise the world, such as to describe the relationship between concepts (and key concepts such as cause and effect). These frameworks represent a useful language to communicate about our world rather than simply existing independently of it and corresponding to it.

Hindess’ subsequent work explored the context in which we exercise power to establish the status of some forms of knowledge over others, to pursue political ends rather than simply the ‘objective’ goals of science. As described, it is as relevant now as it was then.

How do these ideas inform policy analysis?

Perhaps, by this stage, you are thinking: isn’t this a relativist argument, concluding that we should never assert the relative value of some forms of knowledge over others (like astronomy versus astrology)?

I don’t think so. Rather, it invites us to do two more sensible things:

  1. Accept that different approaches to knowledge may be ‘incommensurable’.
  • They may not share ‘a common set of perceptions’ (or even a set of comparable questions) ‘which would allow scientists to choose between one paradigm and the other . . . there will be disputes between them that cannot all be settled by an appeal to the facts’ (Hindess, 1988: 74)
  • If so, “there is no possibility of an extratheoretical court of appeal which can ‘validate’ the claims of one position against those of another” (Hindess, 1977: 226).
  2. Reject the sense of self-importance, and hubris, which often seems to accompany discussions of superior forms of knowledge. Don’t be dogmatic. Live by the maxim ‘don’t be an arse’. Reflect on the production, purpose, value, and limitations of our knowledge in different contexts (which Spiegelhalter does well).

On that basis, we can have honest discussions about why we should exercise power in a political system to favour some forms of knowledge over others in policy analysis, reflecting on:

  1. The relatively straightforward issue of internal consistency: is an approach coherent, and does it succeed on its own terms?
  • For example, do its users share a clear language, pursue consistent aims with systematic methods, find ways to compare and reinforce the value of each other’s findings, while contributing to a thriving research agenda (as discussed in box 13.3 below)?
  • Or, do they express their aims in other ways, such as to connect research to emancipation, or value respect for a community over the scientific study of that community?
  2. The not straightforward issue of overall consistency: how can we compare different forms of knowledge when they do not follow each other’s rules or standards?
  • E.g. what if one approach is (said to be) more rigorous and the other more coherent?
  • E.g. what if one produces more data but another produces more ownership?

In each case, the choice of criteria for comparison involves political choice (as part of a series of political choices), without the ability – described in relation to ‘cost benefit analysis’ – to translate all relevant factors into a single unit.

  3. The imperative to ‘synthesise’ knowledge.

Spiegelhalter provides a convincing description of the benefits of systematic review and ‘meta-analysis’ within a single, clearly defined, scientific approach containing high agreement on methods and standards for comparison.

However, this approach is not applicable directly to the review of multiple forms of knowledge.

So, what do people do?

  • E.g. some systematic reviewers apply the standards of their own field to all others, which (a) tends to produce the argument that very little high quality evidence exists because other people are doing it wrongly, and (b) perhaps exacerbates a tendency for policymakers to attach relatively low value to such evaluations.
  • E.g. policy analysts are more likely to apply different criteria: is it available, understandable, ‘usable’, and policy relevant (e.g. see ‘knowledge management for policy’)?

Each approach is a political choice to include/ exclude certain forms of knowledge according to professional norms or policymaking imperatives, not a technical process to identify the most objective information. If you are going to do it, you should at least be aware of what you are doing.

[Image: Box 13.3 from Understanding Public Policy, 2nd edition]
