
Policy Analysis in 750 words: Deborah Stone (2012) Policy Paradox

Please see the Policy Analysis in 750 words series overview before reading the summary. This post is 750 words plus a bonus 750 words plus some further reading that doesn’t count in the word count even though it does.

Stone policy paradox 3rd ed cover

Deborah Stone (2012) Policy Paradox: The Art of Political Decision Making 3rd edition (Norton)

‘Whether you are a policy analyst, a policy researcher, a policy advocate, a policy maker, or an engaged citizen, my hope for Policy Paradox is that it helps you to go beyond your job description and the tasks you are given – to think hard about your own core values, to deliberate with others, and to make the world a better place’ (Stone, 2012: 15)

Stone (2012: 379-85) rejects the image of policy analysis as a ‘rationalist’ project, driven by scientific and technical rules, and separable from politics. Rather, every policy analyst’s choice is a political choice – to define a problem and solution, and in doing so choosing how to categorise people and behaviour – backed by strategic persuasion and storytelling.

The Policy Paradox: people entertain multiple, contradictory, beliefs and aims

Stone (2012: 2-3) describes the ways in which policy actors compete to define policy problems and public policy responses. The ‘paradox’ is that it is possible to define the same policies in contradictory ways.

‘Paradoxes are nothing but trouble. They violate the most elementary principle of logic: something can’t be two different things at once. Two contradictory interpretations can’t both be true. A paradox is just such an impossible situation, and political life is full of them’ (Stone, 2012: 2).

This paradox does not refer simply to a competition between different actors to define policy problems and the success or failure of solutions. Rather:

  • The same actor can entertain very different ways to understand problems, and can juggle many criteria to decide that a policy outcome was a success and a failure (2012: 3).
  • Surveys of the same population can report contradictory views – encouraging a specific policy response and its complete opposite – when asked different questions in the same poll (2012: 4; compare with Riker)

Policy analysts: you don’t solve the Policy Paradox with a ‘rationality project’

Like many posts in this series (Smith, Bacchi, Hindess), Stone (2012: 9-11) rejects the misguided notion of objective scientists using scientific methods to produce one correct answer (compare with Spiegelhalter and Weimer & Vining). A policy paradox cannot be solved by ‘rational, analytical, and scientific methods’ alone.

Further, Stone (2012: 10-11) rejects the over-reliance, in policy analysis, on the misleading claim that:

  • policymakers are engaging primarily with markets rather than communities (see 2012: 35 on the comparison between a ‘market model’ and ‘polis model’),
  • economic models can sum up political life, and
  • cost-benefit-analysis can reduce a complex problem into the sum of individual preferences using a single unambiguous measure.

Rather, many factors undermine such simplicity:

  1. People do not simply act in their own individual interest. Nor can they rank-order their preferences in a straightforward manner according to their values and self-interest.
  • Instead, they maintain a contradictory mix of objectives, which can change according to context and their way of thinking – combining cognition and emotion – when processing information (2012: 12; 30-4).
  2. People are social actors. Politics is characterised by ‘a model of community where individuals live in a dense web of relationships, dependencies, and loyalties’ and exercise power with reference to ideas as much as material interests (2012: 10; 20-36; compare with Ostrom, more Ostrom, and Lubell; and see Sousa on contestation).
  3. Morals and emotions matter. If people juggle contradictory aims and measures of success, then a story infused with ‘metaphor and analogy’, and appealing to values and emotions, prompts people ‘to see a situation as one thing rather than another’ and therefore draw attention to one aim at the expense of the others (2012: 11; compare with Gigerenzer).

Policy analysis reconsidered: the ambiguity of values and policy goals

Stone (2012: 14) identifies the ambiguity of the criteria for success used in 5-step policy analyses. They do not form part of a solely technical or apolitical process to identify trade-offs between well-defined goals (compare Bardach, Weimer and Vining, and Mintrom). Rather, ‘behind every policy issue lurks a contest over conflicting, though equally plausible, conceptions of the same abstract goal or value’ (2012: 14). Examples of competing interpretations of valence issues include definitions of:

  1. Equity, according to: (a) which groups should be included, how to assess merit, how to identify key social groups, if we should rank populations within social groups, how to define need and account for different people placing different values on a good or service, (b) which method of distribution to use (competition, lottery, election), and (c) how to balance individual, communal, and state-based interventions (2012: 39-62).
  2. Efficiency, to use the least resources to produce the same objective, according to: (a) who determines the main goal and how to balance multiple objectives, (b) who benefits from such actions, and (c) how to define resources while balancing equity and efficiency – for example, do a public sector job and a social security payment represent a sunk cost to the state or a social investment in people? (2012: 63-84).
  3. Welfare or Need, according to factors including (a) the material and symbolic value of goods, (b) short term support versus a long term investment in people, (c) measures of absolute poverty or relative inequality, and (d) debates on ‘moral hazard’ or the effect of social security on individual motivation (2012: 85-106).
  4. Liberty, according to (a) a general balancing of freedom from coercion and freedom from the harm caused by others, (b) debates on individual and state responsibilities, and (c) decisions on whose behaviour to change to reduce harm to what populations (2012: 107-28).
  5. Security, according to (a) our ability to measure risk scientifically (see Spiegelhalter and Gigerenzer), (b) perceptions of threat and experiences of harm, (c) debates on how much risk to safety to tolerate before intervening, (d) who to target and imprison, and (e) the effect of surveillance on perceptions of democracy (2012: 129-53).

Policy analysis as storytelling for collective action

Actors use policy-relevant stories to influence the ways in which their audience understands (a) the nature of policy problems and feasibility of solutions, within (b) a wider context of policymaking in which people contest the proper balance between state, community, and market action. Stories can influence key aspects of collective action, including:

  1. Defining interests and mobilising actors, by drawing attention to – and framing – issues with reference to an imagined social group and its competition (e.g. the people versus the elite; the strivers versus the skivers) (2012: 229-47)
  2. Making decisions, by framing problems and solutions (2012: 248-68). Stone (2012: 260) contrasts the ‘rational-analytic model’ with real-world processes in which actors deliberately frame issues ambiguously, shift goals, keep feasible solutions off the agenda, and manipulate analyses to make their preferred solution seem the most efficient and popular.
  3. Defining the role and intended impact of policies, such as when balancing punishments versus incentives to change behaviour, or individual versus collective behaviour (2012: 271-88).
  4. Setting and enforcing rules (see institutions), in a complex policymaking system where a multiplicity of rules interact to produce uncertain outcomes, and a powerful narrative can draw attention to the need to enforce some rules at the expense of others (2012: 289-310).
  5. Persuasion, drawing on reason, facts, and indoctrination. Stone (2012: 311-30) highlights the context in which actors construct stories to persuade: people engage emotionally with information, people take certain situations for granted even though they produce unequal outcomes, facts are socially constructed, and there is unequal access to resources – held in particular by government and business – to gather and disseminate evidence.
  6. Defining human and legal rights, when (a) there are multiple, ambiguous, and intersecting rights (in relation to their source, enforcement, and the populations they serve), (b) actors compete to make sure that theirs are enforced, (c) inevitably at the expense of others, because the enforcement of rights requires a disproportionate share of limited resources (such as policymaker attention and court time) (2012: 331-53).
  7. Influencing debate on the powers of each potential policymaking venue – in relation to factors including (a) the legitimate role of the state in market, community, family, and individual life, (b) how to select leaders, (c) the distribution of power between levels and types of government – and who to hold to account for policy outcomes (2012: 354-77).

Key elements of storytelling include:

  1. Symbols, which sum up an issue or an action in a single picture or word (2012: 157-8)
  2. Characters, such as heroes or villains, who symbolise the cause of a problem or source of solution (2012: 159)
  3. Narrative arcs, such as a battle by your hero to overcome adversity (2012: 160-8)
  4. Synecdoche, to highlight one example of an alleged problem to sum up its whole (2012: 168-71; compare the ‘welfare queen’ example with SCPD)
  5. Metaphor, to create an association between a problem and something relatable, such as a virus or disease, a natural occurrence (e.g. earthquake), something broken, something about to burst if overburdened, or war (2012: 171-78; e.g. is crime a virus or a beast?)
  6. Ambiguity, to give people different reasons to support the same thing (2012: 178-82)
  7. Using numbers to tell a story, based on political choices about how to: categorise people and practices, select the measures to use, interpret the figures to evaluate or predict the results, project the sense that complex problems can be reduced to numbers, and assign authority to the counters (2012: 183-205; compare with Spiegelhalter)
  8. Assigning Causation, in relation to categories including accidental or natural, ‘mechanical’ or automatic (or in relation to institutions or systems), and human-guided causes that have intended or unintended consequences (such as malicious intent versus recklessness)
  • ‘Causal strategies’ include to: emphasise a natural versus human cause, relate it to ‘bad apples’ rather than systemic failure, and suggest that the problem was too complex to anticipate or influence
  • Actors use these arguments to influence rules, assign blame, identify ‘fixers’, and generate alliances among victims or potential supporters of change (2012: 206-28).
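Point 7 above (using numbers to tell a story) can be made concrete with a small sketch. The incomes and thresholds below are entirely hypothetical, but they illustrate Stone’s claim that the choice of category and measure is a political choice: the same data support very different ‘poverty’ stories.

```python
# Hypothetical annual incomes (in £000s) for ten people
incomes = [12, 14, 15, 16, 18, 22, 30, 45, 60, 90]

def poverty_rate(incomes, line):
    """Share of people below an absolute poverty line."""
    return sum(1 for x in incomes if x < line) / len(incomes)

def relative_poverty_rate(incomes, fraction=0.6):
    """Share of people below a fraction of median income (a relative measure)."""
    ordered = sorted(incomes)
    n = len(ordered)
    median = (ordered[n // 2 - 1] + ordered[n // 2]) / 2 if n % 2 == 0 else ordered[n // 2]
    return poverty_rate(incomes, fraction * median)

print(poverty_rate(incomes, 15))       # absolute line at £15k -> 0.2
print(poverty_rate(incomes, 17))       # absolute line at £17k -> 0.4
print(relative_poverty_rate(incomes))  # 60% of median (£12k) -> 0.0
```

Three defensible measurement choices yield a poverty rate of 20%, 40%, or zero, so whoever chooses the measure chooses the story the numbers tell.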

Wider Context and Further Reading: 1. Policy analysis

This post connects to several other 750 Words posts, which suggest that facts don’t speak for themselves. Rather, effective analysis requires you to ‘tell your story’, in a concise way, tailored to your audience.

For example, consider two ways to establish cause and effect in policy analysis:

One is to conduct and review multiple randomised controlled trials.

Another is to use a story of a hero or a villain (perhaps to mobilise actors in an advocacy coalition).

  2. Evidence-based policymaking

Stone (2012: 10) argues that analysts who try to impose one worldview on policymaking will find that ‘politics looks messy, foolish, erratic, and inexplicable’. For more open-minded analysts, politics opens up possibilities for creativity and cooperation (2012: 10).

This point is directly applicable to the ‘politics of evidence based policymaking’. A common question to arise from this worldview is ‘why don’t policymakers listen to my evidence?’ and one answer is ‘you are asking the wrong question’.

  3. Policy theories highlight the value of stories (to policy analysts and academics)

Policy problems and solutions necessarily involve ambiguity:

  1. There are many ways to interpret problems, and we resolve such ambiguity by exercising power to attract attention to one way to frame a policy problem at the expense of others (in other words, not with reference to one superior way to establish knowledge).
  2. Policy is actually a collection of – often contradictory – policy instruments and institutions, interacting in complex systems or environments, to produce unclear messages and outcomes. As such, what we call ‘public policy’ (for the sake of simplicity) is subject to interpretation and manipulation as it is made and delivered, and we struggle to conceptualise and measure policy change. Indeed, it makes more sense to describe competing narratives of policy change.

box 13.1 2nd ed UPP

  4. Policy theories and storytelling

People communicate meaning via stories. Stories help us turn (a) a complex world, which provides a potentially overwhelming amount of information, into (b) something manageable, by identifying its most relevant elements and guiding action (compare with Gigerenzer on heuristics).

The Narrative Policy Framework identifies the storytelling strategies of actors seeking to exploit other actors’ cognitive shortcuts, using a particular format – containing the setting, characters, plot, and moral – to focus on some beliefs over others, and reinforce someone’s beliefs enough to encourage them to act.

Compare with Tuckett and Nicolic on the stories that people tell to themselves.

 

 


Policy Analysis in 750 words: Barry Hindess (1977) Philosophy and Methodology in the Social Sciences

Please see the Policy Analysis in 750 words series overview before reading the summary. This post started off as 750 words before growing.


Barry Hindess (1977) Philosophy and Methodology in the Social Sciences (Harvester)

‘If the claims of philosophy to a special kind of knowledge can be shown to be without foundation, if they are at best dogmatic or else incoherent, then methodology is an empty and futile pursuit and its prescriptions are vacuous’ (Hindess, 1977: 4).

This book may seem like a weird addition to a series on policy analysis.

However, it follows the path set by Carol Bacchi, asking whose interests we serve when we frame problems for policy analysis, and Linda Tuhiwai Smith, asking whose research counts when we do so.

One important answer is that the status of research and the framing of the problem result from the exercise of power, rather than the objectivity of analysts and natural superiority of some forms of knowledge.

In other posts on ‘the politics of evidence based policymaking’, I describe some frustrations among many scientists that their views on a hierarchy of knowledge based on superior methods are not shared by many policymakers.  These posts can satisfy different audiences: if you have a narrow view of what counts as good evidence, you can focus on the barriers between evidence and policy; if you have a broader view, you can wonder why those barriers seem higher for other forms of knowledge (e.g. Linda Tuhiwai Smith on the marginalisation of indigenous knowledge).

In this post, I encourage you to go a bit further down this path by asking how people accumulate knowledge in the first place.  For example, see introductory accounts by Chalmers, entertaining debates involving Feyerabend, and Hindess’ book to explore your assumptions about how we know what we know.

My take-home point from these texts is that we are only really able to describe convincingly the argument that we are not accumulating knowledge!

The simple insight from Chalmers’ introduction is that inductive (observational) methods to generate knowledge are circular:

  • we engage inductively to produce theory (to generalise from individual cases), but
  • we use theory to engage in any induction, such as to decide what is important to study, and what observations are relevant/irrelevant, and why.

In other words, we need theories of the world to identify the small number of things to observe (to allow us to filter out an almost unlimited amount of signals from our environments), but we need our observations to generate those theories!

Hindess shows that all claims to knowledge involve such circularity: we employ philosophy to identify the nature of the world (ontology) and how humans can generate valid knowledge of it (epistemology) to inform methodology, to state that scientific knowledge is only valid if it lives up to a prescribed method, then argue that the scientific knowledge validates the methodology and its underlying philosophy (1977: 3-22). If so, we are describing something that makes sense according to the rules and practices of its proponents, not an objective scientific method to help us accumulate knowledge.

Further, different social/ professional groups support different forms of working knowledge that they value for different reasons (such as to establish ‘reliability’ or ‘meaning’). To do so, they invent frameworks to help them theorise the world, such as to describe the relationship between concepts (and key concepts such as cause and effect). These frameworks represent a useful language to communicate about our world rather than simply existing independently of it and corresponding to it.

Hindess’ subsequent work explored the context in which we exercise power to establish the status of some forms of knowledge over others, to pursue political ends rather than simply the ‘objective’ goals of science. As described, it is as relevant now as it was then.

How do these ideas inform policy analysis?

Perhaps, by this stage, you are thinking: isn’t this a relativist argument, concluding that we should never assert the relative value of some forms of knowledge over others (like astronomy versus astrology)?

I don’t think so. Rather, it invites us to do two more sensible things:

  1. Accept that different approaches to knowledge may be ‘incommensurable’.
  • They may not share ‘a common set of perceptions’ (or even a set of comparable questions) ‘which would allow scientists to choose between one paradigm and the other . . . there will be disputes between them that cannot all be settled by an appeal to the facts’ (Hindess, 1988: 74)
  • If so, “there is no possibility of an extratheoretical court of appeal which can ‘validate’ the claims of one position against those of another” (Hindess, 1977: 226).
  2. Reject the sense of self-importance, and hubris, which often seems to accompany discussions of superior forms of knowledge. Don’t be dogmatic. Live by the maxim ‘don’t be an arse’. Reflect on the production, purpose, value, and limitations of our knowledge in different contexts (which Spiegelhalter does well).

On that basis, we can have honest discussions about why we should exercise power in a political system to favour some forms of knowledge over others in policy analysis, reflecting on:

  1. The relatively straightforward issue of internal consistency: is an approach coherent, and does it succeed on its own terms?
  • For example, do its users share a clear language, pursue consistent aims with systematic methods, find ways to compare and reinforce the value of each other’s findings, while contributing to a thriving research agenda (as discussed in box 13.3 below)?
  • Or, do they express their aims in other ways, such as to connect research to emancipation, or value respect for a community over the scientific study of that community?
  2. The not straightforward issue of overall consistency: how can we compare different forms of knowledge when they do not follow each other’s rules or standards?
  • e.g. what if one approach is (said to be) more rigorous and the other more coherent?
  • e.g. what if one produces more data but another produces more ownership?

In each case, the choice of criteria for comparison involves political choice (as part of a series of political choices), without the ability – described in relation to ‘cost benefit analysis’ – to translate all relevant factors into a single unit.

  3. The imperative to ‘synthesise’ knowledge.

Spiegelhalter provides a convincing description of the benefits of systematic review and ‘meta-analysis’ within a single, clearly defined, scientific approach containing high agreement on methods and standards for comparison.
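As an illustration of how tightly standardised that kind of synthesis is, here is a minimal sketch of textbook fixed-effect (inverse-variance) pooling, the simplest form of meta-analysis. The effect sizes and standard errors are hypothetical, and nothing here is drawn from Spiegelhalter’s own examples:

```python
import math

def fixed_effect_pool(effects, std_errors):
    """Inverse-variance weighted average of study effect sizes.

    Each study is weighted by 1/SE^2, so precise studies dominate;
    this only makes sense when all studies measure the same thing
    in the same units, with agreed standards of quality.
    """
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical mean differences and standard errors from three trials
effects = [0.30, 0.10, 0.25]
std_errors = [0.10, 0.05, 0.15]

estimate, se = fixed_effect_pool(effects, std_errors)
print(round(estimate, 3), round(se, 3))  # prints: 0.149 0.043
```

The method works precisely because a single scientific community has already agreed what counts as an ‘effect’ and how to measure it, which is the agreement missing when comparing multiple forms of knowledge.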

However, this approach is not applicable directly to the review of multiple forms of knowledge.

So, what do people do?

  • E.g. some systematic reviewers apply the standards of their own field to all others, which (a) tends to produce the argument that very little high quality evidence exists because other people are doing it wrongly, and (b) perhaps exacerbates a tendency for policymakers to attach relatively low value to such evaluations.
  • E.g. policy analysts are more likely to apply different criteria: is it available, understandable, ‘usable’, and policy relevant (e.g. see ‘knowledge management for policy’)?

Each approach is a political choice to include/ exclude certain forms of knowledge according to professional norms or policymaking imperatives, not a technical process to identify the most objective information. If you are going to do it, you should at least be aware of what you are doing.

box 13.3 2nd ed UPP


Policy Analysis in 750 words: Linda Tuhiwai Smith (2012) Decolonizing Methodologies

Please see the  Policy Analysis in 750 words series overview before reading the summary. The reference to 750 words is increasingly misleading.

Linda Tuhiwai Smith (2012) Decolonizing Methodologies 2nd edition (London: Zed Books)

‘Whose research is it? Who owns it? Whose interests does it serve? Who will benefit from it? Who has designed its questions and framed its scope? Who will carry it out? Who will write it up? How will its results be disseminated?’ (Smith, 2012: 10; see also 174-7)

Many texts in this series highlight the politics of policy analysis, but few (such as Bacchi) identify the politics of the research that underpins policy analysis.

You can find some discussion of these issues in the brief section on ‘co-production’, in wider studies of co-produced research and policy and of ‘evidence based policymaking’, and in posts on power and knowledge and feminist institutionalism. However, the implications rarely feed into standard policy analysis texts. This omission is important, because the production of knowledge – and the exercise of power to define whose knowledge counts – is as political as it gets.

Smith (2012) demonstrates this point initially by identifying multiple, often hidden, aspects of politics and power that relate to ‘research’ and ‘indigenous peoples’:

 

  1. The term ‘indigenous peoples’ is contested, and its meaning-in-use can range from
  • positive self-identification, to highlight common international experiences and struggles for self-determination but distinctive traditions; other terms include ‘First Nations’ in Canada or, in New Zealand, ‘Maori’ as opposed to ‘Pakeha’ (the colonizing population) (2012: 6)
  • negative external-identification, including – in some cases – equating ‘indigenous’ (or similar terms) with ‘dirtiness, savagery, rebellion and, since 9/11, terrorism’ (2012: xi-xii).

 

  2. From the perspective of ‘the colonized’, “the term ‘research’ is inextricably linked to European imperialism and colonialism” (2012: 1; 21-6). Western research practices (and the European ‘Enlightenment’) reflect and reinforce political practices associated with colonial rule (2012: 2; 23).

‘To the colonized, the ways in which academic research has been implicated in the throes of imperialism remains a painful memory’ (2012: back cover).

“The word itself, ‘research’, is probably one of the dirtiest words in the indigenous world’s vocabulary” (2012: xi).

 

  3. People in indigenous communities describe researchers who exploit ‘their culture, their knowledge, their resources’ (and, in some cases, their bodies) to bolster their own income, career or profession (2012: xi; 91-4; 102-7), in the context of a long history of subjugation and slavery that makes such practices possible (2012: 21-6; 28-9; 176-7), and “justified as being for ‘the good of mankind’” (2012: 26).

 

 

  4. Western researchers think – hubristically – that they can produce a general understanding of the practices and cultures of indigenous peoples (e.g. using anthropological methods). Instead, they produce – irresponsibly or maliciously – negative and often dehumanizing images that feed into policies ‘employed to deny the validity of indigenous peoples’ claim to existence’ and solve the ‘indigenous problem’ (2012: 1; 8-9; 26-9; 62-5; 71-2; 81-91; 94-6).

For example, research contributes to a tendency for governments to

  • highlight, within indigenous communities, indicators of inequality (in relation to factors such as health, education, crime, and family life), and relate them to
  • indigenous cultures and low intelligence, rather than
  • the ways in which colonial legacy and current policy contribute to poverty and marginalisation (2012: 4; 12; compare with Social Construction and Policy Design).

 

  5. Western researchers’ views on how to produce high-quality scientific evidence lead them to ‘see indigenous peoples, their values and practices as political hindrances that get in the way of good research’ (2012: xi; 66-71; compare with ‘hierarchy of evidence’). Similarly, the combination of a state’s formal laws and unwritten rules and assumptions can serve to dismiss indigenous community knowledge as not meeting evidential standards (2012: 44-9).

 

  6. Many indigenous researchers need to negotiate the practices and expectations of different groups, such as if they are portrayed as:
  • ‘insiders’ in relation to an indigenous community (and, for example, expected by that community to recognise the problems with Western research traditions)
  • ‘outsiders’, by (a) an indigenous community in relation to their ‘Western education’ (2012: 5), or (b) a colonizing state commissioning insider research
  • less technically proficient or less likely to maintain confidentiality than a ‘non-indigenous researcher’ (2012: 12)

Can policy analysis be informed by a new research agenda?

In that context, Smith (2012: xiii; 111-25) outlines a new agenda built on the recognition that research is political and connected explicitly to political and policy aims (2012: xiii; compare with Feminism, Postcolonialism, and Critical Policy Studies).

At its heart is a commitment to indigenous community ‘self-determination’, ‘survival’, ‘recovery’, and ‘development’, aided by processes such as social movement mobilization and decolonization (2012: 121). This agenda informs the meaning of ethical conduct, signalling that research:

  • serves explicit political goals and requires researchers to reflect on their role as activists in an emancipatory project, in contrast to the disingenuous argument that science or scientists are objective (2012: 138-42; 166-77; 187-8; 193-5; 198-215; 217-26)
  • is not ‘something done only by white researchers to indigenous peoples’ (2012: 122),
  • is not framed so narrowly, in relation to specific methods or training, that it excludes (by definition) most indigenous researchers, community involvement in research design, and methods such as storytelling (2012: 127-38; 141; for examples of methods, see 144-63; 170-1)
  • requires distinctive methods and practices to produce knowledge, reinforced by mutual support during the nurturing of such practices
  • requires a code of respectful conduct that extends ‘beyond issues of individual consent and confidentiality’ (2012: 124; 179-81).

Wider context: informing the ‘steps’ to policy analysis

This project informs directly the ‘steps’ to policy analysis described in Bardach, Weimer and Vining, and Mintrom, including:

Problem definition

Mintrom describes the moral and practical value of engaging with stakeholders to help frame policy problems and design solutions (as part of a similarly-worded aim to transform and improve the world).

However, Smith (2012: 228-32; 13) describes a gulf in the framing of problems so profound that it cannot be bridged simply via consultation or half-hearted ‘co-production’ exercises.

For example, if a government policy analyst relates poor health to individual and cultural factors in indigenous communities, and people in those communities relate it to colonization, land confiscation, minimal self-determination, and an excessive focus on individuals, what could we realistically expect from set-piece government-led stakeholder analyses built on research that has already set the policy agenda (compare with Bacchi)?

Rather, Smith (2012: 15-16) describes the need, within research practices, for continuous awareness of, and respect for, a community’s ‘cultural protocols, values and behaviours’ as part of ‘an ethical and respectful approach’. Indeed, the latter could have mutual benefits which underpin the long-term development of trust: a community may feel less marginalised by the analysis-to-policy process, and future analysts may be viewed with less suspicion.

Even so, a more respectful policy process is not the same as accepting that some communities may benefit more from writing about their own experiences than contributing to someone else’s story. Writing about the past, present, and future is an exercise of power to provide a dominant perspective with which to represent people and problems (2012: 29-41; 52-9).

Analysing and comparing solutions

Imagine a cost-benefit analysis designed to identify the most efficient outcomes by translating all of the predicted impacts on people into a single unit of analysis (such as a dollar amount, or quality-adjusted-life-years). Assumptions include that we can: (a) assign the same value to a notionally similar experience, and (b) produce winners from policy and compensate losers.

Yet, this calculation hinges on the power to decide how we should understand such experiences and place relative values on outcomes, and to take a calculation of their value to one population and generalise it to others. Smith’s analysis suggests that such processes will not produce outcomes that we can describe honestly as societal improvements. Rather, they feed into a choice to produce winners from policy and fail to compensate losers in an adequate or appropriate manner.
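To make that critique concrete, here is a minimal, hypothetical sketch of how a cost-benefit calculation collapses different groups’ outcomes into one number, so that the ‘efficient’ verdict flips with the political choice of valuation weights. All group names and figures are invented for illustration:

```python
def net_benefit(impacts, values):
    """Sum each group's outcome change, priced by a chosen monetary value."""
    return sum(impacts[g] * values[g] for g in impacts)

# Hypothetical outcome changes for two groups under one policy option
impacts = {"majority": +500, "indigenous_community": -400}

# Two defensible but contestable valuation schemes (per unit of outcome, in £)
values_a = {"majority": 100, "indigenous_community": 100}  # identical pricing
values_b = {"majority": 100, "indigenous_community": 150}  # losses weighted higher

print(net_benefit(impacts, values_a))  # prints: 10000 (policy looks 'efficient')
print(net_benefit(impacts, values_b))  # prints: -10000 (the same policy fails the test)
```

The arithmetic is trivial; the power lies entirely in who sets the weights, which is exactly the step the calculation presents as technical.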

See also:

  1. In relation to policy theories

This post – Policy Concepts in 1000 Words: Feminism, Postcolonialism, and Critical Policy Studies – provides a tentative introduction to the ways in which many important approaches can inform policy theories.

The 2nd edition of Understanding Public Policy summarises these themes as follows:

p49-50 2nd ed UPP

  2. In relation to policy analysis

If you look back to the Policy Analysis in 750 words series overview, you will see that a popular way to address policy issues is through the ‘coproduction’ of research and policy, perhaps based on a sincere commitment to widen a definition of useful knowledge/ ways of thinking and avoid simply making policy from the ‘centre’ or ‘top down’.

Yet, the post you are now reading, summarising Decolonizing Methodologies, should prompt us to question the extent to which a process could be described sincerely as ‘coproduction’ if there is such an imbalance of power and incongruence of ideas between participants.

Although many key texts do not discuss ‘policy analysis’ directly, they provide ways to reflect imaginatively on this problem. I hope that I am not distorting their original messages, but please note that the following are my stylized interpretations of key texts.

Audre Lorde (2018*) The Master’s Tools Will Never Dismantle the Master’s House (Penguin) (*written from 1978-82)


One issue with very quick client-oriented policy analysis is that it encourages analysts to (a) work with an already-chosen definition of the policy problem, and (b) use well-worn methods to collect information, including (c) engaging with ideas and people with whom they are already familiar.

Some forms of research and policy analysis may be more conducive to challenging existing frames and encouraging wider stakeholder engagement. Still, compare this mild shift from the status quo with a series of issues and possibilities identified by Lorde (2018):

  • Some people are so marginalised and dismissed that they struggle to communicate – about the ways in which they are oppressed, and how they might contribute to imagining a better world – in ways that would be valued (or even noticed) during stakeholder consultation (2018: 1-5 ‘Poetry is not a luxury’).
  • The ‘european-american male tradition’ only allows for narrowly defined (‘rational’) means of communication (2018: 6-15 ‘Uses of the Erotic’).

A forum can be designed ostensibly to foster communication and inclusivity, only to actually produce the opposite, by signalling to some participants that

  • they are a token afterthought, whose views and experiences are – at best – only relevant to a very limited aspect of a wide discussion, and
  • their differences will be feared, not celebrated, becoming a source of conflict, not mutual nurture or cooperation.

It puts marginalised people in the position of having to work hard simply to be heard. They learn that powerful people are only willing to listen if others do the work for them, because (a) they are ignorant of experiences other than their own, and/or (b) they profess ignorance strategically to suck the energy from people whose views they fear and do not understand. No one should feel immune from such criticism even if they profess to be acting with good intentions (2018: 16-21 ‘The Master’s Tools Will Never Dismantle the Master’s House’).

  • The correct response to racism is anger. Therefore, do not prioritise (a) narrow rules of civility, or the sensibilities of the privileged, if (b) your aim is to encourage conversations with people who are trying to express the ways in which they deal with overwhelming and continuous hatred, violence, and oppression (2018: 22-35, ‘Uses of Anger: Women Responding to Racism’)

Boaventura de Sousa Santos (2014) Epistemologies of the South: Justice Against Epistemicide (Routledge)


Imagine global policy processes and policy analysis, in which some countries and international organisations negotiate agreements, influenced (or not) by critical social movements in pursuit of social justice. Santos (2014) identifies a series of obstacles including:

  • A tendency for Western (as part of the Global North) ways of thinking to dominate analysis, at the expense of insights from the Global South (2014: viii), producing
  • A tendency for ‘Western centric’ ideas to inform the sense that some concepts and collective aims – such as human dignity and human rights – can be understood universally, rather than through the lens of struggles that are specific to some regions (2014: 21; 38)
  • A lack of imagination or willingness to imagine different futures and conceptions of social justice (2014: 24)

Consequently, actors may come together to discuss major policy change on ostensibly the same terms, only for some groups to – intentionally and unintentionally – dominate thought and action and reinforce the global inequalities they propose to reduce.

Sara Ahmed (2017) Living a Feminist Life (Duke University Press)


Why might your potential allies in ‘coproduction’ be suspicious of your motives, or sceptical about the likely outcomes of such an exchange? One theme throughout Smith’s (2012) book is that people often co-opt key terms (such as ‘decolonizing’) to perform the sense that they care about social change, to try to look like they are doing something important, while actually designing ineffective or bad faith processes to protect the status of themselves or their own institution or profession.

Ahmed (2017: 103) describes comparable initiatives – such as to foster ‘equality and diversity’ – as a public relations exercise for organisations, rather than a sincere desire to do the work. Consequently, there is a gap ‘between a symbolic commitment and a lived reality’ (2017: 90). Indeed, the aim may be to project a sense of transformation in order to hinder that transformation (2017: 90). This projection is coupled with a tendency to use ‘safe’ and non-confrontational language (‘diversity’) to suggest that people can only be pushed so far, at the expense of terms such as ‘racism’ that would signal challenge, confrontation, and a commitment to high impact (2017: chapter 4).


Putting these insights together suggests that a stated commitment to co-produced research and policy might begin with good intentions. Even so, a commitment to sincere engagement does not guarantee an audience or prevent you from exacerbating the very problems you profess to solve.


Filed under 750 word policy analysis, Evidence Based Policymaking (EBPM), public policy, Research design, Storytelling

Policy Analysis in 750 words: Michael Mintrom (2012) Contemporary Policy Analysis

Please see the Policy Analysis in 750 words series overview before reading the summary. This summary is not 750 words. I can only apologise.

Michael Mintrom (2012) Contemporary Policy Analysis (Oxford University Press)

Mintrom (2012: xxii; 17) describes policy analysis as ‘an enterprise primarily motivated by the desire to generate high quality information to support high-quality decisions’ and stop policymakers ‘from making ill-considered choices’ (2012: 17). It is about giving issues more ‘serious attention and deep thought’ than busy policymakers, rather than simply ‘an exercise in the application of techniques’ to serve clients (2012: 20; xxii).

It begins with six ‘Key Steps in Policy Analysis’ (2012: 3-5):

  1. ‘Engage in problem definition’

Problem definition influences the types of solutions that will be discussed (although, in some cases, solutions chase problems).

Define the nature and size of a policy problem, and the role of government in solving it (from maximal to minimal), while engaging with many stakeholders with different views (2012: 3; 58-60).

This task involves a juggling act. First, analysts should engage with their audience to work out what they need and when (2012: 81). However, second, they should (a) develop ‘critical abilities’, (b) ask themselves ‘why they have been presented in specific ways, what their sources might be, and why they have arisen at this time’, and (c) present ‘alternative scenarios’ (2012: 22; 20; 27).

  2. ‘Propose alternative responses to the problem’

Governments use policy instruments – such as to influence markets, tax or subsidize activity, regulate behaviour, provide services (directly, or via commissioning or partnership), or provide information – as part of a coherent strategy or collection of uncoordinated measures (2012: 30-41). In that context, try to:

  • Generate knowledge about how governments have addressed comparable problems (including, the choice to not intervene if an industry self-regulates).
  • Identify the cause of a previous policy’s impact and if it would have the same effect now (2012: 21).
  • If minimal comparable information is available, consider wider issues from which to learn (2012: 76-7; e.g. alcohol policy based on tobacco).

Consider the wider:

 

  3. ‘Choose criteria for evaluating each alternative policy response’

There are no natural criteria, but ‘effectiveness, efficiency, fairness, and administrative efficiency’ are common (2012: 21). ‘Effective institutions’ have a marked impact on social and economic life and provide political stability (2012: 49). Governments can promote ‘efficient’ policies by (a) producing the largest number of winners and (b) compensating losers (2012: 51-2; see Weimer and Vining on Kaldor-Hicks). They can prioritise environmental ‘sustainability’ to mitigate climate change, the protection of human rights and ‘human flourishing’, and/or a fair allocation of resources (2012: 52-7).

  4. ‘Project the outcomes of pursuing each policy alternative’

Estimate the costs of a new policy, in comparison with current policy, and in relation to factors such as (a) overall savings to society, and/or (b) benefits to certain populations (any policy will benefit some social groups more than others). Mintrom (2012: 21) emphasises ‘prior knowledge and experience’ and ‘synthesizing’ work by others alongside techniques such as cost-benefit analyses.

  5. ‘Identify and analyse trade-offs among alternatives’

Use your criteria and projections to compare each alternative in relation to their likely costs and benefits.

  6. ‘Report findings and make an argument for the most appropriate response’

Mintrom (2012: 5) describes a range of advisory roles.

(a) Client-oriented advisors identify the beliefs of policymakers and anticipate the options worth researching (although they should not simply tell clients what they want to hear – 2012: 22). They may only have the time to answer a client’s question quickly and on their own. Or they may need to create and manage a team project (2012: 63-76).

(b) Other actors, ‘who want to change the world’, research options that are often not politically feasible in the short term but are too important to ignore (such as gender mainstreaming or action to address climate change).

In either case, the format of a written report – executive summary, contents, background, analytical strategy, analysis and findings (perhaps including a table comparing goals and trade-offs between alternatives), discussion, recommendation, conclusion, annex – may be similar (2012: 82-6).

Wider context: the changing role of policy analysts

Mintrom (2012: 5-7) describes a narrative – often attributed to Radin – of the changing nature of policy analysis, comparing:

  1. (a) a small group of policy advisors, (b) with a privileged place in government, (c) giving allegedly technical advice, using economic techniques such as cost-benefit analysis.
  2. (a) a much larger profession, (b) spread across – and outside of – government (including external consultants), and (c) engaging more explicitly in the politics of policy analysis and advice.

It reflects wider changes in government, (a) from the ‘clubby’ days to a much more competitive environment debating a larger number and wider range of policy issues, subject to (b) factors such as globalisation that change the task/ context of policy analysis.

If so, any advice on how to do policy analysis has to be flexible, to incorporate the greater diversity of actors and the sense that complex policymaking systems require flexible skills and practices rather than standardised techniques and outputs.

The ethics of policy analysis

In that context, Mintrom (2012: 95-108) emphasises the enduring role for ethical policy analysis, which can relate to:

  1. ‘Universal’ principles such as fairness, compassion, and respect
  2. Specific principles to project the analyst’s integrity, competence, responsibility, respectfulness, and concern for others
  3. Professional practices, such as to
  • engage with many stakeholders in problem definition (to reflect a diversity of knowledge and views)
  • present a range of feasible solutions, making clear their distributional effects on target populations, opportunity costs (what policies/ outcomes would not be funded if this were), and impact on those who implement policy
  • be honest about (a) the method of calculation, and (b) uncertainty, when projecting outcomes
  • clarify the trade-offs between alternatives (don’t stack-up the evidence for one)
  • maximise effective information sharing, rather than exploiting the limited attention of your audience (compare with Riker).
  4. New analytical strategies (2012: 114-15; 246-84), such as to identify:
  1. the extent to which social groups are already ‘systematically disadvantaged’,
  2. the causes (such as racism and sexism) of – and potential solutions to – these outcomes, to make sure
  3. that new policies reduce or do not perpetuate disadvantages, even when
  4. politicians may gain electorally from scapegoating target populations and/ or
  5. there are major obstacles to transformative policy change.

Therefore, while Mintrom’s (2012: 3-5; 116) ‘Key Steps in Policy Analysis’ are comparable to Bardach and Weimer and Vining, his emphasis is often closer to Bacchi’s.

The entrepreneurial policy analyst

Mintrom (2012: 307-13) ends with a discussion of the intersection between policy entrepreneurship and analysis, highlighting the benefits of ‘positive thinking’, creativity, deliberation, and leadership. He expands on these ideas further in So you want to be a policy entrepreneur?


Filed under 750 word policy analysis, agenda setting, Evidence Based Policymaking (EBPM), public policy

Policy Analysis in 750 words: William Riker (1986) The Art of Political Manipulation

Please see the Policy Analysis in 750 words series overview before reading the summary.

William H. Riker (1986) The Art of Political Manipulation (New Haven: Yale University Press)

Most texts in this series describe the politics of policy analysis, in which your aim is to communicate with a client to help them get what they want, subject to professional standards and ethics (Smith, Bardach, and Weimer and Vining).

Such texts suggest that the evidence will not speak for itself, and that your framing of information could make a big difference between success and failure. However, they tend to dance around the question of how to exercise power to maximise your success.

The consequence may be some bland Aristotle-style advice, in which you should seek to be a persuasive narrator by combining:

  • Pathos. The appeal to an audience’s emotions to maximise interest in a problem.
  • Logos. The concise presentation of information and logic to make a persuasive case.
  • Ethos. The credibility of the presenter, based on their experience and expertise.

Studies of narrative suggest that these techniques have some impact. Narrators tap into their audience’s emotions and beliefs, make a problem seem ‘concrete’ and urgent, and romanticise a heroic figure or cause. However, their success depends heavily on the context, and stories tend to be most influential among the audiences already predisposed to accept them.

If so, a key option is to exploit a tendency for people to possess many contradictory beliefs, which suggests that (a) they could support many different goals or policy solutions, and (b) their support may relate strongly to the context and rules that determine the order and manner in which they make choices.

In other words, you may not be able to ‘change their minds’, but you can encourage them to pay more attention to, and place more value on, one belief (or one way to understand a policy problem) at the expense of another. This strategy could make the difference between belief and action.

Riker (1986: ix) uses the term ‘heresthetic’ to describe ‘structuring the world so you can win’. People ‘win politically because they have set up the situation in such a way that other people will want to join them’. Examples include:

  1. Designing the order in which people make choices, because many policy preferences are ‘intransitive’: if A is preferred to B and B to C, A is not necessarily preferred to C.
  2. Exploiting the ways in which people deal with ‘bounded rationality’ (the limits to their ability to process information to make choices).
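The first example can be illustrated with a minimal sketch (the voters, options, and ballots are invented for illustration): when three voters hold preferences that cycle in the aggregate, whoever designs the order of pairwise votes can engineer any of the three outcomes from the same ballots.

```python
# Invented example: three voters whose individual rankings are transitive,
# but whose collective (majority) preference cycles: A beats B, B beats C,
# yet C beats A.
ballots = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def pairwise_winner(x, y):
    """Majority vote between two options: each voter backs the option
    ranked higher (lower index) on their ballot."""
    votes_x = sum(1 for b in ballots if b.index(x) < b.index(y))
    return x if votes_x > len(ballots) / 2 else y

def run_agenda(first, second, third):
    """Vote first vs second; the winner then faces third."""
    return pairwise_winner(pairwise_winner(first, second), third)

# The same ballots, three different agendas, three different winners:
print(run_agenda("A", "B", "C"))  # C
print(run_agenda("B", "C", "A"))  # A
print(run_agenda("A", "C", "B"))  # B
```

Whichever option is held back to the final round wins, so control over the sequence of choices is control over the outcome – the essence of Riker’s ‘heresthetic’.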

For example, what if people are ‘cognitive misers’, seeking to process information efficiently rather than comprehensively? What if they combine cognition and emotion to make choices efficiently? Riker highlights the potential value of some combination of the following strategies:

  1. Make your preferred problem framing or solution as easy to understand as possible.
  2. Make other problems/ solutions difficult to process, such as by presenting them in the abstract and providing excessive detail.
  3. Emphasize the high cognitive cost to the examination of all other options.
  4. Experiment with choice-rule options that consolidate the vote for your preferred option while splitting the vote of others.
  5. Design the comparison of a small number of options to make sure that yours is the most competitive.
  6. Design the framing of choice (for example, is a vote primarily about the substantial issue or confidence in its proponents?).
  7. Design the selection of criteria to evaluate options.
  8. Design a series of votes, in sequence, to allow you to trade votes with others.
  9. Conspire to make sure that the proponent of your preferred choice is seen as heroic (and the proponent of another choice as of flawed character and intellect).
  10. Ensure that people make or vote for choices quickly, to ward off the possibility of further analysis and risk of losing control of the design of choice.
  11. Make sure that you engage in these strategies without being detected or punished.

The point of this discussion is not to recommend that policy analysts become Machiavellian manipulators, fixing their eye on the prize, and doing anything to win.

Rather, it is to highlight the wider agenda setting context that you face when presenting evidence, values, and options.

It is a truism in policy studies that the evidence does not speak for itself. Instead, people engage in effective communication and persuasion to assign meaning to the evidence.

Similarly, it would be a mistake to expect success primarily from a well written and argued policy analysis document. Rather, much of its fate depends on who is exploiting the procedures and rules that influence how people make choices.

See also:

Evidence-based policymaking: political strategies for scientists living in the real world

Three habits of successful policy entrepreneurs

Evidence-informed policymaking: context is everything

Please note: some of this text comes from Box 4.3 in Understanding Public Policy 2nd ed


 


Filed under 750 word policy analysis, agenda setting, Evidence Based Policymaking (EBPM), public policy

Policy Analysis (usually) in 750 words: David Weimer and Adrian Vining (2017) Policy Analysis

Please see the Policy Analysis in 750 words series overview before reading the summary.

Please note that this book is the longest in the series (almost 500 pages), so a 750 word summary would have been too heroic.

David Weimer and Adrian Vining (2017) Policy Analysis: Concepts and Practice 6th Edition (Routledge)

Weimer and Vining (2017: 23-8; 342-75) describe policy analysis in seven steps:

  1. ‘Write to Your Client’

Having a client such as an elected policymaker (or governmental or nongovernmental organization) requires you to: address the question they ask, by their chosen deadline, in a clear and concise way that they can understand (and communicate to others) quickly (2017: 23; 370-4).

Their sample documents are 18 pages, including an executive summary and summary table.

  2. ‘Understand the Policy Problem’

First, ‘diagnose the undesirable condition’, such as by

  • placing your client’s initial ‘diagnosis’ in a wider perspective (e.g. what is the role of the state, and what is its capacity to intervene?), and
  • providing relevant data (usually while recognising that you are not an expert in the policy problem).

Second, frame it as ‘a market or government failure (or maybe both)’, to

  • show how individual or collective choices produce inefficient allocations of resources and poor outcomes (2017: 59-201 and 398-434 provides a primer on economics), and
  • identify the ways in which people have addressed comparable problems in other policy areas (2017: 24).

  3. ‘Be Explicit About Values’ (and goals)

Identify the values that you seek to prioritise, such as ‘efficiency’, ‘equity’, and ‘human dignity’.

Treat values as self-evident goals. They exist alongside the ‘instrumental goals’ – such as ‘sustainable public finance or political feasibility’ – necessary to generate support for policy solutions.

‘Operationalise’ those goals to help identify the likely consequences of different choices.

For example, define efficiency in relation to (a) the number of outputs per input and/or (b) a measurable or predictable gain in outcomes, such as ‘quality-adjusted life years’ in a population (2017: 25-6).
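A minimal sketch of that operationalisation (the policies, costs, and QALY figures are invented for illustration) shows how defining efficiency as cost per quality-adjusted life year turns a value judgement into a comparable number:

```python
# Hypothetical comparison of two alternatives, with 'efficiency'
# operationalised as cost per QALY gained. All figures are invented.
alternatives = {
    "current policy": {"cost": 1_000_000, "qalys_gained": 40},
    "new screening":  {"cost": 1_600_000, "qalys_gained": 70},
}

for name, a in alternatives.items():
    print(f"{name}: ${a['cost'] / a['qalys_gained']:,.0f} per QALY")

# Incremental cost-effectiveness ratio (ICER) of switching policies:
extra_cost  = alternatives["new screening"]["cost"] - alternatives["current policy"]["cost"]
extra_qalys = alternatives["new screening"]["qalys_gained"] - alternatives["current policy"]["qalys_gained"]
print(f"ICER: ${extra_cost / extra_qalys:,.0f} per additional QALY")  # $20,000
```

The arithmetic is trivial; the contested choices are upstream, in deciding that a QALY captures what matters and that the same QALY value holds across populations.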

Weimer and Vining describe two analyses of efficiency at length:

  • Cost Benefit Analysis (CBA) to (a) identify the most efficient outcomes by (b) translating all of the predicted impacts of an alternative into a single unit of analysis (such as a dollar amount), on the assumption (c) that we can produce winners from policy and compensate losers (see Kaldor-Hicks) (2017: 352-5 and 398-434).
  • Public Agency Strategic Analysis (PASA) to identify ways in which public organisations can change to provide more benefits (such as ‘public value’) with the same resources (2017: 435-50).

  4. ‘Specify Concrete Policy Alternatives’

Explain potential solutions in sufficient detail to predict the costs and benefits of each ‘alternative’ (including current policy).

Compare specific and well-worked alternatives, such as from ‘academic policy researchers’ or ‘advocacy organizations’.

Identify the potential to adopt and tailor more generic policy instruments (see 2017: 205-58 on the role of taxes, expenditure, regulation, staffing, and information-sharing; and compare with Hood and Margetts).

Engage in ‘borrowing’ proposals or models from credible sources, and ‘tinkering’ (using only the relevant elements of a proposal) to make sure they are relevant to your problem (2017: 26-7; 359).

  5. ‘Predict and Value Impacts’

Ideally, you would have the time and resources to (a) produce new research and/or (b) ‘conduct a meta-analysis’ of relevant evaluations to (c) provide ‘confident assessments of impacts’ and ‘engage in highly touted evidence-based policy making’ (see EBPM).

However, ‘short deadlines’ and limited access to ‘directly relevant data’ prompt you to patch together existing research that does not answer your question directly (see 2017: 327-39; 409-11).

Consequently, ‘your predictions of the impacts of a unique policy alternative must necessarily be guided by logic and theory, rather than systematic empirical evidence’ (2017: 27) and ‘we must balance sometimes inconsistent evidence to reach conclusions about appropriate assertions’ (2017: 328).

  6. ‘Consider the Trade-Offs’

It is almost inevitable that, if you compare multiple feasible alternatives, each one will fulfil certain goals more than others.

Producing, and discussing with your clients, a summary table allows you to make value-based choices about trade-offs – such as between the most equitable or efficient choice – in the context of a need to manage costs and predict political feasibility (2017: 28; 356-8).

  7. ‘Make a Recommendation’

‘Unless your client asks you not to do so, you should explicitly recommend one policy’ (2017: 28).

Even so, your analysis of alternatives is useful to (a) show your work (to emphasise the value of policy analysis), and (b) anticipate a change in circumstances (that affects the likely impact of each choice) or the choice by your client to draw different conclusions.

Policy analysis in a wider context: comparisons with other texts

  1. Policy analysis requires flexibility and humility

As with Smith (and Bardach), note how flexible this advice must be, to reflect factors such as:

  • the (unpredictable) effect that different clients and contexts have on your task
  • the pressure on your limited time and resources
  • the ambiguity of broad goals such as equity and human dignity
  • a tendency of your clients to (a) not know, or (b) choose not to reveal their goals before you complete your analysis of possible policy solutions (2017: 347-9; compare with Lindblom)
  • the need to balance many factors – (a) answering your client’s question with confidence, (b) describing levels of uncertainty and ambiguity, and (c) recognising the benefit of humility – to establish your reputation as a provider of credible and reliable analysis (2017: 341; 363; 373; 453).
  2. Policy analysis as art and craft as well as science

While some proponents of EBPM may identify the need for highly specialist scientific research proficiency, Weimer and Vining (2017: 30; 34-40) describe:

  • the need to supplement a ‘solid grounding’ in economics and statistics with political awareness (the ‘art and craft of policy analysis’), and
  • the ‘development of a professional mind-set’ rather than perfecting ‘technical skills’ (see the policy analysis profession described by Radin).

This approach requires some knowledge of policy theories (see 1000 and 500) to appreciate the importance of factors such as networks, institutions, beliefs and motivation, framing, lurches of attention, and windows of opportunity to act (compare with ‘how far should you go?’).

Indeed, pp. 259-323 have useful discussions of (a) strategies including ‘co-optation’, ‘compromise’, ‘rhetoric’, Riker’s ‘heresthetics’, (b) the role of narrative in ‘writing implementation scenarios’, and (c) the complexity of mixing many policy interventions.

  3. Normative and ethical requirements for policy analysis

Bacchi’s primary focus is to ask fundamental questions about what you are doing and why, and to challenge problem definitions that punish powerless populations.

In comparison, Weimer and Vining emphasise the client orientation which limits your time, freedom, and perhaps inclination to challenge so strongly.

Still, this normative role is part of an ethical duty to:

  • balance a ‘responsibility to client’ with ‘analytical integrity’ and ‘adherence to one’s personal conception of the good society’, and challenge the client if they undermine professional values (2017: 43-50)
  • reflect on the extent to which a policy analyst should seek to be an ‘Objective Technician’, ‘Client’s Advocate’ or ‘Issue Advocate’ (2017: 44; compare with Pielke and Jasanoff)
  • recognise the highly political nature of seemingly technical processes such as cost-benefit-analysis (see 2017: 403-6 on ‘Whose Costs and Benefits Count’), and
  • encourage politicians to put ‘aside their narrow personal and political interests for the greater good’ (2017: 454).

 


Filed under 750 word policy analysis, Evidence Based Policymaking (EBPM), public policy

Policy Analysis in 750 words: Catherine Smith (2016) Writing Public Policy

Please see the Policy Analysis in 750 words series overview before reading the summary.

Catherine Smith (2016) Writing Public Policy (Oxford University Press)

Smith focuses on the communication of policy analysis within US government. Effective communication requires ‘conceptual and contextual awareness’. Policy actors communicate from a particular viewpoint, representing their role, interests, and objectives.

In government, policy analysts often write (1) on behalf of policymakers, projecting a specific viewpoint, and (2) for policymakers, which requires them to (a) work remarkably quickly and (b) produce concise reports that (c) reflect the need to process information efficiently.

Actors outside government are less constrained by (1), but still need to write in a similar way. Their audience makes quick judgements on presentations: the source of information, its relevance, and whether they should read it fully.

‘General Method of Communicating in a Policy Process’

Smith identifies the questions to ask yourself when communicating policy analysis, summarised as follows:

‘Step 1: Prepare’

  • To what policy do I refer?
  • Which audiences are relevant?
  • What is the political context, and the major sites of agreement/ disagreement?
  • How do I frame the problem, and which stories are relevant to my audience?

‘Step 2: Plan’

  • What is this communication’s purpose?
  • What is my story and message?
  • What is my role and interest?
  • ‘For whom does this communication speak?’
  • Who is my audience?
  • What will they learn?
  • What is the context and timeframe?
  • What should be the form, content, and tone of the communication?

‘Step 3. Produce’

  • Make a full draft, seek comments during a review, then revise.

Smith provides two ‘checklists’ to assess such communications:

  1. Effectiveness. Speak with an audience in mind, highlight a well-defined problem and purpose, project authority, and use the right form of communication.
  2. Excellence. Focus on clarity, precision, conciseness, and credibility.

Smith then focuses on specific aspects of this general method, including:

  • Framing involves describing the nature of the problem – its scope, and who is affected – and connecting this definition to current or new solutions.
  • Evaluation requires critical skills to question ‘conventional wisdom’ and assess the selective use of information by others. Use the ‘general method’ to ask how others frame problems and solutions, then provide a fresh perspective (compare with Bacchi).
  • Know the Record involves researching previous solutions. This process reflects the importance of ‘precedent’: telling a story of previous attempts to solve the problem helps provide context for new debates (and project your knowledgeability).
  • Know the Arguments involves engaging with the ideas of your allies and competitors. Understand your own position, make a reasoned argument in relation to others, present a position paper, establish its scope (the big picture or specific issue), and think strategically (and ethically) about how to maximise its impact in current political debates.
  • Inform Policymakers suggests maximising policymaker interest by keeping communication concise, polite, and tailored to a policymaker’s values and interests.
  • Public Comment focuses on the importance of working with administrative officials even after legislation is passed (especially if ‘street level bureaucrats’ make policy as they deliver).

Policy analysis in a wider context

Although Smith does not focus on policy process theories, knowledge of policy processes guides this advice. For example, Smith advises that:

  • There is no linear and orderly policy cycle in which to present written analysis. The policymaking environment is more complex and less predictable than this model suggests (although Smith still distinguishes heavily between legislation to make policy and administration to deliver – compare with the ACF)
  • There is no blueprint or uniform template for writing policy analysis. The mix of policy problems is too diverse to manage with one approach, and ‘context’ may be more important than the ‘content’ of your proposal. Consequently, Smith provides a huge number of real-world examples to highlight the need to adapt policy analysis to the task at hand (see also Bacchi on analysts creating problems as they frame them).
  • Policy communication is not a rational/ technical process. It is a political exercise, built on the use of values to frame and try to solve problems. Analysis takes place in often highly divisive debates. People communicate using stories, and they use framing and persuasion techniques. They need to tailor their arguments to specific audiences, rather than hoping that one document could appeal to everyone (see Deborah Stone’s Policy Paradox).
  • Everyone may have the ability to frame issues, but only some policymakers ‘have authority to decide’ to pay attention to and interpret problems (see PET).
  • Communication comes in many forms to reflect many possible venues (such as, in the US context, processes of petition and testimony to public hearings alongside appeals to the executive and legislative branches)

See also: Policy Analysis in 750 words (the overview)

 


Filed under 750 word policy analysis, public policy

Policy Analysis in 750 words: Eugene Bardach’s (2012) Eightfold Path

Please see the Policy Analysis in 750 words series overview before reading the summary.

Eugene Bardach (2012) A Practical Guide for Policy Analysis 4th ed. (CQ Press)

Bardach (2012) describes policy analysis in eight steps:

  1. ‘Define the problem’.

Provide a diagnosis of a policy problem, using rhetoric and eye-catching data to generate attention.

  2. ‘Assemble some evidence’.

Gather relevant data efficiently (to reflect resource constraints such as time pressures). Think about which data are essential and when you can substitute estimation for research. Speak with the consumers of your evidence to anticipate their reaction.

  3. ‘Construct the alternatives’.

Identify the relevant and feasible policy solutions that your audience might consider, preferably by identifying how the solution would work if implemented as intended. Think of solutions as on a spectrum of acceptability, according to the extent to which your audience will accept (say) market or state action. Your list can include things governments already do (such as tax or legislate), or a new policy design. Consider the extent to which your solution locks in policymakers even if it proves ineffective (for example, if it requires investment in new capital).

  4. ‘Select the criteria’.

Use value judgements to decide which solution will produce the best outcome. Recognise the political nature of policy evaluation when choosing the measures that determine success. Typical measures relate to efficiency, equity and fairness, the trade-off between individual freedom and collective action, the extent to which a policy process involves citizens in deliberation, and the impact on a policymaker’s popularity.

  5. ‘Project the outcomes’.

Focus on the outcomes that key actors care about (such as value for money), and quantify and visualise your predictions if possible. Prediction involves estimation based on experience (or guesswork), so do not over-claim. Establish if your solutions will meet an agreed threshold of effectiveness in terms of the money to be spent, or present many scenarios based on changing the assumptions underpinning each prediction.

  6. ‘Confront the trade-offs’.

Compare the pros and cons of each solution, such as how much of a bad service policymakers will accept to cut costs, or how much security is provided by a reduction in freedom. Assess technical and political feasibility; some solutions may be technically effective but too unpopular. Establish a baseline to help measure the impact of marginal policy changes, and compare costs and benefits in relation to something tangible (such as money).

  7. ‘Decide’.

Examine your case through the eyes of a policymaker. Ask yourself: if this is such a good solution, why hasn’t it been done already?

  8. ‘Tell your story’.

Identify your target audience and tailor your case. Weigh up the benefits of oral versus written presentation. Provide an executive summary. Focus on coherence and clarity.  Keep it simple and concise. Avoid jargon.

Policy analysis in a wider context: psychology and complexity

Bardach’s classic book provides a great way to consider the wider context in which you might construct policy advice (see pp6-9):

  1. Policymaker psychology.

People engage emotionally with information. Any advice to keep it concise is incomplete without a focus on framing and persuasion. Simplicity helps reduce cognitive load, while framing helps present the information in relation to the beliefs of your audience. Consequently, ‘there is no way to appeal to all audiences with the same information’ or to make an ‘evidence based’ case. To pretend to be an objective policy analyst is a cop-out. To provide long, rigorous, and meticulous reports that few people read is futile. Tell a convincing story with a clear moral, or frame policy analysis to grab your audience’s attention and generate enthusiasm to solve a problem.

  2. Policymaking complexity.

Policymakers operate in a policymaking environment of which they have limited knowledge and even less control. There is no all-powerful ‘centre’ making policy from the ‘top down’. We need to incorporate this environment into policy analysis: which actors make and influence policy; the rules they follow, the networks they form, the ideas that dominate debate; and the policy context and events that influence their attention to problems and optimism about solutions.

These factors warn us against ‘single shot’ policy analysis in which there is a one size fits all solution, and the idea that the selection of a policy solution from the ‘top’ sets in motion an inevitable cycle of legitimation, implementation, and evaluation. A simple description of a problem and its solution may be attractive, but success may also depend on persuading your audience at ‘the centre’ about the need to: (a) learn continuously and adapt their strategies through processes such as trial and error, and (b) cooperate with many other ‘centres’ to address problems that no single actor can solve.

 

 


Filed under 750 word policy analysis

Understanding Public Policy 2nd edition

All going well, it will be out in November 2019. We are now at the proofing stage.

I have included below the summaries of the chapters (and each chapter should also have its own entry (or multiple entries) in the 1000 Words and 500 Words series).

2nd ed cover

[Images: title page and summaries of chapters 1 to 13]

 


Filed under 1000 words, 500 words, agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, public policy

Evidence-informed policymaking: context is everything

I thank James Georgalakis for inviting me to speak at the inaugural event of IDS’ new Evidence into Policy and Practice Series, and the audience for giving extra meaning to my story about the politics of ‘evidence-based policymaking’. The talk (using powerpoint) and Q&A is here:

 

James invited me to respond to some of the challenges raised to my talk – in his summary of the event – so here it is.

I’m working on a ‘show, don’t tell’ approach, leaving some of the story open to interpretation. As a result, much of the meaning of this story – and, in particular, the focus on limiting participation – depends on the audience.

For example, consider the impact of the same story on audiences primarily focused on (a) scientific evidence and policy, or (b) participation and power.

Normally, when I talk about evidence and policy, my audience is mostly people with scientific or public health backgrounds asking: why do policymakers ignore scientific evidence? I am usually invited to ruffle feathers, mostly by challenging a – remarkably prevalent – narrative that goes like this:

  • We know what the best evidence is, since we have produced it with the best research methods (the ‘hierarchy of evidence’ argument).
  • We have evidence on the nature of the problem and the most effective solutions (the ‘what works’ argument).
  • Policymakers seem to be ignoring our evidence or failing to act proportionately (the ‘evidence-policy barriers’ argument).
  • Or, they cherry-pick evidence to suit their agenda (the ‘policy based evidence’ argument).

In that context, I suggest that there are many claims to policy-relevant knowledge, policymakers have to ignore most information before making choices, and they are not in control of the policy process of which they are ostensibly in charge.

Limiting participation as a strategic aim

Then, I say to my audience that – if they are truly committed to maximising the use of scientific evidence in policy – they will need to consider how far they will go to get what they want. I use the metaphor of an ethical ladder in which each rung offers more influence in exchange for dirtier hands: tell stories and wait for opportunities, or demonise your opponents, limit participation, and humour politicians when they cherry-pick to reinforce emotional choices.

It’s ‘show don’t tell’ but I hope that the take-home point for most of the audience is that they shouldn’t focus so much on one aim – maximising the use of scientific evidence – to the detriment of other important aims, such as wider participation in politics beyond a reliance on a small number of experts. I say ‘keep your eyes on the prize’ but invite the audience to reflect on which prizes they should seek, and the trade-offs between them.

Limited participation – and ‘windows of opportunity’ – as an empirical finding

NASA launch

I did suggest that most policymaking happens away from the sphere of ‘exciting’ and ‘unruly’ politics. Put simply, people have to ignore almost every issue almost all of the time. Each time they focus their attention on one major issue, they must – by necessity – ignore almost all of the others.

For me, the political science story is largely about the pervasiveness of policy communities and policymaking out of the public spotlight.

The logic is as follows. Elected policymakers can only pay attention to a tiny proportion of their responsibilities. They delegate the rest to bureaucrats at lower levels of government. Bureaucrats lack specialist knowledge, and rely on other actors for information and advice. Those actors trade information for access. In many cases, they develop effective relationships based on trust and a shared understanding of the policy problem.

Trust often comes from a sense that everyone has proven to be reliable. For example, they follow norms or the ‘rules of the game’. One classic rule is to contain disputes within the policy community when actors don’t get what they want: if you complain in public, you draw external attention and internal disapproval; if not, you are more likely to get what you want next time.

For me, this is key context in which to describe common strategic concerns:

  • Should you wait for a ‘window of opportunity’ for policy change? Maybe. Or, maybe it will never come because policymaking is largely insulated from view and very few issues reach the top of the policy agenda.
  • Should you juggle insider and outsider strategies? Yes, some groups seem to do it well and it is possible for governments and groups to be in a major standoff in one field but close contact in another. However, each group must consider why they would do so, and the trade-offs between each strategy. For example, groups excluded from one venue may engage (perhaps successfully) in ‘venue shopping’ to get attention from another. Or, they become discredited within many venues if seen as too zealous and unwilling to compromise. Insider/outsider may seem like a false dichotomy to experienced and well-resourced groups, who engage continuously, and are able to experiment with many approaches and use trial-and-error learning. It is a more pressing choice for actors who may have only one chance to get it right and do not know what to expect.

Where is the power analysis in all of this?


I rarely use the word power directly, partly because – like ‘politics’ or ‘democracy’ – it is an ambiguous term with many interpretations (see Box 3.1). People often use it without agreeing on its meaning and, if it means everything, maybe it means nothing.

However, you can find many aspects of power within our discussion. For example, insider and outsider strategies relate closely to Schattschneider’s classic discussion in which powerful groups try to ‘privatise’ issues and less powerful groups try to ‘socialise’ them. Agenda setting is about using resources to make sure issues do, or do not, reach the top of the policy agenda, and most do not.

These aspects of power sometimes play out in public, when:

  • Actors engage in politics to turn their beliefs into policy. They form coalitions with actors who share their beliefs, and often romanticise their own cause and demonise their opponents.
  • Actors mobilise their resources to encourage policymakers to prioritise some forms of knowledge or evidence over others (such as by valuing scientific evidence over experiential knowledge).
  • They compete to identify the issues most worthy of our attention, telling stories to frame or define policy problems in ways that generate demand for their evidence.

However, they are no less important when they play out routinely:

  • Governments have standard operating procedures – or institutions – to prioritise some forms of evidence and some issues routinely.
  • Many policy networks operate routinely with few active members.
  • Certain ideas, or ways of understanding the world and the nature of policy problems within it, become so dominant that they are unspoken and taken for granted as deeply held beliefs. Still, they constrain or facilitate the success of new ‘evidence based’ policy solutions.

In other words, the word ‘power’ is often hidden because the most profound forms of power often seem to be hidden.

In the context of our discussion, power comes from the ability to define some evidence as essential and other evidence as low quality or irrelevant, and therefore define some people as essential or irrelevant. It comes from defining some issues as exciting and worthy of our attention, or humdrum, specialist and only relevant to experts. It is about the subtle, unseen, and sometimes thoughtless ways in which we exercise power to harness people’s existing beliefs and dominate their attention as much as the transparent ways in which we mobilise resources to publicise issues. Therefore, to ‘maximise the use of evidence’ sounds like an innocuous collective endeavour, but it is a highly political and often hidden use of power.


I discussed these issues at a storytelling workshop organised by the OSF:


See also:

Policy in 500 Words: Power and Knowledge

The politics of evidence-based policymaking

Palgrave Communications: The politics of evidence-based policymaking

Using evidence to influence policy: Oxfam’s experience

The UK government’s imaginative use of evidence to make policy

 


Filed under agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, Psychology Based Policy Studies, public policy, Storytelling

Taking lessons from policy theory into practice: 3 examples

Notes for ANZSOG/ ANU Crawford School/ UNSW Canberra workshop. Powerpoint here. The recording of the lecture (skip to 2m30) and Q&A is here (right click to download mp3 or dropbox link):

The context for this workshop is the idea that policy theories could be more helpful to policymakers/ practitioners if we could all communicate more effectively with each other. Academics draw general and relatively abstract conclusions from multiple cases. Practitioners draw very similar conclusions from rich descriptions of direct experience in a smaller number of cases. How can we bring together their insights and use a language that we all understand? Or, more ambitiously, how can we use policy theory-based insights to inform the early career development training that civil servants and researchers receive?

The first step is to translate policy theories into a non-technical language by trying to speak with an audience beyond our immediate peers (see for example Practical Lessons from Policy Theories).

However, translation is not enough. A second crucial step is to consider how policymakers and practitioners are likely to make sense of theoretical insights when they apply them to particular aims or responsibilities. For example:

  1. Central government policymakers may accept the descriptive accuracy of policy theories emphasising limited central control, but not the recommendation that they should let go, share power, and describe their limits to the public.
  2. Scientists may accept key limitations to ‘evidence based policymaking’ but reject the idea that they should respond by becoming better storytellers or more manipulative operators.
  3. Researchers and practitioners struggle to resolve hard choices when combining evidence and ‘coproduction’ while ‘scaling up’ policy interventions. Evidence choice is political choice. Can we do more than merely encourage people to accept this point?

I discuss these examples below because they are closest to my heart (especially example 1). Note throughout that I am presenting one interpretation about: (1) the most promising insights, and (2) their implications for practice. Other interpretations of the literature and its implications are available. They are just a bit harder to find.

Example 1: the policy cycle endures despite its descriptive inaccuracy


The policy cycle does not describe and explain the policy process well:

  • If we insist on keeping the cycle metaphor, it is more accurate to see the process as a huge set of policy cycles that connect with each other in messy and unpredictable ways.
  • The cycle approach also links strongly to the idea of ‘comprehensive rationality’ in which a small group of policymakers and analysts are in full possession of the facts and full control of the policy process. They carry out their aims through a series of stages.

Policy theories provide more descriptive and explanatory usefulness. Their insights include:

  • Limited choice. Policymakers inherit organisations, rules, and choices. Most ‘new’ choice is a revision of the old.
  • Limited attention. Policymakers must ignore almost all of the policy problems for which they are formally responsible. They pay attention to some, and delegate most responsibility to civil servants. Bureaucrats rely on other actors for information and advice, and they build relationships on trust and information exchange.
  • Limited central control. Policy may appear to be made at the ‘top’ or in the ‘centre’, but in practice policymaking responsibility is spread across many levels and types of government (many ‘centres’). ‘Street level’ actors make policy as they deliver. Policy outcomes appear to ‘emerge’ locally despite central government attempts to control their fate.
  • Limited policy change. Most policy change is minor, made and influenced by actors who interpret new evidence through the lens of their beliefs. Well-established beliefs limit the opportunities of new solutions. Governments tend to rely on trial-and-error, based on previous agreements, rather than radical policy change based on a new agenda. New solutions succeed only during brief and infrequent windows of opportunity.

However, the cycle metaphor endures because:

  • It provides a simple model of policymaking with stages that map onto important policymaking functions.
  • It provides a way to project policymaking to the public. You know how we make policy, and that we are in charge, so you know who to hold to account.

In that context, we may want to be pragmatic about our advice:

  1. One option is via complexity theory, in which scholars generally encourage policymakers to accept and describe their limits:
  • Accept routine error, reduce short-term performance management, engage more in trial and error, and ‘let go’ to allow local actors the flexibility to adapt and respond to their context.
  • However, would a government in the Westminster tradition really embrace this advice? No. They need to balance (a) pragmatic policymaking, and (b) an image of governing competence.
  2. Another option is to try to help improve an existing approach.

Further reading (blog posts):

The language of complexity does not mix well with the language of Westminster-style accountability

Making Sense of Policymaking: why it’s always someone else’s fault and nothing ever changes

Two stories of British politics: the Westminster model versus Complex Government

Example 2: how to deal with a lack of ‘evidence based policymaking’

I used to read many papers on tobacco policy, with the same basic message: we have the evidence of tobacco harm, and evidence of which solutions work, but there is an evidence-policy gap caused by too-powerful tobacco companies, low political will, and pathological policymaking. These accounts are not informed by theories of policymaking.

I then read Oliver et al’s paper on the lack of policy theory in health/ environmental scholarship on the ‘barriers’ to the use of evidence in policy. Very few articles rely on policy concepts, and most of the few rely on the policy cycle. This lack of policy theory is clear in their description of possible solutions – better communication, networking, timing, and more science literacy in government – which does not describe well the need to respond to policymaker psychology and a complex policymaking environment.

So, I wrote The Politics of Evidence-Based Policymaking and one zillion blog posts to help identify the ways in which policy theories could help explain the relationship between evidence and policy.

Since then, the highest demand to speak about the book has come from government/ public servant, NGO, and scientific audiences outside my discipline. The feedback is generally that: (a) the book’s description sums up their experience of engagement with the policy process, and (b) maybe it opens up discussion about how to engage more effectively.

But how exactly do we turn empirical descriptions of policymaking into practical advice?

For example, scientist/ researcher audiences want to know the answer to a question like: Why don’t policymakers listen to your evidence? and so I focus on three conversation starters:

  1. they have a broader view on what counts as good evidence (see ANZSOG description)
  2. they have to ignore almost all information (a nice way into bounded rationality and policymaker psychology)
  3. they do not understand or control the process in which they seek to use evidence (a way into ‘the policy process’)

Cairney 2017 image of the policy process

We can then consider many possible responses in the sequel What can you do when policymakers ignore your evidence?

Examples include:

  • ‘How to do it’ advice. I compare tips for individuals (from experienced practitioners) with tips based on policy concepts. They are quite similar-looking tips – e.g. find out where the action is, learn the rules, tell good stories, engage allies, seek windows of opportunity – but I describe mine as 5 impossible tasks!
  • Organisational reform. I describe work with the European Commission Joint Research Centre to identify 8 skills or functions of an organisation bringing together the supply/demand of knowledge.
  • Ethical dilemmas. I use key policy theories to ask people how far they want to go to privilege evidence in policy. It’s fun to talk about these things with the type of scientist who sees any form of storytelling as manipulation.

Further reading:

Is Evidence-Based Policymaking the same as good policymaking?

A 5-step strategy to make evidence count

Political science improves our understanding of evidence-based policymaking, but does it produce better advice?

Principles of science advice to government: key problems and feasible solutions

Example 3: how to encourage realistic evidence-informed policy transfer

This focus on EBPM is useful context for discussions of ‘policy learning’ and ‘policy transfer’, and it was the focus of my ANZSOG talk entitled (rather ambitiously) ‘teaching evidence-based policy to fly’.

I’ve taken a personal interest in this one because I’m part of a project – called IMAJINE – in which we have to combine academic theory and practical responses. We are trying to share policy solutions across Europe rather than explain why few people share them!

For me, the context is potentially overwhelming.

So, when we start to focus on sharing lessons, we will have three things to discover:

  1. What is the evidence for success, and from where does it come? Governments often project success without backing it up.
  2. What story do policymakers tell about the problem they are trying to solve, the solutions they produced, and why? Two different governments may be framing and trying to solve the same problem in very different ways.
  3. Was the policy introduced in a comparable policymaking system? People tend to focus on political system comparability (e.g. is it unitary or federal?), but I think the key is in policymaking system comparability (e.g. what are the rules and dominant ideas?).

To be honest, when one of our external assessors asked me how well I thought I would do, we both smiled because the answer may be ‘not very’. In other words, the most practical lesson may be the hardest to take, although I find it comforting: the literature suggests that policymakers might ignore you for 20 years then suddenly become very (but briefly) interested in your work.

 

The slides are a bit wonky because I combined my old ppt to the Scottish Government with a new one for UNSW: Paul Cairney ANU Policy practical 22 October 2018

I wanted to compare how I describe things to (1) civil servants, (2) practitioners/ researchers, and (3) me, but who has the time/ desire to listen to 3 powerpoints in one go? If the answer is you, let me know and we’ll set up a Zoom call.


Filed under agenda setting, Evidence Based Policymaking (EBPM), IMAJINE, Policy learning and transfer

Teaching evidence based policy to fly: how to deal with the politics of policy learning and transfer

This post provides (a generous amount of) background for my ANZSOG talk Teaching evidence based policy to fly: transferring sound policies across the world.

The event’s description sums up key conclusions in the literature on policy learning and policy transfer:

  1. technology and ‘entrepreneurs’ help ideas spread internationally, and domestic policymakers can use them to be more informed about global policy innovation, but
  2. there can be major unintended consequences to importing ideas, such as the adoption of policy solutions with poorly-evidenced success, or a broader sense of failed transfer caused by factors such as a poor fit between the aims of the exporter/importer.

In this post, I connect these conclusions to broader themes in policy studies, which suggest that:

  1. policy learning and policy transfer are political processes, not ‘rational’ or technical searches for information
  2. the use of evidence to spread policy innovation requires two interconnected choices: what counts as good evidence, and what role central governments should play.
  3. the following ’11 question guide’ to evidence based policy transfer serves more as a way to reflect than a blueprint for action.

As usual, I suggest that we focus less on how we think we’d like to do it, and more on how people actually do it.


Policy transfer describes the use of evidence about policy in one political system to help develop policy in another. Taken at face value, it sounds like a great idea: why would a government try to reinvent the wheel when another government has shown how to do it?

Therefore, wouldn’t it be nice if I turned up to the lecture, equipped with a ‘blueprint’ for ‘evidence based’ policy transfer, and declared how to do it in a series of realistic and straightforward steps? Unfortunately, there are three main obstacles:

  1. ‘Evidence based’ is a highly misleading description of the use of information in policy.
  2. To transfer a policy blueprint completely, in this manner, would require all places and contexts to be the same, and for the policy process to be technocratic and apolitical.
  3. There are general academic guides on how to learn lessons from others systematically – such as Richard Rose’s ‘practical guide’  – but most academic work on learning and transfer does not suggest that policymakers follow this kind of advice.

[Image: Richard Rose’s 10 lessons]

Instead, policy learning is a political process – involving the exercise of power to determine what and how to learn – and it is difficult to separate policy transfer from the wider use of evidence and ideas in policy processes.

Let’s take each of these points in turn, before reflecting on their implications for any X-step guide:

3 reasons why ‘evidence based’ does not describe policymaking

In a series of ANZSOG talks on ‘evidence based policymaking’ (EBPM), I describe three main factors, all of which are broadly relevant to transfer:

  1. There are many forms of policy-relevant evidence and few policymakers adhere to a strict ‘hierarchy’ of knowledge.

Therefore, it is unclear how one government can, or should, generate evidence of another government’s policy success.

  2. Policymakers must find ways to ignore most evidence – such as by combining ‘rational’ and ‘irrational’ cognitive shortcuts – to be able to act quickly.

The generation of policy transfer lessons is a highly political process in which actors adapt to this need to prioritise information while competing with each other. They exercise power to: prioritise some information and downplay the rest, define the nature of the policy problem, and evaluate the success of another government’s solutions. There is a strong possibility that policymakers will import policy solutions without knowing if, and why, they were successful.

  3. They do not control the policy process in which they engage.

We should not treat ‘policy transfer’ as separate from the policy process in which policymakers and influencers engage. Rather, the evidence of international experience competes with many other sources of ideas and evidence within a complex policymaking system.

The literature on ‘policy learning’ tells a similar story

Studies of the use of evaluation evidence (perhaps to answer the question: was this policy successful?) have long described policymakers using the research process for many different purposes, from short term problem-solving and long-term enlightenment, to putting off decisions or using evidence cynically to support an existing policy.

We should therefore reject the temptation to (a) equate ‘policy learning’ with a simplistic process that we might associate with teachers transmitting facts to children, or (b) assume that adults simply change their beliefs when faced with new evidence. Rather, Dunlop and Radaelli describe policy learning as a political process in the following ways:

1. It is collective and rule-bound

Individuals combine cognition and emotion to process information, in organisations with rules that influence their motive and ability to learn, and in wider systems, in which many actors cooperate and compete to establish the rules of evidence gathering and analysis, or policymaking environments that constrain or facilitate their action.

2. ‘Evidence based’ is one of several types of policy learning

  • Epistemic. Primarily by scientific experts transmitting knowledge to policymakers.
  • Reflection. Open dialogue to incorporate diverse forms of knowledge and encourage cooperation.
  • Bargaining. Actors learn how to cooperate and compete effectively.
  • Hierarchy. Actors with authority learn how to impose their aims; others learn the limits to their discretion.

3. The process can be ‘dysfunctional’: driven by groupthink, limited analysis, and learning how to dominate policymaking, not improve policy.

Their analysis can produce relevant take-home points such as:

  • Experts will be ineffective if they assume that policy learning is epistemic. The assumption will leave them ill-prepared to deal with bargaining.
  • There is more than one legitimate way to learn, such as via deliberative processes that incorporate more perspectives and forms of knowledge.

What does the literature on transfer tell us?

‘Policy transfer’ can describe a spectrum of activity:

  • driven voluntarily, by a desire to learn from the story of another government’s policy success. In such cases, importers use shortcuts to learning, such as by restricting their search to systems with which they have something in common (such as geography or ideology), learning via intermediaries such as ‘entrepreneurs’, or limiting their searches for evidence of success.
  • driven by various forms of pressure, including encouragement by central (or supranational) governments, international norms or agreements, ‘spillover’ effects causing one system to respond to innovation by another, or demands by businesses to minimise the cost of doing business.

In that context, some of the literature focuses on warning against unsuccessful policy transfer caused by factors such as:

  • Failing to generate or use enough evidence on what made the initial policy successful
  • Failing to adapt that policy to local circumstances
  • Failing to back policy change with sufficient resources

However, other studies highlight some major qualifications:

  • If the process is about using ideas about one system to inform another, our attention may shift from ‘transfer’ to ‘translation’ or ‘transformation’, and the idea of ‘successful transfer’ makes less sense
  • Transfer success is not the same as implementation success, which depends on a wider range of factors
  • Nor is it the same as ‘policy success’, which can be assessed by a mix of questions to reflect political reality: did it make the government more re-electable, was the process of change relatively manageable, and did it produce intended outcomes?

The use of evidence to spread policy innovation requires a combination of profound political and governance choices

When encouraging policy diffusion within a political system, choices about (a) what counts as ‘good’ evidence of policy success are closely connected to choices about (b) what counts as good governance.

For example, consider these ideal-types or models in table 1:

Table 1: Three ideal types of EBBP

In one scenario, we begin by relying primarily on RCT evidence (multiple international trials) and import a relatively fixed model, to ensure ‘fidelity’ to a proven intervention and allow us to measure its effect in a new context. This choice of good evidence limits the ability of subnational policymakers to adapt policy to local contexts.

In another scenario, we begin by relying primarily on governance principles, such as respecting local discretion and incorporating practitioner and user experience as important knowledge claims. The choice of governance model relates closely to a broader sense of what counts as good evidence, but also a more limited ability to evaluate policy success scientifically.

In other words, the political choice to privilege some forms of evidence is difficult to separate from another political choice to privilege the role of one form of government.

Telling a policy transfer story: 11 questions to encourage successful evidence-based policy transfer

In that context, these steps to evidence-informed policy transfer serve more to encourage reflection than provide a blueprint for action. I accept that 11 is less catchy than 10.

  1. What problem did policymakers say they were trying to solve, and why?
  2. What solution(s) did they produce?
  3. Why?

Points 1-3 represent the classic and necessary questions from policy studies: (1) ‘what is policy?’ (2) ‘how much did policy change?’ and (3) ‘why?’ Until we have a good answer, we do not know how to draw comparable lessons. Learning from another government’s policy choices is no substitute for learning from more meaningful policy change.

4. Was the project introduced in a country or region which is sufficiently comparable? Comparability can relate to the size and type of country, the nature of the problem, the aims of the borrowing/ lending government and their measures of success.

5. Was it introduced nationwide, or in a region which is sufficiently representative of the national experience (it is not an outlier)?

6. How do we account for the role of scale, and the different cultures and expectations in each policy field?

Points 4-6 inform initial background discussions of case study reports. We need to focus on comparability when describing the context in which the original policy developed. It is not enough to state that two political systems are different. We need to identify the relevance and implications of the differences, from another government’s definition of the problem to the logistics of their task.

7. Has the project been evaluated independently, subject to peer review and/ or using measures deemed acceptable to the government?

8. Has the evaluation been of a sufficient period in proportion to the expected outcomes?

9. Are we confident that this project genuinely has the most favourable evaluation – i.e. that our search for relevant lessons has been systematic, based on recognisable criteria (rather than reputations)?

10. Are we identifying ‘Good practice’ based on positive experience, ‘Promising approaches’ based on positive but unsystematic findings, ‘Research–based’ or based on ‘sound theory informed by a growing body of empirical research’, or ‘Evidence–based’, when ‘the programme or practice has been rigorously evaluated and has consistently been shown to work’?

Points 7-10 raise issues about the relationships between (a) what evidence we should use to evaluate success or potential, and (b) how long we should wait to declare success.

11. What will be the relationship between evidence and governance?

Should we identify the same basic model and transfer it uniformly, tell a qualitative story about the model and invite people to adapt it, or focus pragmatically on an eclectic range of evidential sources and focus on the training of the actors who will implement policy?

In conclusion

Information technology has allowed us to gather a huge amount of policy-relevant information across the globe. However, it has not solved the limitations we face in defining policy problems clearly, gathering evidence on policy solutions systematically, and generating international lessons that we can use to inform domestic policy processes.

This rise in available evidence is not a substitute for policy analysis and political choice. These choices range from how to adjudicate between competing policy preferences, to how to define good evidence and good government. A lack of attention to these wider questions helps explain why – at least from some perspectives – policy transfer seems to fail.

Paul Cairney Auckland Policy Transfer 12.10.18

 

 


Filed under Evidence Based Policymaking (EBPM), Policy learning and transfer

The Politics of Evidence-Based Policymaking: ANZSOG talks

This post introduces a series of related talks on ‘the politics of evidence-based policymaking’ (EBPM) that I’m giving as part of a larger series of talks during this ANZSOG-funded/organised trip.

The EBPM talks begin with a discussion of the same three points: what counts as evidence, why we must ignore most of it (and how), and the policy process in which policymakers use some of it. However, the framing of these points, and the ways in which we discuss the implications, varies markedly by audience. So, in this post, I provide a short discussion of the three points, then show how the audience matters (referring to the city as a shorthand for each talk).

The overall take-home points are highly practical, in the same way that critical thinking has many practical applications (in other words, I’m not offering a map, toolbox, or blueprint):

  • If you begin with (a) the question ‘why don’t policymakers use my evidence?’ I like to think you will end with (b) the question ‘why did I ever think they would?’.
  • If you begin by taking the latter as (a) a criticism of politics and policymakers, I hope you will end by taking it as (b) a statement of the inevitability of the trade-offs that must accompany political choice.
  • We may address these issues by improving the supply and use of evidence. However, it is more important to maintain the legitimacy of the politicians and political systems in which policymakers choose to ignore evidence. Technocracy is no substitute for democracy.

3 ways to describe the use of evidence in policymaking

  1. Discussions of the use of evidence in policy often begin as a valence issue: who wouldn’t want to use good evidence when making policy?

However, it only remains a valence issue when we refuse to define evidence and justify what counts as good evidence. After that, you soon see the political choices emerge. A reference to evidence is often a shorthand for scientific research evidence, and good often refers to specific research methods (such as randomised control trials). Or, you find people arguing very strongly in the almost-opposite direction, criticising this shorthand as exclusionary and questioning the ability of scientists to justify claims to superior knowledge. Somewhere in the middle, we find that a focus on evidence is a good way to think about the many forms of information or knowledge on which we might make decisions, including: a wider range of research methods and analyses, knowledge from experience, and data relating to the local context with which policy would interact.

So, what begins as a valence issue becomes a gateway to many discussions about how to understand profound political choices regarding: how we make knowledge claims, how to ‘co-produce’ knowledge via dialogue among many groups, and the relationship between choices about evidence and governance.

  2. It is impossible to pay attention to all policy relevant evidence.

There is far more information about the world than we are able to process. A focus on evidence gaps often gives way to the recognition that we need to find effective ways to ignore most evidence.

There are many ways to describe how individuals combine cognition and emotion to limit their attention enough to make choices, and policy studies (to all intents and purposes) describe equivalent processes – labelled, for example, as ‘institutions’ or rules – in organisations and systems.

One shortcut between information and choice is to set aims and priorities; to focus evidence gathering on a small number of problems or one way to define a problem, and identify the most reliable or trustworthy sources of evidence (often via evidence ‘synthesis’). Another is to make decisions quickly by relying on emotion, gut instinct, habit, and existing knowledge or familiarity with evidence.

Either way, agenda setting and problem definition are political processes that address uncertainty and ambiguity. We gather evidence to reduce uncertainty, but first we must reduce ambiguity by exercising power to define the problem we seek to solve.

  3. It is impossible to control the policy process in which people use evidence.

Policy textbooks (well, my textbook at least!) provide a contrast between:

  • The model of a ‘policy cycle’ that sums up straightforward policymaking, through a series of stages, over which policymakers have clear control. At each stage, you know where evidence fits in: to help define the problem, generate solutions, and evaluate the results to set the agenda for the next cycle.
  • A more complex ‘policy process’, or policymaking environment, of which policymakers have limited knowledge and even less control. In this environment, it is difficult to know with whom to engage, the rules of engagement, or the likely impact of evidence.

Overall, policy theories have much to offer people with an interest in evidence-use in policy, but primarily as a way to (a) manage expectations and (b) produce more realistic strategies and less dispiriting conclusions. It is useful to frame our aim as analysing the role of evidence within a policy process that (a) we don’t quite understand, rather than (b) the process we would like to exist.

The events themselves

Below, you will find a short discussion of the variations of audience and topic. I’ll update and reflect on this discussion (in a revised version of this post) after taking part in the events.

Social science and policy studies: knowledge claims, bounded rationality, and policy theory

For Auckland and Wellington A, I’m aiming for an audience containing a high proportion of people with a background in social science and policy studies. I describe the discussion as ‘meta’ because I am talking about how I talk about EBPM to other audiences, then inviting discussion on key parts of that talk, such as how to conceptualise the policy process and present conceptual insights to people who have no intention of deep dives into policy theory.

I often use the phrase ‘I’ve read it, so you don’t have to’ partly as a joke, but also to stress the importance of disciplinary synthesis when we engage in interdisciplinary (and inter-professional) discussion. If so, it is important to discuss how to produce such ‘synthetic’ accounts.

I tend to describe key components of a policymaking environment quickly: many policy makers and influencers spread across many levels and types of government, institutions, networks, socioeconomic factors and events, and ideas. However, each of these terms represents a shorthand to describe a large and diverse literature. For example, I can describe an ‘institution’ in a few sentences, but the study of institutions contains a variety of approaches.

Background post: I know my audience, but does my other audience know I know my audience?

Academic-practitioner discussions: improving the use of research evidence in policy

For Wellington B and Melbourne, the audience is an academic-practitioner mix. We discuss ways in which we can encourage the greater use of research evidence in policy, perhaps via closer collaboration between suppliers and users.

Discussions with scientists: why do policymakers ignore my evidence?

Sydney UNSW focuses more on researchers in scientific fields (often not in social science). I frame the question in a way that often seems central to scientific researchers’ interests: why do policymakers seem to ignore my evidence, and what can I do about it?

Then, I tend to push back on the idea that the fault lies with politics and policymakers, to encourage researchers to think more about the policy process and how to engage effectively in it. If I’m trying to be annoying, I’ll suggest to a scientific audience that they see themselves as ‘rational’ and politicians as ‘irrational’. However, the more substantive discussion involves comparing (a) ‘how to make an impact’ advice drawn from the personal accounts of experienced individuals, giving advice to individuals, and (b) the sort of advice you might draw from policy theories which focus more on systems.

Background post: What can you do when policymakers ignore your evidence?

Early career researchers: the need to build ‘impact’ into career development

Canberra UNSW is more focused on early career researchers. I think this is the most difficult talk because I don’t rely on the same joke about my role: to turn up at the end of research projects to explain why they failed to have a non-academic impact.  Instead, my aim is to encourage intelligent discussion about situating the ‘how to’ advice for individual researchers into a wider discussion of policymaking systems.

Similarly, Brisbane A and B are about how to engage with practitioners, and communicate well to non-academic audiences, when most of your work and training is about something else entirely (such as learning about research methods and how to engage with the technical language of research).

Background posts:

What can you do when policymakers ignore your evidence? Tips from the ‘how to’ literature from the science community

What can you do when policymakers ignore your evidence? Encourage ‘knowledge management for policy’

See also:

  1. A similar talk at LSHTM (powerpoint and audio)
  2. European Health Forum Gastein 2018 ‘Policy in Evidence’ (from 6 minutes)

https://webcasting.streamdis.eu/Mediasite/Play/8143157d976146b4afd297897c68be5e1d?catalog=62e4886848394f339ff678a494afd77f21&playFrom=126439&autoStart=true

 

See also:

Evidence-based policymaking and the new policy sciences

 


Filed under Evidence Based Policymaking (EBPM)

Why don’t policymakers listen to your evidence?

Since 2016, my most common academic presentation to interdisciplinary scientist/ researcher audiences is a variant of the question, ‘why don’t policymakers listen to your evidence?’

I tend to provide three main answers.

1. Many policymakers have many different ideas about what counts as good evidence

Few policymakers know or care about the criteria developed by some scientists to describe a hierarchy of scientific evidence. For some scientists, at the top of this hierarchy are the randomised control trial (RCT) and the systematic review of RCTs, with expertise much further down the list, and practitioner experience and service user feedback near the bottom.

Yet, most policymakers – and many academics – prefer a wider range of sources of information, combining their own experience with information ranging from peer reviewed scientific evidence and the ‘grey’ literature, to public opinion and feedback from consultation.

While it may be possible to persuade some central government departments or agencies to privilege scientific evidence, they also pursue other key principles, such as to foster consensus driven policymaking or a shift from centralist to localist practices.

Consequently, they often only recommend interventions rather than impose one uniform evidence-based position. If local actors favour a different policy solution, we may find that the same type of evidence has more or less effect in different parts of government.

2. Policymakers have to ignore almost all evidence and almost every decision taken in their name

Many scientists articulate the idea that policymakers and scientists should cooperate to use the best evidence to determine ‘what works’ in policy (in forums such as INGSA, European Commission, OECD). Their language is often reminiscent of 1950s discussions of the pursuit of ‘comprehensive rationality’ in policymaking.

The key difference is that EBPM is often described as an ideal by scientists, to be compared with the more disappointing processes they find when they engage in politics. In contrast, ‘comprehensive rationality’ is an ideal-type, used to describe what cannot happen, and the practical implications of that impossibility.

The ideal-type involves a core group of elected policymakers at the ‘top’, identifying their values or the problems they seek to solve, and translating their policies into action to maximise benefits to society, aided by neutral organisations gathering all the facts necessary to produce policy solutions. Yet, in practice, they are unable to: separate values from facts in any meaningful way; rank policy aims in a logical and consistent manner; gather information comprehensively; or possess the cognitive ability to process it.

Instead, Simon famously described policymakers addressing ‘bounded rationality’ by using ‘rules of thumb’ to limit their analysis and produce ‘good enough’ decisions. More recently, punctuated equilibrium theory uses bounded rationality to show that policymakers can only pay attention to a tiny proportion of their responsibilities, which limits their control of the many decisions made in their name.

More recent discussions focus on the ‘rational’ short cuts that policymakers use to identify good enough sources of information, combined with the ‘irrational’ ways in which they use their beliefs, emotions, habits, and familiarity with issues to identify policy problems and solutions (see this post on the meaning of ‘irrational’). Or, they explore how individuals communicate their narrow expertise within a system of which they have almost no knowledge. In each case, ‘most members of the system are not paying attention to most issues most of the time’.

This scarcity of attention helps explain, for example, why policymakers ignore most issues in the absence of a focusing event, why policymaking organisations routinely conduct searches for information that miss key elements, and why organisations fail to respond proportionately to events or changing circumstances.

In that context, attempts to describe a policy agenda focusing merely on ‘what works’ are based on misleading expectations. Rather, we can describe key parts of the policymaking environment – such as institutions, policy communities/ networks, or paradigms – as a reflection of the ways in which policymakers deal with their bounded rationality and lack of control of the policy process.

3. Policymakers do not control the policy process (in the way that a policy cycle suggests)

Scientists often appear to be drawn to the idea of a linear and orderly policy cycle with discrete stages – such as agenda setting, policy formulation, legitimation, implementation, evaluation, policy maintenance/ succession/ termination – because it offers a simple and appealing model which gives clear advice on how to engage.

Indeed, the stages approach began partly as a proposal to make the policy process more scientific and based on systematic policy analysis. It offers an idea of how policy should be made: elected policymakers in central government, aided by expert policy analysts, make and legitimise choices; skilful public servants carry them out; and, policy analysts assess the results with the aid of scientific evidence.

Yet, few policy theories describe this cycle as useful, while most – including the advocacy coalition framework and the multiple streams approach – are based on a rejection of the explanatory value of orderly stages.

Policy theories also suggest that the cycle provides misleading practical advice: you will generally not find an orderly process with a clearly defined debate on problem definition, a single moment of authoritative choice, and a clear chance to use scientific evidence to evaluate policy before deciding whether or not to continue. Instead, the cycle exists as a story for policymakers to tell about their work, partly because it is consistent with the idea of elected policymakers being in charge and accountable.

Some scholars also question the appropriateness of a stages ideal, since it suggests that there should be a core group of policymakers making policy from the ‘top down’ and obliging others to carry out their aims, which does not leave room for, for example, the diffusion of power in multi-level systems, or the use of ‘localism’ to tailor policy to local needs and desires.

Now go to:

What can you do when policymakers ignore your evidence?

Further Reading

The politics of evidence-based policymaking

The politics of evidence-based policymaking: maximising the use of evidence in policy

Images of the policy process

How to communicate effectively with policymakers

Special issue in Policy and Politics called ‘Practical lessons from policy theories’, which includes how to be a ‘policy entrepreneur’.

See also the 750 Words series to explore the implications for policy analysis


Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, Public health, public policy

What do we need to know about the politics of evidence-based policymaking?

Today, I’m helping to deliver a new course – Engaging Policymakers Training Programme – piloted by the Alliance for Useful Evidence and UCL. Right now, it’s for UCL staff (and mostly early career researchers). My bit is about how we can better understand the policy process so that we can engage in it more effectively. I have reproduced the brief guide below (for my two 2-hour sessions as part of a wider block). If anyone else is delivering something similar, please let me know. We could compare notes.

This module will be delivered in two parts to combine theory and practice

Part 1: What do we need to know about the politics of evidence-based policymaking?

Policy theories provide a wealth of knowledge about the role of evidence in policymaking systems. They prompt us to understand and respond to two key dynamics:

  1. Policymaker psychology. Policymakers combine rational and irrational shortcuts to gather information and make good enough decisions quickly. To appeal to rational shortcuts and minimise cognitive load, we reduce uncertainty by providing syntheses of the available evidence. To appeal to irrational shortcuts and engage emotional interest, we reduce ambiguity by telling stories or framing problems in specific ways.
  2. Complex policymaking environments. These processes take place in the context of a policy environment out of the control of individual policymakers. Environments consist of: many actors in many levels and types of government; engaging with institutions and networks, each with their own informal and formal rules; responding to socioeconomic conditions and events; and, learning how to engage with dominant ideas or beliefs about the nature of the policy problem. In other words, there is no policy cycle or obvious stage in which to get involved.

In this seminar, we discuss how to respond effectively to these dynamics. We focus on unresolved issues:

  1. Effective engagement with policymakers requires storytelling skills, but do we possess them?
  2. It requires a combination of evidence and emotional appeals, but is it ethical to do more than describe the evidence?
  3. The absence of a policy cycle, and presence of an ever-shifting context, requires us to engage for the long term, to form alliances, learn the rules, and build up trust in the messenger. However, do we have the time, and how should we invest it?

The format will be relatively informal. Cairney will begin by making some introductory points (not a powerpoint driven lecture) and encourage participants to relate the three questions to their research and engagement experience.

Gateway to further reading:

  • Paul Cairney and Richard Kwiatkowski (2017) ‘How to communicate effectively with policymakers: combine insights from psychology and policy studies’, Palgrave Communications
  • Paul Cairney and Kathryn Oliver (2017) ‘Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy?’ Health Research Policy and Systems (HARPS), DOI: 10.1186/s12961-017-0192-x
  • Paul Cairney, Kathryn Oliver, and Adam Wellstead (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review, Early View (forthcoming) DOI:10.1111/puar.12555 PDF

Part 2: How can we respond pragmatically and effectively to the politics of EBPM?

In this seminar, we move from abstract theory and general advice to concrete examples and specific strategies. Each participant should come prepared to speak about their research and present a theoretically informed policy analysis in 3 minutes (without the aid of powerpoint). Their analysis should address:

  1. What policy problem does my research highlight?
  2. What are the most technically and politically feasible solutions?
  3. How should I engage in the policy process to highlight these problems and solutions?

After each presentation, each participant should be prepared to ask questions about the problem raised and the strategy to engage. Finally, to encourage learning, we will reflect on the memorability and impact of presentations.

Powerpoint: Paul Cairney A4UE UCL 2017


Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy

How to write theory-driven policy analysis

Writing theory driven policy analysis 10.11.17

(or right click to download this lecture which accompanies my MPP)

Here is a guide to writing theory-driven policy analysis. Your aim is to identify a policy problem and solution, know your audience, and account for the complexity of policymaking.

At first, it may seem like a daunting task to put together policy analysis and policy theory. On its own, policy analysis seems difficult but relatively straightforward: use evidence to identify and measure a policy problem, compare the merits of one or more solutions, and make a recommendation on the steps to take us from policy to action.

However, policy process research tells us that people will engage emotionally with that evidence, and that policymakers operate in a complex system of which they have very limited knowledge and control.

So, how can we produce a policy analysis paper to which people will pay attention, and respond positively and effectively, under such circumstances? I focus on developing the critical analysis that will help you produce effective and feasible analysis. To do so, I show how policy analysis forms part of a collection of exercises to foster analysis informed by theory and reflection.

Aims of this document:

  1. Describe the context. There are two fields of study – theory and analysis – which do not always speak to each other. Theory can inform analysis, but it is not always clear how. I show the payoff to theory-driven policy analysis and the difference between it and regular analysis. Note the two key factors that policy analysis should address: your audience will engage emotionally with your analysis, and the feasibility of your solutions depends on the complexity of the policy environment.
  2. Describe how the coursework helps you combine policy theory and policy analysis. Policy analysis is one of four tasks. There is a reflection, to let you ‘show your work’: to describe how your knowledge of policy theory guides your description of a problem and feasible solutions. The essay allows you to expand on theory, to describe how and why policy changes (and therefore what a realistic policy analysis would look like). The blogs encourage new communication skills. In one, you explore how you would expect a policymaker or influencer to sell the recommendations in your policy analysis. In another, you explain complex concepts to a non-academic audience.

Background notes.

I have written this document as if part of a book to be called Teaching Public Policy and co-authored with Dr Emily St Denny.

For that audience, I have two aims: (1) to persuade policy scholars-as-teachers to adopt this kind of coursework in their curriculum; and, (2) to show students how to complete it effectively.

If you prefer shorter advice, see Writing a policy paper and blog post and Writing an essay on politics, policymaking, and policy change.

If you are interested in more background reading, see: The New Policy Sciences (by Paul Cairney and Chris Weible) which describes the need to combine policy theory-driven research with policy analysis; and, Practical Lessons from Policy Theories which describes eight attempts by scholars to translate policy theory into lessons that can be used for policy analysis.

The theories make more sense if you have read the corresponding 1000 Words posts (based on Cairney, 2012). Some of the forthcoming text will look familiar if you read my blog because I am consolidating several individual posts into an overall discussion.

I’m not quite there yet (the chapter is a first draft, a bit scrappy at times, and longer than a chapter should be), so all comments welcome (in the comments bit).

Writing theory driven policy analysis 10.11.17

Cairney 2017 image of the policy process


Filed under public policy

A 5-step strategy to make evidence count

Let’s imagine a heroic researcher, producing the best evidence and fearlessly ‘speaking truth to power’. Then, let’s place this person in four scenarios, each of which combines a discussion of evidence, policy, and politics in different ways.

  1. Imagine your hero presents to HM Treasury an evidence-based report concluding that a unitary UK state would be far more efficient than a union state guaranteeing Scottish devolution. The evidence is top quality and the reasoning is sound, but the research question is ridiculous. The result of political deliberation and electoral choice suggests that your hero is asking a research question that does not deserve to be funded in the current political climate. Your hero is a clown.
  2. Imagine your hero presents to the Department of Health a report based on the systematic review of multiple randomised control trials. It recommends that you roll out an almost-identical early years or public health intervention across the whole country. We need high ‘fidelity’ to the model to ensure the correct ‘dosage’ and to measure its effect scientifically. The evidence is of the highest quality, but the research question is not quite right. The government has decided to devolve this responsibility to local public bodies and/ or encourage the co-production of public service design by local public bodies, communities, and service users. So, to focus narrowly on fidelity would be to ignore political choices (perhaps backed by different evidence) about how best to govern. If you don’t know the politics involved, you will ask the wrong questions or provide evidence with unclear relevance. Your hero is either a fool, naïve to the dynamics of governance, or a villain willing to ignore governance principles.        
  3. Imagine two fundamentally different – but equally heroic – professions with their own ideas about evidence. One favours a hierarchy of evidence in which RCTs and their systematic review are at the top, and service user and practitioner feedback is near the bottom. The other rejects this hierarchy completely, identifying the unique, complex relationship between practitioner and service user which requires high discretion to make choices in situations that will differ each time. Trying to resolve a debate between them with reference to ‘the evidence’ makes no sense. This is about a conflict between two heroes with opposing beliefs and preferences that can only be resolved through compromise or political choice. This is, oh I don’t know, Batman v Superman, saved by Wonder Woman.
  4. Imagine you want the evidence on hydraulic fracturing for shale oil and gas. We know that ‘the evidence’ follows the question: how much can we extract? How much revenue will it produce? Is it safe, from an engineering point of view? Is it safe, from a public health point of view? What will be its impact on climate change? What proportion of the public supports it? What proportion of the electorate supports it? Who will win and lose from the decision? It would be naïve to think that there is some kind of neutral way to produce an evidence-based analysis of such issues. The commissioning and integration of evidence has to be political. To pretend otherwise is a political strategy. Your hero may be another person’s villain.

Now, let’s use these scenarios to produce a 5-step way to ‘make evidence count’.

Step 1. Respect the positive role of politics

A narrow focus on making the supply of evidence count, via ‘evidence-based policymaking’, will always be dispiriting because it ignores politics or treats political choice as an inconvenience. If we:

  • begin with a focus on why we need political systems to make authoritative choices between conflicting preferences, and take governance principles seriously, we can
  • identify the demand for evidence in that context, then be more strategic and pragmatic about making evidence count, and
  • be less dispirited about the outcome.

In other words, think about the positive and necessary role of democratic politics before bemoaning post-truth politics and policy-based-evidence-making.

Step 2. Reject simple models of evidence-based policymaking

Policy is not made in a cycle containing a linear series of separate stages, and we won’t ‘make evidence count’ by using that image to inform our practices.

Image of the policy cycle

You might not want to give up the cycle image because it presents a simple account of how you should make policy. It suggests that we elect policymakers who then: identify their aims, identify policies to achieve those aims, select a policy measure, ensure that the selection is legitimised by the population or its legislature, identify the necessary resources, implement, and then evaluate. Or, policymakers aided by expert policy analysts make and legitimise choices, skilful public servants carry them out, and policy analysts assess the results using evidence.

One compromise is to keep the cycle then show how messy it is in practice:

However, there comes a point when there is too much mess, and the image no longer helps you explain (a) to the public what you are doing, or (b) to providers of evidence how they should engage in political systems. By this point, simple messages from more complicated policy theories may be more useful.

Or, we may no longer want a cycle to symbolise a single source of policymaking authority. In a multi-level system, with many ‘centres’ possessing their own sources of legitimate authority, a single and simple policy cycle seems too artificial to be useful.

Step 3. Tell a simple story about your evidence

People are ‘cognitive misers’ seeking ‘rational’ and ‘irrational’ shortcuts to gather information for action, so you won’t get far if you bombard them with too much evidence. Policymakers already have too much evidence and they seek ways to reduce their cognitive load, relying on: (a) trusted sources of concise evidence relevant to their aims, and (b) their own experience, gut instinct, beliefs, and emotions.

The implication of both shortcuts is that we need to tell simple and persuasive stories about the substance and implications of the evidence we present. To say that ‘the evidence does not speak for itself’ may seem trite, but I’ve met too many people who assume naively that it will somehow ‘win the day’. In contrast, civil servants know that the evidence-informed advice they give to ministers needs to relate to the story that government ministers tell to the public.


Step 4. Tailor your story to many audiences

In a complex or multi-level environment, one story to one audience (such as a minister) is not enough. If there are many key sources of policymaking authority – including public bodies with high autonomy, organisations and practitioners with the discretion to deliver services, and service users involved in designing services – there are many stories being told about what we should be doing and why. We may convince one audience and alienate (or fail to inspire) another with the same story.

Step 5. Clarify and address key dilemmas with political choice, not evidence

Let me give you one example of the dilemmas that must arise when you combine evidence and politics to produce policy: how do you produce a model of ‘evidence based best practice’ which combines evidence and governance principles in a consistent way? Here are 3 ideal-type models which answer the question in very different ways.

Table 1 Three ideal types EBBP

The table helps us think through the tensions between models, built on very different principles of good evidence and governance.

In practice, you may want to combine different elements, perhaps while arguing that the loss of consistency is lower than the gain from flexibility. Or, the dynamics of political systems limit such choice or prompt ad hoc and inconsistent choices.

I built a lot of this analysis on the experiences of the Scottish Government, which juggles all three models, including a key focus on improvement method in its Early Years Collaborative.

However, Kathryn Oliver and I show that the UK government faces the same basic dilemma and addresses it in similar ways.

The example freshest in my mind is Sure Start. Its rationale was built on RCT evidence and systematic review. However, its roll-out was built more on local flexibility and service design than insistence on fidelity to a model. More recently, the Troubled Families programme initially set the policy agenda and criteria for inclusion, but increasingly invites local public bodies to select the most appropriate interventions, aided by the Early Intervention Foundation which reviews the evidence but does not insist on one-best-way. Emily St Denny and I explore these issues further in our forthcoming book on prevention policy, an exemplar case study of a field in which it is difficult to know how to ‘make evidence count’.

If you prefer a 3-step take home message:

  1. I think we use phrases like ‘impact’ and ‘make evidence count’ to reflect a vague and general worry about a decline in respect for evidence and experts. Certainly, when I go to large conferences of scientists, they usually tell a story about ‘post-truth’ politics.
  2. Usually, these stories do not acknowledge the difference between two different explanations for an evidence-policy gap: (a) pathological policymaking and corrupt politicians, versus (b) complex policymaking and politicians having to make choices despite uncertainty.
  3. To produce evidence with ‘impact’, and know how to ‘make evidence count’, we need to understand the policy process and the demand for evidence within it.

*Background. This is a post for my talk at the Government Economic Service and Government Social Research Service Annual Training Conference (15th September 2017). This year’s theme is ‘Impact and Future-Proofing: Making Evidence Count’. My brief is to discuss evidence use in the Scottish Government, but it faces the same basic question as the UK Government: how do you combine principles of evidence quality and governance principles? In other words, if you were in a position to design an (a) evidence-gathering system and (b) a political system, you’d soon find major points of tension between them. Resolving those tensions involves political choice, not more evidence. Of course, you are not in a position to design both systems, so the more complicated question is: how do you satisfy principles of evidence and governance in a complex policy process, often driven by policymaker psychology, over which you have little control?  Here are 7 different ‘answers’.

Powerpoint Paul Cairney @ GES GSRS 2017


Filed under Evidence Based Policymaking (EBPM), public policy, Scottish politics, UK politics and policy

Evidence based policymaking: 7 key themes

7 themes of EBPM

I looked back at my blog posts on the politics of ‘evidence based policymaking’ and found that I wrote quite a lot (particularly from 2016). Here is a list based on 7 key themes.

1. Use psychological insights to influence the use of evidence

My most-current concern. The same basic theme is that (a) people (including policymakers) are ‘cognitive misers’ seeking ‘rational’ and ‘irrational’ shortcuts to gather information for action, so you won’t get far if you (b) bombard them with information, or (c) call them idiots.

Three ways to communicate more effectively with policymakers (shows how to use psychological insights to promote evidence in policymaking)

Using psychological insights in politics: can we do it without calling our opponents mental, hysterical, or stupid? (yes)

The Psychology of Evidence Based Policymaking: Who Will Speak For the Evidence if it Doesn’t Speak for Itself? (older paper, linking studies of psychology with studies of EBPM)

Older posts on the same theme:

Is there any hope for evidence in emotional debates and chaotic government? (yes)

We are in danger of repeating the same mistakes if we bemoan low attention to ‘facts’

These complaints about ignoring science seem biased and naïve – and too easy to dismiss

How can we close the ‘cultural’ gap between the policymakers and scientists who ‘just don’t get it’?

2. How to use policy process insights to influence the use of evidence

I try to simplify key insights about the policy process to show how to use evidence in it. One key message is to give up on the idea of an orderly policy process described by the policy cycle model. What should you do if a far more complicated process exists?

Why don’t policymakers listen to your evidence?

The Politics of Evidence Based Policymaking: 3 messages (3 ways to say that you should engage with the policy process that exists, not a mythical process that will never exist)

Three habits of successful policy entrepreneurs (shows how entrepreneurs are influential in politics)

Why doesn’t evidence win the day in policy and policymaking? and What does it take to turn scientific evidence into policy? Lessons for illegal drugs from tobacco and There is no blueprint for evidence-based policy, so what do you do? (3 posts describing the conditions that must be met for evidence to ‘win the day’)

Writing for Impact: what you need to know, and 5 ways to know it (explains how our knowledge of the policy process helps communicate to policymakers)

How can political actors take into account the limitations of evidence-based policy-making? 5 key points (presentation to European Parliament-European University Institute ‘Policy Roundtable’ 2016)

Evidence Based Policy Making: 5 things you need to know and do (presentation to Open Society Foundations New York 2016)

What 10 questions should we put to evidence for policy experts? (part of a series of videos produced by the European Commission)

3. How to combine principles on ‘good evidence’, ‘good governance’, and ‘good practice’

My argument here is that EBPM is about deciding at the same time what is: (1) good evidence, and (2) a good way to make and deliver policy. If you just focus on one at a time – or consider one while ignoring the other – you cannot produce a defendable way to promote evidence-informed policy delivery.

Kathryn Oliver and I have just published an article on the relationship between evidence and policy (summary of and link to our article on this very topic)

We all want ‘evidence based policy making’ but how do we do it? (presentation to the Scottish Government in 2016)

The ‘Scottish Approach to Policy Making’: Implications for Public Service Delivery

The politics of evidence-based best practice: 4 messages

The politics of implementing evidence-based policies

Policy Concepts in 1000 Words: the intersection between evidence and policy transfer

Key issues in evidence-based policymaking: comparability, control, and centralisation

The politics of evidence and randomised control trials: the symbolic importance of family nurse partnerships

What Works (in a complex policymaking system)?

How Far Should You Go to Make Sure a Policy is Delivered?

4. Face up to your need to make profound choices to pursue EBPM

These posts have arisen largely from my attendance at academic-practitioner conferences on evidence and policy. Many participants tell the same story about the primacy of scientific evidence challenged by post-truth politics and emotional policymakers. I don’t find this argument convincing or useful. So, in many posts, I challenge these participants to think about more pragmatic ways to sum up and do something effective about their predicament.

Political science improves our understanding of evidence-based policymaking, but does it produce better advice? (shows how our knowledge of policymaking clarifies dilemmas about engagement)

The role of ‘standards for evidence’ in ‘evidence informed policymaking’ (argues that a strict adherence to scientific principles may help you become a good researcher but not an effective policy influencer)

How far should you go to secure academic ‘impact’ in policymaking? From ‘honest brokers’ to ‘research purists’ and Machiavellian manipulators (you have to make profound ethical and strategic choices when seeking to maximise the use of evidence in policy)

Principles of science advice to government: key problems and feasible solutions (calling yourself an ‘honest broker’ while complaining about ‘post-truth politics’ is a cop out)

What sciences count in government science advice? (political science, obvs)

I know my audience, but does my other audience know I know my audience? (compares the often profoundly different ways in which scientists and political scientists understand and evaluate EBPM – this matters because, for example, we rarely discuss power in scientist-led debates)

Is Evidence-Based Policymaking the same as good policymaking? (no)

Idealism versus pragmatism in politics and policymaking: … evidence-based policymaking (how to decide between idealism and pragmatism when engaging in politics)

Realistic ‘realist’ reviews: why do you need them and what might they look like? (if you privilege impact you need to build policy relevance into systematic reviews)

‘Co-producing’ comparative policy research: how far should we go to secure policy impact? (describes ways to build evidence advocacy into research design)

The Politics of Evidence (review of – and link to – Justin Parkhurst’s book on the ‘good governance’ of evidence production and use)


5. For students and researchers wanting to read/ hear more

These posts are relatively theory-heavy, linking quite clearly to the academic study of public policy. Hopefully they provide a simple way into the policy literature which can, at times, be dense and jargony.

‘Evidence-based Policymaking’ and the Study of Public Policy

Policy Concepts in 1000 Words: ‘Evidence Based Policymaking’

Practical Lessons from Policy Theories (series of posts on the policy process, offering potential lessons for advocates of evidence use in policy)

Writing a policy paper and blog post 

12 things to know about studying public policy

Can you want evidence based policymaking if you don’t really know what it is? (defines each word in EBPM)

Can you separate the facts from your beliefs when making policy? (no, very no)

Policy Concepts in 1000 Words: Success and Failure (Evaluation) (using evidence to evaluate policy is inevitably political)

Policy Concepts in 1000 Words: Policy Transfer and Learning (so is learning from the experience of others)

Four obstacles to evidence based policymaking (EBPM)

What is ‘Complex Government’ and what can we do about it? (read about it)

How Can Policy Theory Have an Impact on Policy Making? (on translating policy theories into useful advice)

The role of evidence in UK policymaking after Brexit (argues that many challenges/ opportunities for evidence advocates will not change after Brexit)

Why is there more tobacco control policy than alcohol control policy in the UK? (it’s not just because there is more evidence of harm)

Evidence Based Policy Making: If You Want to Inject More Science into Policymaking You Need to Know the Science of Policymaking and The politics of evidence-based policymaking: focus on ambiguity as much as uncertainty and Revisiting the main ‘barriers’ between evidence and policy: focus on ambiguity, not uncertainty and The barriers to evidence based policymaking in environmental policy (early versions of what became the chapters of the book)

6. Using storytelling to promote evidence use

This is increasingly a big interest for me. Storytelling is key to the effective conduct and communication of scientific research. Let’s not pretend we’re objective people just stating the facts (which is the least convincing story of all). So far, so good, except to say that the evidence on the impact of stories (for policy change advocacy) is limited. The major complication is that (a) the story you want to tell and have people hear interacts with (b) the story that your audience members tell themselves.

Combine Good Evidence and Emotional Stories to Change the World

Storytelling for Policy Change: promise and problems

Is politics and policymaking about sharing evidence and facts or telling good stories? Two very silly examples from #SP16

7. The major difficulties in using evidence for policy to reduce inequalities

These posts show how policymakers think about how to combine (a) often-patchy evidence with (b) their beliefs and (c) an electoral imperative to produce policies on inequalities, prevention, and early intervention. I suggest that it’s better to understand and engage with this process than complain about policy-based-evidence from the side-lines. If you do the latter, policymakers will ignore you.

The UK government’s imaginative use of evidence to make policy 

What do you do when 20% of the population causes 80% of its problems? Possibly nothing.

The theory and practice of evidence-based policy transfer: can we learn how to reduce territorial inequalities?

We need better descriptions than ‘evidence-based policy’ and ‘policy-based evidence’: the case of UK government ‘troubled families’ policy

How can you tell the difference between policy-based-evidence and evidence-based-policymaking?

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

Key issues in evidence-based policymaking: comparability, control, and centralisation

The politics of evidence and randomised control trials: the symbolic importance of family nurse partnerships

Two myths about the politics of inequality in Scotland

Social investment, prevention and early intervention: a ‘window of opportunity’ for new ideas?

A ‘decisive shift to prevention’: how do we turn an idea into evidence based policy?

Can the Scottish Government pursue ‘prevention policy’ without independence?

Note: these issues are discussed in similar ways in many countries.

All of this discussion can be found under the EBPM category: https://paulcairney.wordpress.com/category/evidence-based-policymaking-ebpm/

See also the special issue on maximizing the use of evidence in policy

Palgrave C special


Filed under agenda setting, Evidence Based Policymaking (EBPM), Prevention policy, public policy, Storytelling, UK politics and policy

Policy in 500 Words: The Policy Process

We talk a lot about ‘the policy process’ without really saying what it is. If you are new to policy studies, maybe you think that you’ll learn what it is eventually if you read enough material. This would be a mistake! Instead, when you seek a definition of the policy process, you’ll find two common responses.

  1. Many will seek to define policy or public policy instead of ‘the policy process’.
  2. Some will describe the policy process as a policy cycle with stages.

Both responses seem inadequate: one avoids giving an answer, and another gives the wrong answer!

However, we can combine elements of each approach to give you just enough of a sense of ‘the policy process’ to continue reading:

  1. The beauty of the ‘what is policy?’ question is that there is no definitive answer. I give you a working definition to help raise further questions. Look at the questions we need to ask if we begin with the definition, ‘the sum total of government action, from signals of intent to the final outcomes’.
  2. The beauty of the policy cycle approach is that it provides a simple way to imagine policy ‘dynamics’, or events and choices producing a sequence of other events and choices. Look at the stages to identify many different tasks within one ‘process’, and to get the sense that policymaking is continuous and often ‘its own cause’.

There are more complicated but better ways of describing policymaking dynamics.

This picture is the ‘policy process’ equivalent of my definition of public policy. It captures the main elements of the policy process described (in different ways) by most policy theories. It is there to give you enough of an answer to help you ask the right questions.

Cairney 2017 image of the policy process

In the middle is ‘policy choice’. At the heart of most policy theory is ‘bounded rationality’, which describes (a) the cognitive limits of people, and (b) how they overcome those limits to make decisions. They use ‘rational’ and ‘irrational’ shortcuts to action.

Surrounding choice is what we’ll call the ‘policy environment’, containing: policymakers in many levels and types of government, the ideas or beliefs they share, the rules they follow, the networks they form with influencers, and the ‘structural’ or socioeconomic context in which they operate.

This picture is only the beginning of analysis, raising questions that will make more sense as you read on, including: should policymaker choice be at the centre of this picture? Why are there arrows (describing the order of choice) in the cycle but not in my picture?

Take home message for students: don’t describe ‘the policy process’ without giving the reader some sense of its meaning. Its definition overlaps with ‘policy’ considerably, but the ‘process’ emphasises modes and dynamics of policymaking, while ‘policy’ emphasises outputs. Then, think about how each policy model or theory tries, in different ways, to capture the key elements of the process. A cycle focuses on ‘stages’ but most theories in this series focus on ‘environments’.


Filed under 500 words, public policy

Policy concepts in 1000 or 500 words

Imagine that your audience is a group of scientists who have read everything and are only interested in something new. You need a new theory, method, study, or set of results to get their attention.

Let’s say that audience is a few hundred people, or half a dozen in each subfield. It would be nice to impress them, perhaps with some lovely jargon and in-jokes, but almost no-one else will know or care what you are talking about.

Imagine that your audience is a group of budding scientists, researchers, students, practitioners, or knowledge-aware citizens who are new to the field and only interested in what they can pick up and use (without devoting their life to each subfield). Novelty is no longer your friend. Instead, your best friends are communication, clarity, synthesis, and a constant reminder not to take your knowledge and frame of reference for granted.

Let’s say that audience is a few gazillion people. If you want to impress them, imagine that you are giving them one of the first – if not the first – ways of understanding your topic. Reduce the jargon. Explain your problem and why people should care about how you try to solve it. Clear and descriptive titles. No more in-jokes (just stick with the equivalent of ‘I went to the doctor because a strawberry was growing in my arse, and she gave me some cream for it’).

At least, that’s what I’ve been telling myself lately. As things stand, my most-read post of all time is destined to be on the policy cycle, and most people read it because it’s the first entry on a google search. Most readers of that post may never read anything else I’ve written (over a million words, if I cheat a bit with the calculation). They won’t care that there are a dozen better ways to understand the policy process. I have one shot to make it interesting, to encourage people to read more. The same goes for the half-dozen other concepts (including multiple streams, punctuated equilibrium theory, the Advocacy Coalition Framework) which I explain to students first because those posts now do well in google search (go on, give it a try!).

I also say this because I didn’t anticipate this outcome when I wrote those posts. Now, a few years on, I’m worried that they are not very good. They were summaries of chapters from Understanding Public Policy, rather than first-principles discussions, and lots of people have told me that UPP is a little bit complicated for the casual reader. So, when revising it, I hope to make it better, and by better I mean to appeal to a wider audience without dumping the insights. I have begun by trying to write 500-word posts as, I hope, improvements on the 1000-word versions. However, I am also open to advice on the originals. Which ones work, and which ones don’t? Where are the gaps in exposition? Where are the gaps in content?

This post is 500 words.

https://paulcairney.wordpress.com/1000-words/

https://paulcairney.wordpress.com/500-words/


Filed under 1000 words, 500 words, Uncategorized