
Policy Analysis in 750 Words: Political feasibility and policy success

Policy studies and policy analysis guidebooks identify the importance of feasible policy solutions:

  • Technical feasibility: will this solution work as intended if implemented?
  • Political feasibility: will it be acceptable to enough powerful people?

For example, Kingdon treats feasibility as one of three conditions for major policy change during a ‘window of opportunity’: (1) there is high attention to the policy problem, (2) a feasible solution already exists, and (3) key policymakers have the motive and opportunity to select it.

Guidebooks relate this requirement initially to your policymaker client: what solutions will they rule out, to the extent that they are not even worth researching as options (at least for the short term)?

Further, this assessment relates to types of policy ‘tool’ or ‘instrument’: one simple calculation is that ‘redistributive’ measures are harder to sell than ‘distributive’, while both may be less attractive than regulation (although complex problems likely require a mix of instruments).

These insights connect to Lindblom’s classic vision of:

  1. Incremental analysis. It is better to research in-depth a small number of feasible options than spread your resources too thinly to consider all possibilities.
  2. Strategic analysis. The feasibility of a solution relates strongly to current policy. The more radical a departure from the current negotiated position, the harder it will be to sell.

As many posts in the Policy Analysis in 750 words series describe, this advice is not entirely useful for actors who seek rapid and radical departures from the status quo. Lindblom’s response to such critics was to seek radical change via a series of non-radical steps (at least in political systems like the US), which (broadly speaking) represents one of two possible approaches.

While incrementalism is not as popular as it once was (as a description of, or prescription for, policymaking), it tapped into the enduring insight that policymaking systems produce huge amounts of minor change. Rapid and radical policy change is rare, and it is even rarer to be able to connect it to influential analysis and action (at least in the absence of a major event). This knowledge should not put people off trying, but rather help them understand the obstacles that they seek to overcome.

Relating feasible solutions and strategies to ‘policy success’

One way to incorporate this kind of advice is to consider how (especially elected) policymakers would describe their own policy success. The determination of success and failure is a highly contested and political process (not simply a technical exercise called ‘evaluation’), and policymakers may refer – often implicitly – to the following questions when seeking success:

  1. Political. Will this policy boost my government’s credibility and chances of re-election?
  2. Process. Will it be straightforward to legitimise and maintain support for this policy?
  3. Programmatic. Will it achieve its stated objectives and produce beneficial outcomes if implemented?

The benefit to analysts of asking themselves these questions is that they help to identify the potential solutions that are technically but not politically feasible (or vice versa).

The absence of clear technical feasibility does not necessarily rule out solutions with wider political benefits (for example, it can be beneficial to look like you are trying to do something good). Hence the popular phrase ‘good politics, bad policy’.

Nor does a politically unattractive option rule out a technically feasible solution (not all politicians flee the prospect of ‘good policy, bad politics’). However, it should prompt attention to hard choices about whose support to seek, how long to wait, or how hard to push, to seek policy change. You can see this kind of thinking as ‘entrepreneurial’ or ‘systems thinking’, depending on how much faith you have in agency in highly unequal political contexts.

Further reading

It is tempting to conclude that these obstacles to ‘good policy’ reflect the pathological nature of politics. However, if we want to make this argument, we should at least do it well:

1. You can find this kind of argument in fields such as public health and climate change studies, where researchers bemoan the gap between (a) their high-quality evidence on an urgent problem and (b) a disproportionately weak governmental response. To do it well, we need to separate analytically (or at least think about): (a) the motivation and energy of politicians (usually the source of most criticism of low ‘political will’), and (b) the policymaking systems that constrain even the most sincere and energetic policymakers. See the EBPM page for more.

2. Studies of Social Construction and Policy Design are useful to connect policymaking research with a normative agenda to address ‘degenerative’ policy design.


Filed under 750 word policy analysis

Policy Analysis in 750 Words: Changing things from the inside

How should policy actors seek radical changes to policy and policymaking?

This question prompts two types of answer:

1. Be pragmatic, and change things from the inside

Pragmatism is at the heart of most of the policy analysis texts in this series. They focus on the needs and beliefs of clients (usually policymakers). Policymakers are time-pressed, so keep your analysis short and relevant. See the world through their eyes. Focus on solutions that are politically as well as technically feasible. Propose non-radical steps, which may add up to radical change over the long-term.

This approach will seem familiar to students of research ‘impact’ strategies which emphasise relationship-building, being available to policymakers, and responding to the agendas of governments to maximise the size of your interested audience.

It will also ring bells for advocates of radical reforms in policy sectors such as (public) health and intersectoral initiatives such as gender mainstreaming:

  • Health in All Policies is a strategy to encourage radical changes to policy and policymaking to improve population health. Common advice includes to: identify to policymakers how HiAP fits into current policy agendas, seek win-win strategies with partners in other sectors, and go to great lengths to avoid the sense that you are interfering in their work (‘health imperialism’).
  • Gender mainstreaming is a strategy to consider gender in all aspects of policy and policymaking. An equivalent playbook involves steps to: clarify what gender equality is, and what steps may help achieve it; make sure that these ideas translate across all levels and types of policymaking; adopt tools to ensure that gender is a part of routine government business (such as budget processes); and, modify existing policies or procedures while increasing the representation of women in powerful positions.

In other words, the first approach is to pursue your radical agenda via non-radical means, using a playbook that is explicitly non-confrontational. Use your insider status to exploit opportunities for policy change.

2. Be radical, and challenge things from the outside

Challenging the status quo, for the benefit of marginalised groups, is at the heart of critical policy analysis:

  • Reject the idea that policy analysis is a rationalist, technical, or evidence-based process. Rather, it involves the exercise of power to (a) depoliticise problems to reduce attention to current solutions, and (b) decide whose knowledge counts.
  • Identify and question the dominant social constructions of problems and populations, asking who decides how to portray these stories and who benefits from their outcomes.

This approach resonates with frequent criticisms of ‘impact’ advice, emphasising the importance of producing research independent of government interference, to challenge policies that further harm already-marginalised populations.

It will also ring bells among advocates of more confrontational strategies to seek radical changes to policy and policymaking. They include steps to: find more inclusive ways to generate and share knowledge, produce multiple perspectives on policy problems and potential solutions, focus explicitly on the impact of the status quo on marginalised populations, politicise issues continuously to ensure that they receive sufficient attention, and engage in outsider strategies to protest current policies and practices.

Does this dichotomy make sense?

It is tempting to say that this dichotomy is artificial and that we can pursue the best of both worlds, such as working from within when it works and resorting to outsider action and protest when it doesn’t.

However, the blandest versions of this conclusion tend to ignore or downplay the politics of policy analysis in favour of more technical fixes. Sometimes collaboration and consensus politics is a wonderful feat of human endeavour. Sometimes it is a cynical way to depoliticise issues, stifle debate, and marginalise unpopular positions.

This conclusion also suggests that it is possible to establish what strategies work, and when, without really saying how (or providing evidence for success that would appeal to audiences associated with both approaches). Indeed, a recurrent feature of research in these fields is that most attempts to produce radical change prove to be dispiriting struggles. Non-radical strategies tend to be co-opted by more powerful actors, to mainstream new ways of thinking without changing the old. Radical strategies are often too easy to dismiss or counter.

The latter point reminds us to avoid excessively optimistic overemphasis on the strategies of analysts and advocates at the expense of context and audience. The 500 and 1000 words series perhaps tip us too far in the other direction, but provide a useful way to separate (analytically) the reasons for often-minimal policy change. To challenge dominant forms of policy and policymaking requires us to separate the intentional sources of inertia from the systemic issues that would constrain even the most sincere and energetic reformer.

Further reading

This post forms one part of the Policy Analysis in 750 words series, including posts on the role of analysts and marginalised groups. It also relates to work with St Denny, Kippin, and Mitchell (drawing on this draft paper) and posts on ‘evidence based policymaking’.


Filed under 750 word policy analysis

Policy Analysis in 750 Words: Two approaches to policy learning and transfer

This post forms one part of the Policy Analysis in 750 words series. It draws on work for an in-progress book on learning to reduce inequalities. Some of the text will seem familiar if you have read other posts. Think of it as an adventure game in which the beginning is the same but you don’t know the end.

Policy learning is the use of new information to update policy-relevant knowledge. Policy transfer involves the use of knowledge about policy and policymaking in one government to inform policy and policymaking in another.

These processes may seem to relate primarily to research and expertise, but they require many kinds of political choices (explored in this series). They take place in complex policymaking systems over which no single government has full knowledge or control.

Therefore, while the agency of policy analysts and policymakers still matters, they engage with a policymaking context that constrains or facilitates their action.

Two approaches to policy learning: agency and context-driven stories

Policy analysis textbooks focus on learning and transfer as an agent-driven process with well-established guidance (often with five main steps). They form part of a functionalist analysis where analysts identify the steps required to turn comparative analysis into policy solutions, or part of a toolkit to manage stages of the policy process.

Agency is less central to policy process research, which describes learning and transfer as contingent on context. Key factors include:

Analysts compete to define problems and determine the manner and sources of learning, in a multi-centric environment where different contexts will constrain and facilitate action in different ways. For example, varying structural factors – such as socioeconomic conditions – influence the feasibility of proposed policy change, and each centre’s institutions provide different rules for gathering, interpreting, and using evidence.

The result is a mixture of processes in which:

  1.  Learning from experts is one of many possibilities. For example, Dunlop and Radaelli also describe ‘reflexive learning’, ‘learning through bargaining’, and ‘learning in the shadow of hierarchy’.
  2.  Transfer takes many forms.

How should analysts respond?

Think of two different ways to respond to this description of the policy process. One is your agency-centred strategic response. The other is me telling you why it won’t be straightforward.


There are many policy makers and influencers spread across many policymaking ‘centres’

  1. Find out where the action is and tailor your analysis to different audiences.
  2. There is no straightforward way to influence policymaking if multiple venues contribute to policy change and you don’t know who does what.

Each centre has its own ‘institutions’

  1. Learn the rules of evidence gathering in each centre: who takes the lead, how do they understand the problem, and how do they use evidence?
  2. There is no straightforward way to foster policy learning between political systems if each is unaware of each other’s unwritten rules. Researchers could try to learn their rules to facilitate mutual learning, but with no guarantee of success.

Each centre has its own networks

  1. Form alliances with policymakers and influencers in each relevant venue.
  2. The pervasiveness of policy communities complicates policy learning because the boundary between formal power and informal influence is not clear.

Well-established ‘ideas’ tend to dominate discussion

  1. Learn which ideas are in good currency. Tailor your advice to your audience’s beliefs.
  2. The dominance of different ideas precludes many forms of policy learning or transfer. A popular solution in one context may be unthinkable in another.

Many policy conditions (historic-geographic, technological, social and economic factors) command the attention of policymakers and are out of their control. Routine events and non-routine crises prompt policymaker attention to lurch unpredictably.

  1. Learn from studies of leadership in complex systems or the policy entrepreneurs who find the right time to exploit events and windows of opportunity to propose solutions.
  2. The policy conditions may be so different in each system that policy learning is limited and transfer would be inappropriate. Events can prompt policymakers to pay disproportionately low or high attention to lessons from elsewhere, and this attention relates weakly to evidence from analysts.

Feel free to choose one or both forms of advice. One is useful for people who see analysts and researchers as essential to major policy change. The other is useful if it serves as a source of cautionary tales rather than fatalistic responses.

See also:

Policy Concepts in 1000 Words: Policy Transfer and Learning

Teaching evidence based policy to fly: how to deal with the politics of policy learning and transfer

Policy Concepts in 1000 Words: the intersection between evidence and policy transfer

Policy learning to reduce inequalities: a practical framework

Three ways to encourage policy learning

Epistemic versus bargaining-driven policy learning

The ‘evidence-based policymaking’ page explores these issues in more depth


Filed under 750 word policy analysis, IMAJINE, Policy learning and transfer, public policy

Policy Analysis in 750 Words: How to deal with ambiguity

This post forms one part of the Policy Analysis in 750 words series. It draws on this 500 Words post, then my interpretation of co-authored work with Drs Emily St Denny and John Boswell (which I would be delighted to share if it gets published). It trails off at the end.

In policy studies, ambiguity describes the ability to entertain more than one interpretation of a policy problem. There are many ways to frame issues as problems. However, only some frames receive high policymaker attention, and policy change relates strongly to that attention. Resolving ambiguity in your favour is the prize.

Policy studies focus on different aspects of this dynamic, including:

  1. The exercise of power, such as of the narrator to tell stories and the audience to engage with or ignore them.
  2. Policy learning, in which people collaborate (and compete) to assign concrete meaning to abstract aims.
  3. A complex process in which many policymakers and influencers are cooperating/ competing to define problems in many policymaking centres.

They suggest that resolving ambiguity affects policy in different ways.

The latter descriptions, reflecting multi-centric policymaking, seem particularly relevant to major contemporary policy problems – such as global public health and climate crises – in which cooperation across (and outside of) many levels and types of government is essential.

Resolving ambiguity in policy analysis texts

This context helps us to interpret common (Step 1) advice in policy analysis textbooks: define a policy problem for your client, using your skills of research and persuasion but tailoring your advice to your client’s interests and beliefs. Yet, gone are the mythical days of elite analysts communicating to a single core executive in charge of formulating and implementing all policy instruments. Many analysts engage with many centres producing (or co-producing) many instruments. Resolving ambiguity in one centre does not guarantee the delivery of your aims across many.

Two ways to resolve ambiguity in policy analysis

Classic debates would highlight two different responses:

  • ‘Top down’ accounts see this issue through the lens of a single central government, examining how to reassert central control by minimising implementation gaps. Policy analysis may focus on (a) defining the policy problem, and (b) ensuring the implementation of its solution.

  • ‘Bottom up’ accounts identify the inevitability (and legitimacy) of policy influence in multiple centres. Policy analysis may focus on how to define the problem in cooperation with other centres, or to set a strategic direction and encourage other centres to make sense of it in their context.

This terminology went out of fashion, but note the existence of each tendency in two ideal-type approaches to contemporary policy problems:

1. Centralised and formalised approaches.

Seek clarity and order to address urgent policy problems. Define the policy problem clearly, translate that definition into strategies for each centre, and develop a common set of effective ‘tools’ to ensure cooperation and delivery.

Policy analysis may focus on technical aspects, such as how to create a fine-detail blueprint for action, backed by performance management and accountability measures that tie actors to specific commitments.

The tagline may be: ambiguity is a problem to be solved, to direct policy actors towards a common goal.

2. Decentralised, informal, collaborative approaches.

Seek collaboration to make sense of, and address, problems. Reject a single definition of the problem, encourage actors in each centre (or in concert) to deliberate to make sense of problems together, and co-create the rules to guide a continuous process of collective behaviour.

Policy analysis may focus on how to contribute to a collaborative process of sense-making and rule-making.

The tagline may be: ambiguity presents an opportunity to energise policy actors, to harness the potential for innovation arising from deliberation.

Pick one approach and stick with it?

Describing these approaches in such binary terms makes the situation – and choice between approaches – look relatively straightforward. However, note the following issues:

  • Many policy sectors (and intersectoral agendas) are characterised by intense disagreement on which choice to make. These disagreements intersect with others (such as when people seek not only transformative policy change to solve global problems, but also equitable process and outcomes).
  • Some sectors seem to involve actors seeking the best of both worlds (centralise and localise, formalise and deliberate) without recognising the trade-offs and dilemmas that arise.
  • I have described these options as choices, but did not establish if anyone is in a position to make or contribute to that choice.

In that context, resolving ambiguity in your favour may still be the prize, but where would you even begin?

Further reading

Well, that was an unsatisfying end to the post, eh? Maybe I’ll write a better one when some things are published. In the meantime, some of these papers and posts explore some of the same issues.


Filed under Uncategorized

Policy Analysis in 750 Words: Separating facts from values

This post begins by reproducing Can you separate the facts from your beliefs when making policy? (based on the 1st edition of Understanding Public Policy) …

A key argument in policy studies is that it is impossible to separate facts and values when making policy. We often treat our beliefs as facts, or describe certain facts as objective, but perhaps only to simplify our lives or support a political strategy (a ‘self-evident’ fact is very handy for an argument). People make empirical claims infused with their values and often fail to realise just how their values or assumptions underpin their claims.

This is not an easy argument to explain. One strategy is to use extreme examples to make the point. For example, Herbert Simon points to Hitler’s Mein Kampf as the ultimate example of value-based claims masquerading as facts. We can also identify historic academic research which asserts that men are more intelligent than women and some races are superior to others. In such cases, we would point out, for example, that the design of the research helped produce such conclusions: our values underpin our (a) assumptions about how to measure intelligence or other measures of superiority, and (b) interpretations of the results.

‘Wait a minute, though’ (you might say). “What about simple examples in which you can state facts with relative certainty – such as the statement ‘there are X number of words in this post’”. ‘Fair enough’, I’d say (you will have to speak with a philosopher to get a better debate about the meaning of your X words claim; I would simply say that it is trivially true). But this statement doesn’t take you far in policy terms. Instead, you’d want to say that there are too many or too few words, before you decided what to do about it.

In that sense, we have the most practical explanation of the unclear fact/ value distinction: the use of facts in policy is to underpin evaluations (assessments based on values). For example, we might point to the routine uses of data to argue that a public service is in ‘crisis’ or that there is a public health related epidemic (note: I wrote the post before COVID-19; it referred to crises of ‘non-communicable diseases’). We might argue that people only talk about ‘policy problems’ when they think we have a duty to solve them.

Or, facts and values often seem the hardest to separate when we evaluate the success and failure of policy solutions, since the measures used for evaluation are as political as any other part of the policy process. The gathering and presentation of facts is inherently a political exercise, and our use of facts to encourage a policy response is inseparable from our beliefs about how the world should work.

It continues with an edited excerpt from p59 of Understanding Public Policy, which explores the implications of bounded rationality for contemporary accounts of ‘evidence-based policymaking’:

‘Modern science remains value-laden … even when so many people employ so many systematic methods to increase the replicability of research and reduce the reliance of evidence on individual scientists. The role of values is fundamental. Anyone engaging in research uses professional and personal values and beliefs to decide which research methods are the best; generate research questions, concepts and measures; evaluate the impact and policy relevance of the results; decide which issues are important problems; and assess the relative weight of ‘the evidence’ on policy effectiveness. We cannot simply focus on ‘what works’ to solve a problem without considering how we used our values to identify a problem in the first place. It is also impossible in practice to separate two choices: (1) how to gather the best evidence and (2) whether to centralize or localize policymaking. Most importantly, the assertion that ‘my knowledge claim is superior to yours’ symbolizes one of the most worrying exercises of power. We may decide to favour some forms of evidence over others, but the choice is value-laden and political rather than objective and innocuous’.

Implications for policy analysis

Many highly-intelligent and otherwise-sensible people seem to get very bothered with this kind of argument. For example, it gets in the way of (a) simplistic stories of heroic-objective-fact-based-scientists speaking truth to villainous-stupid-corrupt-emotional-politicians, (b) the ill-considered political slogan that you can’t argue with facts (or ‘science’), (c) the notion that some people draw on facts while others only follow their feelings, and (d) the idea that you can divide populations into super-facty versus post-truthy people.

A more sensible approach is to (1) recognise that all people combine cognition and emotion when assessing information, (2) treat politics and political systems as valuable and essential processes (rather than obstacles to technocratic policymaking), and (3) find ways to communicate evidence-informed analyses in that context. This article and this 750 Words post explore how to reflect on this kind of communication.

Most relevant posts in the 750 series

Linda Tuhiwai Smith (2012) Decolonizing Methodologies 

Carol Bacchi (2009) Analysing Policy: What’s the problem represented to be? 

Deborah Stone (2012) Policy Paradox

Who should be involved in the process of policy analysis?

William Riker (1986) The Art of Political Manipulation

Using Statistics and Explaining Risk (David Spiegelhalter and Gerd Gigerenzer)

Barry Hindess (1977) Philosophy and Methodology in the Social Sciences

See also

To think further about the relevance of this discussion, see this post on policy evaluation, this page on the use of evidence in policymaking, this book by Douglas, and this short commentary on ‘honest brokers’ by Jasanoff.


Filed under 750 word policy analysis, Academic innovation or navel gazing, agenda setting, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy

Policy Analysis in 750 Words: How to communicate effectively with policymakers

This post forms one part of the Policy Analysis in 750 words series overview. The title comes from this article by Cairney and Kwiatkowski on ‘psychology based policy studies’.

One aim of this series is to combine insights from policy research (1000, 500) and policy analysis texts. How might we combine insights to think about effective communication?

1. Insights from policy analysis texts

Most texts in this series relate communication to understanding your audience (or client) and the political context. Your audience has limited attention or time to consider problems. They may have good antennae for the political feasibility of any solution, but less knowledge of (or interest in) the technical details. In that context, your aim is to help them treat the problem as worthy of their energy (e.g. as urgent and important) and the solution as doable. Examples include:

  • Bardach: communicating with a client requires coherence, clarity, brevity, and minimal jargon.
  • Dunn: argumentation involves defining the size and urgency of a problem, assessing the claims made for each solution, synthesising information from many sources into a concise and coherent summary, and tailoring reports to your audience.
  • Smith: your audience makes a quick judgement on whether or not to read your analysis. Ask yourself questions including: how do I frame the problem to make it relevant, what should my audience learn, and how does each solution relate to what has been done before? Maximise interest by keeping communication concise, polite, and tailored to a policymaker’s values and interests.

2. Insights from studies of policymaker psychology

These insights emerged from the study of bounded rationality: policymakers do not have the time, resources, or cognitive ability to consider all information, possibilities, solutions, or consequences of their actions. They use two types of informational shortcut associated with concepts such as cognition and emotion, thinking ‘fast and slow’, ‘fast and frugal heuristics’, or, if you like more provocative terms:

  • ‘Rational’ shortcuts. Goal-oriented reasoning based on prioritizing trusted sources of information.
  • ‘Irrational’ shortcuts. Emotional thinking, or thought fuelled by gut feelings, deeply held beliefs, or habits.

We can use such distinctions to examine the role of evidence-informed communication, to reduce:

  • Uncertainty, or a lack of policy-relevant knowledge. Focus on generating ‘good’ evidence and concise communication as you collate and synthesise information.
  • Ambiguity, or the ability to entertain more than one interpretation of a policy problem. Focus on argumentation and framing as you try to maximise attention to (a) one way of defining a problem, and (b) your preferred solution.

Many policy theories describe the latter, in which actors: combine facts with emotional appeals, appeal to people who share their beliefs, tell stories to appeal to the biases of their audience, and exploit dominant ways of thinking or social stereotypes to generate attention and support. These possibilities produce ethical dilemmas for policy analysts.

3. Insights from studies of complex policymaking environments

None of this advice matters if it is untethered from reality.

Policy analysis texts focus on political reality to note that even a perfectly communicated solution is worthless if technically feasible but politically unfeasible.

Policy process texts focus on policymaking reality: showing that ideal-types such as the policy cycle do not guide real-world action, and describing more accurate ways to guide policy analysts.

For example, they help us rethink the ‘know your audience’ mantra by:

  • Identifying a tendency for most policy to be processed in policy communities or subsystems.
  • Showing that many policymaking ‘centres’ create the instruments that produce policy change.

Gone are the mythical days of a small number of analysts communicating to a single core executive (and of the heroic researcher changing the world by speaking truth to power). Instead, we have many analysts engaging with many centres, creating a need to not only (a) tailor arguments to different audiences, but also (b) develop wider analytical skills (such as to foster collaboration and the use of ‘design principles’).

How to communicate effectively with policymakers

In that context, we argue that effective communication requires analysts to:

1. Understand your audience and tailor your response (using insights from psychology)

2. Identify ‘windows of opportunity’ for influence (while noting that these windows are outside of anyone’s control)

3. Engage with real world policymaking rather than waiting for a ‘rational’ and orderly process to appear (using insights from policy studies).

See also:

Why don’t policymakers listen to your evidence?

How to combine principles on ‘good evidence’, ‘good governance’, and ‘good practice’

Entrepreneurial policy analysis


Filed under 750 word policy analysis, agenda setting, Evidence Based Policymaking (EBPM), public policy, Storytelling

Education equity policy: ‘equity for all’ as a distraction from race, minoritization, and marginalization

By Paul Cairney and Sean Kippin

This post summarizes a key section of our review of education equity policymaking [see the full article for references to the studies summarized here].

One of the main themes is that many governments present a misleading image of their education policies. There are many variations on this theme, in which policymakers:

  1. Describe the energetic pursuit of equity, and use the right language, as a way to hide limited progress.
  2. Pursue ‘equity for all’ initiatives that ignore or downplay the specific importance of marginalization and minoritization, such as in relation to race and racism, immigration, ethnic minorities, and indigenous populations.
  3. Pursue narrow definitions of equity in terms of access to schools, at the expense of definitions that pay attention to ‘out of school’ factors and social justice.

Minoritization is a strong theme in US studies in particular. US experiences help us categorise multiple modes of marginalisation in relation to race and migration, driven by witting and unwitting action and explicit and implicit bias:

  • The social construction of students and parents. Examples include: framing white students as ‘gifted’ and more deserving of merit-based education (or victims of equity initiatives); framing non-white students as less intelligent, more in need of special needs or remedial classes, and having cultural or other learning ‘deficits’ that undermine them and disrupt white students; and, describing migrant parents as unable to participate until they learn English.
  • Maintaining or failing to challenge inequitable policies. Examples include higher funding for schools and colleges with higher white populations, and tracking (segregating students according to perceived ability), which benefit white students disproportionately.
  • Ignoring social determinants or ‘out of school’ factors.
  • Creating the illusion of equity with measures that exacerbate inequalities. For example, promoting school choice policies while knowing that the rules restrict access to sought-after schools.
  • Promoting initiatives to ignore race, including so-called ‘color blind’ or ‘equity for all’ initiatives.
  • Prioritizing initiatives at the expense of racial or socio-economic equity, such as measures to boost overall national performance at the expense of targeted measures.
  • Game playing and policy subversion, including school and college selection rules to restrict access and improve metrics.

The wider international – primarily Global North – experience suggests that minoritization and marginalization in relation to race, ethnicity, and migration are routine impediments to equity strategies, albeit with some uncertainty about which policies would have the most impact.

Other country studies describe the poor treatment of citizens in relation to immigration status or ethnicity, often while presenting the image of a more equitable system. Until recently, Finland’s global reputation for education equity, built on universalism and comprehensive schools, has contrasted with its historic ‘othering’ of immigrant populations. Japan’s reputation for containing a homogeneous population, allowing its governments to present an image of classless egalitarianism and harmonious society, contrasts with its discrimination against foreign students. Multiple studies of Canadian provinces provide the strongest accounts of the symbolic and cynical use of multiculturalism for political gains and economic ends.

As in the US, many countries use ‘special needs’ categories to segregate immigrant and ethnic minority populations. Mainstreaming versus special needs debates have a clear racial and ethnic dimension when (1) some groups are more likely to be categorised as having learning disabilities or behavioural disorders, and (2) language and cultural barriers are listed as disabilities in many countries. Further, ‘commonwealth’ country studies identify the marginalisation of indigenous populations in ways comparable to the US marginalisation of students of colour.

Overall, these studies generate the sense that the frequently used language of education equity policy can signal a range of possibilities, from (1) high energy and sincere commitment to social justice, to (2) the cynical use of rhetoric and symbolism to protect historic inequalities.

Examples:

  • Turner, E.O., and Spain, A.K., (2020) ‘The Multiple Meanings of (In)Equity: Remaking School District Tracking Policy in an Era of Budget Cuts and Accountability’, Urban Education, 55, 5, 783-812 https://doi.org/10.1177%2F0042085916674060
  • Thorius, K.A. and Maxcy, B.D. (2015) ‘Critical Practice Analysis of Special Education Policy: An RTI Example’, Remedial and Special Education, 36, 2, 116-124 https://doi.org/10.1177%2F0741932514550812
  • Felix, E.R. and Trinidad, A. (2020) ‘The decentralization of race: tracing the dilution of racial equity in educational policy’, International Journal of Qualitative Studies in Education, 33, 4, 465-490 https://doi.org/10.1080/09518398.2019.1681538
  • Alexiadou, N. (2019) ‘Framing education policies and transitions of Roma students in Europe’, Comparative Education, 55, 3,  https://doi.org/10.1080/03050068.2019.1619334

See also: https://paulcairney.wordpress.com/2017/09/09/policy-concepts-in-500-words-social-construction-and-policy-design/


Filed under education policy, Evidence Based Policymaking (EBPM), Policy learning and transfer, Prevention policy, public policy

The UK government’s lack of control of public policy

This post first appeared as Who controls public policy? on the UK in a Changing Europe website. There is also a 1-minute video, but you would need to be a completist to want to watch it.

Most coverage of British politics focuses on the powers of a small group of people at the heart of government. In contrast, my research on public policy highlights two major limits to those powers, related to the enormous number of problems that policymakers face, and to the sheer size of the government machine.

First, elected policymakers simply do not have the ability to properly understand, let alone solve, the many complex policy problems they face. They deal with this limitation by paying unusually high attention to a small number of problems and effectively ignoring the rest.

Second, policymakers rely on a huge government machine and network of organisations (containing over 5 million public employees) essential to policy delivery, and oversee a statute book which they could not possibly understand.

In other words, they have limited knowledge and even less control of the state, and have to make choices without knowing how they relate to existing policies (or even what happens next).

These limits to ministerial powers should prompt us to think differently about how to hold them to account. If they only have the ability to influence a small proportion of government business, should we blame them for everything that happens in their name?

My approach is to apply these general insights to specific problems in British politics. Three examples help to illustrate their ability to inform British politics in new ways.

First, policymaking can never be ‘evidence based’. Some scientists cling to the idea that the ‘best’ evidence should always catch the attention of policymakers, and assume that ‘speaking truth to power’ helps evidence win the day.

As such, researchers in fields like public health and climate change wonder why policymakers seem to ignore their evidence.

The truth is that policymakers only have the capacity to consider a tiny proportion of all available information. Therefore, they must find efficient ways to ignore almost all evidence to make timely choices.

They do so by setting goals and identifying trusted sources of evidence, but also using their gut instinct and beliefs to rule out most evidence as irrelevant to their aims.

Second, the UK government cannot ‘take back control’ of policy following Brexit simply because it was not in control of policy before the UK joined. The idea of control is built on the false image of a powerful centre of government led by a small number of elected policymakers.

This way of thinking assumes that sharing power is simply a choice. However, sharing power and responsibility is borne of necessity because the British state is too large to be manageable.

Governments manage this complexity by breaking down their responsibilities into many government departments. Still, ministers can only pay attention to a tiny proportion of issues managed by each department. They delegate most of their responsibilities to civil servants, agencies, and other parts of the public sector.

In turn, those organisations rely on interest groups and experts to provide information and advice.

As a result, most public policy is conducted through small and specialist ‘policy communities’ that operate out of the public spotlight and with minimal elected policymaker involvement.

The logical conclusion is that senior elected politicians are less important than people think. While we like to think of ministers sitting in Whitehall and taking crucial decisions, most of these decisions are taken in their name but without their intervention.

Third, the current pandemic underlines all too clearly the limits of government power. Of course people are pondering the degree to which we can blame UK government ministers for poor choices in relation to Covid-19, or learn from their mistakes to inform better policy.

Many focus on the extent to which ministers were ‘guided by the science’. However, at the onset of a new crisis, government scientists face the same uncertainty about the nature of the policy problem, and ministers are not really able to tell if a Covid-19 policy would work as intended or receive enough public support.

Some examples from the UK experience expose the limited extent to which policymakers can understand, far less control, an emerging crisis.

Prior to the lockdown, neither scientists nor ministers knew how many people were infected, nor when levels of infection would peak.

They had limited capacity to test. They did not know how often (and how well) people wash their hands. They did not expect people to accept and follow strict lockdown rules so readily, and did not know which combination of measures would have the biggest impact.

When supporting businesses and workers during ‘furlough’, they did not know who would be affected and therefore how much the scheme would cost.

In short, while Covid-19 has prompted policy change and state intervention on a scale not witnessed outside of wartime, the government has never really known what impact its measures would have.

Overall, the take-home message is that the UK narrative of strong central government control is damaging to political debate and undermines policy learning. It suggests that every poor outcome is simply the consequence of bad choices by powerful leaders. If so, we are unable to distinguish between the limited competence of some leaders and the limited powers of them all.


Filed under COVID-19, Evidence Based Policymaking (EBPM), POLU9UK, public policy, UK politics and policy

The UK Government’s COVID-19 policy: assessing evidence-informed policy analysis in real time


On the 23rd March 2020, the UK Government’s Prime Minister Boris Johnson declared: ‘From this evening I must give the British people a very simple instruction – you must stay at home’. He announced measures to help limit the impact of COVID-19, including new regulations on behaviour, police powers to support public health, budgetary measures to support businesses and workers during their economic inactivity, the almost-complete closure of schools, and the major expansion of healthcare capacity via investment in technology, discharge to care homes, and a consolidation of national, private, and new health service capacity (note that many of these measures relate only to England, with devolved governments responsible for public health in Northern Ireland, Scotland, and Wales). Overall, the coronavirus prompted almost-unprecedented policy change, towards state intervention, at a speed and magnitude that seemed unimaginable before 2020.

Yet, many have criticised the UK government’s response as slow and insufficient. Criticisms include that UK ministers and their advisors did not:

  • take the coronavirus seriously enough in relation to existing evidence (when its devastating effect was increasingly apparent in China in January and Italy from February)
  • act as quickly as some countries to test for infection to limit its spread, and/ or introduce swift measures to close schools, businesses, and major social events, and regulate social behaviour (such as in Taiwan, South Korea, or New Zealand)
  • introduce strict-enough measures to stop people coming into contact with each other at events and in public transport.

They blame UK ministers for pursuing a ‘mitigation’ strategy, allegedly based on reducing the rate of infection and impact of COVID-19 until the population developed ‘herd immunity’, rather than an elimination strategy to minimise its spread until a vaccine or antiviral could be developed. Or, they criticise the over-reliance on specific models, which underestimated the R (reproduction number) and ‘doubling time’ of cases and contributed to a 2-week delay to the lockdown.

Many cite this delay, compounded by insufficient personal protective equipment (PPE) in hospitals and fatal errors in the treatment of care homes, as the biggest contributor to the UK’s unusually high number of excess deaths (Campbell et al, 2020; Burn-Murdoch and Giles, 2020; Scally et al, 2020; Mason, 2020; Ball, 2020; compare with Freedman, 2020a; 2020b and Snowden, 2020).

In contrast, scientific advisers to UK ministers have emphasised the need to gather evidence continuously to model the epidemic and identify key points at which to intervene, to reduce the size of the peak of population illness initially, then manage the spread of the virus over the longer term (e.g. Vallance). Throughout, they emphasised the need for individual behavioural change (hand washing and social distancing), supplemented by government action, in a liberal democracy in which direct imposition is unusual and, according to UK ministers, unsustainable in the long term.

We can relate these debates to the general limits to policymaking identified in policy studies (summarised in Cairney, 2016; 2020a; Cairney et al, 2019) and underpinning the ‘governance thesis’ that dominates the study of British policymaking (Kerr and Kettell, 2006: 11; Jordan and Cairney, 2013: 234).

First, policymakers must ignore almost all evidence. Individuals combine cognition and emotion to help them make choices efficiently, and governments have equivalent rules to prioritise only some information.

Second, policymakers have a limited understanding, and even less control, of their policymaking environments. No single centre of government has the power to control policy outcomes. Rather, there are many policymakers and influencers spread across a political system, and most choices in government are made in subsystems, with their own rules and networks, over which ministers have limited knowledge and influence. Further, the social and economic context, and events such as a pandemic, often appear to be largely out of their control.

Third, even though they lack full knowledge and control, governments must still make choices. Therefore, their choices are necessarily flawed.

Fourth, their choices produce unequal impacts on different social groups.

Overall, the idea that policy is controlled by a small number of UK government ministers, with the power to solve major policy problems, is still popular in media and public debate, but dismissed in policy research.

Hold the UK government to account via systematic analysis, not trials by social media

To make more sense of current developments in the UK, we need to understand how UK policymakers address these limitations in practice, and widen the scope of debate to consider the impact of policy on inequalities.

A policy theory-informed and real-time account helps us avoid after-the-fact wisdom and bad-faith trials by social media.

UK government action has been deficient in important ways, but we need careful and systematic analysis to help us separate (a) well-informed criticism to foster policy learning and hold ministers to account, from (b) a naïve and partisan rush to judgement that undermines learning and helps let ministers off the hook.

To that end, I combine insights from policy analysis guides, policy theories, and critical policy analysis to analyse the UK government’s initial coronavirus policy. I use the lens of 5-step policy analysis models to identify what analysts and policymakers need to do, the limits to their ability to do it, and the distributional consequences of their choices.

I focus on sources in the public record, including oral evidence to the House of Commons Health and Social Care committee, and the minutes and meeting papers of the UK Government’s Scientific Advisory Group for Emergencies (SAGE) (and NERVTAG), transcripts of TV press conferences and radio interviews, and reports by professional bodies and think tanks.

The short version is here. The long version – containing a huge list of sources and ongoing debates – is here. Both are on the COVID-19 page.


Filed under COVID-19, Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy

Policy Analysis in 750 Words: policy analysis for marginalized groups in racialized political systems

Note: this post forms one part of the Policy Analysis in 750 words series overview.

For me, this story begins with a tweet by Professor Jamila Michener, about a new essay by Dr Fabienne Doucet, ‘Centering the Margins: (Re)defining Useful Research Evidence Through Critical Perspectives’:

Research and policy analysis for marginalized groups

Doucet (2019: 1) begins by describing the William T. Grant Foundation’s focus on improving the ‘use of research evidence’ (URE), and the key questions that we should ask when improving URE:

  1. For what purposes do policymakers find evidence useful?

Examples include to: inform a definition of problems and solutions, foster practitioner learning, support an existing political position, or impose programmes backed by evidence (compare with How much impact can you expect from your analysis?).

  2. Who decides what to use, and what is useful?

For example, usefulness could be defined by the researchers providing evidence, the policymakers using it, the stakeholders involved in coproduction, or the people affected by research and policy (compare with Bacchi, Stone and Who should be involved in the process of policy analysis?).

  3. How do critical theories inform these questions? (compare with T. Smith)

First, they remind us that so-called ‘rational’ policy processes have incorporated research evidence to help:

‘maintain power hierarchies and accept social inequity as a given. Indeed, research has been historically and contemporaneously (mis)used to justify a range of social harms from enslavement, colonial conquest, and genocide, to high-stakes testing, disproportionality in child welfare services, and “broken windows” policing’ (Doucet, 2019: 2)

Second, they help us redefine usefulness in relation to:

‘how well research evidence communicates the lived experiences of marginalized groups so that the understanding of the problem and its response is more likely to be impactful to the community in the ways the community itself would want’ (Doucet, 2019: 3)

In that context, potential responses include to:

  1. Recognise the ways in which research and policy combine to reproduce the subordination of social groups.
  • General mechanisms include: the reproduction of the assumptions, norms, and rules that produce a disproportionate impact on social groups (compare with Social Construction and Policy Design).
  • Specific mechanisms include: judging marginalised groups harshly according to ‘Western, educated, industrialized, rich and democratic’ (‘WEIRD’) norms.
  2. Reject the idea that scientific research can be seen as objective or neutral (and that researchers are beyond reproach for their role in subordination).
  3. Give proper recognition to ‘experiential knowledge’ and ‘transdisciplinary approaches’ to knowledge production, rather than privileging scientific knowledge.
  4. Commit to social justice, to help ‘eliminate oppressions and to emancipate and empower marginalized groups’, such as by disrupting ‘the policies and practices that disproportionately harm marginalized groups’ (2019: 5-7).
  5. Develop strategies to ‘center race’, ‘democratize’ research production, and ‘leverage’ transdisciplinary methods (including poetry, oral history and narrative, art, and discourse analysis – compare with Lorde) (2019: 10-22).

See also Doucet, F. (2021) ‘Identifying and Testing Strategies to Improve the Use of Antiracist Research Evidence through Critical Race Lenses

Policy analysis in a ‘racialized polity’

A key way to understand these processes is to use, and improve, policy theories to explain the dynamics and impacts of a racialized political system. For example, ‘policy feedback theory’ (PFT) draws on elements from historical institutionalism and SCPD to identify the rules, norms, and practices that reinforce subordination.

In particular, Michener’s (2019: 424) ‘Policy Feedback in a Racialized Polity’ develops a ‘racialized feedback framework (RFF)’ to help explain the ‘unrelenting force with which racism and White supremacy have pervaded social, economic, and political institutions in the United States’. Key mechanisms include (2019: 424-6):

  1. ‘Channelling resources’, in which the rules used to distribute government resources benefit some social groups and punish others.
  • Examples include: privileging White populations in social security schemes and the design/ provision of education, and punishing Black populations disproportionately in prisons (2019: 428-32).
  • These rules also influence the motivation of social groups to engage in politics to influence policy (some citizens are emboldened, others alienated).
  2. ‘Generating interests’, in which ‘racial stratification’ is a key factor in the power of interest groups (and the balance of power within them).
  3. ‘Shaping interpretive schema’, in which race is a lens through which actors understand, interpret, and seek to solve policy problems.
  4. The ways in which centralization (making policy at the federal level) or decentralization influences policy design.
  • For example, the ‘historical record’ suggests that decentralization is more likely to ‘be a force of inequality than an incubator of power for people of color’ (2019: 433).

Insufficient attention to race and racism: what are the implications for policy analysis?

One potential consequence of this lack of attention to race, and the inequalities caused by racism in policy, is that we place too much faith in the vague idea of ‘pragmatic’ policy analysis.

Throughout the 750 words series, you will see me refer generally to the benefits of pragmatism:

In that context, pragmatism relates to the idea that policy analysis consists of ‘art and craft’, in which analysts assess what is politically feasible if taking a low-risk client-oriented approach.

However, pragmatism may also be read as a euphemism for conservatism and status quo protection.

In other words, other posts in the series warn against too-high expectations for entrepreneurial and systems thinking approaches to major policy change, but they should not be read as an excuse to reject ambitious plans for much-needed changes to policy and policy analysis (compare with Meltzer and Schwartz, who engage with this dilemma in client-oriented advice).



Filed under 750 word policy analysis, Evidence Based Policymaking (EBPM), public policy, Storytelling

Policy Analysis in 750 Words: how much impact can you expect from your analysis?

This post forms one part of the Policy Analysis in 750 words series overview.

Throughout this series you may notice three different conceptions about the scope of policy analysis:

  1. ‘Ex ante’ (before the event) policy analysis. Focused primarily on defining a problem, and predicting the effect of solutions, to inform current choice (as described by Meltzer and Schwartz and Thissen and Walker).
  2. ‘Ex post’ (after the event) policy analysis. Focused primarily on monitoring and evaluating that choice, perhaps to inform future choice (as described famously by Weiss).
  3. Some combination of both, to treat policy analysis as a continuous (never-ending) process (as described by Dunn).

As usual, these are not hard-and-fast distinctions, but they help us clarify expectations in relation to different scenarios.

  1. The impact of old-school ex ante policy analysis

Radin provides a valuable historical discussion of policymaking with the following elements:

  • a small number of analysts, generally inside government (such as senior bureaucrats, scientific experts, and – in particular – economists),
  • giving technical or factual advice,
  • about policy formulation,
  • to policymakers at the heart of government,
  • on the assumption that policy problems would be solved via analysis and action.

This kind of image signals an expectation for high impact: policy analysts face low competition, enjoy a clearly defined and powerful audience, and their analysis is expected to feed directly into choice.

Radin goes on to describe a much different, modern policy environment: more competition, more analysts spread across and outside government, with a less obvious audience, and – even if there is a client – high uncertainty about where the analysis fits into the bigger picture.

Yet, the impetus to seek high and direct impact remains.

This combination of shifting conditions but unshifting hopes/ expectations helps explain a lot of the pragmatic forms of policy analysis you will see in this series, including:

  • Keep it catchy, gather data efficiently, tailor your solutions to your audience, and tell a good story (Bardach)
  • Speak with an audience in mind, highlight a well-defined problem and purpose, project authority, use the right form of communication, and focus on clarity, precision, conciseness, and credibility (Smith)
  • Address your client’s question, by their chosen deadline, in a clear and concise way that they can understand (and communicate to others) quickly (Weimer and Vining)
  • Client-oriented advisors identify the beliefs of policymakers and anticipate the options worth researching (Mintrom)
  • Identify your client’s resources and motivation, such as how they seek to use your analysis, the format of analysis they favour (make it ‘concise’ and ‘digestible’), their deadline, and their ability to make or influence the policies you might suggest (Meltzer and Schwartz).
  • ‘Advise strategically’, to help a policymaker choose an effective solution within their political context (Thissen and Walker).
  • Focus on producing ‘policy-relevant knowledge’ by adapting to the evidence-demands of policymakers and rejecting a naïve attachment to ‘facts speaking for themselves’ or ‘knowledge for its own sake’ (Dunn).
  2. The impact of research and policy evaluation

Many of these recommendations are familiar to scientists and researchers, but generally in the context of far lower expectations about their likely impact, particularly if those expectations are informed by policy studies (compare Oliver & Cairney with Cairney & Oliver).

In that context, Weiss’ work is a key reference point. It gives us a menu of ways in which policymakers might use policy evaluation (and research evidence more widely):

  • to inform solutions to a problem identified by policymakers
  • as one of many sources of information used by policymakers, alongside ‘stakeholder’ advice and professional and service user experience
  • as a resource used selectively by politicians, with entrenched positions, to bolster their case
  • as a tool of government, to show it is acting (by setting up a scientific study), or to measure how well policy is working
  • as a source of ‘enlightenment’, shaping how people think over the long term (compare with this discussion of ‘evidence based policy’ versus ‘policy based evidence’).

In other words, researchers may have a role, but they struggle (a) to navigate the politics of policy analysis, (b) to find the right time to act, and (c) to secure attention, in competition with many other policy actors.

  3. The potential for a form of continuous impact

Dunn suggests that the idea of ‘ex ante’ policy analysis is misleading, since policymaking is continuous, and evaluations of past choices inform current choices. Think of each of the policy analysis steps as ‘interdependent’, in which new knowledge to inform one step also informs the other four. For example, routine monitoring helps identify compliance with regulations, if resources and services reach ‘target groups’, if money is spent correctly, and if we can make a causal link between the policy solutions and outcomes. Its impact is often better seen as background information with intermittent impact.

Key conclusions to bear in mind

  1. The demand for information from policy analysts may be disproportionately high when policymakers pay attention to a problem, and disproportionately low when they feel that they have addressed it.
  2. Common advice for policy analysts and researchers often looks very similar: keep it concise, tailor it to your audience, make evidence ‘policy relevant’, and give advice (don’t sit on the fence). However, unless researchers are prepared to act quickly, gather data efficiently (not comprehensively), and meet a tight brief for a client, they are not really in the impact business described by most policy analysis texts.
  3. A lot of routine, continuous, impact tends to occur out of the public spotlight, based on rules and expectations that most policy actors take for granted.

Further reading

See the Policy Analysis in 750 words series overview to continue reading on policy analysis.

See the ‘evidence-based policymaking’ page to continue reading on research impact.



Filed under 750 word policy analysis, Evidence Based Policymaking (EBPM), Policy learning and transfer, public policy

Policy Analysis in 750 Words: What can you realistically expect policymakers to do?

This post forms one part of the Policy Analysis in 750 words series overview.

One aim of this series is to combine insights from policy research (1000, 500) and policy analysis texts.

In this case, modern theories of the policy process help you identify your audience and their capacity to follow your advice. This simple insight may have a profound impact on the advice you give.

Policy analysis for an ideal-type world

For our purposes, an ideal-type is an abstract idea, which highlights hypothetical features of the world, to compare with ‘real world’ descriptions. It need not be an ideal to which we aspire. For example, comprehensive rationality describes the ideal type, and bounded rationality describes the ‘real world’ limitations to the ways in which humans and organisations process information.

 

Imagine writing policy analysis in the ideal-type world of a single powerful ‘comprehensively rational’ policymaker at the heart of government, making policy via an orderly policy cycle.

Your audience would be easy to identify, your analysis would be relatively simple, and you would not need to worry about what happens after you make a recommendation for policy change.

You could adopt a simple 5-8 step policy analysis method, use widely-used tools such as cost-benefit analysis to compare solutions, and know where the results would feed into the policy process.

I have perhaps over-egged this ideal-type pudding, but I think a lot of traditional policy analyses tapped into this basic idea and focused more on the science of analysis than the political and policymaking context in which it takes place (see Radin and Brans, Geva-May, and Howlett).

Policy analysis for the real world

Then imagine a far messier and less predictable world in which the nature of the policy issue is highly contested, responsibility for policy is unclear, and no single ‘centre’ has the power to turn a recommendation into an outcome.

This image is a key feature of policy process theories, which describe:

  • Many policymakers and influencers spread across many levels and types of government (as the venues in which authoritative choice takes place). Consequently, it is not a straightforward task to identify and know your audience, particularly if the problem you seek to solve requires a combination of policy instruments controlled by different actors.
  • Each venue resembles an institution driven by formal and informal rules. Formal rules are written-down or widely-known. Informal rules are unwritten, difficult to understand, and may not even be understood in the same way by participants. Consequently, it is difficult to know if your solution will be a good fit with the standard operating procedures of organisations (and therefore if it is politically feasible or too challenging).
  • Policymakers and influencers operate in ‘subsystems’, forming networks built on resources such as trust or coalitions based on shared beliefs. Effective policy analysis may require you to engage with – or become part of – such networks, to allow you to understand the unwritten rules of the game and encourage your audience to trust the messenger. In some cases, the rules relate to your willingness to accept current losses for future gains, to accept the limited impact of your analysis now in the hope of acceptance at the next opportunity.
  • Actors relate their analysis to shared understandings of the world – how it is, and how it should be – which are often so well-established as to be taken for granted. Common terms include paradigms, hegemons, core beliefs, and monopolies of understandings. These dominant frames of reference give meaning to your policy solution. They prompt you to couch your solutions in terms of, for example, a strong attachment to evidence-based cases in public health, value for money in treasury departments, or with regard to core principles such as liberalism or socialism in different political systems.
  • Your solutions relate to socioeconomic context and the events that seem (a) impossible to ignore and (b) out of the control of policymakers. Such factors range from a political system’s geography, demography, social attitudes, and economy, while events can be routine elections or unexpected crises.

What would you recommend under these conditions? Rethinking 5-step analysis

There is a large gap between policymakers’ (a) formal responsibilities versus (b) actual control of policy processes and outcomes. Even the most sophisticated ‘evidence based’ analysis of a policy problem will fall flat if uninformed by such analyses of the policy process. Further, the terms of your cost-benefit analysis will be highly contested (at least until there is agreement on what the problem is, and how you would measure the success of a solution).

Modern policy analysis texts try to incorporate such insights from policy theories while maintaining a focus on 5-8 steps. For example:

  • Meltzer and Schwartz contrast their ‘flexible’ and ‘iterative’ approach with a too-rigid ‘rationalistic approach’.
  • Bardach and Dunn emphasise the value of political pragmatism and the ‘art and craft’ of policy analysis.
  • Weimer and Vining invest 200 pages in economic analyses of markets and government, often highlighting a gap between (a) our ability to model and predict economic and social behaviour, and (b) what actually happens when governments intervene.
  • Mintrom invites you to see yourself as a policy entrepreneur, to highlight the value of ‘positive thinking’, creativity, deliberation, and leadership, and perhaps seek ‘windows of opportunity’ to encourage new solutions. Alternatively, a general awareness of the unpredictability of events can prompt you to be modest in your claims, since the policymaking environment may be more important (than your solution) to outcomes.
  • Thissen and Walker focus more on a range of possible roles than a rigid 5-step process.

Beyond 5-step policy analysis

  1. Compare these pragmatic, client-orientated, and communicative models with the questioning, storytelling, and decolonizing approaches by Bacchi, Stone, and L.T. Smith.
  • The latter encourage us to examine more closely the politics of policy processes, including the importance of framing, narrative, and the social construction of target populations to problem definition and policy design.
  • Without this wider perspective, we are focusing on policy analysis as a process rather than considering the political context in which analysts use it.
  2. Additional posts on entrepreneurs and ‘systems thinking’ [to be added] encourage us to reflect on the limits to policy analysis in multi-centric policymaking systems.

 

 


Filed under 750 word policy analysis, agenda setting, Evidence Based Policymaking (EBPM), public policy

Policy Analysis in 750 Words: Who should be involved in the process of policy analysis?

This post forms one part of the Policy Analysis in 750 words series overview.

Think of two visions for policy analysis. It should be primarily:

  • ‘evidence-based’: driven by the best available evidence of ‘what works’, or
  • ‘co-produced’: based on respectful conversations between a wide range of policymakers, stakeholders, and citizens.

These choices are not mutually exclusive, but there are key tensions between them that should not be ignored, such as when we ask:

  • how many people should be involved in policy analysis?
  • whose knowledge counts?
  • who should control policy design?

Perhaps we can only produce a sensible combination of the two if we clarify their often very different implications for policy analysis. Let’s begin with one story for each and see where they take us.

A story of ‘evidence-based policymaking’

One story of ‘evidence based’ policy analysis is that it should be based on the best available evidence of ‘what works’.

Often, the description of the ‘best’ evidence relates to the idea that there is a notional hierarchy of evidence according to the research methods used.

At the top would be the systematic review of randomised control trials, and nearer the bottom would be expertise, practitioner knowledge, and stakeholder feedback.

This kind of hierarchy has major implications for policy learning and transfer, such as when importing policy interventions from abroad or ‘scaling up’ domestic projects.

Put simply, the experimental method is designed to identify the causal effect of a very narrowly defined policy intervention. Its importation or scaling up would be akin to the prescription of medicine, in which the evidence identifies the causal effect of a specific active ingredient, to be administered at the correct dosage. A very strong commitment to a uniform model precludes the processes we might associate with co-production, in which many voices contribute to a policy design to suit a specific context (see also: the intersection between evidence and policy transfer).

A story of co-production in policymaking

One story of ‘co-produced’ policy analysis is that it should be ‘reflexive’ and based on respectful conversations between a wide range of policymakers and citizens.

Often, the description is of the diversity of valuable policy relevant information, with scientific evidence considered alongside community voices and normative values.

This rejection of a hierarchy of evidence also has major implications for policy learning and transfer. Put simply, a co-production method is designed to identify the positive effect – widespread ‘ownership’ of the problem and commitment to a commonly-agreed solution – of a well-discussed intervention, often in the absence of central government control.

Its use would be akin to a collaborative governance mechanism, in which the causal mechanism is perhaps the process used to foster agreement (including to produce the rules of collective action and the evaluation of success) rather than the intervention itself. A very strong commitment to this process precludes the adoption of a uniform model that we might associate with narrowly-defined stories of evidence based policymaking.

Where can you find these stories in the 750-words series?

  1. Texts focusing on policy analysis as evidence-based/ informed practice (albeit subject to limits) include: Weimer and Vining, Meltzer and Schwartz, Brans, Geva-May, and Howlett (compare with Mintrom, Dunn)
  2. Texts on being careful while gathering and analysing evidence include: Spiegelhalter
  3. Texts that challenge the ‘evidence based’ story include: Bacchi, T. Smith, Hindess, Stone

 

How can you read further?

See the EBPM page and special series ‘The politics of evidence-based policymaking: maximising the use of evidence in policy’

There are 101 approaches to co-production, but let’s see if we can get away with two categories:

  1. Co-producing policy (policymakers, analysts, stakeholders). Some key principles can be found in Ostrom’s work and studies of collaborative governance.
  2. Co-producing research to help make it more policy-relevant (academics, stakeholders). See the Social Policy and Administration special issue ‘Inside Co-production’ and Oliver et al’s ‘The dark side of coproduction’ to get started.

To compare ‘epistemic’ and ‘reflexive’ forms of learning, see Dunlop and Radaelli’s ‘The lessons of policy learning: types, triggers, hindrances and pathologies’

My interest has been to understand how governments juggle competing demands, such as to (a) centralise and localise policymaking, (b) encourage uniform and tailored solutions, and (c) embrace and reject a hierarchy of evidence. What could possibly go wrong when they entertain contradictory objectives? For example:

  • Paul Cairney (2019) “The myth of ‘evidence based policymaking’ in a decentred state”, forthcoming in Public Policy and Administration (Special Issue, The Decentred State) (accepted version)
  • Paul Cairney (2019) ‘The UK government’s imaginative use of evidence to make policy’, British Politics, 14, 1, 1-22 Open Access PDF
  • Paul Cairney and Kathryn Oliver (2017) ‘Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy?’ Health Research Policy and Systems (HARPS), DOI: 10.1186/s12961-017-0192-x PDF
  • Paul Cairney (2017) “Evidence-based best practice is more political than it looks: a case study of the ‘Scottish Approach’”, Evidence and Policy, 13, 3, 499-515 PDF

 


Filed under 750 word policy analysis, Evidence Based Policymaking (EBPM), public policy

Policy Analysis in 750 words: William Dunn (2017) Public Policy Analysis

Please see the Policy Analysis in 750 words series overview before reading the summary. This book is a whopper, with almost 500 pages and 101 (excellent) discussions of methods, so 800 words over budget seems OK to me. If you disagree, just read every second word. By the time you reach the ‘hang in there, baby’ cat you are about 300 (150) words away from the end.


William Dunn (2017) Public Policy Analysis 6th Ed. (Routledge)

‘Policy analysis is a process of multidisciplinary inquiry aiming at the creation, critical assessment, and communication of policy-relevant knowledge … to solve practical problems … Its practitioners are free to choose among a range of scientific methods, qualitative as well as quantitative, and philosophies of science, so long as these yield reliable knowledge’ (Dunn, 2017: 2-3).

Dunn (2017: 4) describes policy analysis as pragmatic and eclectic. It involves synthesising policy relevant (‘usable’) knowledge, and combining it with experience and ‘practical wisdom’, to help solve problems with analysis that people can trust.

This exercise is ‘descriptive’, to define problems, and ‘normative’, to decide how the world should be and how solutions get us there (as opposed to policy studies/ research seeking primarily to explain what happens).

Dunn contrasts the ‘art and craft’ of policy analysts with other practices, including:

  1. The idea of ‘best practice’ characterised by 5-step plans.
  • In practice, analysis is influenced by: the cognitive shortcuts that analysts use to gather information; the role they perform in an organisation; the time constraints and incentive structures in organisations and political systems; the expectations and standards of their profession; and, the need to work with teams consisting of many professions/ disciplines (2017: 15-6)
  • The cost (in terms of time and resources) of conducting multiple research and analytical methods is high, and highly constrained in political environments (2017: 17-8; compare with Lindblom)
  2. The too-narrow idea of evidence-based policymaking
  • The naïve attachment to ‘facts speak for themselves’ or ‘knowledge for its own sake’ undermines a researcher’s ability to adapt well to the evidence-demands of policymakers (2017: 68; compare with Why don’t policymakers listen to your evidence?).

To produce ‘policy-relevant knowledge’ requires us to ask five questions before (Qs1-3) and after (Qs4-5) policy intervention (2017: 5-7; 54-6):

  1. What is the policy problem to be solved?
  • For example, identify its severity, urgency, cause, and our ability to solve it.
  • Don’t define the wrong problem, such as by oversimplifying or defining it with insufficient knowledge.
  • Key aspects of problems include ‘interdependency’ (each problem is inseparable from a host of others, and all problems may be greater than the sum of their parts), ‘subjectivity’ and ‘artificiality’ (people define problems), ‘instability’ (problems change rather than being solved), and ‘hierarchy’ (which level or type of government is responsible) (2017: 70; 75).
  • Problems vary in terms of how many relevant policymakers are involved, how many solutions are on the agenda, the level of value conflict, and the unpredictability of outcomes (high levels suggest ‘wicked’ problems, and low levels ‘tame’) (2017: 75)
  • ‘Problem-structuring methods’ are crucial, to: compare ways to define or interpret a problem, and ward against making too many assumptions about its nature and cause; produce models of cause-and-effect; and make a problem seem solve-able, such as by placing boundaries on its coverage. These methods foster creativity, which is useful when issues seem new and ambiguous, or new solutions are in demand (2017: 54; 69; 77; 81-107).
  • Problem definition draws on evidence, but is primarily the exercise of power to reduce ambiguity through argumentation, such as when defining poverty as the fault of the poor, the elite, the government, or social structures (2017: 79; see Stone).
  2. What effect will each potential policy solution have?
  • Many ‘forecasting’ methods can help provide ‘plausible’ predictions about the future effects of current/ alternative policies (Chapter 4 contains a huge number of methods).
  • ‘Creativity, insight, and the use of tacit knowledge’ may also be helpful (2017: 55).
  • However, even the most-effective expert/ theory-based methods to extrapolate from the past are flawed, and it is important to communicate levels of uncertainty (2017: 118-23; see Spiegelhalter).
  3. Which solutions should we choose, and why?
  • ‘Prescription’ methods help provide a consistent way to compare each potential solution, in terms of its feasibility and predicted outcome, rather than decide too quickly that one is superior (2017: 55; 190-2; 220-42).
  • They help to combine (a) an estimate of each policy alternative’s outcome with (b) a normative assessment.
  • Normative assessments are based on values such as ‘equality, efficiency, security, democracy, enlightenment’ and beliefs about the preferable balance between state, communal, and market/ individual solutions (2017: 6; 205 see Weimer & Vining, Meltzer & Schwartz, and Stone on the meaning of these values).
  • For example, cost benefit analysis (CBA) is an established – but problematic – economics method based on finding one metric – such as a $ value – to predict and compare outcomes (2017: 209-17; compare Weimer & Vining, Meltzer & Schwartz, and Stone)
  • Cost effectiveness analysis uses a $ value for costs, but compared with other units of measurement for benefits (such as outputs per $) (2017: 217-9)
  • Although such methods help us combine information and values to compare choices, note the inescapable role of power to decide whose values (and which outcomes, affecting whom) matter (2017: 204)
  4. What were the policy outcomes?
  • ‘Monitoring’ methods help identify (say): levels of compliance with regulations, if resources and services reach ‘target groups’, if money is spent correctly (such as on clearly defined ‘inputs’ such as public sector wages), and if we can make a causal link between the policy inputs/ activities/ outputs and outcomes (2017: 56; 251-5)
  • Monitoring is crucial because it is so difficult to predict policy success, and unintended consequences are almost inevitable (2017: 250).
  • However, the data gathered are usually no more than proxy indicators of outcomes. Further, the choice of indicators reflects what is available, ‘particular social values’, and ‘the political biases of analysts’ (2017: 262)
  • The idea of ‘evidence based policy’ is linked strongly to the use of experiments and systematic review to identify causality (2017: 273-6; compare with trial-and-error learning in Gigerenzer, complexity theory, and Lindblom).
  5. Did the policy solution work as intended? Did it improve policy outcomes?
  • Although we frame policy interventions as ‘solutions’, few problems are ‘solved’. Instead, try to measure the outcomes and the contribution of your solution, and note that evaluations of success and ‘improvement’ are contested (2017: 57; 332-41).  
  • Policy evaluation is not an objective process in which we can separate facts from values.
  • Rather, values and beliefs are part of the criteria we use to gauge success (and even their meaning is contested – 2017: 322-32).
  • We can gather facts about the policy process, and the impacts of policy on people, but this information has little meaning until we decide whose experiences matter.
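The cost-benefit and cost-effectiveness comparisons described under question 3 boil down to simple arithmetic: discount each alternative’s costs and benefits to present values, then compare them on one metric. The sketch below is a minimal illustration of that logic, not Dunn’s worked method; the two alternatives, the dollar figures, and the discount rate are all invented for the example.

```python
# Minimal sketch of cost-benefit (CBA) and cost-effectiveness (CEA) comparison.
# All alternatives and figures are hypothetical, for illustration only.

def npv(flows, rate):
    """Net present value of a stream of yearly cash flows (year 0 first)."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

# Two hypothetical policy alternatives: yearly costs and $-valued benefits.
alternatives = {
    "subsidy":    {"costs": [100, 20, 20], "benefits": [0, 80, 120]},
    "regulation": {"costs": [40, 10, 10],  "benefits": [0, 40, 60]},
}

DISCOUNT_RATE = 0.05  # itself a contestable value judgment

# CBA: one $ metric for everything, so alternatives can be ranked directly.
for name, a in alternatives.items():
    c = npv(a["costs"], DISCOUNT_RATE)
    b = npv(a["benefits"], DISCOUNT_RATE)
    print(f"{name}: NPV of net benefit = {b - c:.1f}, "
          f"benefit-cost ratio = {b / c:.2f}")

# CEA: costs stay in $, but benefits are measured in natural units
# (e.g. cases prevented), avoiding a $ valuation of outcomes.
cases_prevented = {"subsidy": 300, "regulation": 150}
for name, a in alternatives.items():
    c = npv(a["costs"], DISCOUNT_RATE)
    print(f"{name}: cost per case prevented = {c / cases_prevented[name]:.2f}")
```

Note how the arithmetic hides the political choices Dunn flags (2017: 204): who decided how to price the benefits, which outcomes count, and whose costs matter are all settled before the calculation starts.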

Overall, the idea of ‘ex ante’ (forecasting) policy analysis is a little misleading, since policymaking is continuous, and evaluations of past choices inform current choices.

Policy analysis methods are ‘interdependent’, and ‘knowledge transformations’ describes the impact of knowledge regarding one question on the other four (2017: 7-13; contrast with Meltzer & Schwartz, Thissen & Walker).

Developing arguments and communicating effectively

Dunn (2017: 19-21; 348-54; 392) argues that ‘policy argumentation’ and the ‘communication of policy-relevant knowledge’ are central to policymaking (see Chapter 9 and Appendices 1-4 for advice on how to write briefs, memos, and executive summaries and prepare oral testimony).

He identifies seven elements of a ‘policy argument’ (2017: 19-21; 348-54), including:

  • The claim itself, such as a description (size, cause) or evaluation (importance, urgency) of a problem, and prescription of a solution
  • The things that support it (including reasoning, knowledge, authority)
  • Incorporating the things that could undermine it (including any ‘qualifier’, the communication of uncertainty about current knowledge, and counter-arguments).

The key stages of communication (2017: 392-7; 405; 432) include:

  1. ‘Analysis’, focusing on ‘technical quality’ (of the information and methods used to gather it), meeting client expectations, challenging the ‘status quo’, albeit while dealing with ‘political and organizational constraints’ and suggesting something that can actually be done.
  2. ‘Documentation’, focusing on synthesising information from many sources, organising it into a coherent argument, translating from jargon or a technical language, simplifying, summarising, and producing user-friendly visuals.
  3. ‘Utilization’, by making sure that (a) communications are tailored to the audience (its size, existing knowledge of policy and methods, attitude to analysts, and openness to challenge), and (b) the process is ‘interactive’ to help analysts and their audiences learn from each other.

 

[image: ‘Hang in there, baby’ cat poster]

 

Policy analysis and policy theory: systems thinking, evidence based policymaking, and policy cycles

Dunn (2017: 31-40) situates this discussion within a brief history of policy analysis, which culminated in new ways to express old ambitions, such as to:

  1. Use ‘systems thinking’, to understand the interdependence between many elements in complex policymaking systems (see also socio-technical and socio-ecological systems).
  • Note the huge difference between (a) policy analysis discussions of ‘systems thinking’ built on the hope that if we can understand them we can direct them, and (b) policy theory discussions that emphasise ‘emergence’ in the absence of central control (and presence of multi-centric policymaking).
  • Also note that Dunn (2017: 73) describes policy problems – rather than policymaking – as complex systems. I’ll write another post (short, I promise) on the many different (and confusing) ways to use the language of complexity.
  2. Promote ‘evidence based policy’, as the new way to describe an old desire for ‘technocratic’ policymaking that accentuates scientific evidence and downplays politics and values (see also 2017: 60-4).

In that context, see Dunn’s (2017: 47-52) discussion of comprehensive versus bounded rationality:

  • Note the idea of ‘erotetic rationality’ in which people deal with their lack of knowledge of a complex world by giving up on the idea of certainty (accepting their ‘ignorance’), in favour of a continuous process of ‘questioning and answering’.
  • This approach is a pragmatic response to the lack of order and predictability of policymaking systems, which limits the effectiveness of a rigid attachment to ‘rational’ 5 step policy analyses (compare with Meltzer & Schwartz).

Dunn (2017: 41-7) also provides an unusually useful discussion of the policy cycle. Rather than seeing it as a mythical series of orderly stages, Dunn highlights:

  1. Lasswell’s original discussion of policymaking functions (or functional requirements of policy analysis, not actual stages to observe), including: ‘intelligence’ (gathering knowledge), ‘promotion’ (persuasion and argumentation while defining problems), ‘prescription’, ‘invocation’ and ‘application’ (to use authority to make sure that policy is made and carried out), and ‘appraisal’ (2017: 42-3).
  2. The constant interaction between all notional ‘stages’ rather than a linear process: attention to a policy problem fluctuates, actors propose and adopt solutions continuously, actors are making policy (and feeding back on its success) as they implement, evaluation (of policy success) is not a single-shot document, and previous policies set the agenda for new policy (2017: 44-5).

In that context, it is no surprise that the impact of a single policy analyst is usually minimal (2017: 57). Sorry to break it to you. Hang in there, baby.


 


Filed under 750 word policy analysis, public policy

Policy Analysis in 750 words: Beryl Radin (2019) Policy Analysis in the Twenty-First Century

Please see the Policy Analysis in 750 words series overview before reading the summary. As usual, the 750-word description is more for branding than accuracy.

Beryl Radin (2019) Policy Analysis in the Twenty-First Century (Routledge)


The basic relationship between a decision-maker (the client) and an analyst has moved from a two-person encounter to an extremely complex and diverse set of interactions’ (Radin, 2019: 2).

Many texts in this series continue to highlight the client-oriented nature of policy analysis (Weimer and Vining), but within a changing policy process that has altered the nature of that relationship profoundly.

This new policymaking environment requires new policy analysis skills and training (see Mintrom), and limits the applicability of classic 8-step (or 5-step) policy analysis techniques (2019: 82).

We can use Radin’s work to present two main stories of policy analysis:

  1. The old ways of making policy resembled a club, or reflected a clear government hierarchy, involving:
  • a small number of analysts, generally inside government (such as senior bureaucrats, scientific experts, and – in particular – economists),
  • giving technical or factual advice,
  • about policy formulation,
  • to policymakers at the heart of government,
  • on the assumption that policy problems would be solved via analysis and action.
  2. Modern policy analysis is characterised by a more open and politicised process in which:
  • many analysts, inside and outside government,
  • compete to interpret facts, and give advice,
  • about setting the agenda, and making, delivering, and evaluating policy,
  • across many policymaking venues,
  • often on the assumption that governments have a limited ability to understand and solve complex policy problems.

As a result, the client-analyst relationship is increasingly fluid:

In previous eras, the analyst’s client was a senior policymaker, the main focus was on the analyst-client relationship, and ‘both analysts and clients did not spend much time or energy thinking about the dimensions of the policy environment in which they worked’ (2019: 59). Now, in a multi-centric policymaking environment:

  1. It is tricky to identify the client.
  • We could imagine the client to be someone paying for the analysis, someone affected by its recommendations, or all policy actors with the ability to act on the advice (2019: 10).
  • If there is ‘shared authority’ for policymaking within one political system, a ‘client’ (or audience) may be a collection of policymakers and influencers spread across a network containing multiple types of government, non-governmental actors, and actors responsible for policy delivery (2019: 33).
  • The growth in international cooperation also complicates the idea of a single client for policy advice (2019: 33-4)
  • This shift may limit the ‘face-to-face encounters’ that would otherwise provide information for – and perhaps trust in – the analyst (2019: 2-3).
  2. It is tricky to identify the analyst.
  • Radin (2019: 9-25) traces, from the post-war period in the US, a major expansion of policy analysts, from the notional centre of policymaking in federal government towards analysts spread across many venues, inside government (across multiple levels, ‘policy units’, and government agencies) and congressional committees, and outside government (such as in influential think tanks).
  • Policy analysts can also be specialist external companies contracted by organisations to provide advice (2019: 37-8).
  • This expansion shifted the image of many analysts, from a small number of trusted insiders towards many being treated as akin to interest groups selling their pet policies (2019: 25-6).
  • The nature – and impact – of policy analysis has always been a little vague, but now it seems more common to suggest that ‘policy analysts’ may really be ‘policy advocates’ (2019: 44-6).
  • As such, they may now have to work harder to demonstrate their usefulness (2019: 80-1) and accept that their analysis will have a limited impact (2019: 82, drawing on Weiss’ discussion of ‘enlightenment’).

Consequently, the necessary skills of policy analysis have changed:

Although many people value systematic policy analysis (and many rely on economists), an effective analyst does not simply apply economic or scientific techniques to analyse a problem or solution, or rely on one source of expertise or method, as if it were possible to provide ‘neutral information’ (2019: 26).

Indeed, Radin (2019: 31; 48) compares the old ‘acceptance that analysts would be governed by the norms of neutrality and objectivity’ with

(a) increasing calls to acknowledge that policy analysis is part of a political project to foster some notion of public good or ‘public interest’, and

(b) Stone’s suggestion that the projection of reason and neutrality is a political strategy.

In other words, the fictional divide between political policymakers and neutral analysts is difficult to maintain.

Rather, think of analysts as developing wider skills to operate in a highly political environment in which the nature of the policy issue is contested, responsibility for a policy problem is unclear, and it is not clear how to resolve major debates on values and priorities:

  • Some analysts will be expected to see the problem from the perspective of a specific client with a particular agenda.
  • Other analysts may be valued for their flexibility and pragmatism, such as when they acknowledge the role of their own values, maintain or operate within networks, communicate by many means, and supplement ‘quantitative data’ with ‘hunches’ when required (2019: 2-3; 28-9).

Radin (2019: 21) emphasises a shift in skills and status:

The idea of (a) producing new and relatively abstract ideas, based on high control over available information, at the top of a hierarchical organisation, makes way for (b) developing the ability to:

  • generate a wider understanding of organisational and policy processes, reflecting the diffusion of power across multiple policymaking venues
  • identify a map of stakeholders,
  • manage networks of policymakers and influencers,
  • incorporate ‘multiple and often conflicting perspectives’,
  • make and deliver more concrete proposals (2019: 59-74), while recognising
  • the contested nature of information, and the practices used to gather it, even during multiple attempts to establish the superiority of scientific evidence (2019: 89-103),
  • the limits to a government’s ability to understand and solve problems (2019: 95-6),
  • the inescapable conflict over trade-offs between values and goals, which are difficult to resolve simply by weighting each goal (2019: 105-8; see Stone), and
  • do so flexibly, to recognise major variations in problem definition, attention and networks across different policy sectors and notional ‘stages’ of policymaking (2019: 75-9; 84).
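Radin’s point (echoing Stone) that value conflicts are ‘difficult to resolve simply by weighting each goal’ can be made concrete with a toy weighted-sum exercise. The two options, goals, scores, and weights below are invented for illustration; the sketch only shows that the ‘best’ option is determined by the weights, which are themselves a value judgment rather than a fact.

```python
# Toy weighted-sum scoring of two hypothetical options on two goals.
# Scores and weights are invented: the exercise shows that the ranking
# flips with the weights, so weighting does not resolve the value conflict.

options = {"A": {"efficiency": 9, "equality": 3},
           "B": {"efficiency": 4, "equality": 8}}

def rank(weights):
    """Return the option with the highest weighted-sum score."""
    score = lambda o: sum(weights[g] * v for g, v in options[o].items())
    return max(options, key=score)

print(rank({"efficiency": 0.7, "equality": 0.3}))  # prints 'A'
print(rank({"efficiency": 0.3, "equality": 0.7}))  # prints 'B'
```

The arithmetic is trivial; the politics lies entirely in choosing the weights (and, before that, in choosing which goals to score at all).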

Radin’s (2019: 48) overall list of relevant skills includes:

  1. ‘Case study methods, Cost-benefit analysis, Ethical analysis, Evaluation, Futures analysis, Historical analysis, Implementation analysis, Interviewing, Legal analysis, Microeconomics, Negotiation, mediation, Operations research, Organizational analysis, Political feasibility analysis, Public speaking, Small-group facilitation, Specific program knowledge, Statistics, Survey research methods, Systems analysis’

They develop alongside analytical experience and status, from the early career analyst trying to secure or keep a job, to the experienced operator looking forward to retirement (2019: 54-5).

A checklist for policy analysts

Based on these skills requirements, the contested nature of evidence, and the complexity of the policymaking environment, Radin (2019: 128-31) produces a 4-page checklist of – 91! – questions for policy analysts.

For me, it serves two main functions:

  1. It is a major contrast to the idea that we can break policy analysis into a mere 5-8 steps (rather, think of these small numbers as marketing for policy analysis students, akin to 7-minute abs)
  2. It presents policy analysis as an overwhelming task with absolutely no guarantee of policy impact.

To me, this cautious, eyes-wide-open approach is preferable to the sense that policy analysts can change the world if they just get the evidence and the steps right.

Further Reading:

  1. Iris Geva-May (2005) ‘Thinking Like a Policy Analyst. Policy Analysis as a Clinical Profession’, in Geva-May (ed) Thinking Like a Policy Analyst. Policy Analysis as a Clinical Profession (Basingstoke: Palgrave)

Although the idea of policy analysis may be changing, Geva-May (2005: 15) argues that it remains a profession with its own set of practices and ways of thinking. As with other professions (like medicine), it would be unwise to practice policy analysis without education and training or otherwise learning the ‘craft’ shared by a policy analysis community (2005: 16-17). For example, while not engaging in clinical diagnosis, policy analysts can draw on a five-step process to diagnose a policy problem and potential solutions (2005: 18-21). Analysts may also combine these steps with heuristics to determine the technical and political feasibility of their proposals (2005: 22-5), as they address inevitable uncertainty and their own bounded rationality (2005: 26-34; see Gigerenzer on heuristics). As with medicine, some aspects of the role – such as research methods – can be taught in graduate programmes, while others may be better suited to on-the-job learning (2005: 36-40). If so, it opens up the possibility that there are many policy analysis professions to reflect different cultures in each political system (and perhaps the venues within each system).

  2. Vining and Weimer’s take on the distinction between policy analysis and policy process research

 

12 Comments

Filed under 750 word policy analysis, public policy

Understanding Public Policy 2nd edition

All going well, it will be out in November 2019. We are now at the proofing stage.

I have included below the summaries of the chapters (and each chapter should also have its own entry (or multiple entries) in the 1000 Words and 500 Words series).

[Images: 2nd edition cover and summary pages for the title and chapters 1-13]

 

2 Comments

Filed under 1000 words, 500 words, agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, public policy

Evidence-informed policymaking: context is everything

I thank James Georgalakis for inviting me to speak at the inaugural event of IDS’ new Evidence into Policy and Practice Series, and the audience for giving extra meaning to my story about the politics of ‘evidence-based policymaking’. The talk (using powerpoint) and Q&A are here:

 

James invited me to respond to some of the challenges raised to my talk – in his summary of the event – so here it is.

I’m working on a ‘show, don’t tell’ approach, leaving some of the story open to interpretation. As a result, much of the meaning of this story – and, in particular, the focus on limiting participation – depends on the audience.

For example, consider the impact of the same story on audiences primarily focused on (a) scientific evidence and policy, or (b) participation and power.

Normally, when I talk about evidence and policy, my audience is mostly people with scientific or public health backgrounds asking: why do policymakers ignore scientific evidence? I am usually invited to ruffle feathers, mostly by challenging a – remarkably prevalent – narrative that goes like this:

  • We know what the best evidence is, since we have produced it with the best research methods (the ‘hierarchy of evidence’ argument).
  • We have evidence on the nature of the problem and the most effective solutions (the ‘what works’ argument).
  • Policymakers seem to be ignoring our evidence or failing to act proportionately (the ‘evidence-policy barriers’ argument).
  • Or, they cherry-pick evidence to suit their agenda (the ‘policy based evidence’ argument).

In that context, I suggest that there are many claims to policy-relevant knowledge, policymakers have to ignore most information before making choices, and they are not in control of the policy process of which they are ostensibly in charge.

Limiting participation as a strategic aim

Then, I say to my audience that – if they are truly committed to maximising the use of scientific evidence in policy – they will need to consider how far they will go to get what they want. I use the metaphor of an ethical ladder in which each rung offers more influence in exchange for dirtier hands: tell stories and wait for opportunities, or demonise your opponents, limit participation, and humour politicians when they cherry-pick to reinforce emotional choices.

It’s ‘show don’t tell’ but I hope that the take-home point for most of the audience is that they shouldn’t focus so much on one aim – maximising the use of scientific evidence – to the detriment of other important aims, such as wider participation in politics beyond a reliance on a small number of experts. I say ‘keep your eyes on the prize’ but invite the audience to reflect on which prizes they should seek, and the trade-offs between them.

Limited participation – and ‘windows of opportunity’ – as an empirical finding

[Image: NASA launch]

I did suggest that most policymaking happens away from the sphere of ‘exciting’ and ‘unruly’ politics. Put simply, people have to ignore almost every issue almost all of the time. Each time they focus their attention on one major issue, they must – by necessity – ignore almost all of the others.

For me, the political science story is largely about the pervasiveness of policy communities and policymaking out of the public spotlight.

The logic is as follows. Elected policymakers can only pay attention to a tiny proportion of their responsibilities. They delegate the rest to bureaucrats at lower levels of government. Bureaucrats lack specialist knowledge, and rely on other actors for information and advice. Those actors trade information for access. In many cases, they develop effective relationships based on trust and a shared understanding of the policy problem.

Trust often comes from a sense that everyone has proven to be reliable. For example, they follow norms or the ‘rules of the game’. One classic rule is to contain disputes within the policy community when actors don’t get what they want: if you complain in public, you draw external attention and internal disapproval; if not, you are more likely to get what you want next time.

For me, this is key context in which to describe common strategic concerns:

  • Should you wait for a ‘window of opportunity’ for policy change? Maybe. Or, maybe it will never come because policymaking is largely insulated from view and very few issues reach the top of the policy agenda.
  • Should you juggle insider and outsider strategies? Yes, some groups seem to do it well and it is possible for governments and groups to be in a major standoff in one field but close contact in another. However, each group must consider why they would do so, and the trade-offs between each strategy. For example, groups excluded from one venue may engage (perhaps successfully) in ‘venue shopping’ to get attention from another. Or, they become discredited within many venues if seen as too zealous and unwilling to compromise. Insider/outsider may seem like a false dichotomy to experienced and well-resourced groups, who engage continuously, and are able to experiment with many approaches and use trial-and-error learning. It is a more pressing choice for actors who may have only one chance to get it right and do not know what to expect.

Where is the power analysis in all of this?

[Image: policy process diagram]

I rarely use the word power directly, partly because – like ‘politics’ or ‘democracy’ – it is an ambiguous term with many interpretations (see Box 3.1). People often use it without agreeing its meaning and, if it means everything, maybe it means nothing.

However, you can find many aspects of power within our discussion. For example, insider and outsider strategies relate closely to Schattschneider’s classic discussion in which powerful groups try to ‘privatise’ issues and less powerful groups try to ‘socialise’ them. Agenda setting is about using resources to make sure issues do, or do not, reach the top of the policy agenda, and most do not.

These aspects of power sometimes play out in public, when:

  • Actors engage in politics to turn their beliefs into policy. They form coalitions with actors who share their beliefs, and often romanticise their own cause and demonise their opponents.
  • Actors mobilise their resources to encourage policymakers to prioritise some forms of knowledge or evidence over others (such as by valuing scientific evidence over experiential knowledge).
  • They compete to identify the issues most worthy of our attention, telling stories to frame or define policy problems in ways that generate demand for their evidence.

However, they are no less important when they play out routinely:

  • Governments have standard operating procedures – or institutions – to prioritise some forms of evidence and some issues routinely.
  • Many policy networks operate routinely with few active members.
  • Certain ideas, or ways of understanding the world and the nature of policy problems within it, become so dominant that they are unspoken and taken for granted as deeply held beliefs. Still, they constrain or facilitate the success of new ‘evidence based’ policy solutions.

In other words, the word ‘power’ is often hidden because the most profound forms of power often seem to be hidden.

In the context of our discussion, power comes from the ability to define some evidence as essential and other evidence as low quality or irrelevant, and therefore define some people as essential or irrelevant. It comes from defining some issues as exciting and worthy of our attention, or humdrum, specialist and only relevant to experts. It is about the subtle, unseen, and sometimes thoughtless ways in which we exercise power to harness people’s existing beliefs and dominate their attention as much as the transparent ways in which we mobilise resources to publicise issues. Therefore, to ‘maximise the use of evidence’ sounds like an innocuous collective endeavour, but it is a highly political and often hidden use of power.

See also:

I discussed these issues at a storytelling workshop organised by the OSF:

[Image: storytelling workshop, New York, 1.11.16]

See also:

Policy in 500 Words: Power and Knowledge

The politics of evidence-based policymaking

Palgrave Communications: The politics of evidence-based policymaking

Using evidence to influence policy: Oxfam’s experience

The UK government’s imaginative use of evidence to make policy

 

5 Comments

Filed under agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, Psychology Based Policy Studies, public policy, Storytelling

Evidence-based policymaking and the ‘new policy sciences’

[Image: policy process diagram]

[I wasn’t happy with the first version, so this is version 2, to be enjoyed with the ppt and MP3]

In the ‘new policy sciences’, Chris Weible and I advocate:

  • a return to Lasswell’s vision of combining policy analysis (to recommend policy change) and policy theory (to explain policy change), but
  • focusing on a far larger collection of actors (beyond a small group at the centre),
  • recognising new developments in studies of the psychology of policymaker choice, and
  • building into policy analysis the recognition that any policy solution is introduced in a complex policymaking environment over which no-one has control.

However, there is a lot of policy theory out there, and we can’t put policy theory together like Lego to produce consistent insights to inform policy analysis.

Rather, each concept in my image of the policy process represents its own literature: see these short explainers on the psychology of policymaking, actors spread across multi-level governance, institutions, networks, ideas, and socioeconomic factors/ events.

What the explainers don’t really project is the sense of debate within the literature about how best to conceptualise each concept. You can pick up their meaning in a few minutes but would need a few years to appreciate the detail and often-fundamental debate.

Ideally, we would put all of the concepts together to help explain policymaker choice within a complex policymaking environment (how else could I put up the image and present it as one source of accumulated wisdom from policy studies?). Peter John describes such accounts as ‘synthetic’. I have also co-authored work with Tanya Heikkila – in 2014 and 2017 – to compare the different ways in which ‘synthetic’ theories conceptualise the policy process.

However, note the difficulty of putting together a large collection of separate and diverse literatures into one simple model (e.g. while doing a PhD).

On that basis, I’d encourage you to think of these attempts to synthesise as stories. I tell these stories a lot, but someone else could describe theory very differently (perhaps by relying on fewer male authors or US-derived theories in which there are very specific reference points and positivism is represented well).

The example of EBPM

I have given a series of talks to explain why we should think of ‘evidence-based policymaking’ as a myth or political slogan, not an ideal scenario or something to expect from policymaking in the real world. They usually involve encouraging framing and storytelling rather than expecting evidence to speak for itself, and rejecting the value of simple models like the policy cycle. I then put up an image of my own and encourage people to think about the implications of each concept:

[Slide: simple advice from the policy process image, 24.10.18]

I describe the advice as simple-sounding and feasible at first glance, but actually a series of Herculean* tasks:

  • There are many policymakers and influencers spread across government, so find out where the action is, or the key venues in which people are making authoritative decisions.
  • Each venue has its own ‘institutions’ – the formal and written, or informal and unwritten rules of policymaking – so learn the rules of each venue in which you engage.
  • Each venue is guided by a fundamental set of ideas – as paradigms, core beliefs, monopolies of understanding – so learn that language.
  • Each venue has its own networks – the relationships between policy makers and influencers – so build trust and form alliances within networks.
  • Policymaking attention is often driven by changes in socioeconomic factors, or routine/ non-routine events, so be prepared to exploit the ‘windows of opportunity’ to present your solution during heightened attention to a policy problem.

Further, policy theories/ studies help us understand the context in which people make such choices. For example, consider the story that Kathryn Oliver and I tell about the role of evidence in policymaking environments:

If there are so many potential authoritative venues, devote considerable energy to finding where the ‘action’ is (and someone specific to talk to). Even if you find the right venue, you will not know the unwritten rules unless you study them intensely. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour some sources of evidence. Research advocates can be privileged insiders in some venues and excluded completely in others. If your evidence challenges an existing paradigm, you need a persuasion strategy good enough to prompt a shift of attention to a policy problem and a willingness to understand that problem in a new way. You can try to find the right time to use evidence to exploit a crisis leading to major policy change, but the opportunities are few and chances of success low. In that context, policy studies recommend investing your time over the long term – to build up alliances, trust in the messenger, knowledge of the system, and to seek ‘windows of opportunity’ for policy change – but offer no assurances that any of this investment will ever pay off.

As described, this focus on the new policy sciences and synthesising insights helps explain why ‘the politics of evidence-based policymaking’ is equally important to civil servants (my occasional audience) as researchers (my usual audience).

To engage in skilled policy analysis, and give good advice, is to recognise the ways in which policymakers combine cognition/emotion to engage with evidence, and must navigate a complex policymaking environment when designing or selecting technically and politically feasible solutions. To give good advice is to recognise what you want policymakers to do, but also that they are not in control of the consequences.

From one story to many?

However, I tell these stories without my audience having the time to look further into each theory and its individual insights. If they do have a little more time, I go into the possible contribution of individual insights to debate.

For example, they adapt insights from psychology in different ways …

  • PET shows the overall effect of policymaker psychology on policy change: they combine cognition and emotion to pay disproportionate attention to a small number of issues (contributing to major change) and ignore the rest (contributing to ‘hyperincremental’ change).
  • The IAD focuses partly on the rules and practices that actors develop to build up trust in each other.
  • The ACF describes actors going into politics to turn their beliefs into policy, forming coalitions with people who share their beliefs, then often romanticising their own cause and demonising their opponents.
  • The NPF describes the relative impact of stories on audiences who use cognitive shortcuts to (for example) identify with a hero and draw a simple moral.
  • SCPD describes policymakers drawing on gut feeling to identify good and bad target populations.
  • Policy learning involves using cognition and emotion to acquire new knowledge and skills.

… even though the pace of change in psychological research often seems faster than the ways in which policy studies can incorporate new and reliable insights.

They also present different conceptions of the policymaking environment in which actors make choices. See this post for more on this discussion in relation to EBPM.

My not-brilliant conclusion is that:

  1. Policy theory/ policy studies has a lot to offer other disciplines and professions, particularly in fields like EBPM in which we need to account for politics and, more importantly, policymaking systems, but
  2. Beware any policy theory story that presents the source literature as coherent and consistent.
  3. Rather, any story of the field involves a series of choices about what counts as a good theory and good insight.
  4. In other words, the exhortation to think more about what counts as ‘good evidence’ applies just as much to political science as to any other discipline.

Postscript: well, that is the last of the posts for my ANZOG talks. If I’ve done this properly, there should now be a loop of talks. It should be possible to go back to the first one and see it as a sequel to this one!

Or, for more on theory-informed policy analysis – in other words, where the ‘new policy sciences’ article is taking us – here is how I describe it to students doing a policy analysis paper (often for the first time).

Or, have a look at the earlier discussion of images of the policy process. You may have noticed that there is a different image in this post (knocked up in my shed at the weekend). It’s because I am experimenting with shapes. Does the image with circles look more relaxing? Does the hexagonal structure look complicated even though it is designed to simplify? Does it matter? I think so. People engage emotionally with images. They share them. They remember them. So, I need an image more memorable than the policy cycle.

 

Paul Cairney Brisbane EBPM New Policy Sciences 25.10.18

 

 

 

*I welcome suggestions on another word to describe almost-impossibly-hard

4 Comments

Filed under agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, Psychology Based Policy Studies, public policy

Evidence-based policymaking and the ‘new policy sciences’

[Image: policy process diagram (circles), 24.10.18]

I have given a series of talks to explain why we should think of ‘evidence-based policymaking’ as a myth or political slogan, not an ideal scenario or something to expect from policymaking in the real world. They usually involve encouraging framing and storytelling rather than expecting evidence to speak for itself, and rejecting the value of simple models like the policy cycle. I then put up an image of my own and encourage people to think about the implications of each concept:

[Slide: simple advice from the policy process image, 24.10.18]

I describe the advice as simple-sounding and feasible at first glance, but actually a series of Herculean* tasks:

  • There are many policymakers and influencers spread across government, so find out where the action is, or the key venues in which people are making authoritative decisions.
  • Each venue has its own ‘institutions’ – the formal and written, or informal and unwritten rules of policymaking – so learn the rules of each venue in which you engage.
  • Each venue is guided by a fundamental set of ideas – as paradigms, core beliefs, monopolies of understanding – so learn that language.
  • Each venue has its own networks – the relationships between policy makers and influencers – so build trust and form alliances within networks.
  • Policymaking attention is often driven by changes in socioeconomic factors, or routine/ non-routine events, so be prepared to exploit the ‘windows of opportunity’ to present your solution during heightened attention to a policy problem.

In most cases, we don’t have time to discuss a more fundamental issue (at least for researchers using policy theory and political science concepts):

From where did these concepts come, and how well do we know them?

To cut a long story short, each concept represents its own literature: see these short explainers on the psychology of policymaking, actors spread across multi-level governance, institutions, networks, ideas, and socioeconomic factors/ events. What the explainers don’t really project is the sense of debate within the literature about how best to conceptualise each concept. You can pick up their meaning in a few minutes but would need a few years to appreciate the detail and often-fundamental debate.

Ideally, we would put all of the concepts together to help explain policymaker choice within a complex policymaking environment (how else could I put up the image and present it as one source of accumulated wisdom from policy studies?). Peter John describes such accounts as ‘synthetic’. I have also co-authored work with Tanya Heikkila – in 2014 and 2017 – to compare the different ways in which ‘synthetic’ theories conceptualise the policy process. However, note the difficulty of putting together a large collection of separate and diverse literatures into one simple model (e.g. while doing a PhD).

The new policy sciences

More recently, in the ‘new policy sciences’, Chris Weible and I present a more provocative story of these efforts, in which we advocate:

  • a return to Lasswell’s vision of combining policy analysis (to recommend policy change) and policy theory (to explain policy change), but
  • focusing on a far larger collection of actors (beyond a small group at the centre),
  • recognising new developments in studies of the psychology of policymaker choice, and
  • building into policy analysis the recognition that any policy solution is introduced in a complex policymaking environment over which no-one has control.

This focus on psychology is not new …

  • PET shows the overall effect of policymaker psychology on policy change: they combine cognition and emotion to pay disproportionate attention to a small number of issues (contributing to major change) and ignore the rest (contributing to ‘hyperincremental’ change).
  • The IAD focuses partly on the rules and practices that actors develop to build up trust in each other.
  • The ACF describes actors going into politics to turn their beliefs into policy, forming coalitions with people who share their beliefs, then often romanticising their own cause and demonising their opponents.
  • The NPF describes the relative impact of stories on audiences who use cognitive shortcuts to (for example) identify with a hero and draw a simple moral.
  • SCPD describes policymakers drawing on gut feeling to identify good and bad target populations.
  • Policy learning involves using cognition and emotion to acquire new knowledge and skills.

… but the pace of change in psychological research often seems faster than the ways in which policy studies can incorporate new and reliable insights.

Perhaps more importantly, policy studies help us understand the context in which people make such choices. For example, consider the story that Kathryn Oliver and I tell about the role of evidence in policymaking environments:

If there are so many potential authoritative venues, devote considerable energy to finding where the ‘action’ is (and someone specific to talk to). Even if you find the right venue, you will not know the unwritten rules unless you study them intensely. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour some sources of evidence. Research advocates can be privileged insiders in some venues and excluded completely in others. If your evidence challenges an existing paradigm, you need a persuasion strategy good enough to prompt a shift of attention to a policy problem and a willingness to understand that problem in a new way. You can try to find the right time to use evidence to exploit a crisis leading to major policy change, but the opportunities are few and chances of success low. In that context, policy studies recommend investing your time over the long term – to build up alliances, trust in the messenger, knowledge of the system, and to seek ‘windows of opportunity’ for policy change – but offer no assurances that any of this investment will ever pay off.

Then, have a look at this discussion of ‘synthetic’ policy theories, designed to prompt people to consider how far they would go to get their evidence into policy.

Theory-driven policy analysis

As described, this focus on the new policy sciences helps explain why ‘the politics of evidence-based policymaking’ is equally important to civil servants (my occasional audience) as researchers (my usual audience).

To engage in skilled policy analysis, and give good advice, is to recognise the ways in which policymakers combine cognition/emotion to engage with evidence, and must navigate a complex policymaking environment when designing or selecting technically and politically feasible solutions. To give good advice is to recognise what you want policymakers to do, but also that they are not in control of the consequences.

Epilogue

Well, that is the last of the posts for my ANZOG talks. If I’ve done this properly, there should now be a loop of talks. It should be possible to go back to the first one in Auckland and see it as a sequel to this one in Brisbane!

Or, for more on theory-informed policy analysis – in other words, where the ‘new policy sciences’ article is taking us – here is how I describe it to students doing a policy analysis paper (often for the first time).

Or, have a look at the earlier discussion of images of the policy process. You may have noticed that there is a different image in this post (knocked up in my shed at the weekend). It’s because I am experimenting with shapes. Does the image with circles look more relaxing? Does the hexagonal structure look complicated even though it is designed to simplify? Does it matter? I think so. People engage emotionally with images. They share them. They remember them. So, I need an image more memorable than the policy cycle.

 

Paul Cairney Brisbane EBPM New Policy Sciences 25.10.18

 

 

*I welcome suggestions on another word to describe almost-impossibly-hard

2 Comments

Filed under agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, Psychology Based Policy Studies, public policy, Storytelling

Theory and Practice: How to Communicate Policy Research beyond the Academy

Notes (and audio) for my first talk at the University of Queensland, Wednesday 24th October, 12.30pm, Graduate Centre, room 402.

Here is the powerpoint that I tend to use to inform discussions with civil servants (CS). I first used it for discussion with CS in the Scottish and UK governments, followed by remarkably similar discussions in parts of the New Zealand and Australian governments. Partly, it provides a way into common explanations for gaps between the supply of, and demand for, research evidence. However, it also provides a wider context within which to compare abstract and concrete reasons for those gaps, which inform a discussion of possible responses at individual, organisational, and systemic levels. Some of the gap is caused by a lack of effective communication, but we should also discuss the wider context in which such communication takes place.

I begin by telling civil servants about the message I give to academics about why policymakers might ignore their evidence:

  1. There are many claims to policy relevant knowledge.
  2. Policymakers have to ignore most evidence.
  3. There is no simple policy cycle in which we all know at what stage to provide what evidence.

[Slide 3, 24.10.18]

In such talks, I go into different images of policymaking, comparing the simple policy cycle with images of ‘messy’ policymaking, then introducing my own image which describes the need to understand the psychology of choice within a complex policymaking environment.

Under those circumstances, key responses include:

  • framing evidence in terms of the ways in which your audience understands policy problems
  • engaging in networks to identify and exploit the right time to act, and
  • venue shopping to find sympathetic audiences in different parts of political systems.

However, note the context of those discussions. I tend to be speaking with scientific researcher audiences to challenge some preconceptions about: what counts as good evidence, how much evidence we can reasonably expect policymakers to process, and how easy it is to work out where and when to present evidence. It’s generally a provocative talk, to identify the massive scale of the evidence-to-policy task, not a simple ‘how to do it’ guide.

In that context, I suggest to civil servants that many academics might be interested in more CS engagement, but might be put off by the overwhelming scale of their task, and – even if they remained undeterred – would face some practical obstacles:

  1. They may not know where to start: who should they contact to start making connections with policymakers?
  2. The incentives and rewards for engagement may not be clear. The UK’s ‘impact’ agenda has changed things, but not to the extent that any engagement is good engagement. Researchers need to tell a convincing story that they made an impact on policy/ policymakers with their published research, so there is a notional tipping point of engagement in which it reaches a scale that makes it worth doing.
  3. The costs are clearer. For example, any time spent doing engagement is time away from writing grant proposals and journal articles (in other words, the things that still make careers).
  4. The rewards and costs are not spread evenly. Put most simply, white male professors may have the most opportunities and face the fewest penalties for engagement in policymaking and social media. Or, the opportunities and rewards may vary markedly by discipline. In some, engagement is routine. In others, it is time away from core work.

In that context, I suggest that CS should:

  • provide clarity on what they expect from academics, and when they need information
  • describe what they can offer in return (which might be as simple as a written and signed acknowledgement of impact, or formal inclusion on an advisory committee)
  • show some flexibility: you may have a tight deadline, but can you reasonably expect an academic to drop what they are doing at short notice?
  • engage routinely with academics, to help form networks and identify the right people at the right time

These introductory discussions provide a way into common descriptions of the gaps between academics and policymakers:

  • Technical languages/ jargon to describe their work
  • Timescales to supply and demand information
  • Professional incentives (such as to value scientific novelty in academia but evidential synthesis in government)
  • Comfort with uncertainty (often, scientists project relatively high uncertainty and don’t want to get ahead of the evidence; often policymakers need to project certainty and decisiveness)
  • Assessments of the relative value of scientific evidence compared to other forms of policy-relevant information
  • Assessments of the role of values and beliefs (some scientists want to draw the line between providing evidence and advice; some policymakers want them to go much further)

To discuss possible responses, I use the European Commission Joint Research Centre’s ‘knowledge management for policy’ project, which identifies the 8 core skills of organisations bringing together the suppliers and demanders of policy-relevant knowledge.

Figure 1

However, I also use the following table to highlight some caution about the things we can achieve with general skills development and organisational reforms. Sometimes, the incentives to engage will remain low. Further, engagement is no guarantee of agreement.

In a nutshell, the table provides three very different models of ‘evidence-informed policymaking’ when we combine political choices about what counts as good evidence, and what counts as good policymaking (discussed at length in teaching evidence-based policy to fly). Discussion and clearer communication may help clarify our views on what makes a good model, but I doubt it will produce any agreement on what to do.

Table 1: 3 ideal types of EBPM

In the latter part of the talk, I go beyond that PowerPoint to discuss two broad examples of practical responses:

  1. Storytelling

The Narrative Policy Framework describes the ‘science of stories’: we can identify stories with a 4-part structure (setting, characters, plot, moral) and measure their relative impact.  Jones/ Crow and Crow/Jones provide an accessible way into these studies. Also look at Davidson’s article on the ‘grey literature’ as a rich source of stories on stories.

On one hand, I think that storytelling is a great possibility for researchers: it helps them produce a core – and perhaps emotionally engaging – message that they can share with a wider audience. Indeed, I’d see it as an extension of the process that academics are used to: identifying an audience and framing an argument according to the ways in which that audience understands the world.

On the other hand, it is important to not get carried away by the possibilities:

  • My reading of the NPF empirical work is that the most impactful stories reinforce the beliefs of the audience – to mobilise them to act – rather than change their minds.
  • Also look at the work of the Frameworks Institute, which experiments with individual versus thematic stories because people react to them in very different ways. Some might empathise with an individual story; some might judge harshly. For example, they discuss stories about low income families and healthy eating, in which they use the theme of a maze to help people understand the lack of good choices available to people in areas with limited access to healthy food.

See: Storytelling for Policy Change: promise and problems

  2. Evidence for advocacy

The article I co-authored with Oxfam staff helps identify the lengths to which we might think we have to go to maximise the impact of research evidence. Their strategies include:

  1. Identifying the policy change they would like to see.
  2. Identifying the powerful actors they need to influence.
  3. A mixture of tactics: insider, outsider, and supporting others by, for example, boosting local civil society organisations.
  4. A mix of ‘evidence types’ for each audience.

[Oxfam table 2]

  5. Wider public campaigns to address the political environment in which policymakers consider choices
  6. Engaging stakeholders in the research process (often called the ‘co-production of knowledge’)
  7. Framing: personal stories, ‘killer facts’, visuals, credible messenger
  8. Exploiting ‘windows of opportunity’
  9. Monitoring, learning, trial and error

In other words, a source of success stories may provide a model for engagement or the sense that we need to work with others to engage effectively. Clear communication is one thing. Clear impact at a significant scale is another.

See: Using evidence to influence policy: Oxfam’s experience

Filed under agenda setting, Evidence Based Policymaking (EBPM)