Tag Archives: policymaking

Policy Analysis in 750 Words: Political feasibility and policy success

Policy studies and policy analysis guidebooks identify the importance of feasible policy solutions:

  • Technical feasibility: will this solution work as intended if implemented?
  • Political feasibility: will it be acceptable to enough powerful people?

For example, Kingdon treats feasibility as one of three conditions for major policy change during a ‘window of opportunity’: (1) there is high attention to the policy problem, (2) a feasible solution already exists, and (3) key policymakers have the motive and opportunity to select it.

Guidebooks relate this requirement initially to your policymaker client: what solutions will they rule out, to the extent that they are not even worth researching as options (at least for the short term)?

Further, this assessment relates to types of policy ‘tool’ or ‘instrument’: one simple calculation is that ‘redistributive’ measures are harder to sell than ‘distributive’, while both may be less attractive than regulation (although complex problems likely require a mix of instruments).

These insights connect to Lindblom’s classic vision of:

  1. Incremental analysis. It is better to research in-depth a small number of feasible options than spread your resources too thinly to consider all possibilities.
  2. Strategic analysis. The feasibility of a solution relates strongly to current policy. The more radical a departure from the current negotiated position, the harder it will be to sell.

As many posts in the Policy Analysis in 750 words series describe, this advice is not entirely useful for actors who seek rapid and radical departures from the status quo. Lindblom’s response to such critics was to seek radical change via a series of non-radical steps (at least in political systems like the US), which (broadly speaking) represents one of two possible approaches.

While incrementalism is not as popular as it once was (as a description of, or prescription for, policymaking), it tapped into the enduring insight that policymaking systems produce huge amounts of minor change. Rapid and radical policy change is rare, and it is even rarer to be able to connect it to influential analysis and action (at least in the absence of a major event). This knowledge should not put people off trying, but rather help them understand the obstacles that they seek to overcome.

Relating feasible solutions and strategies to ‘policy success’

One way to incorporate this kind of advice is to consider how (especially elected) policymakers would describe their own policy success. The determination of success and failure is a highly contested and political process (not simply a technical exercise called ‘evaluation’), and policymakers may refer – often implicitly – to the following questions when seeking success:

  1. Political. Will this policy boost my government’s credibility and chances of re-election?
  2. Process. Will it be straightforward to legitimise and maintain support for this policy?
  3. Programmatic. Will it achieve its stated objectives and produce beneficial outcomes if implemented?

The benefit to analysts of asking themselves these questions is that they help to identify the potential solutions that are technically but not politically feasible (or vice versa).

The absence of clear technical feasibility does not necessarily rule out solutions with wider political benefits (for example, it can be beneficial to look like you are trying to do something good). Hence the popular phrase ‘good politics, bad policy’.

Nor does a politically unattractive option rule out a technically feasible solution (not all politicians flee the prospect of ‘good policy, bad politics’). However, it should prompt attention to hard choices about whose support to seek, how long to wait, or how hard to push, to seek policy change. You can see this kind of thinking as ‘entrepreneurial’ or ‘systems thinking’ depending on how much faith you have in agency in highly-unequal political contexts.

Further reading

It is tempting to conclude that these obstacles to ‘good policy’ reflect the pathological nature of politics. However, if we want to make this argument, we should at least do it well:

1. You can find this kind of argument in fields such as public health and climate change studies, where researchers bemoan the gap between (a) their high-quality evidence on an urgent problem and (b) a disproportionately weak governmental response. To do it well, we need to separate analytically (or at least think about): (a) the motivation and energy of politicians (usually the source of most criticism of low ‘political will’), and (b) the policymaking systems that constrain even the most sincere and energetic policymakers. See the EBPM page for more.

2. Studies of Social Construction and Policy Design are useful to connect policymaking research with a normative agenda to address ‘degenerative’ policy design.


Filed under 750 word policy analysis

Policy Analysis in 750 Words: Changing things from the inside

How should policy actors seek radical changes to policy and policymaking?

This question prompts two types of answer:

1. Be pragmatic, and change things from the inside

Pragmatism is at the heart of most of the policy analysis texts in this series. They focus on the needs and beliefs of clients (usually policymakers). Policymakers are time-pressed, so keep your analysis short and relevant. See the world through their eyes. Focus on solutions that are politically as well as technically feasible. Propose non-radical steps, which may add up to radical change over the long-term.

This approach will seem familiar to students of research ‘impact’ strategies which emphasise relationship-building, being available to policymakers, and responding to the agendas of governments to maximise the size of your interested audience.

It will also ring bells for advocates of radical reforms in policy sectors such as (public) health and intersectoral initiatives such as gender mainstreaming:

  • Health in All Policies is a strategy to encourage radical changes to policy and policymaking to improve population health. Common advice is to: identify to policymakers how HiAP fits into current policy agendas, seek win-win strategies with partners in other sectors, and go to great lengths to avoid the sense that you are interfering in their work (‘health imperialism’).
  • Gender mainstreaming is a strategy to consider gender in all aspects of policy and policymaking. An equivalent playbook involves steps to: clarify what gender equality is, and what steps may help achieve it; make sure that these ideas translate across all levels and types of policymaking; adopt tools to ensure that gender is a part of routine government business (such as budget processes); and, modify existing policies or procedures while increasing the representation of women in powerful positions.

In other words, the first approach is to pursue your radical agenda via non-radical means, using a playbook that is explicitly non-confrontational. Use your insider status to exploit opportunities for policy change.

2. Be radical, and challenge things from the outside

Challenging the status quo, for the benefit of marginalised groups, is at the heart of critical policy analysis:

  • Reject the idea that policy analysis is a rationalist, technical, or evidence-based process. Rather, it involves the exercise of power to (a) depoliticise problems to reduce attention to current solutions, and (b) decide whose knowledge counts.
  • Identify and question the dominant social constructions of problems and populations, asking who decides how to portray these stories and who benefits from their outcomes.

This approach resonates with frequent criticisms of ‘impact’ advice, emphasising the importance of producing research independent of government interference, to challenge policies that further harm already-marginalised populations.

It will also ring bells among advocates of more confrontational strategies to seek radical changes to policy and policymaking. They include steps to: find more inclusive ways to generate and share knowledge, produce multiple perspectives on policy problems and potential solutions, focus explicitly on the impact of the status quo on marginalised populations, politicise issues continuously to ensure that they receive sufficient attention, and engage in outsider strategies to protest current policies and practices.

Does this dichotomy make sense?

It is tempting to say that this dichotomy is artificial and that we can pursue the best of both worlds, such as working from within when it works and resorting to outsider action and protest when it doesn’t.

However, the blandest versions of this conclusion tend to ignore or downplay the politics of policy analysis in favour of more technical fixes. Sometimes collaboration and consensus politics is a wonderful feat of human endeavour. Sometimes it is a cynical way to depoliticise issues, stifle debate, and marginalise unpopular positions.

This conclusion also suggests that it is possible to establish what strategies work, and when, without really saying how (or providing evidence for success that would appeal to audiences associated with both approaches). Indeed, a recurrent feature of research in these fields is that most attempts to produce radical change prove to be dispiriting struggles. Non-radical strategies tend to be co-opted by more powerful actors, to mainstream new ways of thinking without changing the old. Radical strategies are often too easy to dismiss or counter.

The latter point reminds us to avoid an excessively optimistic emphasis on the strategies of analysts and advocates at the expense of context and audience. The 500 and 1000 words series perhaps tip us too far in the other direction, but provide a useful way to separate (analytically) the reasons for often-minimal policy change. To challenge dominant forms of policy and policymaking requires us to separate the intentional sources of inertia from the systemic issues that would constrain even the most sincere and energetic reformer.

Further reading

This post forms one part of the Policy Analysis in 750 words series, including posts on the role of analysts and marginalised groups. It also relates to work with St Denny, Kippin, and Mitchell (drawing on this draft paper) and posts on ‘evidence based policymaking’.


Filed under 750 word policy analysis

Policy Analysis in 750 Words: Two approaches to policy learning and transfer

This post forms one part of the Policy Analysis in 750 words series. It draws on work for an in-progress book on learning to reduce inequalities. Some of the text will seem familiar if you have read other posts. Think of it as an adventure game in which the beginning is the same but you don’t know the end.

Policy learning is the use of new information to update policy-relevant knowledge. Policy transfer involves the use of knowledge about policy and policymaking in one government to inform policy and policymaking in another.

These processes may seem to relate primarily to research and expertise, but they require many kinds of political choices (explored in this series). They take place in complex policymaking systems over which no single government has full knowledge or control.

Therefore, while the agency of policy analysts and policymakers still matters, they engage with a policymaking context that constrains or facilitates their action.

Two approaches to policy learning: agency and context-driven stories

Policy analysis textbooks focus on learning and transfer as an agent-driven process with well-established guidance (often with five main steps). They form part of a functionalist analysis where analysts identify the steps required to turn comparative analysis into policy solutions, or part of a toolkit to manage stages of the policy process.

Agency is less central to policy process research, which describes learning and transfer as contingent on context. Key factors include:

Analysts compete to define problems and determine the manner and sources of learning, in a multi-centric environment where different contexts will constrain and facilitate action in different ways. For example, varying structural factors – such as socioeconomic conditions – influence the feasibility of proposed policy change, and each centre’s institutions provide different rules for gathering, interpreting, and using evidence.

The result is a mixture of processes in which:

  1.  Learning from experts is one of many possibilities. For example, Dunlop and Radaelli also describe ‘reflexive learning’, ‘learning through bargaining’, and ‘learning in the shadow of hierarchy’.
  2.  Transfer takes many forms.

How should analysts respond?

Think of two different ways to respond to this description of the policy process with this lovely blue summary of concepts. One is your agency-centred strategic response. The other is me telling you why it won’t be straightforward.

An image of the policy process

There are many policy makers and influencers spread across many policymaking ‘centres’

  1. Find out where the action is and tailor your analysis to different audiences.
  2. There is no straightforward way to influence policymaking if multiple venues contribute to policy change and you don’t know who does what.

Each centre has its own ‘institutions’

  1. Learn the rules of evidence gathering in each centre: who takes the lead, how do they understand the problem, and how do they use evidence?
  2. There is no straightforward way to foster policy learning between political systems if each is unaware of each other’s unwritten rules. Researchers could try to learn their rules to facilitate mutual learning, but with no guarantee of success.

Each centre has its own networks

  1. Form alliances with policymakers and influencers in each relevant venue.
  2. The pervasiveness of policy communities complicates policy learning because the boundary between formal power and informal influence is not clear.

Well-established ‘ideas’ tend to dominate discussion

  1. Learn which ideas are in good currency. Tailor your advice to your audience’s beliefs.
  2. The dominance of different ideas precludes many forms of policy learning or transfer. A popular solution in one context may be unthinkable in another.

Many policy conditions (historic-geographic, technological, social and economic factors) command the attention of policymakers and are out of their control. Routine events and non-routine crises prompt policymaker attention to lurch unpredictably.

  1. Learn from studies of leadership in complex systems or the policy entrepreneurs who find the right time to exploit events and windows of opportunity to propose solutions.
  2. The policy conditions may be so different in each system that policy learning is limited and transfer would be inappropriate. Events can prompt policymakers to pay disproportionately low or high attention to lessons from elsewhere, and this attention relates weakly to evidence from analysts.

Feel free to choose one or both forms of advice. One is useful for people who see analysts and researchers as essential to major policy change. The other is useful if it serves as a source of cautionary tales rather than fatalistic responses.

See also:

Policy Concepts in 1000 Words: Policy Transfer and Learning

Teaching evidence based policy to fly: how to deal with the politics of policy learning and transfer

Policy Concepts in 1000 Words: the intersection between evidence and policy transfer

Policy learning to reduce inequalities: a practical framework

Three ways to encourage policy learning

Epistemic versus bargaining-driven policy learning

The ‘evidence-based policymaking’ page explores these issues in more depth


Filed under 750 word policy analysis, IMAJINE, Policy learning and transfer, public policy

Policy in 500 Words: Trust

This post summarises ‘COVID-19: effective policymaking depends on trust in experts, politicians, and the public’ by Adam Wellstead and me.

The meaning of trust

We define trust as ‘a belief in the reliability of other people, organizations, or processes’, but it is one of those terms – like ‘policy’ – that defies a single comprehensive definition. The term ‘distrust’ complicates things further, since it does not simply mean the absence of trust.

Its treatment in social science also varies, which makes our statement – ‘Trust is necessary for cooperation, coordination, social order, and to reduce the need for coercive state imposition’ – one of many ways to understand its role.

A summary of key concepts

Social science accounts of trust relate it to:

1. Individual choice

I may trust someone to do something if I value their integrity (if they say they will do it, I believe them), credibility (I believe their claim is accurate and feasible), and competence (I believe they have the ability).

This perception of reliability depends on:

  • The psychology of the truster. The truster assesses the risk of relying on others, combining cognition and emotion to weigh the risk of making themselves vulnerable against the benefit of collective action, while drawing on an expectation of reciprocity.
  • The behaviour of the trustee. They demonstrate their trustworthiness in relation to past performance, which demonstrates their competence and reliability and perhaps their selflessness in favour of collective action.
  • Common reference points. The trustee and truster may use shortcuts to collective action, such as a reference to something they have in common (e.g. their beliefs or social background), their past interactions, or the authority of the trustee.

2. Social and political rules (aka institutions).

Perhaps ideally, we would learn who to trust via our experiences of working together, but we also need to trust people we have never met, and put equivalent trust in organisations and ‘systems’.

In that context, approaches such as the Institutional Analysis and Development (IAD) framework identify the role of many different kinds of rules in relation to trust:

  • Rules can be formal, written, and widely understood (e.g. to help assign authority regardless of levels of interaction) or informal, unwritten, and only understood by some (e.g. resulting from interactions in some contexts).
  • Rules can represent low levels of trust and a focus on deterring breaches (e.g. creating and enforcing contracts) or high levels of trust (e.g. to formalize ‘effective practices built on reciprocity, emotional bonds, and/or positive expectations’).

3. Societal necessity and interdependence.

Trust is a functional requirement. We need to trust people because we cannot maintain a functional society or political system without working together. Trust-building underpins the study of collaboration (or cooperation and bargaining), such as in the Ecology of Games approach (which draws on the IAD).

  • In that context, trust is a resource (to develop) that is crucial to a required outcome.

Is trust good and distrust bad?

We describe trust as ‘necessary for cooperation’ and distrust as a ‘potent motivator’ that may prompt people to ignore advice or defy cooperation or instruction. Yet, neither is necessarily good or bad. Too much trust may be a function of: (1) the abdication of our responsibility to engage critically with leaders in political systems, (2) vulnerability to manipulation, and/ or (3) excessive tribalism, prompting people to romanticise their own cause and demonise others, each of which could lead us to accept uncritically the cynical choices of policymakers.

Further reading

Trust is a slippery concept, and academics often make it more slippery by assuming rather than providing a definition. In that context, why not read all of the 500 Words series and ask yourself where trust/ distrust fit in?


Filed under 500 words, public policy

Policy Analysis in 750 Words: power and knowledge

This post adapts Policy in 500 Words: Power and Knowledge (the body of this post) to inform the Policy Analysis in 750 words series (the top and tails).

One take home message from the 750 Words series is to avoid seeing policy analysis simply as a technical (and ‘evidence-based’) exercise. Mainstream policy analysis texts break down the process into technical-looking steps, but also show how each step relates to a wider political context. Critical policy analysis texts focus more intensely on the role of politics in the everyday choices that we might otherwise take for granted or consider to be innocuous. The latter connect strongly to wider studies of the links between power and knowledge.

Power and ideas

Classic studies suggest that the most profound and worrying kinds of power are the hardest to observe. We often witness highly visible political battles and can use pluralist methods to identify who has material resources, how they use them, and who wins. However, key forms of power ensure that many such battles do not take place. Actors often use their resources to reinforce social attitudes and policymakers’ beliefs, to establish which issues are policy problems worthy of attention and which populations deserve government support or punishment. Key battles may not arise because not enough people think they are worthy of debate. Attention and support for debate may rise, only to be crowded out of a political agenda in which policymakers can only debate a small number of issues.

Studies of power relate these processes to the manipulation of ideas or shared beliefs under conditions of bounded rationality (see for example the NPF). Manipulation might describe some people getting other people to do things they would not otherwise do. They exploit the beliefs of people who do not know enough about the world, or themselves, to know how to identify and pursue their best interests. Or, they encourage social norms – in which we describe some behaviour as acceptable and some as deviant – which are enforced by (1) the state (for example, via criminal justice and mental health policy), (2) social groups, and (3) individuals who govern their own behaviour with reference to what they feel is expected of them (and the consequences of not living up to expectations).

Such beliefs, norms, and rules are profoundly important because they often remain unspoken and taken for granted. Indeed, some studies equate them with the social structures that appear to close off some action. If so, we may not need to identify manipulation to find unequal power relationships: strong and enduring social practices help some people win at the expense of others, by luck or design.

Relating power to policy analysis: whose knowledge matters?

The concept of ‘epistemic violence’ is one way to describe the act of dismissing an individual, social group, or population by undermining the value of their knowledge or claim to knowledge. Specific discussions include: (a) the colonial West’s subjugation of colonized populations, diminishing the voice of the subaltern; (b) privileging scientific knowledge and dismissing knowledge claims via personal or shared experience; and (c) erasing the voices of women of colour from the history of women’s activism and intellectual history.

It is in this context that we can understand ‘critical’ research designed to ‘produce social change that will empower, enlighten, and emancipate’ (p51). Powerlessness can relate to the visible lack of economic material resources and factors such as the lack of opportunity to mobilise and be heard.

750 Words posts examining this link between power and knowledge

Some posts focus on the role of power in research and/ or policy analysis:

These posts ask questions such as: who decides what evidence will be policy-relevant, whose knowledge matters, and who benefits from this selective use of evidence? They help to (1) identify the exercise of power to maintain evidential hierarchies (or prioritise scientific methods over other forms of knowledge gathering and sharing), and (2) situate this action within a wider context (such as when focusing on colonisation and minoritization). They reflect on how (and why) analysts should respect a wider range of knowledge sources, and how to produce more ethical research with an explicit emancipatory role. As such, they challenge the – naïve or cynical – argument that science and scientists are objective and that science-informed analysis is simply a technical exercise (see also Separating facts from values).

Many posts incorporate these discussions into many policy analysis themes.

See also

Policy Concepts in 1000 Words: Power and Ideas

Education equity policy: ‘equity for all’ as a distraction from race, minoritization, and marginalization. It discusses studies of education policy (many draw on critical policy analysis)

There are also many EBPM posts that slip this discussion of power and politics into discussions of evidence and policy. They don’t always use the word ‘power’ though (see Evidence-informed policymaking: context is everything)


Filed under 750 word policy analysis, agenda setting, Evidence Based Policymaking (EBPM), public policy

Policy Analysis in 750 Words: Separating facts from values

This post begins by reproducing Can you separate the facts from your beliefs when making policy? (based on the 1st edition of Understanding Public Policy) …

A key argument in policy studies is that it is impossible to separate facts and values when making policy. We often treat our beliefs as facts, or describe certain facts as objective, but perhaps only to simplify our lives or support a political strategy (a ‘self-evident’ fact is very handy for an argument). People make empirical claims infused with their values and often fail to realise just how their values or assumptions underpin their claims.

This is not an easy argument to explain. One strategy is to use extreme examples to make the point. For example, Herbert Simon points to Hitler’s Mein Kampf as the ultimate example of value-based claims masquerading as facts. We can also identify historic academic research which asserts that men are more intelligent than women and some races are superior to others. In such cases, we would point out, for example, that the design of the research helped produce such conclusions: our values underpin our (a) assumptions about how to measure intelligence or other measures of superiority, and (b) interpretations of the results.

‘Wait a minute, though’ (you might say). ‘What about simple examples in which you can state facts with relative certainty – such as the statement “there are X number of words in this post”?’ ‘Fair enough’, I’d say (you will have to speak with a philosopher to get a better debate about the meaning of your X words claim; I would simply say that it is trivially true). But this statement doesn’t take you far in policy terms. Instead, you’d want to say that there are too many or too few words, before you decided what to do about it.

In that sense, we have the most practical explanation of the unclear fact/ value distinction: the use of facts in policy is to underpin evaluations (assessments based on values). For example, we might point to the routine uses of data to argue that a public service is in ‘crisis’ or that there is a public health related epidemic (note: I wrote the post before COVID-19; it referred to crises of ‘non-communicable diseases’). We might argue that people only talk about ‘policy problems’ when they think we have a duty to solve them.

Or, facts and values often seem the hardest to separate when we evaluate the success and failure of policy solutions, since the measures used for evaluation are as political as any other part of the policy process. The gathering and presentation of facts is inherently a political exercise, and our use of facts to encourage a policy response is inseparable from our beliefs about how the world should work.

It continues with an edited excerpt from p59 of Understanding Public Policy, which explores the implications of bounded rationality for contemporary accounts of ‘evidence-based policymaking’:

‘Modern science remains value-laden … even when so many people employ so many systematic methods to increase the replicability of research and reduce the reliance of evidence on individual scientists. The role of values is fundamental. Anyone engaging in research uses professional and personal values and beliefs to decide which research methods are the best; generate research questions, concepts and measures; evaluate the impact and policy relevance of the results; decide which issues are important problems; and assess the relative weight of ‘the evidence’ on policy effectiveness. We cannot simply focus on ‘what works’ to solve a problem without considering how we used our values to identify a problem in the first place. It is also impossible in practice to separate two choices: (1) how to gather the best evidence and (2) whether to centralize or localize policymaking. Most importantly, the assertion that ‘my knowledge claim is superior to yours’ symbolizes one of the most worrying exercises of power. We may decide to favour some forms of evidence over others, but the choice is value-laden and political rather than objective and innocuous’.

Implications for policy analysis

Many highly-intelligent and otherwise-sensible people seem to get very bothered with this kind of argument. For example, it gets in the way of (a) simplistic stories of heroic-objective-fact-based-scientists speaking truth to villainous-stupid-corrupt-emotional-politicians, (b) the ill-considered political slogan that you can’t argue with facts (or ‘science’), (c) the notion that some people draw on facts while others only follow their feelings, and (d) the idea that you can divide populations into super-facty versus post-truthy people.

A more sensible approach is to (1) recognise that all people combine cognition and emotion when assessing information, (2) treat politics and political systems as valuable and essential processes (rather than obstacles to technocratic policymaking), and (3) find ways to communicate evidence-informed analyses in that context. This article and this 750 Words post explore how to reflect on this kind of communication.

Most relevant posts in the 750 series

Linda Tuhiwai Smith (2012) Decolonizing Methodologies 

Carol Bacchi (2009) Analysing Policy: What’s the problem represented to be? 

Deborah Stone (2012) Policy Paradox

Who should be involved in the process of policy analysis?

William Riker (1986) The Art of Political Manipulation

Using Statistics and Explaining Risk (David Spiegelhalter and Gerd Gigerenzer)

Barry Hindess (1977) Philosophy and Methodology in the Social Sciences

See also

To think further about the relevance of this discussion, see this post on policy evaluation, this page on the use of evidence in policymaking, this book by Douglas, and this short commentary on ‘honest brokers’ by Jasanoff.


Filed under 750 word policy analysis, Academic innovation or navel gazing, agenda setting, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy

Policy Analysis in 750 Words: How to communicate effectively with policymakers

This post forms one part of the Policy Analysis in 750 words series overview. The title comes from this article by Cairney and Kwiatkowski on ‘psychology based policy studies’.

One aim of this series is to combine insights from policy research (1000, 500) and policy analysis texts. How might we combine insights to think about effective communication?

1. Insights from policy analysis texts

Most texts in this series relate communication to understanding your audience (or client) and the political context. Your audience has limited attention or time to consider problems. They may have good antennae for the political feasibility of any solution, but less knowledge of (or interest in) the technical details. In that context, your aim is to help them treat the problem as worthy of their energy (e.g. as urgent and important) and the solution as doable. Examples include:

  • Bardach: communicating with a client requires coherence, clarity, brevity, and minimal jargon.
  • Dunn: argumentation involves defining the size and urgency of a problem, assessing the claims made for each solution, synthesising information from many sources into a concise and coherent summary, and tailoring reports to your audience.
  • Smith: your audience makes a quick judgement on whether or not to read your analysis. Ask yourself questions including: how do I frame the problem to make it relevant, what should my audience learn, and how does each solution relate to what has been done before? Maximise interest by keeping communication concise, polite, and tailored to a policymaker’s values and interests.

2. Insights from studies of policymaker psychology

These insights emerged from the study of bounded rationality: policymakers do not have the time, resources, or cognitive ability to consider all information, possibilities, solutions, or consequences of their actions. They use two types of informational shortcut associated with concepts such as cognition and emotion, thinking ‘fast and slow’, ‘fast and frugal heuristics’, or, if you like more provocative terms:

  • ‘Rational’ shortcuts. Goal-oriented reasoning based on prioritizing trusted sources of information.
  • ‘Irrational’ shortcuts. Emotional thinking, or thought fuelled by gut feelings, deeply held beliefs, or habits.

We can use such distinctions to examine the role of evidence-informed communication, to reduce:

  • Uncertainty, or a lack of policy-relevant knowledge. Focus on generating ‘good’ evidence and concise communication as you collate and synthesise information.
  • Ambiguity, or the ability to entertain more than one interpretation of a policy problem. Focus on argumentation and framing as you try to maximise attention to (a) one way of defining a problem, and (b) your preferred solution.

Many policy theories describe the latter, in which actors: combine facts with emotional appeals, appeal to people who share their beliefs, tell stories to appeal to the biases of their audience, and exploit dominant ways of thinking or social stereotypes to generate attention and support. These possibilities produce ethical dilemmas for policy analysts.

3. Insights from studies of complex policymaking environments

None of this advice matters if it is untethered from reality.

Policy analysis texts focus on political reality to note that even a perfectly communicated solution is worthless if it is technically feasible but politically infeasible.

Policy process texts focus on policymaking reality: showing that ideal-types such as the policy cycle do not guide real-world action, and describing more accurate ways to guide policy analysts.

For example, they help us rethink the ‘know your audience’ mantra by:

  • Identifying a tendency for most policy to be processed in policy communities or subsystems
  • Showing that many policymaking ‘centres’ create the instruments that produce policy change

Gone are the mythical days of a small number of analysts communicating to a single core executive (and of the heroic researcher changing the world by speaking truth to power). Instead, we have many analysts engaging with many centres, creating a need to not only (a) tailor arguments to different audiences, but also (b) develop wider analytical skills (such as to foster collaboration and the use of ‘design principles’).

How to communicate effectively with policymakers

In that context, we argue that effective communication requires analysts to:

1. Understand your audience and tailor your response (using insights from psychology)

2. Identify ‘windows of opportunity’ for influence (while noting that these windows are outside of anyone’s control)

3. Engage with real world policymaking rather than waiting for a ‘rational’ and orderly process to appear (using insights from policy studies).

See also:

Why don’t policymakers listen to your evidence?

How to combine principles on ‘good evidence’, ‘good governance’, and ‘good practice’

Entrepreneurial policy analysis


Filed under 750 word policy analysis, agenda setting, Evidence Based Policymaking (EBPM), public policy, Storytelling

Policy in 500 Words: Peter Hall’s policy paradigms

Several 500 Word and 1000 Word (a, b, c) posts try to define and measure policy change.

Most studies agree that policymaking systems produce huge amounts of minor change and rare instances of radical change, but disagree on how to explain these patterns. For example:

  • Debates on incrementalism questioned if radical change could be managed via non-radical steps.
  • Punctuated equilibrium theory describes policy change as a function of disproportionately low or high attention to problems, and akin to the frequency of earthquakes (a huge number of tiny changes, and more major changes than we would see in a ‘normal distribution’).
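
The ‘earthquake’ pattern in the second point can be illustrated with a short, purely hypothetical simulation (not from the original post): a distribution built from many tiny changes plus occasional punctuations is leptokurtic, i.e. it has much higher excess kurtosis than a normal distribution.

```python
# Illustrative sketch: punctuated equilibrium implies a leptokurtic
# distribution of policy change, i.e. many tiny adjustments plus more
# extreme shifts than a normal distribution would produce.
import random

random.seed(1)

# A purely incremental world: changes cluster near zero, normally.
normal_changes = [random.gauss(0, 1) for _ in range(10_000)]

# A punctuated world: mostly tiny changes, occasionally a large shift.
punctuated_changes = [
    random.gauss(0, 0.2) if random.random() < 0.95 else random.gauss(0, 5)
    for _ in range(10_000)
]

def excess_kurtosis(xs):
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / var ** 2 - 3  # zero for a normal distribution

print(excess_kurtosis(normal_changes))      # near 0
print(excess_kurtosis(punctuated_changes))  # strongly positive (fat tails)
```

Both worlds produce mostly small changes; only the second also produces the rare large punctuations that inflate the kurtosis, which is the distributional signature that punctuated equilibrium studies look for in (say) budget data.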

One of the most famous accounts of major policy change is by Peter Hall. ‘Policy paradigms’ help explain a tendency towards inertia, punctuated rarely by radical change (compare with discussions of path dependence and critical junctures).

A policy paradigm is a dominant and often taken-for-granted worldview (or collection of beliefs) about: policy goals, the nature of a policy problem, and the instruments to address it.

Paradigms can operate for long periods, subject to minimal challenge or defended successfully during events that call current policies into question. Adherence to a paradigm produces two ‘orders’ of change:

  • 1st order: frequent routine bureaucratic changes to instruments while maintaining policy goals.
  • 2nd order: less frequent, non-routine changes (or use of new instruments) while maintaining policy goals.

Radical and rare – 3rd order – policy change may only follow a crisis in which policymakers cannot solve a policy problem or explain why policy is failing. It prompts a reappraisal and rejection of the dominant paradigm, by a new government with new ways of thinking and/or a government rejecting current experts in favour of new ones. Hall’s example was of rapid paradigm shift in UK economic policy – from ‘Keynesianism’ to ‘Monetarism’ – within very few years.

Hall’s account prompted two different debates:

1. Some describe Hall’s case study as unusual.

Many scholars produced different phrases to describe a more likely pattern of (a) non-radical policy changes contributing to (b) long-term paradigm change and (c) institutional change, perhaps over decades. They include: ‘gradual change with transformative results’ and ‘punctuated evolution’ (see also 1000 Words: Evolution).

2. Some describe Hall’s case study as inaccurate.

This UK paradigm change did not actually happen. Instead, there was:

(a) A sudden and profound policy change that did not represent a paradigm shift (the UK experiment with Monetarism was short-lived).

(b) A series of less radical changes that produced paradigm change over decades: from Keynesianism to ‘neo-Keynesianism’, or from state intervention to neoliberalism (such as to foster economic growth via private rather than public borrowing and spending).

These debates connect strongly to issues in policy analysis, particularly if analysts seek transformative policy change to challenge unequal and unfair outcomes (such as in relation to racism or the climate crisis):

  1. Is paradigm change generally only possible over decades?
  2. How will we know if this transformation is actually taking place and here to stay (if even the best of us can be fooled by temporary developments)?

See also:

1. Beware the use of the word ‘evolution’

2. This focus on the endurance of policy instrument change connects to studies of policy success (see Great Policy Successes).

3. Paul Cairney and Chris Weible (2015) ‘Comparing and Contrasting Peter Hall’s Paradigms and Ideas with the Advocacy Coalition Framework’ in (eds) M. Howlett and J. Hogan Policy Paradigms in Theory and Practice (Basingstoke: Palgrave) PDF


Filed under 500 words, public policy

Education equity policy: ‘equity for all’ as a distraction from race, minoritization, and marginalization

By Paul Cairney and Sean Kippin

This post summarizes a key section of our review of education equity policymaking [see the full article for references to the studies summarized here].

One of the main themes is that many governments present a misleading image of their education policies. There are many variations on this theme, in which policymakers:

  1. Describe the energetic pursuit of equity, and use the right language, as a way to hide limited progress.
  2. Pursue ‘equity for all’ initiatives that ignore or downplay the specific importance of marginalization and minoritization, such as in relation to race and racism, immigration, ethnic minorities, and indigenous populations.
  3. Pursue narrow definitions of equity in terms of access to schools, at the expense of definitions that pay attention to ‘out of school’ factors and social justice.

Minoritization is a strong theme in US studies in particular. US experiences help us categorise multiple modes of marginalisation in relation to race and migration, driven by witting and unwitting action and explicit and implicit bias:

  • The social construction of students and parents. Examples include: framing white students as ‘gifted’ and more deserving of merit-based education (or victims of equity initiatives); framing non-white students as less intelligent, more in need of special needs or remedial classes, and having cultural or other learning ‘deficits’ that undermine them and disrupt white students; and, describing migrant parents as unable to participate until they learn English.
  • Maintaining or failing to challenge inequitable policies. Examples include higher funding for schools and colleges with higher white populations, and tracking (segregating students according to perceived ability), which benefit white students disproportionately.
  • Ignoring social determinants or ‘out of school’ factors.
  • Creating the illusion of equity with measures that exacerbate inequalities. For example, promoting school choice policies while knowing that the rules restrict access to sought-after schools.
  • Promoting initiatives to ignore race, including so-called ‘color blind’ or ‘equity for all’ initiatives.
  • Prioritizing initiatives at the expense of racial or socio-economic equity, such as measures to boost overall national performance at the expense of targeted measures.
  • Game playing and policy subversion, including school and college selection rules to restrict access and improve metrics.

The wider international – primarily Global North – experience suggests that minoritization and marginalization in relation to race, ethnicity, and migration are routine impediments to equity strategies, albeit with some uncertainty about which policies would have the most impact.

Other country studies describe the poor treatment of citizens in relation to immigration status or ethnicity, often while presenting the image of a more equitable system. Until recently, Finland’s global reputation for education equity built on universalism and comprehensive schools has contrasted with its historic ‘othering’ of immigrant populations. Japan’s reputation for containing a homogeneous population, allowing its governments to present an image of classless egalitarianism and harmonious society, contrasts with its discrimination against foreign students. Multiple studies of Canadian provinces provide the strongest accounts of the symbolic and cynical use of multiculturalism for political gains and economic ends.

As in the US, many countries use ‘special needs’ categories to segregate immigrant and ethnic minority populations. Mainstreaming versus special needs debates have a clear racial and ethnic dimension when (1) some groups are more likely to be categorised as having learning disabilities or behavioural disorders, and (2) language and cultural barriers are listed as disabilities in many countries. Further, ‘commonwealth’ country studies identify the marginalisation of indigenous populations in ways comparable to the US marginalisation of students of colour.

Overall, these studies generate the sense that the frequently used language of education equity policy can signal a range of possibilities, from (1) high energy and sincere commitment to social justice, to (2) the cynical use of rhetoric and symbolism to protect historic inequalities.

Examples:

  • Turner, E.O., and Spain, A.K., (2020) ‘The Multiple Meanings of (In)Equity: Remaking School District Tracking Policy in an Era of Budget Cuts and Accountability’, Urban Education, 55, 5, 783-812 https://doi.org/10.1177%2F0042085916674060
  • Thorius, K.A. and Maxcy, B.D. (2015) ‘Critical Practice Analysis of Special Education Policy: An RTI Example’, Remedial and Special Education, 36, 2, 116-124 https://doi.org/10.1177%2F0741932514550812
  • Felix, E.R. and Trinidad, A. (2020) ‘The decentralization of race: tracing the dilution of racial equity in educational policy’, International Journal of Qualitative Studies in Education, 33, 4, 465-490 https://doi.org/10.1080/09518398.2019.1681538
  • Alexiadou, N. (2019) ‘Framing education policies and transitions of Roma students in Europe’, Comparative Education, 55, 3,  https://doi.org/10.1080/03050068.2019.1619334

See also: https://paulcairney.wordpress.com/2017/09/09/policy-concepts-in-500-words-social-construction-and-policy-design/


Filed under education policy, Evidence Based Policymaking (EBPM), Policy learning and transfer, Prevention policy, public policy

The future of public health policymaking after COVID-19: lessons from Health in All Policies

Paul Cairney, Emily St Denny, Heather Mitchell 

This post summarises new research on the health equity strategy Health in All Policies. As our previous post suggests, it is common to hope that a major event will create a ‘window of opportunity’ for such strategies to flourish, but the current COVID-19 experience suggests otherwise. If so, what do HIAP studies tell us about how to respond, and do they offer any hope for future strategies? The full report is on Open Research Europe, accompanied by a brief interview on its contribution to the Horizon 2020 project – IMAJINE – on spatial justice.

COVID-19 should have prompted governments to treat health improvement as fundamental to public policy

Many governments had made strong rhetorical commitments to public health strategies focused on preventing a pandemic of non-communicable diseases (NCDs). To do so, they would address the ‘social determinants’ of health, defined by the WHO as ‘the unfair and avoidable differences in health status’ that are ‘shaped by the distribution of money, power and resources’ and ‘the conditions in which people are born, grow, live, work and age’.

COVID-19 reinforces the impact of the social determinants of health. Health inequalities result from factors such as income and social and environmental conditions, which influence people’s ability to protect and improve their health. COVID-19 had a visibly disproportionate impact on people with (a) underlying health conditions associated with NCDs, and (b) less ability to live and work safely.

Yet, the opposite happened. The COVID-19 response side-lined health improvement

Health departments postponed health improvement strategies and moved resources to health protection.

This experience shows that the evidence does not speak for itself

The evidence on social determinants is clear to public health specialists, but the idea of social determinants is less well known or convincing to policymakers.

It also challenges the idea that the logic of health improvement is irresistible

Health in All Policies (HIAP) is the main vehicle for health improvement policymaking, underpinned by: a commitment to health equity by addressing the social determinants of health; the recognition that the most useful health policies are not controlled by health departments; the need for collaboration across (and outside) government; and, the search for high level political commitment to health improvement.

Its logic is undeniable to HIAP advocates, but not policymakers. A government’s public commitment to HIAP does not lead inevitably to the roll-out of a fully-formed HIAP model. There is a major gap between the idea of HIAP and its implementation. It is difficult to generate HIAP momentum, and it can be lost at any time.

Instead, we need to generate more realistic lessons from health improvement and promotion policy

However, most HIAP research does not provide these lessons. Instead, it combines:

  1. functional logic (here is what we need)
  2. programme logic (here is what we think we need to do to achieve it), and
  3. hope.

Policy theory-informed empirical studies of policymaking could help produce a more realistic agenda, but very few HIAP studies seem to exploit their insights.

To that end, this review identifies lessons from studies of HIAP and policymaking

It summarises a systematic qualitative review of HIAP research. It includes 113 articles (2011-2020) that refer to policymaking theories or concepts while discussing HIAP.

We produced these conclusions from pre-COVID-19 studies of HIAP and policymaking, but our new policymaking context – and its ironic impact on HIAP – is impossible to ignore.

It suggests that HIAP advocates produced a 7-point playbook for the wrong game

The seven most common pieces of advice add up to a plausible but incomplete strategy:

  1. adopt a HIAP model and toolkit
  2. raise HIAP awareness and support in government
  3. seek win-win solutions with partners
  4. avoid the perception of ‘health imperialism’ when fostering intersectoral action
  5. find HIAP policy champions and entrepreneurs
  6. use HIAP to support the use of health impact assessments (HIAs)
  7. challenge the traditional cost-benefit analysis approach to valuing HIAP.

Yet, two emerging pieces of advice highlight the limits to the current playbook and the search for its replacement:

  1. treat HIAP as a continuous commitment to collaboration and health equity, not a uniform model; and,
  2. address the contradictions between HIAP aims.

As a result, most country studies report a major, unexpected, and disappointing gap between HIAP commitment and actual outcomes

These general findings are apparent in almost all relevant studies. They stand out in the ‘best case’ examples where: (a) there is high political commitment and strategic action (such as South Australia), or (b) political and economic conditions are conducive to HIAP (such as Nordic countries).

These studies show that the HIAP playbook has unanticipated results, such as when the win-win strategy leads to HIAP advocates giving ground but receiving little in return.

HIAP strategies to challenge the status quo are also overshadowed by more important factors, including (a) a far higher commitment to existing healthcare policies and the core business of government, and (b) state retrenchment. Additional studies of decentralised HIAP models find major gaps between (a) national strategic commitment (backed by national legislation) and (b) municipal government progress.

Some studies acknowledge the need to use policymaking research to produce new ways to encourage and evaluate HIAP success

Studies of South Australia situate HIAP in a complex policymaking system in which the link between policy activity and outcomes is not linear.  

Studies of Nordic HIAP show that a commitment to municipal responsibility and stakeholder collaboration rules out the adoption of a national uniform HIAP model.

However, most studies do not use policymaking research effectively or appropriately

Almost all HIAP studies only scratch the surface of policymaking research (some try to synthesise its insights, but at the cost of clarity).

Most HIAP studies use policy theories to:

  1. produce practical advice (such as to learn from ‘policy entrepreneurs’), or
  2. supplement their programme logic (to describe what they think causes policy change and better health outcomes).

Most policy theories were not designed for this purpose.

Policymaking research helps primarily to explain the HIAP ‘implementation gap’

Its main lesson is that policy outcomes are beyond the control of policymakers and HIAP advocates. This explanation does not show how to close implementation gaps.

Its practical lessons come from critical reflection on dilemmas and politics, not the reinvention of a playbook

It prompts advocates to:

  • Treat HIAP as a political project, not a technical exercise or puzzle to be solved.
  • Re-examine the likely impact of a focus on intersectoral action and collaboration, to recognise the impact of imbalances of power and the logic of policy specialisation.
  • Revisit the meaning-in-practice of the vague aims that they take for granted without explaining, such as co-production, policy learning, and organisational learning.
  • Engage with key trade-offs, such as between a desire for uniform outcomes (to produce health equity) but acceptance of major variations in HIAP policy and policymaking.
  • Avoid reinventing phrases or strategies when facing obstacles to health improvement.

We describe these points in more detail here:

Our Open Research Europe article (peer reviewed) The future of public health policymaking… (europa.eu)

Paul summarises the key points as part of a HIAP panel: Health in All Policies in times of COVID-19

ORE blog on the wider context of this work: forthcoming


Filed under agenda setting, COVID-19, Evidence Based Policymaking (EBPM), Public health, public policy

I am not Peter Matthews

Some notes for my guest appearance on @urbaneprofessor’s module

Peter’s description

Paul comes from a Political Science background and started off his project trying to understand why politicians don’t make good policy. He uses a lot of Political Science theory to understand the policy process (what MPP students have been learning) and theory from Public Policy about how to make the policy process better.

I come from a Social Policy background. I presume policy will be bad, and approach policy analysis from a normative position, analysing and criticising it from theoretical and critical perspectives.

Paul’s description

I specialize in the study of public policy and policymaking. I ‘synthesise’ and use policy concepts and theories to ask: how do policy processes work, and why?

Most theories and concepts – summarized in 1000 and 500 words – engage with that question in some way.

As such, I primarily seek to describe and explain policymaking, without spending much time thinking about making it better (unless asked to do so, or unless I feel very energetic).

In particular, I can give you a decent account of how all of these policy theories relate to each other, which is more important than it first seems.

A story of complex government

This ‘synthesis’ relates to my story about key elements of policy theories, with a different context influencing how I tell it. For example, I tend to describe ‘The Policy Process’ in 500 or 1000 words with the ‘Westminster Model’ versus ‘policy communities’ stories in mind (and a US scholar might tell this story in a different way):

Bounded rationality (500, 1000):

  • Individual policymakers can only pay attention to and understand a tiny proportion of (a) available information and (b) the policy problems for which they are ostensibly responsible
  • So, they find cognitive shortcuts to pay attention to some issues/ information and ignore the rest (goal setting, relying on trusted advisors, belief translation, gut instinct, etc.)
  • Governmental organisations have more capacity, but also develop ‘standard operating procedures’ to limit their attention, and rely on many other actors for information and advice

Complex Policymaking Environments consisting of:

  • Many actors in many venues
  • Institutions (formal and informal rules)
  • Networks (relationships between policymakers and influencers)
  • Ideas (dominant beliefs, influencing the interpretation of problems and solutions)
  • Socioeconomic context and events

As such, the story of, say, multi-centric policymaking (or MLG, or complexity theory) contrasts with the idea of highly centralized control in the UK government.

A story of ‘evidence based policymaking’

That story provides context for applications to the agendas taken forward by other disciplines or professions.

  • The most obvious example is ‘evidence based policymaking’: my role is to explain why it is little more than a political slogan, and why people should not expect (or indeed want) it to exist, not to lobby for its existence
  • Also working on similar stories in relation to policy learning and policy design: my role is to highlight dilemmas and cautionary tales, not be a policy designer.

The politics of policymaking research

Most of the theories I describe relate to theory-informed empirical projects, generally originating from the US, and generally described as ‘positivist’ in contrast to (say) ‘interpretive’ (or, say, ‘constructivist’).

However, there are some interesting qualifications:

  • Some argue that these distinctions are overcooked (or, I suppose, overboiled)
  • Some try to bring in postpositivist ideas to positivist networks (the Narrative Policy Framework, NPF)
  • Some emerged from ‘critical policy analysis’ (Social Construction and Policy Design, SCPD)

The politics of policy analysis

This context helps understand my most recent book: The Politics of Policy Analysis

The initial podcast tells a story about MPP development, in which I used to ask students to write policy analyses (1st semester) without explaining what policy analysis was, or how to do it. My excuse is that the punchline of the module was: your account of the policy theories/ policy context is more important than your actual analysis (see the Annex to the book).

Since then, I have produced a webpage – 750 – which:

  • summarises the stories of the most-used policy analysis texts (e.g. Bardach) which identify steps including: define the problem; identify solutions; use values to compare trade-offs between solutions; predict their effects; make a recommendation
  • relates those texts to policy theories, to identify how bounded rationality and complexity change that story (and the story of the policy cycle)
  • relates both to ‘critical’ policy analysis and social science texts (some engage directly – like Stone, like Bacchi – while some provide insights – such as on critical race theory – without necessarily describing ‘policy analysis’)

A description of ‘critical’ approaches is fairly broad, but I think they tend to have key elements in common:

  • a commitment to use research to improve policy for marginalized populations (described by Bacchi as siding with the powerless against the powerful, usually in relation to class, race, ethnicity, gender, sexuality, disability)
  • analysing policy to identify: who is portrayed positively/negatively; who benefits or suffers as a result
  • analysing policymaking to identify: whose knowledge counts (e.g. as high quality and policy relevant), who is included or excluded
  • identifying ways to challenge (a) dominant and damaging policy frames and (b) insulated/ exclusive versus participatory/ inclusive forms of policymaking

If so, I would see these three approaches as ways to understand and engage with policymaking that could be complementary or contradictory. In other words, I would warn against assuming either in advance.


Filed under 1000 words, 500 words, 750 word policy analysis

The COVID-19 exams fiasco across the UK: why did policymaking go so wrong?

This post first appeared on the LSE British Politics and Policy blog, and it summarises our new article: Sean Kippin and Paul Cairney (2021) ‘The COVID-19 exams fiasco across the UK: four nations and two windows of opportunity’, British Politics, PDF Annex. The focus on inequalities of attainment is part of the IMAJINE project on spatial justice and territorial inequalities.

In the summer of 2020, after cancelling exams, the UK and devolved governments sought teacher estimates of students’ grades, but supported an algorithm to standardise the results. When the results produced a public outcry over unfair consequences, they initially defended their decision but quickly reverted to teacher assessment. These experiences, argue Sean Kippin and Paul Cairney, highlight the confluence of events and choices in which an imperfect and rejected policy solution became a ‘lifeline’ for four beleaguered governments.

In 2020, the UK and devolved governments performed a ‘U-turn’ on their COVID-19 school exams replacement policies. The experience was embarrassing for education ministers and damaging to students. There are significant differences between (and often within) the four nations in the structure, timing, weight, and interrelationship of the different examinations. However, in general, the A-level (England, Northern Ireland, Wales) and Higher/Advanced Higher (Scotland) examinations have similar policy implications, dictating entry to further and higher education and influencing employment opportunities. The Priestley review, commissioned by the Scottish Government after its U-turn, described replacing exams credibly as an ‘impossible task’.

Initially, each government defined the new policy problem in relation to the need to ‘credibly’ replicate the purpose of exams, to allow students to progress to tertiary education or employment. All four quickly announced their intention to allocate grades to students in some form, rather than replace the assessments with, for example, remote examinations. However, mindful of the long-term credibility of the examinations system and of ensuring fairness, each government opted to maintain the qualifications and seek a distribution of grades similar to previous years. A key consideration was that UK universities accept large numbers of students from across the UK.

One potential solution open to policymakers was to rely solely on teacher grading through ‘centre assessed grades’ (CAGs), which are ‘based on a range of evidence including mock exams, non-exam assessment, homework assignments and any other record of student performance over the course of study’. Potential problems included the risk of high variation and discrepancies between centres, the potential overload of the higher education system, and the tendency for teacher-predicted grades to reward already privileged students and punish disabled, non-white, and economically deprived children.

A second option was to take CAGs as a starting point, then use an algorithm to produce ‘standardisation’, which was potentially attractive to each government because it allowed students to complete secondary education and progress to the next level in similar ways to previous (and future) cohorts. Further, an emphasis on the technical nature of this standardisation – with qualifications agencies taking the lead in designing the process by which grades would be allocated, and opting not to share the details of their algorithms – was a key part of its (temporary) viability. Each government then made similar claims when defining the problem and selecting the solution. Yet this approach reduced both the debate on the unequal impact of this process on students, and the chance for other experts to examine whether the algorithm would produce the desired effect. Policymakers in all four governments assured students that the grading would be accurate and fair, with teacher discretion playing a large role in the calculation of grades.
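The core mechanism can be illustrated with a minimal, purely hypothetical sketch (the actual 2020 algorithms were far more complex and centre-specific, and the function and data below are invented for illustration): rank students within a centre, then fit them to the centre’s historical grade distribution, so that a teacher-assessed grade can be overridden by the centre’s past results.

```python
# Hypothetical sketch of rank-based standardisation (not the actual
# 2020 algorithm). Students are ranked within a centre, then fitted
# to the centre's historical grade distribution, so a strong
# teacher-assessed grade can still be downgraded.

def standardise(ranked_students, historical_distribution):
    """Allocate grades (best rank first) so that the cohort's grade
    shares match the centre's historical proportions."""
    n = len(ranked_students)
    allocated = {}
    i = 0
    for grade, share in historical_distribution:  # e.g. ("A", 0.2)
        quota = round(share * n)
        for student in ranked_students[i:i + quota]:
            allocated[student] = grade
        i += quota
    for student in ranked_students[i:]:  # any remainder: lowest grade
        allocated[student] = historical_distribution[-1][0]
    return allocated

# A centre whose teachers awarded strong CAGs across the board, but
# whose historical results were mixed: only the top-ranked student
# keeps an A, and the lowest-ranked students drop to C.
students = ["s1", "s2", "s3", "s4", "s5"]  # ordered by centre ranking
history = [("A", 0.2), ("B", 0.4), ("C", 0.4)]
print(standardise(students, history))
# {'s1': 'A', 's2': 'B', 's3': 'B', 's4': 'C', 's5': 'C'}
```

The sketch shows why downgrades concentrated in centres with weaker historical results regardless of individual performance, which is the pattern that drove the public outcry described below.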

To these governments, it appeared at first that they had found a fair and efficient (or at least defensible) way to allocate grades, and public opinion did not respond negatively to its announcement. However, these appearances proved to be profoundly deceptive and vanished as each government published its exam results. The Scottish national mood shifted so intensely that, after a few days, pursuing standardisation no longer seemed politically feasible. The intense criticism centred on the unequal level of grade reductions after standardisation, rather than the unequal overall rise in grade performance after teacher assessment and standardisation (which advantaged poorer students).

Despite some recognition that similar problems were afoot elsewhere, this shift of problem definition did not happen in the rest of the UK until (a) their published exam results highlighted similar problems regarding the role of previous school performance on standardised results, and (b) the Scottish Government had already changed course. Upon the release of grades outside Scotland, it became clear that downgrades were also concentrated in more deprived areas. For instance, in Wales, 42% of students saw their A-Level results lowered from their Centre Assessed Grades, with the figure close to a third for Northern Ireland.

Each government thus faced similar choices: defending the original system by challenging the emerging consensus around its apparent unfairness; modifying the system by changing the appeals system; or abandoning it altogether and reverting to solely teacher assessed grades. Ultimately, all three governments followed the same path. Initially, they opted to defend their original policy choice. However, by 17 August, the UK, Welsh, and Northern Irish education secretaries had announced (separately) that examination grades would be based solely on CAGs – unless the standardisation process had generated a higher grade (students would receive whichever was highest).

Scotland’s initial experience was instructive to the rest of the UK, and its example provided the UK government with a blueprint to follow (eventually). It began with a new policy choice – reverting to teacher assessed grades – sold as fairer to victims of the standardisation process. Once this precedent had been set, a similar change of course became difficult for policymakers at the UK level to resist, particularly when faced with a similar backlash. The UK government’s decision in turn influenced the Welsh and Northern Irish governments.

In short, the particular ordering of choices created a cascading effect across the four governments, which initially produced one policy solution before triggering a U-turn. This focus on order and timing should not be lost during the inevitable inquiries and reports on the examinations systems. The take-home message is not to ignore the policy process when evaluating the long-term effect of these policies. A focus on why the standardisation processes went wrong is welcome, but we should also ask why the policymaking process malfunctioned, to produce a wildly inconsistent approach to the same policy choice in such a short space of time. Examining both aspects of this fiasco will be crucial to the grading process in 2021, given that governments will be seeking an alternative to exams for a second year.

__________________________

Note: the above draws on the authors’ published work in British Politics.


Filed under IMAJINE, Policy learning and transfer, public policy, UK politics and policy

Policy Analysis in 750 Words: what you need as an analyst versus policymaking reality

This post forms one part of the Policy Analysis in 750 words series overview. Note for the eagle eyed: you are not about to experience déjà vu. I’m just using the same introduction.

When describing ‘the policy sciences’, Lasswell distinguishes between:

  1. ‘knowledge of the policy process’, to foster policy studies (the analysis of policy)
  2. ‘knowledge in the process’, to foster policy analysis (analysis for policy)

The lines between each approach are blurry, and each element makes less sense without the other. However, the distinction is crucial to help us overcome the major confusion associated with this question:

Does policymaking proceed through a series of stages?

The short answer is no.

The longer answer is that you can find about 40 blog posts (of 500 and 1000 words) which compare (a) a stage-based model called the policy cycle, and (b) the many, many policy concepts and theories that describe a far messier collection of policy processes.

[Image: the policy cycle]

In a nutshell, most policy theorists reject this image because it oversimplifies a complex policymaking system. The image provides a great way to introduce policy studies, and serves a political purpose, but it does more harm than good:

  1. Descriptively, it is profoundly inaccurate (unless you imagine thousands of policy cycles interacting with each other to produce less orderly behaviour and less predictable outputs).
  2. Prescriptively, it gives you rotten advice about the nature of your policymaking task (for more on these points, see this chapter, article, article, and series).

Why does the stages/ policy cycle image persist? Two relevant explanations

 

  1. It arose from a misunderstanding in policy studies

In another nutshell, Chris Weible and I argue (in a secret paper) that the stages approach represents a good idea gone wrong:

  • If you trace it back to its origins, you will find Lasswell’s description of decision functions: intelligence, recommendation, prescription, invocation, application, appraisal and termination.
  • These functions correspond reasonably well to a policy cycle’s stages: agenda setting, formulation, legitimation, implementation, evaluation, and maintenance, succession or termination.
  • However, Lasswell was imagining functional requirements, while the cycle seems to describe actual stages.

In other words, if you take Lasswell’s list of what policy analysts/ policymakers need to do, multiply it by the number of actors (spread across many organisations or venues) trying to do it, then you get the multi-centric policy processes described by modern theories. If, instead, you strip all that activity down into a single cycle, you get the wrong idea.

  2. It is a functional requirement of policy analysis

This description should seem familiar, because the classic policy analysis texts appear to describe a similar series of required steps, such as:

  1. define the problem
  2. identify potential solutions
  3. choose the criteria to compare them
  4. evaluate them in relation to their predicted outcomes
  5. recommend a solution
  6. monitor its effects
  7. evaluate past policy to inform current policy.

However, these texts also provide a heavy dose of caution about your ability to perform these steps (compare Bardach, Dunn, Meltzer and Schwartz, Mintrom, Thissen and Walker, Weimer and Vining).

In addition, studies of policy analysis in action suggest that:

  • an individual analyst’s need for simple steps, to turn policymaking complexity into useful heuristics and pragmatic strategies,

should not be confused with

  • an accurate description of how policy processes actually work.

What you need versus what you can expect

Overall, this discussion of policy studies and policy analysis reminds us of a major difference between:

  1. Functional requirements. What you need from policymaking systems, to (a) manage your task (the 5-8 step policy analysis) and (b) understand and engage in policy processes (the simple policy cycle).
  2. Actual processes and outcomes. What policy concepts and theories tell us about bounded rationality (which limit the comprehensiveness of your analysis) and policymaking complexity (which undermines your understanding and engagement in policy processes).

Of course, I am not about to provide you with a solution to these problems.

Still, this discussion should help you worry a little bit less about the circular arguments you will find in key texts: here are some simple policy analysis steps, but policymaking is not as ‘rational’ as the steps suggest, but (unless you can think of an alternative) there is still value in the steps, and so on.

See also:

The New Policy Sciences


Filed under 750 word policy analysis, agenda setting, public policy

Policy Analysis in 750 Words: What can you realistically expect policymakers to do?

This post forms one part of the Policy Analysis in 750 words series overview.

One aim of this series is to combine insights from policy research (1000, 500) and policy analysis texts.

In this case, modern theories of the policy process help you identify your audience and their capacity to follow your advice. This simple insight may have a profound impact on the advice you give.

Policy analysis for an ideal-type world

For our purposes, an ideal-type is an abstract idea, which highlights hypothetical features of the world, to compare with ‘real world’ descriptions. It need not be an ideal to which we aspire. For example, comprehensive rationality describes the ideal type, and bounded rationality describes the ‘real world’ limitations to the ways in which humans and organisations process information.

 

Imagine writing policy analysis in the ideal-type world of a single powerful ‘comprehensively rational’ policymaker at the heart of government, making policy via an orderly policy cycle.

Your audience would be easy to identify, your analysis would be relatively simple, and you would not need to worry about what happens after you make a recommendation for policy change.

You could adopt a simple 5-8 step policy analysis method, use widely-used tools such as cost-benefit analysis to compare solutions, and know where the results would feed into the policy process.

I have perhaps over-egged this ideal-type pudding, but I think a lot of traditional policy analyses tapped into this basic idea and focused more on the science of analysis than the political and policymaking context in which it takes place (see Radin and Brans, Geva-May, and Howlett).

Policy analysis for the real world

Then imagine a far messier and less predictable world in which the nature of the policy issue is highly contested, responsibility for policy is unclear, and no single ‘centre’ has the power to turn a recommendation into an outcome.

This image is a key feature of policy process theories, which describe:

  • Many policymakers and influencers spread across many levels and types of government (as the venues in which authoritative choice takes place). Consequently, it is not a straightforward task to identify and know your audience, particularly if the problem you seek to solve requires a combination of policy instruments controlled by different actors.
  • Each venue resembles an institution driven by formal and informal rules. Formal rules are written-down or widely-known. Informal rules are unwritten, difficult to understand, and may not even be understood in the same way by participants. Consequently, it is difficult to know if your solution will be a good fit with the standard operating procedures of organisations (and therefore if it is politically feasible or too challenging).
  • Policymakers and influencers operate in ‘subsystems’, forming networks built on resources such as trust or coalitions based on shared beliefs. Effective policy analysis may require you to engage with – or become part of – such networks, to allow you to understand the unwritten rules of the game and encourage your audience to trust the messenger. In some cases, the rules relate to your willingness to accept current losses for future gains, to accept the limited impact of your analysis now in the hope of acceptance at the next opportunity.
  • Actors relate their analysis to shared understandings of the world – how it is, and how it should be – which are often so well-established as to be taken for granted. Common terms include paradigms, hegemons, core beliefs, and monopolies of understandings. These dominant frames of reference give meaning to your policy solution. They prompt you to couch your solutions in terms of, for example, a strong attachment to evidence-based cases in public health, value for money in treasury departments, or with regard to core principles such as liberalism or socialism in different political systems.
  • Your solutions relate to socioeconomic context and to events that seem (a) impossible to ignore and (b) out of the control of policymakers. Such factors include a political system’s geography, demography, social attitudes, and economy, while events can be routine elections or unexpected crises.

What would you recommend under these conditions? Rethinking 5-step analysis

There is a large gap between policymakers’ (a) formal responsibilities versus (b) actual control of policy processes and outcomes. Even the most sophisticated ‘evidence based’ analysis of a policy problem will fall flat if uninformed by such analyses of the policy process. Further, the terms of your cost-benefit analysis will be highly contested (at least until there is agreement on what the problem is, and how you would measure the success of a solution).

Modern policy analysis texts try to incorporate such insights from policy theories while maintaining a focus on 5-8 steps. For example:

  • Meltzer and Schwartz contrast their ‘flexible’ and ‘iterative’ approach with a too-rigid ‘rationalistic approach’.
  • Bardach and Dunn emphasise the value of political pragmatism and the ‘art and craft’ of policy analysis.
  • Weimer and Vining invest 200 pages in economic analyses of markets and government, often highlighting a gap between (a) our ability to model and predict economic and social behaviour, and (b) what actually happens when governments intervene.
  • Mintrom invites you to see yourself as a policy entrepreneur, to highlight the value of ‘positive thinking’, creativity, deliberation, and leadership, and perhaps seek ‘windows of opportunity’ to encourage new solutions. Alternatively, a general awareness of the unpredictability of events can prompt you to be modest in your claims, since the policymaking environment may be more important (than your solution) to outcomes.
  • Thissen and Walker focus more on a range of possible roles than a rigid 5-step process.

Beyond 5-step policy analysis

  1. Compare these pragmatic, client-orientated, and communicative models with the questioning, storytelling, and decolonizing approaches by Bacchi, Stone, and L.T. Smith.
  • The latter encourage us to examine more closely the politics of policy processes, including the importance of framing, narrative, and the social construction of target populations to problem definition and policy design.
  • Without this wider perspective, we are focusing on policy analysis as a process rather than considering the political context in which analysts use it.
  2. Additional posts on entrepreneurs and ‘systems thinking’ [to be added] encourage us to reflect on the limits to policy analysis in multi-centric policymaking systems.

 

 


Filed under 750 word policy analysis, agenda setting, Evidence Based Policymaking (EBPM), public policy

Policy Concepts in 1000 Words: how do policy theories describe policy change?

The 1000 words and 500 words series already show how important but difficult it is to define and measure policy change. In this post, Leanne Giordono and I dig deeper into the – often confusingly different – ways in which different researchers conceptualise this process. We show why there is such variation and provide a checklist of questions to ask of any description of policy change.

Measuring policy change is more difficult than it looks

The measurement of policy change is important. Most ‘what is policy?’ discussions remind us that there can be a huge difference between policy as a (a) statement of intent, (b) strategy, (c) collection of tools/ instruments and (d) contributor to policy outcomes.

Policy theories remind us that, while politicians and political parties often promise to sweep into office and produce radical departures from the past, most policy change is minor. There is a major gap between stated intention and actual outcomes, partly because policymakers do not control the policy process for which they are responsible. Instead, they inherit the commitments of their predecessors and make changes at the margins.

The 1000 words and 500 words posts suggest that we address this problem of measurement by identifying the use of a potentially large number of policy instruments or policy tools, such as regulation (including legislation) and resources (money and staffing), to capture the range of powers at policymakers’ disposal.

Then, they suggest that we tell a story of policy change, focusing on (a) what problem policymakers were trying to solve, and the size of their response in relation to the size of the problem, and (b) the precise nature of specific changes, or how each change contributes to the ‘big picture’.

This recommendation highlights a potentially major problem: as researchers, we can produce very different narratives of policy change from the same pool of evidence, by accentuating some measures and ignoring others, or putting more faith in some data than others.

Three ways to navigate different approaches to imagining and measuring change

Researchers use many different concepts and measures to define and identify policy change. It would be unrealistic – and perhaps unimaginative – to solve this problem with a call for one uniform approach.

Rather, our aim is to help you (a) navigate this diverse field by (b) identifying the issues and concepts that will help you interpret and compare different ways to measure change.

  1. Check if people are ‘showing their work’

Pay close attention to how scholars define their terms. For example, be careful with incomplete definitions that rely on a reference to evolutionary change (which can mean many different things) or incremental change (e.g. does an increment mean small or non-radical?). Or, note that frequent distinctions between minor and major change seem useful, but we are often trying to capture and explain a confusing mixture of both.

  2. Look out for different questions

Multiple typologies of change often arise because different theories ask and answer different questions:

  • The Advocacy Coalition Framework distinguishes between minor and major change, associating the former with routine ‘policy-oriented learning’, and the latter with changes in core policy beliefs, often caused by a ‘shock’ associated with policy failure or external events.
  • Innovation and Diffusion models examine the adoption and non-adoption of a specific policy solution over a specific period of time in multiple jurisdictions as a result of learning, imitation, competition or coercion.
  • Classic studies of public expenditure generated four categories to ask if the ‘budgetary process of the United States government is equivalent to a set of temporally stable linear decision rules’. They describe policy change as minor and predictable and explain outliers as deviations from the norm.
  • Punctuated Equilibrium Theory identifies a combination of (a) huge numbers of small policy change and (b) small numbers of huge change as the norm, in budgetary and other policy changes.
  • Hall distinguishes between (a) routine adjustments to policy instruments, (b) changes in instruments to achieve existing goals, and (c) complete shifts in goals. He compares long periods in which (1) some ideas dominate and institutions do not change, with (2) ‘third order’ change in which a profound sense of failure contributes to a radical shift of beliefs and rules.
  • More recent scholarship identifies a range of concepts – including layering, drift, conversion, and displacement – to explain more gradual causes of profound changes to institutions.

These approaches identify a range of possible sources of measures:

  1. a combination of policy instruments that add up to overall change
  2. the same single change in many places
  3. change in relation to one measure, such as budgets
  4. a change in ideas, policy instruments and/ or rules.

As such, the potential for confusion is high when we include all such measures under the single banner of ‘policy change’.

  3. Look out for different measures

Spot the different ways in which scholars try to ‘operationalize’ and measure policy change, quantitatively and/ or qualitatively, with reference to four main categories.

  1. Size can be measured with reference to:
  • A comparison of old and new policy positions.
  • A change observed in a sample or whole population (using, for example, standard deviations from the mean).
  • An ‘ideal’ state, such as an industry or ‘best practice’ standard.
  2. Speed describes the amount of change that occurs over a specific interval of time, such as:
  • How long it takes for policy to change after a specific event or under specific conditions.
  • The duration of time between commencement and completion (often described as ‘sudden’ or ‘gradual’).
  • How this speed compares with comparable policy changes in other jurisdictions (often described with reference to ‘leaders’ and ‘laggards’).
  3. Direction describes the course of the path from one policy state to another. It is often described in comparison to:
  • An initial position in one jurisdiction (such as an expansion or contraction).
  • Policy or policy change in other jurisdictions (such as via ‘benchmarking’ or ‘league tables’).
  • An ‘ideal’ state (such as with reference to left or right wing aims).
  4. Substance relates to policy change in relation to:
  • Relatively tangible instruments such as legislation, regulation, or public expenditure.
  • More abstract concepts such as in relation to beliefs or goals.

Take home points for students

Be thoughtful when drawing comparisons between applications drawn from many theoretical traditions and addressing different research questions. You can seek clarity by posing three questions:

  1. How clearly has the author defined the concept of policy change?
  2. How are the chosen theories and research questions likely to influence the author’s operationalization of policy change?
  3. How does the author operationalize policy change with respect to size, speed, direction, and/or substance?

However, you should also note that the choice of definition and theory may affect the meaning of measures such as size, speed, direction, and/or substance.

 


Filed under 1000 words, public policy

Understanding Public Policy 2nd edition

All going well, it will be out in November 2019. We are now at the proofing stage.

I have included below the summaries of the chapters (and each chapter should also have its own entry (or multiple entries) in the 1000 Words and 500 Words series).

[Images: 2nd edition cover and summaries of chapters 1 to 13]


Filed under 1000 words, 500 words, agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, public policy

Why don’t policymakers listen to your evidence?

Since 2016, my most common academic presentation to interdisciplinary scientist/ researcher audiences is a variant of the question, ‘why don’t policymakers listen to your evidence?’

I tend to provide three main answers.

1. Many policymakers have many different ideas about what counts as good evidence

Few policymakers know or care about the criteria developed by some scientists to describe a hierarchy of scientific evidence. For some scientists, at the top of this hierarchy is the randomised controlled trial (RCT) and the systematic review of RCTs, with expertise much further down the list, followed by practitioner experience and service user feedback near the bottom.

Yet, most policymakers – and many academics – prefer a wider range of sources of information, combining their own experience with information ranging from peer reviewed scientific evidence and the ‘grey’ literature, to public opinion and feedback from consultation.

While it may be possible to persuade some central government departments or agencies to privilege scientific evidence, they also pursue other key principles, such as to foster consensus driven policymaking or a shift from centralist to localist practices.

Consequently, they often only recommend interventions rather than impose one uniform evidence-based position. If local actors favour a different policy solution, we may find that the same type of evidence may have more or less effect in different parts of government.

2. Policymakers have to ignore almost all evidence and almost every decision taken in their name

Many scientists articulate the idea that policymakers and scientists should cooperate to use the best evidence to determine ‘what works’ in policy (in forums such as INGSA, the European Commission, and the OECD). Their language is often reminiscent of 1950s discussions of the pursuit of ‘comprehensive rationality’ in policymaking.

The key difference is that EBPM is often described as an ideal by scientists, to be compared with the more disappointing processes they find when they engage in politics. In contrast, ‘comprehensive rationality’ is an ideal-type, used to describe what cannot happen, and the practical implications of that impossibility.

The ideal-type involves a core group of elected policymakers at the ‘top’, identifying their values or the problems they seek to solve, and translating their policies into action to maximise benefits to society, aided by neutral organisations gathering all the facts necessary to produce policy solutions. Yet, in practice, they are unable to: separate values from facts in any meaningful way; rank policy aims in a logical and consistent manner; gather information comprehensively, or possess the cognitive ability to process it.

Instead, Simon famously described policymakers addressing ‘bounded rationality’ by using ‘rules of thumb’ to limit their analysis and produce ‘good enough’ decisions. More recently, punctuated equilibrium theory uses bounded rationality to show that policymakers can only pay attention to a tiny proportion of their responsibilities, which limits their control of the many decisions made in their name.

More recent discussions focus on the ‘rational’ short cuts that policymakers use to identify good enough sources of information, combined with the ‘irrational’ ways in which they use their beliefs, emotions, habits, and familiarity with issues to identify policy problems and solutions (see this post on the meaning of ‘irrational’). Or, they explore how individuals communicate their narrow expertise within a system of which they have almost no knowledge. In each case, ‘most members of the system are not paying attention to most issues most of the time’.

This scarcity of attention helps explain, for example, why policymakers ignore most issues in the absence of a focusing event, why policymaking organisations’ searches for information routinely miss key elements, and why organisations fail to respond proportionately to events or changing circumstances.

In that context, attempts to describe a policy agenda focusing merely on ‘what works’ are based on misleading expectations. Rather, we can describe key parts of the policymaking environment – such as institutions, policy communities/ networks, or paradigms – as a reflection of the ways in which policymakers deal with their bounded rationality and lack of control of the policy process.

3. Policymakers do not control the policy process (in the way that a policy cycle suggests)

Scientists often appear to be drawn to the idea of a linear and orderly policy cycle with discrete stages – such as agenda setting, policy formulation, legitimation, implementation, evaluation, policy maintenance/ succession/ termination – because it offers a simple and appealing model which gives clear advice on how to engage.

Indeed, the stages approach began partly as a proposal to make the policy process more scientific and based on systematic policy analysis. It offers an idea of how policy should be made: elected policymakers in central government, aided by expert policy analysts, make and legitimise choices; skilful public servants carry them out; and, policy analysts assess the results with the aid of scientific evidence.

Yet, few policy theories describe this cycle as useful, while most – including the advocacy coalition framework and the multiple streams approach – are based on a rejection of the explanatory value of orderly stages.

Policy theories also suggest that the cycle provides misleading practical advice: you will generally not find an orderly process with a clearly defined debate on problem definition, a single moment of authoritative choice, and a clear chance to use scientific evidence to evaluate policy before deciding whether or not to continue. Instead, the cycle exists as a story for policymakers to tell about their work, partly because it is consistent with the idea of elected policymakers being in charge and accountable.

Some scholars also question the appropriateness of a stages ideal, since it suggests that there should be a core group of policymakers making policy from the ‘top down’ and obliging others to carry out their aims, which does not leave room for, for example, the diffusion of power in multi-level systems, or the use of ‘localism’ to tailor policy to local needs and desires.

Now go to:

What can you do when policymakers ignore your evidence?

Further Reading

The politics of evidence-based policymaking

The politics of evidence-based policymaking: maximising the use of evidence in policy

Images of the policy process

How to communicate effectively with policymakers

Special issue in Policy and Politics called ‘Practical lessons from policy theories’, which includes how to be a ‘policy entrepreneur’.

See also the 750 Words series to explore the implications for policy analysis


Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, Public health, public policy

Policy in 500 words: uncertainty versus ambiguity

In policy studies, there is a profound difference between uncertainty and ambiguity:

  • Uncertainty describes a lack of knowledge or a worrying lack of confidence in one’s knowledge.
  • Ambiguity describes the ability to entertain more than one interpretation of a policy problem.

Both concepts relate to ‘bounded rationality’: policymakers do not have the ability to process all information relevant to policy problems. Instead, they employ two kinds of shortcut:

  • ‘Rational’. Pursuing clear goals and prioritizing certain sources of information.
  • ‘Irrational’. Drawing on emotions, gut feelings, deeply held beliefs, and habits.

I make an artificially binary distinction, uncertain versus ambiguous, and relate it to another binary, rational versus irrational, to point out the pitfalls of focusing too much on one aspect of the policy process:

  1. Policy actors seek to resolve uncertainty by generating more information or drawing greater attention to the available information.

Actors can try to resolve uncertainty by: (a) improving the quality of evidence, and (b) making sure that there are no major gaps between the supply of and demand for evidence. Relevant debates include ‘what counts as good evidence?’, focusing on the criteria used to define scientific evidence and their relationship with other forms of knowledge (such as practitioner experience and service user feedback), and ‘what are the barriers between supply and demand?’, focusing on the need for better ways to communicate.

  2. Policy actors seek to resolve ambiguity by focusing on one interpretation of a policy problem at the expense of another.

Actors try to resolve ambiguity by exercising power to increase attention to, and support for, their favoured interpretation of a policy problem. You will find many examples of such activity spread across the 500 and 1000 words series.

A focus on reducing uncertainty gives the impression that policymaking is a technical process in which people need to produce the best evidence and deliver it to the right people at the right time.

In contrast, a focus on reducing ambiguity gives the impression of a more complicated and political process in which actors are exercising power to compete for attention and dominance of the policy agenda. Uncertainty matters, but primarily to describe the role of a complex policymaking system in which no actor truly understands where they are or how they should exercise power to maximise their success.

Further reading:

For a longer discussion, see Fostering Evidence-informed Policy Making: Uncertainty Versus Ambiguity (PDF)

Or, if you fancy it in French: Favoriser l’élaboration de politiques publiques fondées sur des données probantes : incertitude versus ambiguïté (PDF)

Framing

The politics of evidence-based policymaking

To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty

How to communicate effectively with policymakers: combine insights from psychology and policy studies

Here is the relevant opening section in UPP:

[Image: p234 of Understanding Public Policy, on ambiguity]



What do we need to know about the politics of evidence-based policymaking?

Today, I’m helping to deliver a new course – Engaging Policymakers Training Programme – piloted by the Alliance for Useful Evidence and UCL. Right now, it’s for UCL staff (mostly early career researchers). My bit is about how we can better understand the policy process so that we can engage in it more effectively. I have reproduced the brief guide below (for my two 2-hour sessions, part of a wider block). If anyone else is delivering something similar, please let me know. We could compare notes.

This module will be delivered in two parts to combine theory and practice.

Part 1: What do we need to know about the politics of evidence-based policymaking?

Policy theories provide a wealth of knowledge about the role of evidence in policymaking systems. They prompt us to understand and respond to two key dynamics:

  1. Policymaker psychology. Policymakers combine rational and irrational shortcuts to gather information and make good enough decisions quickly. To appeal to rational shortcuts and minimise cognitive load, we reduce uncertainty by providing syntheses of the available evidence. To appeal to irrational shortcuts and engage emotional interest, we reduce ambiguity by telling stories or framing problems in specific ways.
  2. Complex policymaking environments. These processes take place in a policy environment beyond the control of individual policymakers. Environments consist of: many actors across many levels and types of government; institutions and networks, each with their own formal and informal rules; socioeconomic conditions and events; and dominant ideas or beliefs about the nature of the policy problem. In other words, there is no policy cycle or obvious stage in which to get involved.

In this seminar, we discuss how to respond effectively to these dynamics. We focus on unresolved issues:

  1. Effective engagement with policymakers requires storytelling skills, but do we possess them?
  2. It requires a combination of evidence and emotional appeals, but is it ethical to do more than describe the evidence?
  3. The absence of a policy cycle, and the presence of an ever-shifting context, requires us to engage for the long term, to form alliances, learn the rules, and build up trust in the messenger. However, do we have the time, and how should we invest it?

The format will be relatively informal. Cairney will begin by making some introductory points (not a PowerPoint-driven lecture) and encourage participants to relate the three questions to their own research and engagement experience.

Gateway to further reading:

  • Paul Cairney and Richard Kwiatkowski (2017) ‘How to communicate effectively with policymakers: combine insights from psychology and policy studies’, Palgrave Communications
  • Paul Cairney and Kathryn Oliver (2017) ‘Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy?’ Health Research Policy and Systems (HARPS), DOI: 10.1186/s12961-017-0192-x
  • Paul Cairney, Kathryn Oliver, and Adam Wellstead (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review, Early View, DOI: 10.1111/puar.12555 (PDF)

Part 2: How can we respond pragmatically and effectively to the politics of EBPM?

In this seminar, we move from abstract theory and general advice to concrete examples and specific strategies. Each participant should come prepared to speak about their research and present a theoretically informed policy analysis in 3 minutes (without the aid of powerpoint). Their analysis should address:

  1. What policy problem does my research highlight?
  2. What are the most technically and politically feasible solutions?
  3. How should I engage in the policy process to highlight these problems and solutions?

After each presentation, each participant should be prepared to ask questions about the problem raised and the strategy to engage. Finally, to encourage learning, we will reflect on the memorability and impact of presentations.

Powerpoint: Paul Cairney A4UE UCL 2017



#EU4Facts: 3 take-home points from the JRC annual conference

See EU4FACTS: Evidence for policy in a post-fact world

The JRC’s annual conference has become a key forum in which to discuss the use of evidence in policy. At this scale, in which many hundreds of people attend plenary discussions, it feels like an annual mass rally for science; a ‘call to arms’ to protect the role of science in the production of evidence, and the protection of evidence in policy deliberation. There is not much discussion of storytelling, but we tell each other a fairly similar story about our fears for the future unless we act now.

Last year, the main story was of fear for the future of heroic scientists: the rise of Trump and the Brexit vote prompted many discussions of post-truth politics and reduced trust in experts. An immediate response was to describe attempts to come together, and stick together, to support each other’s scientific endeavours during a period of crisis. There was little call for self-analysis and reflection on the contribution of scientists and experts to barriers between evidence and policy.

This year was a bit different. There was the same concern for reduced trust in science, evidence, and/or expertise, and some references to post-truth politics and populism, but also some new voices describing the positive value of politics (often when discussing the need for citizen engagement) and the need to understand the relationship between facts, values, and politics.

For example, a panel on psychology opened up the possibility that we might consider our own politics and cognitive biases while we identify them in others, and one panellist spoke eloquently about the importance of narrative and storytelling in communicating to audiences such as citizens and policymakers.

A focus on narrative is not new, but it provides a challenging agenda when interacting with a sticky story of scientific objectivity. For the unusually self-reflective, it also reminds us that our annual discussions are not particularly scientific; the usual rules to assess our statements do not apply.

As in studies of policymaking, we can say that there is high support for such stories when they remain vague and driven more by emotion than the pursuit of precision. When individual speakers try to make sense of the same story, they do it in different – and possibly contradictory – ways. As in policymaking, the need to deliver something concrete helps focus the mind, and prompts us to make choices between competing priorities and solutions.

I describe these discussions in two ways: tables, in which I try to boil down each speaker’s speech into a sentence or two (you can get their full details in the programme and the speaker bios); and a synthetic discussion of the top 3 concerns, paraphrasing and combining arguments from many speakers:

1. What are facts?

The key distinction began as one between politics, values, and facts, which is impossible to maintain in practice.

Yet, subsequent discussion revealed a more straightforward distinction: between facts on one hand, and opinion, ‘fake news’, and lies on the other. The latter sums up an ever-present fear of the diminishing role of science in an alleged ‘post-truth’ era.

2. What exactly is the problem, and what is its cause?

The tables below provide a range of concerns about the problem, from threats to democracy to the need to communicate science more effectively. A theme of growing importance is the need to deal with the cognitive biases and informational shortcuts of people receiving evidence: communicate with reference to values, beliefs, and emotions; build up trust in your evidence via transparency and reliability; and, be prepared to discuss science with citizens and to be accountable for your advice. There was less discussion of the cognitive biases of the suppliers of evidence.

3. What is the role of scientists in relation to this problem?

Not all speakers described scientists as the heroes of this story:

  • Some described scientists as the good people acting heroically to change minds with facts.
  • Some described their potential to co-produce important knowledge with citizens (although primarily with like-minded citizens who learn the value of scientific evidence?).
  • Some described the scientific ego as a key barrier to action.
  • Some identified their low confidence to engage, their uncertainty about what to do with their evidence, and/or their scientist identity, which involves defending science as a cause/profession and drawing the line between providing information and advocating for policy. This hope to be an ‘honest broker’ was pervasive in last year’s conference.
  • Some (rightly) rejected the idea of separating facts/values and science/politics, since evidence is never context free (and gathering evidence without thought to context is amoral).

Often in such discussions it is difficult to know if some scientists are naïve actors or sophisticated political strategists, because their public statements could be identical. For the former, an appeal to objective facts and the need to privilege science in EBPM may be sincere. Scientists are, and should be, separate from/ above politics. For the latter, the same appeal – made again and again – may be designed to energise scientists and maximise the role of science in politics.

Yet, energy is only the starting point, and it remains unclear how exactly scientists should communicate and how to ‘know your audience’: would many scientists know who to speak to, in governments or the Commission, if they had something profoundly important to say?

Keynotes and introductory statements from panel chairs
Vladimír Šucha: We need to understand the relationship between politics, values, and facts. Facts are not enough. To make policy effectively, we need to combine facts and values.
Tibor Navracsics: Politics is swayed more by emotions than carefully considered arguments. When making policy, we need to be open and inclusive of all stakeholders (including citizens), communicate facts clearly and at the right time, and be aware of our own biases (such as groupthink).
Sir Peter Gluckman: ‘Post-truth’ politics is not new, but it is pervasive and easier to achieve via new forms of communication. People rely on like-minded peers, religion, and anecdote as forms of evidence underpinning their own truth. When describing the value of science, to inform policy and political debate, note that it is more than facts; it is a mode of thinking about the world, and a system of verification to reduce the effect of personal and group biases on evidence production. Scientific methods help us define problems (e.g. in discussion of cause/effect) and interpret data. Science advice involves expert interpretation, knowledge brokerage, a discussion of scientific consensus and uncertainty, and standing up for the scientific perspective.
Carlos Moedas: Safeguard trust in science by (1) explaining the process you use to come to your conclusions; (2) providing safe and reliable places for people to seek information (e.g. when they Google); and (3) making sure that science is robust and scientific bodies have integrity (such as when dealing with a small number of rogue scientists).
Pascal Lamy: 1. ‘Deep change or slow death’: we need to involve more citizens in the design of publicly financed projects, such as major investments in science. Many scientists complain that there is already too much political interference, drowning scientists in extra work. However, we will face a major backlash – akin to the backlash against ‘globalisation’ – if we do not subject key debates on the future of science and technology-driven change (e.g. on AI, vaccines, drone weaponry) to democratic processes involving citizens. 2. The world changes rapidly, and evidence gathering is context-dependent, so we need to monitor regularly the fitness of our scientific measures (of e.g. trade).
Jyrki Katainen: ‘Wicked problems’ have no perfect solution, so we need the courage to choose the best imperfect solution. Technocratic policymaking is not the solution; it does not meet the democratic test. We need the language of science to be understandable to citizens: ‘a new age of reason reconciling the head and heart’.

Panel: Why should we trust science?
Jonathan Kimmelman: Some experts make outrageous and catastrophic claims. We need a toolbox to decide which experts are most reliable, by comparing their predictions with actual outcomes. Prompt them to make precise probability statements and test them. Only those who are willing to be held accountable should be involved in science advice.
Johannes Vogel: We should devote 15% of science funding to public dialogue. Scientific discourse, and a science-literate population, are crucial for democracy. EU Open Society Policy is a good model for stakeholder inclusiveness.
Tracey Brown: Create a more direct link between society and evidence production, to ensure discussions involve more than the ‘usual suspects’. An ‘evidence transparency framework’ helps create a space in which people can discuss facts and values. ‘Be open, speak human’ describes showing people how you make decisions. How can you expect the public to trust you if you don’t trust them enough to tell them the truth?
Francesco Campolongo: Jean-Claude Juncker’s starting point is that Commission proposals and activities should be ‘based on sound scientific evidence’. Evidence comes in many forms. For example, economic models provide simplified versions of reality to make decisions. Economic calculations inform profoundly important policy choices, so we need to make the methodology transparent, communicate probability, and be self-critical and open to change.

Panel: the politician’s perspective
Janez Potočnik: The shift of the JRC’s remit allowed it to focus on advocating science for policy rather than policy for science. Still, such arguments need to be backed by an economic argument (this policy will create growth and jobs). A narrow focus on facts and data ignores the context in which we gather facts, such as a system which undervalues human capital and the environment.
Máire Geoghegan-Quinn: Policy should be ‘solidly based on evidence’ and we need well-communicated science to change the hearts and minds of people who would otherwise rely on their beliefs. Part of the solution is to get, for example, kids to explain what science means to them.

Panel: Redesigning policymaking using behavioural and decision science
Steven Sloman: The world is complex. People overestimate their understanding of it, and this illusion is burst when they try to explain its mechanisms. People who know the least feel the strongest about issues, but if you ask them to explain the mechanisms their strength of feeling falls. Why? People confuse their knowledge with that of their community. The knowledge is not in their heads, but communicated across groups. If people around you feel they understand something, you feel like you understand, and people feel protective of the knowledge of their community. Implications? 1. Don’t rely on ‘bubbles’; generate more diverse and better coordinated communities of knowledge. 2. Don’t focus on giving people full information; focus on the information they need at the point of decision.
Stephan Lewandowsky: 97% of scientists agree that human-caused climate change is a problem, but the public thinks it’s roughly 50-50. We have a false-balance problem. One solution is to ‘inoculate’ people against its cause (science denial). We tell people the real figures and facts, warn them of the rhetorical techniques employed by science denialists (e.g. use of false experts on smoking), and mock the false balance argument. This allows you to reframe the problem as an investment in the future, not cost now (and find other ways to present facts in a non-threatening way). In our lab, it usually ‘neutralises’ misinformation, although with the risk that a ‘corrective message’ to challenge beliefs can entrench them.
Françoise Waintrop: It is difficult to experiment when public policy is handed down from on high. Or, experimentation is alien to established ways of thinking. However, our 12 new public innovation labs across France allow us to immerse ourselves in the problem (to define it well) and nudge people to action, working with their cognitive biases.
Simon Kuper: Stories combine facts and values. To change minds: persuade the people who are listening, not the sceptics; find go-betweens to link suppliers and recipients of evidence; speak in stories, not jargon; don’t overpromise the role of scientific evidence; and, never suggest science will side-line human beings (e.g. when technology costs jobs).

Panel: The way forward
Jean-Eric Paquet: We describe ‘fact based evidence’ rather than ‘science based’. A key aim is to generate ‘ownership’ of policy by citizens. Politicians are more aware of their cognitive biases than we technocrats are.
Anne Bucher: In the European Commission we used evidence initially to make the EU more accountable to the public, via systematic impact assessment and quality control. It was a key motivation for better regulation. We now focus more on generating inclusive and interactive ways to consult stakeholders.
Ann Mettler: Evidence-based policymaking is at the heart of democracy. How else can you legitimise your actions? How else can you prepare for the future? How else can you make things work better? Yet, a lot of our evidence presentation is so technical that it is difficult even for specialists to follow. The onus is on us to bring it to life, to make it clearer to the citizen and, in the process, to defend scientists (and journalists) during a period in which Western democracies seem to be at risk from anti-democratic forces.
Mariana Kotzeva: Our facts are now considered from an emotional and perception point of view. The process does not just involve our comfortable circle of experts; we are now challenged to explain our numbers. Attention to our numbers can be unpredictable (e.g. on migration). We need to build up trust in our facts, partly to anticipate or respond to the quick spread of poor facts.
Rush Holt: In society we can see an erosion of the feeling that science is relevant to ‘my life’, and few US policymakers ask ‘what does science say about this?’, partly because scientists set themselves above politics. Politicians have had too many bad experiences with scientists who might say ‘let me explain this to you in a way you can understand’. Policy is not about science-based evidence; it is more about asking a question first, then asking what evidence you need. Then you collect evidence in an open way so that it can be verified.

Phew!

That was 10 hours of discussion condensed into one post. If you can handle more discussion from me, see:

Psychology and policymaking: Three ways to communicate more effectively with policymakers

The role of evidence in policy: EBPM and How to be heard  

Practical Lessons from Policy Theories

The generation of many perspectives to help us understand the use of evidence

How to be an ‘entrepreneur’ when presenting evidence

