Tag Archives: policymaking

Policy Analysis in 750 Words: what you need as an analyst versus policymaking reality

This post forms one part of the Policy Analysis in 750 words series overview. Note for the eagle eyed: you are not about to experience déjà vu. I’m just using the same introduction.

When describing ‘the policy sciences’, Lasswell distinguishes between:

  1. ‘knowledge of the policy process’, to foster policy studies (the analysis of policy)
  2. ‘knowledge in the process’, to foster policy analysis (analysis for policy)

The lines between each approach are blurry, and each element makes less sense without the other. However, the distinction is crucial to help us overcome the major confusion associated with this question:

Does policymaking proceed through a series of stages?

The short answer is no.

The longer answer is that you can find about 40 blog posts (of 500 and 1000 words) which compare (a) a stage-based model called the policy cycle, and (b) the many, many policy concepts and theories that describe a far messier collection of policy processes.

[Image: the policy cycle]

In a nutshell, most policy theorists reject this image because it oversimplifies a complex policymaking system. The image provides a great way to introduce policy studies, and serves a political purpose, but it does more harm than good:

  1. Descriptively, it is profoundly inaccurate (unless you imagine thousands of policy cycles interacting with each other to produce less orderly behaviour and less predictable outputs).
  2. Prescriptively, it gives you rotten advice about the nature of your policymaking task (for more on these points, see this chapter, article, article, and series).

Why does the stages/ policy cycle image persist? Two relevant explanations

 

  1. It arose from a misunderstanding in policy studies

In another nutshell, Chris Weible and I argue (in a secret paper) that the stages approach represents a good idea gone wrong:

  • If you trace it back to its origins, you will find Lasswell’s description of decision functions: intelligence, recommendation, prescription, invocation, application, appraisal and termination.
  • These functions correspond reasonably well to a policy cycle’s stages: agenda setting, formulation, legitimation, implementation, evaluation, and maintenance, succession or termination.
  • However, Lasswell was imagining functional requirements, while the cycle seems to describe actual stages.

In other words, if you take Lasswell’s list of what policy analysts/ policymakers need to do, multiply it by the number of actors (spread across many organisations or venues) trying to do it, then you get the multi-centric policy processes described by modern theories. If, instead, you strip all that activity down into a single cycle, you get the wrong idea.

  2. It is a functional requirement of policy analysis

This description should seem familiar, because the classic policy analysis texts appear to describe a similar series of required steps, such as:

  1. define the problem
  2. identify potential solutions
  3. choose the criteria to compare them
  4. evaluate them in relation to their predicted outcomes
  5. recommend a solution
  6. monitor its effects
  7. evaluate past policy to inform current policy.

However, these texts also provide a heavy dose of caution about your ability to perform these steps (compare Bardach, Dunn, Meltzer and Schwartz, Mintrom, Thissen and Walker, Weimer and Vining).

In addition, studies of policy analysis in action suggest that:

  • an individual analyst’s need for simple steps, to turn policymaking complexity into useful heuristics and pragmatic strategies,

should not be confused with

  • an accurate description of actual policy processes.

What you need versus what you can expect

Overall, this discussion of policy studies and policy analysis reminds us of a major difference between:

  1. Functional requirements. What you need from policymaking systems, to (a) manage your task (the 5-8 step policy analysis) and (b) understand and engage in policy processes (the simple policy cycle).
  2. Actual processes and outcomes. What policy concepts and theories tell us about bounded rationality (which limit the comprehensiveness of your analysis) and policymaking complexity (which undermines your understanding and engagement in policy processes).

Of course, I am not about to provide you with a solution to these problems.

Still, this discussion should help you worry a little bit less about the circular arguments you will find in key texts: here are some simple policy analysis steps, but policymaking is not as ‘rational’ as the steps suggest, but (unless you can think of an alternative) there is still value in the steps, and so on.

See also:

The New Policy Sciences

2 Comments

Filed under 750 word policy analysis, agenda setting, public policy

Policy Analysis in 750 Words: What can you realistically expect policymakers to do?

This post forms one part of the Policy Analysis in 750 words series overview.

One aim of this series is to combine insights from policy research (1000, 500) and policy analysis texts.

In this case, modern theories of the policy process help you identify your audience and their capacity to follow your advice. This simple insight may have a profound impact on the advice you give.

Policy analysis for an ideal-type world

For our purposes, an ideal-type is an abstract idea, which highlights hypothetical features of the world, to compare with ‘real world’ descriptions. It need not be an ideal to which we aspire. For example, comprehensive rationality describes the ideal type, and bounded rationality describes the ‘real world’ limitations to the ways in which humans and organisations process information.

 

Imagine writing policy analysis in the ideal-type world of a single powerful ‘comprehensively rational’ policymaker at the heart of government, making policy via an orderly policy cycle.

Your audience would be easy to identify, your analysis would be relatively simple, and you would not need to worry about what happens after you make a recommendation for policy change.

You could adopt a simple 5-8 step policy analysis method, use widely-used tools such as cost-benefit analysis to compare solutions, and know where the results would feed into the policy process.

I have perhaps over-egged this ideal-type pudding, but I think a lot of traditional policy analyses tapped into this basic idea and focused more on the science of analysis than the political and policymaking context in which it takes place (see Radin and Brans, Geva-May, and Howlett).

Policy analysis for the real world

Then imagine a far messier and less predictable world in which the nature of the policy issue is highly contested, responsibility for policy is unclear, and no single ‘centre’ has the power to turn a recommendation into an outcome.

This image is a key feature of policy process theories, which describe:

  • Many policymakers and influencers spread across many levels and types of government (as the venues in which authoritative choice takes place). Consequently, it is not a straightforward task to identify and know your audience, particularly if the problem you seek to solve requires a combination of policy instruments controlled by different actors.
  • Each venue resembles an institution driven by formal and informal rules. Formal rules are written-down or widely-known. Informal rules are unwritten, difficult to understand, and may not even be understood in the same way by participants. Consequently, it is difficult to know if your solution will be a good fit with the standard operating procedures of organisations (and therefore if it is politically feasible or too challenging).
  • Policymakers and influencers operate in ‘subsystems’, forming networks built on resources such as trust or coalitions based on shared beliefs. Effective policy analysis may require you to engage with – or become part of – such networks, to allow you to understand the unwritten rules of the game and encourage your audience to trust the messenger. In some cases, the rules relate to your willingness to accept current losses for future gains, to accept the limited impact of your analysis now in the hope of acceptance at the next opportunity.
  • Actors relate their analysis to shared understandings of the world – how it is, and how it should be – which are often so well-established as to be taken for granted. Common terms include paradigms, hegemons, core beliefs, and monopolies of understandings. These dominant frames of reference give meaning to your policy solution. They prompt you to couch your solutions in terms of, for example, a strong attachment to evidence-based cases in public health, value for money in treasury departments, or with regard to core principles such as liberalism or socialism in different political systems.
  • Your solutions relate to socioeconomic context and the events that seem (a) impossible to ignore and (b) out of the control of policymakers. Such factors range from a political system’s geography, demography, social attitudes, and economy, while events can be routine elections or unexpected crises.

What would you recommend under these conditions? Rethinking 5-step analysis

There is a large gap between policymakers’ (a) formal responsibilities versus (b) actual control of policy processes and outcomes. Even the most sophisticated ‘evidence based’ analysis of a policy problem will fall flat if uninformed by such analyses of the policy process. Further, the terms of your cost-benefit analysis will be highly contested (at least until there is agreement on what the problem is, and how you would measure the success of a solution).

Modern policy analysis texts try to incorporate such insights from policy theories while maintaining a focus on 5-8 steps. For example:

  • Meltzer and Schwartz contrast their ‘flexible’ and ‘iterative’ approach with a too-rigid ‘rationalistic approach’.
  • Bardach and Dunn emphasise the value of political pragmatism and the ‘art and craft’ of policy analysis.
  • Weimer and Vining invest 200 pages in economic analyses of markets and government, often highlighting a gap between (a) our ability to model and predict economic and social behaviour, and (b) what actually happens when governments intervene.
  • Mintrom invites you to see yourself as a policy entrepreneur, to highlight the value of ‘positive thinking’, creativity, deliberation, and leadership, and perhaps seek ‘windows of opportunity’ to encourage new solutions. Alternatively, a general awareness of the unpredictability of events can prompt you to be modest in your claims, since the policymaking environment may be more important (than your solution) to outcomes.
  • Thissen and Walker focus more on a range of possible roles than a rigid 5-step process.

Beyond 5-step policy analysis

  1. Compare these pragmatic, client-orientated, and communicative models with the questioning, storytelling, and decolonizing approaches by Bacchi, Stone, and L.T. Smith.
  • The latter encourage us to examine more closely the politics of policy processes, including the importance of framing, narrative, and the social construction of target populations to problem definition and policy design.
  • Without this wider perspective, we are focusing on policy analysis as a process rather than considering the political context in which analysts use it.
  2. Additional posts on entrepreneurs and ‘systems thinking’ [to be added] encourage us to reflect on the limits to policy analysis in multi-centric policymaking systems.

 

 

2 Comments

Filed under 750 word policy analysis, agenda setting, Evidence Based Policymaking (EBPM), public policy

Policy Concepts in 1000 Words: how do policy theories describe policy change?

The 1000 words and 500 words series already show how important but difficult it is to define and measure policy change. In this post, Leanne Giordono and I dig deeper into the – often confusingly different – ways in which different researchers conceptualise this process. We show why there is such variation and provide a checklist of questions to ask of any description of policy change.

Measuring policy change is more difficult than it looks

The measurement of policy change is important. Most ‘what is policy?’ discussions remind us that there can be a huge difference between policy as a (a)  statement of intent, (b) strategy, (c) collection of tools/ instruments and (d) contributor to policy outcomes.

Policy theories remind us that, while politicians and political parties often promise to sweep into office and produce radical departures from the past, most policy change is minor. There is a major gap between stated intention and actual outcomes, partly because policymakers do not control the policy process for which they are responsible. Instead, they inherit the commitments of their predecessors and make changes at the margins.

The 1000 words and 500 words posts suggest that we address this problem of measurement by identifying the use of a potentially large number of policy instruments or policy tools, such as regulation (including legislation) and resources (money and staffing), to capture the powers at policymakers’ disposal.

Then, they suggest that we tell a story of policy change, focusing on (a) what problem policymakers were trying to solve, and the size of their response in relation to the size of the problem, and (b) the precise nature of specific changes, or how each change contributes to the ‘big picture’.

This recommendation highlights a potentially major problem: as researchers, we can produce very different narratives of policy change from the same pool of evidence, by accentuating some measures and ignoring others, or putting more faith in some data than others.

Three ways to navigate different approaches to imagining and measuring change

Researchers use many different concepts and measures to define and identify policy change. It would be unrealistic – and perhaps unimaginative – to solve this problem with a call for one uniform approach.

Rather, our aim is to help you (a) navigate this diverse field by (b) identifying the issues and concepts that will help you interpret and compare different ways to measure change.

  1. Check if people are ‘showing their work’

Pay close attention to how scholars are defining their terms. For example, be careful with incomplete definitions that rely on a reference to evolutionary change (which can mean so many different things) or incremental change (e.g. does an increment mean a small or a non-radical change?). Or, note that frequent distinctions between minor versus major change seem useful, but we are often trying to capture and explain a confusing mixture of both.

  2. Look out for different questions

Multiple typologies of change often arise because different theories ask and answer different questions:

  • The Advocacy Coalition Framework distinguishes between minor and major change, associating the former with routine ‘policy-oriented learning’, and the latter with changes in core policy beliefs, often caused by a ‘shock’ associated with policy failure or external events.
  • Innovation and Diffusion models examine the adoption and non-adoption of a specific policy solution over a specific period of time in multiple jurisdictions as a result of learning, imitation, competition or coercion.
  • Classic studies of public expenditure generated four categories to ask if the ‘budgetary process of the United States government is equivalent to a set of temporally stable linear decision rules’. They describe policy change as minor and predictable and explain outliers as deviations from the norm.
  • Punctuated Equilibrium Theory identifies a combination of (a) huge numbers of small policy change and (b) small numbers of huge change as the norm, in budgetary and other policy changes.
  • Hall distinguishes between (a) routine adjustments to policy instruments, (b) changes in instruments to achieve existing goals, and (c) complete shifts in goals. He compares long periods in which (1) some ideas dominate and institutions do not change, with (2) ‘third order’ change in which a profound sense of failure contributes to a radical shift of beliefs and rules.
  • More recent scholarship identifies a range of concepts – including layering, drift, conversion, and displacement – to explain more gradual causes of profound changes to institutions.

These approaches identify a range of possible sources of measures:

  1. a combination of policy instruments that add up to overall change
  2. the same single change in many places
  3. change in relation to one measure, such as budgets
  4. a change in ideas, policy instruments and/ or rules.

As such, the potential for confusion is high when we include all such measures under the single banner of ‘policy change’.

  3. Look out for different measures

Spot the different ways in which scholars try to ‘operationalize’ and measure policy change, quantitatively and/ or qualitatively, with reference to four main categories.

  1. Size can be measured with reference to:
  • A comparison of old and new policy positions.
  • A change observed in a sample or whole population (using, for example, standard deviations from the mean).
  • An ‘ideal’ state, such as an industry or ‘best practice’ standard.
  2. Speed describes the amount of change that occurs over a specific interval of time, such as:
  • How long it takes for policy to change after a specific event or under specific conditions.
  • The duration of time between commencement and completion (often described as ‘sudden’ or ‘gradual’).
  • How this speed compares with comparable policy changes in other jurisdictions (often described with reference to ‘leaders’ and ‘laggards’).
  3. Direction describes the course of the path from one policy state to another. It is often described in comparison to:
  • An initial position in one jurisdiction (such as an expansion or contraction).
  • Policy or policy change in other jurisdictions (such as via ‘benchmarking’ or ‘league tables’).
  • An ‘ideal’ state (such as with reference to left or right wing aims).
  4. Substance relates to policy change in relation to:
  • Relatively tangible instruments such as legislation, regulation, or public expenditure.
  • More abstract concepts such as in relation to beliefs or goals.

Take home points for students

Be thoughtful when drawing comparisons between applications, drawn from many theoretical traditions, and addressing different research questions.  You can seek clarity by posing three questions:

  1. How clearly has the author defined the concept of policy change?
  2. How are the chosen theories and research questions likely to influence the author’s operationalization of policy change?
  3. How does the author operationalize policy change with respect to size, speed, direction, and/or substance?

However, you should also note that the choice of definition and theory may affect the meaning of measures such as size, speed, direction, and/or substance.

 

7 Comments

Filed under 1000 words, public policy

Understanding Public Policy 2nd edition

All going well, it will be out in November 2019. We are now at the proofing stage.

I have included below the summaries of the chapters (and each chapter should also have its own entry (or multiple entries) in the 1000 Words and 500 Words series).

[Images: 2nd edition cover, followed by summaries of chapters 1–13]

 

2 Comments

Filed under 1000 words, 500 words, agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, public policy

Why don’t policymakers listen to your evidence?

Since 2016, my most common academic presentation to interdisciplinary scientist/ researcher audiences is a variant of the question, ‘why don’t policymakers listen to your evidence?’

I tend to provide three main answers.

1. Many policymakers have many different ideas about what counts as good evidence

Few policymakers know or care about the criteria developed by some scientists to describe a hierarchy of scientific evidence. For some scientists, at the top of this hierarchy is the randomised control trial (RCT) and the systematic review of RCTs, with expertise much further down the list, followed by practitioner experience and service user feedback near the bottom.

Yet, most policymakers – and many academics – prefer a wider range of sources of information, combining their own experience with information ranging from peer reviewed scientific evidence and the ‘grey’ literature, to public opinion and feedback from consultation.

While it may be possible to persuade some central government departments or agencies to privilege scientific evidence, they also pursue other key principles, such as to foster consensus driven policymaking or a shift from centralist to localist practices.

Consequently, they often only recommend interventions rather than impose one uniform evidence-based position. If local actors favour a different policy solution, we may find that the same type of evidence has more or less effect in different parts of government.

2. Policymakers have to ignore almost all evidence and almost every decision taken in their name

Many scientists articulate the idea that policymakers and scientists should cooperate to use the best evidence to determine ‘what works’ in policy (in forums such as INGSA, European Commission, OECD). Their language is often reminiscent of 1950s discussions of the pursuit of ‘comprehensive rationality’ in policymaking.

The key difference is that EBPM is often described as an ideal by scientists, to be compared with the more disappointing processes they find when they engage in politics. In contrast, ‘comprehensive rationality’ is an ideal-type, used to describe what cannot happen, and the practical implications of that impossibility.

The ideal-type involves a core group of elected policymakers at the ‘top’, identifying their values or the problems they seek to solve, and translating their policies into action to maximise benefits to society, aided by neutral organisations gathering all the facts necessary to produce policy solutions. Yet, in practice, they are unable to: separate values from facts in any meaningful way; rank policy aims in a logical and consistent manner; gather information comprehensively, or possess the cognitive ability to process it.

Instead, Simon famously described policymakers addressing ‘bounded rationality’ by using ‘rules of thumb’ to limit their analysis and produce ‘good enough’ decisions. More recently, punctuated equilibrium theory uses bounded rationality to show that policymakers can only pay attention to a tiny proportion of their responsibilities, which limits their control of the many decisions made in their name.

More recent discussions focus on the ‘rational’ short cuts that policymakers use to identify good enough sources of information, combined with the ‘irrational’ ways in which they use their beliefs, emotions, habits, and familiarity with issues to identify policy problems and solutions (see this post on the meaning of ‘irrational’). Or, they explore how individuals communicate their narrow expertise within a system of which they have almost no knowledge. In each case, ‘most members of the system are not paying attention to most issues most of the time’.

This scarcity of attention helps explain, for example, why policymakers ignore most issues in the absence of a focusing event, why policymaking organisations routinely make searches for information that miss key elements, and why organisations fail to respond proportionately to events or changing circumstances.

In that context, attempts to describe a policy agenda focusing merely on ‘what works’ are based on misleading expectations. Rather, we can describe key parts of the policymaking environment – such as institutions, policy communities/ networks, or paradigms – as a reflection of the ways in which policymakers deal with their bounded rationality and lack of control of the policy process.

3. Policymakers do not control the policy process (in the way that a policy cycle suggests)

Scientists often appear to be drawn to the idea of a linear and orderly policy cycle with discrete stages – such as agenda setting, policy formulation, legitimation, implementation, evaluation, policy maintenance/ succession/ termination – because it offers a simple and appealing model which gives clear advice on how to engage.

Indeed, the stages approach began partly as a proposal to make the policy process more scientific and based on systematic policy analysis. It offers an idea of how policy should be made: elected policymakers in central government, aided by expert policy analysts, make and legitimise choices; skilful public servants carry them out; and, policy analysts assess the results with the aid of scientific evidence.

Yet, few policy theories describe this cycle as useful, while most – including the advocacy coalition framework, and the multiple streams approach – are based on a rejection of the explanatory value of orderly stages.

Policy theories also suggest that the cycle provides misleading practical advice: you will generally not find an orderly process with a clearly defined debate on problem definition, a single moment of authoritative choice, and a clear chance to use scientific evidence to evaluate policy before deciding whether or not to continue. Instead, the cycle exists as a story for policymakers to tell about their work, partly because it is consistent with the idea of elected policymakers being in charge and accountable.

Some scholars also question the appropriateness of a stages ideal, since it suggests that there should be a core group of policymakers making policy from the ‘top down’ and obliging others to carry out their aims, which does not leave room for, for example, the diffusion of power in multi-level systems, or the use of ‘localism’ to tailor policy to local needs and desires.

Now go to:

What can you do when policymakers ignore your evidence?

Further Reading

The politics of evidence-based policymaking

The politics of evidence-based policymaking: maximising the use of evidence in policy

Images of the policy process

How to communicate effectively with policymakers

Special issue in Policy and Politics called ‘Practical lessons from policy theories’, which includes how to be a ‘policy entrepreneur’.

See also the 750 Words series to explore the implications for policy analysis

16 Comments

Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, Public health, public policy

Policy in 500 words: uncertainty versus ambiguity

In policy studies, there is a profound difference between uncertainty and ambiguity:

  • Uncertainty describes a lack of knowledge or a worrying lack of confidence in one’s knowledge.
  • Ambiguity describes the ability to entertain more than one interpretation of a policy problem.

Both concepts relate to ‘bounded rationality’: policymakers do not have the ability to process all information relevant to policy problems. Instead, they employ two kinds of shortcut:

  • ‘Rational’. Pursuing clear goals and prioritizing certain sources of information.
  • ‘Irrational’. Drawing on emotions, gut feelings, deeply held beliefs, and habits.

I make an artificially binary distinction, uncertain versus ambiguous, and relate it to another binary, rational versus irrational, to point out the pitfalls of focusing too much on one aspect of the policy process:

  1. Policy actors seek to resolve uncertainty by generating more information or drawing greater attention to the available information.

Actors can try to solve uncertainty by: (a) improving the quality of evidence, and (b) making sure that there are no major gaps between the supply of and demand for evidence. Relevant debates include: what counts as good evidence?, focusing on the criteria to define scientific evidence and their relationship with other forms of knowledge (such as practitioner experience and service user feedback), and what are the barriers between supply and demand?, focusing on the need for better ways to communicate.

  2. Policy actors seek to resolve ambiguity by focusing on one interpretation of a policy problem at the expense of another.

Actors try to solve ambiguity by exercising power to increase attention to, and support for, their favoured interpretation of a policy problem. You will find many examples of such activity spread across the 500 and 1000 words series.

A focus on reducing uncertainty gives the impression that policymaking is a technical process in which people need to produce the best evidence and deliver it to the right people at the right time.

In contrast, a focus on reducing ambiguity gives the impression of a more complicated and political process in which actors are exercising power to compete for attention and dominance of the policy agenda. Uncertainty matters, but primarily to describe the role of a complex policymaking system in which no actor truly understands where they are or how they should exercise power to maximise their success.

Further reading:

For a longer discussion, see Fostering Evidence-informed Policy Making: Uncertainty Versus Ambiguity (PDF)

Or, if you fancy it in French: Favoriser l’élaboration de politiques publiques fondées sur des données probantes : incertitude versus ambiguïté (PDF)

Framing

The politics of evidence-based policymaking

To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty

How to communicate effectively with policymakers: combine insights from psychology and policy studies

Here is the relevant opening section in UPP:

[Image: the relevant section on ambiguity, p234 of Understanding Public Policy]

27 Comments

Filed under 500 words, agenda setting, Evidence Based Policymaking (EBPM), public policy, Storytelling

What do we need to know about the politics of evidence-based policymaking?

Today, I’m helping to deliver a new course – Engaging Policymakers Training Programme – piloted by the Alliance for Useful Evidence and UCL. Right now, it’s for UCL staff (and mostly early career researchers). My bit is about how we can better understand the policy process so that we can engage in it more effectively. I have reproduced the brief guide below (for my two 2-hour sessions as part of a wider block). If anyone else is delivering something similar, please let me know. We could compare notes.

This module will be delivered in two parts to combine theory and practice

Part 1: What do we need to know about the politics of evidence-based policymaking?

Policy theories provide a wealth of knowledge about the role of evidence in policymaking systems. They prompt us to understand and respond to two key dynamics:

  1. Policymaker psychology. Policymakers combine rational and irrational shortcuts to gather information and make good enough decisions quickly. To appeal to rational shortcuts and minimise cognitive load, we reduce uncertainty by providing syntheses of the available evidence. To appeal to irrational shortcuts and engage emotional interest, we reduce ambiguity by telling stories or framing problems in specific ways.
  2. Complex policymaking environments. These processes take place in the context of a policy environment out of the control of individual policymakers. Environments consist of: many actors in many levels and types of government; engaging with institutions and networks, each with their own informal and formal rules; responding to socioeconomic conditions and events; and, learning how to engage with dominant ideas or beliefs about the nature of the policy problem. In other words, there is no policy cycle or obvious stage in which to get involved.

In this seminar, we discuss how to respond effectively to these dynamics. We focus on unresolved issues:

  1. Effective engagement with policymakers requires storytelling skills, but do we possess them?
  2. It requires a combination of evidence and emotional appeals, but is it ethical to do more than describe the evidence?
  3. The absence of a policy cycle, and the presence of an ever-shifting context, requires us to engage for the long term, to form alliances, learn the rules, and build up trust in the messenger. However, do we have the time, and how should we invest it?

The format will be relatively informal. Cairney will begin by making some introductory points (not a PowerPoint-driven lecture) and encourage participants to relate the three questions to their research and engagement experience.

Gateway to further reading:

  • Paul Cairney and Richard Kwiatkowski (2017) ‘How to communicate effectively with policymakers: combine insights from psychology and policy studies’, Palgrave Communications
  • Paul Cairney and Kathryn Oliver (2017) ‘Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy?’ Health Research Policy and Systems (HARPS), DOI: 10.1186/s12961-017-0192-x
  • Paul Cairney, Kathryn Oliver, and Adam Wellstead (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review, Early View, DOI: 10.1111/puar.12555

Part 2: How can we respond pragmatically and effectively to the politics of EBPM?

In this seminar, we move from abstract theory and general advice to concrete examples and specific strategies. Each participant should come prepared to speak about their research and present a theoretically informed policy analysis in 3 minutes (without the aid of powerpoint). Their analysis should address:

  1. What policy problem does my research highlight?
  2. What are the most technically and politically feasible solutions?
  3. How should I engage in the policy process to highlight these problems and solutions?

After each presentation, each participant should be prepared to ask questions about the problem raised and the strategy to engage. Finally, to encourage learning, we will reflect on the memorability and impact of presentations.

Powerpoint: Paul Cairney A4UE UCL 2017


Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy

#EU4Facts: 3 take-home points from the JRC annual conference

See EU4FACTS: Evidence for policy in a post-fact world

The JRC’s annual conference has become a key forum in which to discuss the use of evidence in policy. At this scale, in which many hundreds of people attend plenary discussions, it feels like an annual mass rally for science; a ‘call to arms’ to protect the role of science in the production of evidence, and the role of evidence in policy deliberation. There is not much discussion of storytelling, but we tell each other a fairly similar story about our fears for the future unless we act now.

Last year, the main story was of fear for the future of heroic scientists: the rise of Trump and the Brexit vote prompted many discussions of post-truth politics and reduced trust in experts. An immediate response was to describe attempts to come together, and stick together, to support each other’s scientific endeavours during a period of crisis. There was little call for self-analysis and reflection on the contribution of scientists and experts to barriers between evidence and policy.

This year was a bit different. There was the same concern for reduced trust in science, evidence, and/ or expertise, and some references to post-truth politics and populism, but with some new voices describing the positive value of politics, often when discussing the need for citizen engagement, and of the need to understand the relationship between facts, values, and politics.

For example, a panel on psychology opened up the possibility that we might consider our own politics and cognitive biases while we identify them in others, and one panellist spoke eloquently about the importance of narrative and storytelling in communicating to audiences such as citizens and policymakers.

A focus on narrative is not new, but it provides a challenging agenda when interacting with a sticky story of scientific objectivity. For the unusually self-reflective, it also reminds us that our annual discussions are not particularly scientific; the usual rules to assess our statements do not apply.

As in studies of policymaking, we can say that there is high support for such stories when they remain vague and driven more by emotion than the pursuit of precision. When individual speakers try to make sense of the same story, they do it in different – and possibly contradictory – ways. As in policymaking, the need to deliver something concrete helps focus the mind, and prompts us to make choices between competing priorities and solutions.

I describe these discussions in two ways: tables, in which I try to boil down each speaker’s speech into a sentence or two (you can get their full details in the programme and the speaker bios); and a synthetic discussion of the top 3 concerns, paraphrasing and combining arguments from many speakers:

1. What are facts?

The discussion began with a three-way distinction between politics, values, and facts, which is impossible to maintain in practice.

Yet, subsequent discussion revealed a more straightforward distinction between facts and opinion, ‘fake news’, and lies. The latter sums up an ever-present fear of the diminishing role of science in an alleged ‘post truth’ era.

2. What exactly is the problem, and what is its cause?

The tables below provide a range of concerns about the problem, from threats to democracy to the need to communicate science more effectively. A theme of growing importance is the need to deal with the cognitive biases and informational shortcuts of people receiving evidence: communicate with reference to values, beliefs, and emotions; build up trust in your evidence via transparency and reliability; and, be prepared to discuss science with citizens and to be accountable for your advice. There was less discussion of the cognitive biases of the suppliers of evidence.

3. What is the role of scientists in relation to this problem?

Not all speakers described scientists as the heroes of this story:

  • Some described scientists as the good people acting heroically to change minds with facts.
  • Some described their potential to co-produce important knowledge with citizens (although primarily with like-minded citizens who learn the value of scientific evidence?).
  • Some described the scientific ego as a key barrier to action.
  • Some identified their low confidence to engage, their uncertainty about what to do with their evidence, and/ or their scientist identity, which involves defending science as a cause/profession and drawing the line between providing information and advocating for policy. This hope to be an ‘honest broker’ was pervasive in last year’s conference.
  • Some (rightly) rejected the idea of separating facts/ values and science/ politics, since evidence is never context free (and gathering evidence without thought to context is amoral).

Often in such discussions it is difficult to know if some scientists are naïve actors or sophisticated political strategists, because their public statements could be identical. For the former, an appeal to objective facts and the need to privilege science in EBPM may be sincere. Scientists are, and should be, separate from/ above politics. For the latter, the same appeal – made again and again – may be designed to energise scientists and maximise the role of science in politics.

Yet, energy is only the starting point, and it remains unclear how exactly scientists should communicate and how to ‘know your audience’: would many scientists know who to speak to, in governments or the Commission, if they had something profoundly important to say?

Keynotes and introductory statements from panel chairs
Vladimír Šucha: We need to understand the relationship between politics, values, and facts. Facts are not enough. To make policy effectively, we need to combine facts and values.
Tibor Navracsics: Politics is swayed more by emotions than carefully considered arguments. When making policy, we need to be open and inclusive of all stakeholders (including citizens), communicate facts clearly and at the right time, and be aware of our own biases (such as groupthink).
Sir Peter Gluckman: ‘Post-truth’ politics is not new, but it is pervasive and easier to achieve via new forms of communication. People rely on like-minded peers, religion, and anecdote as forms of evidence underpinning their own truth. When describing the value of science, to inform policy and political debate, note that it is more than facts; it is a mode of thinking about the world, and a system of verification to reduce the effect of personal and group biases on evidence production. Scientific methods help us define problems (e.g. in discussion of cause/ effect) and interpret data. Science advice involves expert interpretation, knowledge brokerage, a discussion of scientific consensus and uncertainty, and standing up for the scientific perspective.
Carlos Moedas: Safeguard trust in science by (1) explaining the process you use to come to your conclusions; (2) providing safe and reliable places for people to seek information (e.g. when they Google); and (3) making sure that science is robust and scientific bodies have integrity (such as when dealing with a small number of rogue scientists).
Pascal Lamy: 1. ‘Deep change or slow death’: we need to involve more citizens in the design of publicly financed projects such as major investments in science. Many scientists complain that there is already too much political interference, drowning scientists in extra work. However, we will face a major backlash – akin to the backlash against ‘globalisation’ – if we do not subject key debates on the future of science and technology-driven change (e.g. on AI, vaccines, drone weaponry) to democratic processes involving citizens. 2. The world changes rapidly, and evidence gathering is context-dependent, so we need to monitor regularly the fitness of our scientific measures (of e.g. trade).
Jyrki Katainen: ‘Wicked problems’ have no perfect solution, so we need the courage to choose the best imperfect solution. Technocratic policymaking is not the solution; it does not meet the democratic test. We need the language of science to be understandable to citizens: ‘a new age of reason reconciling the head and heart’.

Panel: Why should we trust science?
Jonathan Kimmelman: Some experts make outrageous and catastrophic claims. We need a toolbox to decide which experts are most reliable, by comparing their predictions with actual outcomes. Prompt them to make precise probability statements and test them. Only those who are willing to be held accountable should be involved in science advice.
Johannes Vogel: We should devote 15% of science funding to public dialogue. Scientific discourse, and a science-literate population, is crucial for democracy. EU Open Society Policy is a good model for stakeholder inclusiveness.
Tracey Brown: Create a more direct link between society and evidence production, to ensure discussions involve more than the ‘usual suspects’. An ‘evidence transparency framework’ helps create a space in which people can discuss facts and values. ‘Be open, speak human’ describes showing people how you make decisions. How can you expect the public to trust you if you don’t trust them enough to tell them the truth?
Francesco Campolongo: Jean-Claude Juncker’s starting point is that Commission proposals and activities should be ‘based on sound scientific evidence’. Evidence comes in many forms. For example, economic models provide simplified versions of reality to make decisions. Economic calculations inform profoundly important policy choices, so we need to make the methodology transparent, communicate probability, and be self-critical and open to change.

Panel: the politician’s perspective
Janez Potočnik: The shift of the JRC’s remit allowed it to focus on advocating science for policy rather than policy for science. Still, such arguments need to be backed by an economic argument (this policy will create growth and jobs). A narrow focus on facts and data ignores the context in which we gather facts, such as a system which undervalues human capital and the environment.
Máire Geoghegan-Quinn: Policy should be ‘solidly based on evidence’ and we need well-communicated science to change the hearts and minds of people who would otherwise rely on their beliefs. Part of the solution is to get, for example, kids to explain what science means to them.

Panel: Redesigning policymaking using behavioural and decision science
Steven Sloman: The world is complex. People overestimate their understanding of it, and this illusion is burst when they try to explain its mechanisms. People who know the least feel the strongest about issues, but if you ask them to explain the mechanisms their strength of feeling falls. Why? People confuse their knowledge with that of their community. The knowledge is not in their heads, but communicated across groups. If people around you feel they understand something, you feel like you understand, and people feel protective of the knowledge of their community. Implications? 1. Don’t rely on ‘bubbles’; generate more diverse and better coordinated communities of knowledge. 2. Don’t focus on giving people full information; focus on the information they need at the point of decision.
Stephan Lewandowsky: 97% of scientists agree that human-caused climate change is a problem, but the public thinks it’s roughly 50-50. We have a false-balance problem. One solution is to ‘inoculate’ people against its cause (science denial). We tell people the real figures and facts, warn them of the rhetorical techniques employed by science denialists (e.g. use of false experts on smoking), and mock the false balance argument. This allows you to reframe the problem as an investment in the future, not cost now (and find other ways to present facts in a non-threatening way). In our lab, it usually ‘neutralises’ misinformation, although with the risk that a ‘corrective message’ to challenge beliefs can entrench them.
Françoise Waintrop: It is difficult to experiment when public policy is handed down from on high. Or, experimentation is alien to established ways of thinking. However, our 12 new public innovation labs across France allow us to immerse ourselves in the problem (to define it well) and nudge people to action, working with their cognitive biases.
Simon Kuper: Stories combine facts and values. To change minds: persuade the people who are listening, not the sceptics; find go-betweens to link suppliers and recipients of evidence; speak in stories, not jargon; don’t overpromise the role of scientific evidence; and, never suggest science will side-line human beings (e.g. when technology costs jobs).

Panel: The way forward
Jean-Eric Paquet: We describe ‘fact based evidence’ rather than ‘science based’. A key aim is to generate ‘ownership’ of policy by citizens. Politicians are more aware of their cognitive biases than we technocrats are.
Anne Bucher: In the European Commission we used evidence initially to make the EU more accountable to the public, via systematic impact assessment and quality control. It was a key motivation for better regulation. We now focus more on generating inclusive and interactive ways to consult stakeholders.
Ann Mettler: Evidence-based policymaking is at the heart of democracy. How else can you legitimise your actions? How else can you prepare for the future? How else can you make things work better? Yet, a lot of our evidence presentation is so technical that it is difficult even for specialists to follow. The onus is on us to bring it to life, to make it clearer to the citizen and, in the process, defend scientists (and journalists) during a period in which Western democracies seem to be at risk from anti-democratic forces.
Mariana Kotzeva: Our facts are now considered from an emotional and perception point of view. The process does not just involve our comfortable circle of experts; we are now challenged to explain our numbers. Attention to our numbers can be unpredictable (e.g. on migration). We need to build up trust in our facts, partly to anticipate or respond to the quick spread of poor facts.
Rush Holt: In society we can find the erosion of the feeling that science is relevant to ‘my life’, and few US policymakers ask ‘what does science say about this?’ partly because scientists set themselves above politics. Politicians have had too many bad experiences with scientists who might say ‘let me explain this to you in a way you can understand’. Policy is not about science based evidence; more about asking a question first, then asking what evidence you need. Then you collect evidence in an open way to be verified.

Phew!

That was 10 hours of discussion condensed into one post. If you can handle more discussion from me, see:

Psychology and policymaking: Three ways to communicate more effectively with policymakers

The role of evidence in policy: EBPM and How to be heard  

Practical Lessons from Policy Theories

The generation of many perspectives to help us understand the use of evidence

How to be an ‘entrepreneur’ when presenting evidence


Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling

How can governments better collaborate to address complex problems?


This is a guest post by William L. Swann and Seo Young Kim, discussing how to use insights from the Institutional Collective Action Framework to improve collaborative governance. The full paper has been submitted to the Policy and Politics series Practical Lessons from Policy Theories.


Many public policy problems cannot be addressed effectively by a single, solitary government. Consider the problems facing the Greater Los Angeles Area, a heavily fragmented landscape of 88 cities and numerous unincorporated areas and special districts. Whether it is combatting rising homelessness, abating the country’s worst air pollution, cleaning the toxic L.A. River, or quelling gang violence, any policy alternative pursued unilaterally is limited by overlapping authority and externalities that alter the actions of other governments.

Problems of fragmented authority are not confined to metropolitan areas. They are also found in multi-level governance scenarios such as the restoration of Chesapeake Bay, as well as in international relations as demonstrated by recent global events such as “Brexit” and the U.S.’s withdrawal from the Paris Climate Agreement. In short, fragmentation problems manifest at every scale of governance, horizontally, vertically, and even functionally within governments.

What is an ‘institutional collective action’ dilemma?

In many cases governments would be better off coordinating and working together, but they face barriers that prevent them from doing so. These barriers are what the policy literature refers to as ‘institutional collective action’ (ICA) dilemmas, or collective action problems in which a government’s incentives do not align with collectively desirable outcomes. For example, all governments in a region benefit from less air pollution, but each government has an incentive to free ride and enjoy cleaner air without contributing to the cost of obtaining it.

The ICA Framework, developed by Professor Richard Feiock, has emerged as a practical analytical instrument for understanding and improving fragmented governance. This framework assumes that governments must match the scale and coerciveness of the policy intervention (or mechanism) to the scale and nature of the policy problem to achieve efficient and desired outcomes.

For example, informal networks (a mechanism) can be highly effective at overcoming simple collective action problems. But as problems become increasingly complex, more obtrusive mechanisms, such as governmental consolidation or imposed collaboration, are needed to achieve collective goals and more efficient outcomes. The more obtrusive the mechanism, however, the more actors’ autonomy diminishes and the higher the transaction costs (monitoring, enforcement, information, and agency) of governing.


Three ways to improve institutional collaborative governance

We explored what actionable steps policymakers can take to improve their results with collaboration in fragmented systems. Our study offers three general practical recommendations based on the empirical literature that can enhance institutional collaborative governance.

First, institutional collaboration is more likely to emerge and work effectively when policymakers employ networking strategies that incorporate frequent, face-to-face interactions.

Networking with popular, well-endowed actors (“bridging strategies”), as well as developing closer-knit, reciprocal ties with a smaller set of actors (“bonding strategies”), results in more collaborative participation, especially when policymakers interact often and in person.

Policy network characteristics are also important to consider. Research on estuary governance indicates that in newly formed, emerging networks, bridging strategies may be more advantageous, at least initially, because they can provide organizational legitimacy and access to resources. However, once collaboratives mature, developing stronger and more reciprocal bonds with fewer actors reduces the likelihood of opportunistic behavior that can hinder collaborative effectiveness.

Second, policymakers should design collaborative arrangements that reduce transaction costs which hinder collaboration.

Well-designed collaborative institutions can lower the barriers to participation and information sharing, make it easier to monitor the behaviors of partners, grant greater flexibility in collaborative work, and allow for more credible commitments from partners.

Research suggests policymakers can achieve this by

  1. identifying similarities in policy goals, politics, and constituency characteristics with institutional partners
  2. specifying rules such as annual dues, financial reporting, and making financial records reviewable by third parties to increase commitment and transparency in collaborative arrangements
  3. creating flexibility by employing adaptive agreements with service providers, especially when services have limited markets/applications and performance is difficult to measure.

Considering the context, however, is crucial. Collaboratives that thrive on informal, close-knit, reciprocal relations, for example, may be severely damaged by the introduction of monitoring mechanisms that signal distrust.

Third, institutional collaboration is enhanced by the development and harnessing of collaborative capacity.

Research suggests signaling organizational competencies and capacities, such as budget, political support, and human resources, may be more effective at lowering barriers to collaboration than ‘homophily’ (a tendency to associate with similar others in networks). Policymakers can begin building collaborative capacity by seeking political leadership involvement, granting greater managerial autonomy, and looking to higher-level governments (e.g., national, state, or provincial governments) for financial and technical support for collaboration.

What about collaboration in different institutional contexts?

Finally, we recognize that not all policymakers operate in similar institutional contexts, and collaboration can often be mandated by higher-level authorities in more centralized nations. Nonetheless, visible joint gains, economic incentives, transparent rules, and equitable distribution of joint benefits and costs are critical components of voluntary or mandated collaboration.

Conclusions and future directions

The recommendations offered here are, at best, only the tip of the iceberg of the valuable practical insight that can be gleaned from collaborative governance research. While these suggestions are consistent with empirical findings from broader public management and policy networks literatures, much could be learned from a closer inspection of the overlap between ICA studies and other streams of collaborative governance work.

Collaboration is a valuable tool of governance, and, like any tool, it should be utilized appropriately. Collaboration is not easily managed and can encounter many obstacles. We suggest that governments generally avoid collaborating unless there are joint gains that cannot be achieved alone. But the key to solving many of society’s intractable problems, or just simply improving everyday public service delivery, lies in a clearer understanding of how collaboration can be used effectively within different fragmented systems.


Filed under public policy

The Politics of Evidence

This is a draft of my review of Justin Parkhurst (2017) The Politics of Evidence (Routledge, Open Access)

Justin Parkhurst’s aim is to identify key principles to take forward the ‘good governance of evidence’. The good governance of scientific evidence in policy and policymaking requires us to address two fundamentally important ‘biases’:

  1. Technical bias. Some organisations produce bad evidence, some parts of government cherry-pick, manipulate, or ignore evidence, and some politicians misinterpret the implications of evidence when calculating risk. Sometimes, these things are done deliberately for political gain. Sometimes they are caused by cognitive biases which cause us to interpret evidence in problematic ways. For example, you can seek evidence that confirms your position, and/ or only believe the evidence that confirms it.
  2. Issue bias. Some evidence advocates use the mantra of ‘evidence based policy’ to depoliticise issues or downplay the need to resolve conflicts over values. They also focus on the problems most conducive to study via their most respected methods such as randomised control trials (RCTs). Methodological rigour trumps policy relevance and simple experiments trump the exploration of complex solutions. So, we lose sight of the unintended consequences of producing the ‘best’ evidence to address a small number of problems, and making choices about the allocation of research resources and attention. Again, this can be deliberate or caused by cognitive biases, such as to seek simpler and more answerable questions than complex questions with no obvious answer.

To address both problems, Parkhurst seeks pragmatic ways to identify principles to decide what counts as ‘good evidence to inform policy’ and ‘what constitutes the good use of evidence within a policy process’:

‘it is necessary to consider how to establish evidence advisory systems that promote the good governance of evidence – working to ensure that rigorous, systematic and technically valid pieces of evidence are used within decision-making processes that are inclusive of, representative of and accountable to the multiple social interests of the population served’ (p8).

Parkhurst identifies some ways in which to bring evidence and policy closer together. First, to produce evidence more appropriate for, or relevant to, policymaking (‘good evidence for policy’):

  1. Relate evidence more closely to policy goals.
  2. Modify research approaches and methods to answer policy relevant questions.
  3. Ensure that the evidence relates to the local or relevant context.

Second, to produce the ‘good use of evidence’, combine three forms of ‘legitimacy’:

  1. Input, to ensure democratic representative bodies have the final say.
  2. Throughput, to ensure widespread deliberation.
  3. Output, to ensure proper consideration of the use of the most systematic, unbiased and rigorously produced scientific evidence relevant to the problem.

In the final chapter, Parkhurst suggests that these aims can be pursued in many ways depending on how governments want to design evidence advisory systems, but that it’s worth drawing on the examples of good practice he identifies. Parkhurst also explores the role for Academies of science, or initiatives such as the Cochrane Collaboration, to provide independent advice. He then outlines the good governance of evidence built on key principles: appropriate evidence, accountability in evidence use, transparency, and contestability (to ensure sufficient debate).

The overall result is a book full of interesting discussion and very sensible, general advice for people new to the topic of evidence and policy. This is no mean feat: most readers will seek a clearly explained and articulate account of the subject, and they get it here.

For me, the most interesting thing about Parkhurst’s book is the untold story, or often-implicit reasoning, behind the way in which it is framed. We can infer that it is not a study aimed primarily at a political science or social science audience, because most of that audience would take its starting point for granted: the use of evidence is political, and politics involves values. Yet, Parkhurst feels the need to remind the reader of this point, in specific (“it is worth noting that the US presidency is a decidedly political role”, p43) and general circumstances (‘the nature of policymaking is inherently political’, p65). Throughout, the audience appears to be academics who begin with a desire for ‘evidence based policy’ without fully thinking through the implications: the lack of a magic bullet of evidence to solve a policy problem, how we might maintain a political system conducive to democratic principles and good evidence use, how we might design a system to reduce key ‘barriers’ between the supply of evidence by scientists and its demand by policymakers, and why few such designs have taken off.

In other words, the book appeals primarily to scientists trained outside social science, some of whom think about politics in their spare time, or encounter it in dispiriting encounters with policymakers. It appeals to that audience with a statement on the crucial role of high quality evidence in policymaking, highlights barriers to its use, tells scientists that they might be part of the problem, but then provides them with the comforting assurance that we can design better systems to overcome at least some of those barriers. For people trained in policy studies, this concluding discussion seems like a tall order, and I think most would read it with great scepticism.

Policy scientists might also be sceptical about the extent to which scientists from other fields think this way about hierarchies of scientific evidence and the desire to depoliticise politics with a primary focus on ‘what works’. Yet, I too hear this language regularly in interdisciplinary workshops (often while standing next to Justin!), and it is usually accompanied by descriptions of the pathology of policymaking, the rise of post-truth politics and rejection of experts, and the need to focus on the role of objective facts in deciding what policy solutions work best. Indeed, I was impressed recently by the skilled way in which another colleague prepared this audience for some provocative remarks when he suggested that the production and use of evidence is about power, not objectivity. OMG: who knew that policymaking was political and about power?!

So, the insights from this book are useful to a large audience of scientists while, for a smaller audience of policy scientists, they remind us that there is an audience out there for many of the statements that many of us would take for granted. Some evidence advocates use the language of ‘evidence based policymaking’ strategically, to get what they want. Others appear to use it because they believe it can exist. Keep this in mind when you read the book.

Parkhurst


Filed under Evidence Based Policymaking (EBPM)

Three ways to communicate more effectively with policymakers

By Paul Cairney and Richard Kwiatkowski

Use psychological insights to inform communication strategies

Policymakers cannot pay attention to all of the things for which they are responsible, or understand all of the information they use to make decisions. Like all people, they face limits on how much information they can process (Baddeley, 2003; Cowan, 2001, 2010; Miller, 1956; Rock, 2008).

They must use shortcuts to gather enough information to make decisions quickly: the ‘rational’, by pursuing clear goals and prioritizing certain kinds of information, and the ‘irrational’, by drawing on emotions, gut feelings, values, beliefs, habits, schemata, scripts, and what is familiar. Unlike most people, they face unusually strong pressures on their cognition and emotion.

Policymakers need to gather information quickly and effectively, often in highly charged political atmospheres, so they develop heuristics to allow them to make what they believe to be good choices. Perhaps their solutions seem to be driven more by their values and emotions than by a ‘rational’ analysis of the evidence, often because we hold them to a standard that no human can reach.

If so, and if they have high confidence in their heuristics, they will dismiss criticism from researchers as biased and naïve. Under those circumstances, we suggest that restating the need for ‘rational’ and ‘evidence-based policymaking’ is futile, naively ‘speaking truth to power’ counterproductive, and declaring ‘policy based evidence’ defeatist.

We use psychological insights to recommend a shift in strategy for advocates of the greater use of evidence in policy. The simple recommendation, to adapt to policymakers’ ‘fast thinking’ (Kahneman, 2011) rather than bombard them with evidence in the hope that they will get round to ‘slow thinking’, is already becoming established in evidence-policy studies. However, we provide a more sophisticated understanding of policymaker psychology, to help understand how people think and make decisions as individuals and as part of collective processes. It allows us to (a) combine many relevant psychological principles with policy studies to (b) provide several recommendations for actors seeking to maximise the impact of their evidence.

To ‘show our work’, we first summarise insights from policy studies already drawing on psychology to explain policy process dynamics, and identify key aspects of the psychology literature which show promising areas for future development.

Then, we emphasise the benefit of pragmatic strategies, to develop ways to respond positively to ‘irrational’ policymaking while recognising that the biases we ascribe to policymakers are present in ourselves and our own groups. Instead of bemoaning the irrationality of policymakers, let’s marvel at the heuristics they develop to make quick decisions despite uncertainty. Then, let’s think about how to respond effectively. Instead of identifying only the biases in our competitors, and masking academic examples of group-think, let’s reject our own imagined standards of high-information-led action. This more self-aware and humble approach will help us work more successfully with other actors.

On that basis, we provide three recommendations for actors trying to engage skilfully in the policy process:

  1. Tailor framing strategies to policymaker bias. If people are cognitive misers, minimise the cognitive burden of your presentation. If policymakers combine cognitive and emotive processes, combine facts with emotional appeals. If policymakers make quick choices based on their values and simple moral judgements, tell simple stories with a hero and moral. If policymakers reflect a ‘group emotion’, based on their membership of a coalition with firmly-held beliefs, frame new evidence to be consistent with those beliefs.
  2. Identify ‘windows of opportunity’ to influence individuals and processes. ‘Timing’ can refer to the right time to influence an individual, depending on their current way of thinking, or to act while political conditions are aligned.
  3. Adapt to real-world ‘dysfunctional’ organisations rather than waiting for an orderly process to appear. Form relationships in networks, coalitions, or organisations first, then supply challenging information second. To challenge without establishing trust may be counterproductive.

These tips are designed to produce effective, not manipulative, communicators. They help foster the clearer communication of important policy-relevant evidence, rather than imply that we should bend evidence to manipulate or trick politicians. We argue that it is pragmatic to work on the assumption that people’s beliefs are honestly held, and policymakers believe that their role is to serve a cause greater than themselves. To persuade them to change course requires showing simple respect and seeking ways to secure their trust, rather than simply ‘speaking truth to power’. Effective engagement requires skilful communication and good judgement as much as good evidence.


This is the introduction to our revised and resubmitted paper to the special issue of Palgrave Communications The politics of evidence-based policymaking: how can we maximise the use of evidence in policy? Please get in touch if you are interested in submitting a paper to the series.

Full paper: Cairney Kwiatkowski Palgrave Comms resubmission CLEAN 14.7.17


Filed under agenda setting, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy

The impact of multi-level policymaking on the UK energy system

Cairney et al UKERC

In September, we will begin a one-year UKERC project examining current and future energy policy and multi-level policymaking and its impact on ‘energy systems’. This is no mean feat, since the meaning of policy, policymaking (or the ‘policy process’), and ‘system’ are not clear, and our description of the component parts of an energy system and a complex policymaking system may differ markedly. So, one initial aim is to provide some way to turn a complex field of study into something simple enough to understand and engage with.

We do so by focusing on ‘multi-level policymaking’ – which can encompass concepts such as multi-level governance and intergovernmental relations – to reflect the fact that the responsibility for policies relevant to energy are often Europeanised, devolved, and shared between several levels of government. Brexit will produce a major effect on energy and non-energy policies, and prompt the UK and devolved governments to develop new relationships, but we all need more clarity on the dynamics of current arrangements before we can talk sensibly about the future. To that end, we pursue three main work packages:

1. What is the ‘energy policymaking system’ and how does it affect the energy system?

Chaudry et al (2009: iv) define the UK energy system as ‘the set of technologies, physical infrastructure, institutions, policies and practices located in and associated with the UK which enable energy services to be delivered to UK consumers’. UK policymaking can have a profound impact, and constitutional changes might produce policy change, but their impacts require careful attention. So, we ‘map’ the policy process and the effect of policy change on energy supply and demand. Mapping sounds fairly straightforward but contains a series of tasks whose level of difficulty rises each time:

  1. Identify which level or type of government is responsible – ‘on paper’ and in practice – for the use of each relevant policy instrument.
  2. Identify how these actors interact to produce what we call ‘policy’, which can range from statements of intent to final outcomes.
  3. Identify an energy policy process containing many actors at many levels, the rules they follow, the networks they form, the ‘ideas’ that dominate discussion, and the conditions and events (often outside policymaker control) which constrain and facilitate action. By this stage, we need to draw on particular policy theories to identify key venues, such as subsystems, and specific collections of actors, such as advocacy coalitions, to produce a useful model of activity.

2. Who is responsible for action to reduce energy demand?

Energy demand is more challenging to policymakers than energy supply because the demand side involves millions of actors who, in the context of household energy use, also constitute the electorate. There are political tensions in making policies to reduce energy demand and carbon where this involves cost and inconvenience for private actors who do not necessarily value the societal returns achieved, and the political dynamics differ from those of policies to regulate industrial demand. There are also tensions around public perceptions of whose responsibility it is to take action – local, devolved, national, or international government agencies – and governments can look like they are trying to shift responsibility to each other, or to individuals and firms.

So, there is no end of ways in which energy demand could be regulated or influenced – including energy labelling and product/building standards, emissions reduction measures, promotion of efficient generation, and buildings performance measures – but it is an area of policy which is notoriously diffuse and lacking in co-ordination. For the most part, then, we consider whether Brexit provides a ‘window of opportunity’ to change policy and policymaking by, for example, clarifying responsibilities and simplifying relationships.

3: Does Brexit affect UK and devolved policy on energy supply?

It is difficult for single governments to coordinate an overall energy mix to secure supply from many sources, and multi-level policymaking adds a further dimension to planning and cooperation. Yet, the effect of constitutional changes is highly uneven. For example, devolution has allowed Scotland to go its own way on renewable energy, nuclear power and fracking, but Brexit’s impact ranges from high to low. It presents new and sometimes salient challenges for cooperation to supply renewable energy but, while fracking and nuclear are often the most politically salient issues, Brexit may have relatively little impact on policymaking within the UK.

We explore the possibility that renewables policy may be most impacted by Brexit, while nuclear and fracking are examples in which Brexit may have a minimal direct impact on policy. Overall, the big debates are about the future energy mix, and how local, devolved, and UK governments balance the local environmental impacts of, and likely political opposition to, energy development against the economic and energy supply benefits.

For more details, see our 4-page summary

Powerpoint for 13.7.17

Cairney et al UKERC presentation 23.10.17


Filed under Fracking, public policy, UKERC

Policy in 500 Words: The Policy Process

We talk a lot about ‘the policy process’ without really saying what it is. If you are new to policy studies, maybe you think that you’ll learn what it is eventually if you read enough material. This would be a mistake! Instead, when you seek a definition of the policy process, you’ll find two common responses.

  1. Many will seek to define policy or public policy instead of ‘the policy process’.
  2. Some will describe the policy process as a policy cycle with stages.

Both responses seem inadequate: one avoids giving an answer, and another gives the wrong answer!

However, we can combine elements of each approach to give you just enough of a sense of ‘the policy process’ to continue reading:

  1. The beauty of the ‘what is policy?’ question is that there is no definitive answer. Instead, I give you a working definition to help raise further questions. Look at the questions we need to ask if we begin with the definition, ‘the sum total of government action, from signals of intent to the final outcomes’.
  2. The beauty of the policy cycle approach is that it provides a simple way to imagine policy ‘dynamics’, or events and choices producing a sequence of other events and choices. Look at the stages to identify many different tasks within one ‘process’, and to get the sense that policymaking is continuous and often ‘its own cause’.

There are more complicated but better ways of describing policymaking dynamics

This picture is the ‘policy process’ equivalent of my definition of public policy. It captures the main elements of the policy process described (in different ways) by most policy theories. It is there to give you enough of an answer to help you ask the right questions.

Cairney 2017 image of the policy process

In the middle is ‘policy choice’. At the heart of most policy theory is ‘bounded rationality’, which describes (a) the cognitive limits of people, and (b) how they overcome those limits to make decisions. They use ‘rational’ and ‘irrational’ shortcuts to action.

Surrounding choice is what we’ll call the ‘policy environment’, containing: policymakers in many levels and types of government, the ideas or beliefs they share, the rules they follow, the networks they form with influencers, and the ‘structural’ or socioeconomic context in which they operate.

This picture is only the beginning of analysis, raising further questions that will make more sense as you read on, including: should policymaker choice be at the centre of this picture? Why are there arrows (describing the order of choice) in the cycle but not in my picture?

Take home message for students: don’t describe ‘the policy process’ without giving the reader some sense of its meaning. Its definition overlaps with ‘policy’ considerably, but the ‘process’ emphasises modes and dynamics of policymaking, while ‘policy’ emphasises outputs. Then, think about how each policy model or theory tries, in different ways, to capture the key elements of the process. A cycle focuses on ‘stages’ but most theories in this series focus on ‘environments’.



Filed under 500 words, public policy

Three habits of successful policy entrepreneurs

This post is one part of a series – called Practical Lessons from Policy Theories and it summarizes this article (PDF).

Policy entrepreneurs invest their time wisely for future reward, and possess key skills that help them adapt particularly well to their environments. They are the agents for policy change who possess the knowledge, power, tenacity, and luck to be able to exploit key opportunities. They draw on three strategies:

1. Don’t focus on bombarding policymakers with evidence.

Scientists focus on producing more evidence to reduce uncertainty, but put people off with too much information. Entrepreneurs tell a good story, grab the audience’s interest, and prompt the audience to demand information.

Table 1

2. By the time people pay attention to a problem, it’s too late to produce a solution.

So, you produce your solution then chase problems.

Table 2

3. When your environment changes, your strategy changes.

For example, at the US federal level, you’re in the sea, and you’re a surfer waiting for the big wave. At the smaller subnational level, on a low-attention and low-budget issue, you can be Poseidon moving the ‘streams’. At the US federal level, you need to ‘soften up’ solutions over a long time to generate support. At the subnational level, or in other countries, you have more opportunity to import and adapt ready-made solutions.

Table 3

It all adds up to one simple piece of advice – timing and luck matter when making a policy case – but policy entrepreneurs know how to influence timing and help create their own luck.

Full paper: Three habits of successful policy entrepreneurs

(Note: the previous version was friendlier and more focused on entrepreneurs)

For more on ‘multiple streams’ see:

Paul Cairney and Michael Jones (2016) ‘Kingdon’s Multiple Streams Approach: What Is the Empirical Impact of this Universal Theory?’ Policy Studies Journal, 44, 1, 37-58 PDF (Annex to Cairney Jones 2016) (special issue of PSJ)

Paul Cairney and Nikos Zahariadis (2016) ‘Multiple streams analysis: A flexible metaphor presents an opportunity to operationalize agenda setting processes’ in Zahariadis, N. (eds) Handbook of Public Policy Agenda-Setting (Cheltenham: Edward Elgar) PDF see also

I use a space launch metaphor in the paper. If you prefer different images, have a look at 5 images of the policy process. If you prefer a watery metaphor (it’s your life, I suppose), click Policy Concepts in 1000 Words: Multiple Streams Analysis

For more on entrepreneurs:


Filed under agenda setting, Evidence Based Policymaking (EBPM), Folksy wisdom, public policy, Storytelling

Practical Lessons from Policy Theories

These links to blog posts (the underlined headings) and tweets (with links to their full article) describe a new special issue of Policy and Politics, published in April 2018 and free to access until the end of May.

Weible Cairney abstract

Three habits of successful policy entrepreneurs

Telling stories that shape public policy

How to design ‘maps’ for policymakers relying on their ‘internal compass’

Three ways to encourage policy learning

How can governments better collaborate to address complex problems?

How do we get governments to make better decisions?

How to navigate complex policy designs

Why advocacy coalitions matter and how to think about them

None of these abstract theories provide a ‘blueprint’ for action (they were designed primarily to examine the policy process scientifically). Instead, they offer one simple insight: you’ll save a lot of energy if you engage with the policy process that exists, not the one you want to see.

Then, they describe variations on the same themes, including:

  1. There are profound limits to the power of individual policymakers: they can only process so much information, have to ignore almost all issues, and therefore tend to share policymaking with many other actors.
  2. You can increase your chances of success if you work with that insight: identify the right policymakers, the ‘venues’ in which they operate, and the ‘rules of the game’ in each venue; build networks and form coalitions to engage in those venues; shape agendas by framing problems and telling good stories, design politically feasible solutions, and learn how to exploit ‘windows of opportunity’ for their selection.

Background to the special issue

Chris Weible and I asked a group of policy theory experts to describe the ‘state of the art’ in their field and the practical lessons that they offer.

Our next presentation was at the ECPR in Oslo:

The final articles in this series are now complete, but our introduction discusses the potential for more useful contributions.

Weible Cairney next steps pic


Filed under agenda setting, Evidence Based Policymaking (EBPM), public policy

I know my audience, but does my other audience know I know my audience?

‘Know your audience’ is a key phrase for anyone trying to convey a message successfully. To ‘know your audience’ is to understand the rules they use to make sense of your message, and therefore the adjustments you have to make to produce an effective message. Simple examples include:

  • The sarcasm rules. The first rule is fairly explicit. If you want to insult someone’s shirt, you (a) say ‘nice shirt, pal’, but also (b) use facial expressions or unusual speech patterns to signal that you mean the opposite of what you are saying. Otherwise, you’ve inadvertently paid someone a compliment, which is just not on. The second rule is implicit. Sarcasm is sometimes OK – as a joke or as some nice passive aggression – and a direct insult (‘that shirt is shite, pal’) as a joke is harder to pull off.
  • The joke rule. If you say that you went to the doctor because a strawberry was growing out of your arse and the doctor gave you some cream for it, you’d expect your audience to know you were joking because it’s such a ridiculous scenario and there’s a pun. Still, there’s a chance that, if you say it quickly, with a straight face, your audience is not expecting a joke, and/ or your audience’s first language is not English, your audience will take you seriously, if only for a second. It’s hilarious if your audience goes along with you, and a bit awkward if your audience asks kindly about your welfare.
  • Keep it simple stupid. If someone says KISS, or some modern equivalent – ‘it’s the economy, stupid’, the rule is that, generally, they are not calling you stupid (even though the insertion of the comma, in modern phrases, makes it look like they are). They are referring to the value of a simple design or explanation that as many people as possible can understand. If your audience doesn’t know the phrase, they may think you’re calling them stupid, stupid.

These rules can be analysed from various perspectives: linguistics, focusing on how and why rules of language develop; and philosophy, to help articulate how and why rules matter in sense making.

There is also a key role for psychological insights, since – for example – a lot of these rules relate to the routine ways in which people engage emotionally with the ‘signals’ or information they receive.

Think of the simple example of twitter engagement, in which people with emotional attachments to one position over another (say, pro- or anti-Brexit) respond instantly to a message (say, pro- or anti-Brexit). While some really let themselves down when they reply with their own tweet, and others don’t say a word, neither audience is immune from that emotional engagement with information. So, to ‘know your audience’ is to anticipate and adapt to the ways in which they will inevitably engage ‘rationally’ and ‘irrationally’ with your message.

I say this partly because I’ve been messing around with some simple ‘heuristics’ built on insights from psychology, including Psychology Based Policy Studies: 5 heuristics to maximise the use of evidence in policymaking.

Two audiences in the study of ‘evidence based policymaking’

I also say it because I’ve started to notice a big unintended consequence of knowing my audience: my one audience doesn’t like the message I’m giving the other. It’s a bit like gossip: maybe you only get away with it if only one audience is listening. If they are both listening, one audience seems to appreciate some new insights, while the other wonders if I’ve ever read a political science book.

The problem here is that two audiences have different rules to understand the messages that I help send. Let’s call them ‘science’ and ‘political science’ (please humour me – you’ve come this far). Then, let’s make some heroic binary distinctions in the rules each audience would use to interpret similar issues in a very different way.

I could go on with these provocative distinctions, but you get the idea. A belief taken for granted in one field will be treated as controversial in another. In one day, you can go to one workshop and hear the story of objective evidence, post-truth politics, and irrational politicians with low political will to select evidence-based policies, then go to another workshop and hear the story of subjective knowledge claims.

Or, I can give the same presentation and get two very different reactions. If these are the expectations of each audience, they will interpret and respond to my messages in very different ways.

So, imagine I use some psychology insights to appeal to the ‘science’ audience. I know that, to keep it on side and receptive to my ideas, I should begin by being sympathetic to its aims. So, my implicit story is along the lines of, ‘if you believe in the primacy of science and seek evidence-based policy, here is what you need to do: adapt to irrational policymaking and find out where the action is in a complex policymaking system’. Then, if I’m feeling energetic and provocative, I’ll slip in some discussion about knowledge claims by saying something like, ‘politicians (and, by the way, some other scholars) don’t share your views on the hierarchy of evidence’, or inviting my audience to reflect on how far they’d go to override the beliefs of other people (such as the local communities or service users most affected by the evidence-based policies that seem most effective).

The problem with this story is that key parts are implicit and, by appearing to go along with my audience, I provoke a reaction in another audience: don’t you know that many people have valid knowledge claims? Politics is about values and power, don’t you know?

So, that’s where I am right now. I feel like I ‘know my audience’ but I am struggling to explain to my original political science audience that I need to describe its insights in a very particular way to have any traction in my other science audience. ‘Know your audience’ can only take you so far unless your other audience knows that you are engaged in knowing your audience.

If you want to know more, see:

Kathryn Oliver and I have just published an article on the relationship between evidence and policy

How far should you go to secure academic ‘impact’ in policymaking? From ‘honest brokers’ to ‘research purists’ and Machiavellian manipulators

Why doesn’t evidence win the day in policy and policymaking?

The Science of Evidence-based Policymaking: How to Be Heard

When presenting evidence to policymakers, engage with the policy process that exists, not the process you wish existed



Filed under Academic innovation or navel gazing, agenda setting, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling

‘Co-producing’ comparative policy research: how far should we go to secure policy impact?

See also our project website IMAJINE.

Two recent articles explore the role of academics in the ‘co-production’ of policy and/or knowledge.

Both papers suggest (I think) that academic engagement in the ‘real world’ is highly valuable, and that we should not pretend that we can remain aloof from politics when producing new knowledge (research production is political even if it is not overtly party political). They also suggest that it is fraught with difficulty and, perhaps, an often-thankless task with no guarantee of professional or policy payoffs (intrinsic motivation still trumps extrinsic motivation).

So, what should we do?

I plan to experiment a little bit while conducting some new research over the next 4 years. For example, I am part of a new project called IMAJINE, and plan to speak with policymakers, from the start to the end, about what they want from the research and how they’ll use it. My working assumption is that it will help boost the academic value and policy relevance of the research.

I have mocked up a paper abstract to describe this kind of work:

In this paper, we use policy theory to explain why the ‘co-production’ of comparative research with policymakers makes it more policy relevant: it allows researchers to frame their policy analysis with reference to the ways in which policymakers frame policy problems; and, it helps them identify which policymaking venues matter, and the rules of engagement within them.  In other words, theoretically-informed researchers can, to some extent, emulate the strategies of interest groups when they work out ‘where the action is’ and how to adapt to policy agendas to maximise their influence. Successful groups identify their audience and work out what it wants, rather than present their own fixed views to anyone who will listen.

Yet, when described so provocatively, our argument raises several practical and ethical dilemmas about the role of academic research. In abstract discussions, they include questions such as: should you engage this much with politics and policymakers, or maintain a critical distance; and, if you engage, should you simply reflect or seek to influence the policy agenda? In practice, such binary choices are artificial, prompting us to explore how to manage our engagement in politics and reflect on our potential influence.

We explore these issues with reference to a new Horizon 2020 funded project IMAJINE, which includes a work package – led by Cairney – on the use of evidence and learning from the many ways in which EU, national, and regional policymakers have tried to reduce territorial inequalities.

So, in the paper we (my future research partner and I), would:

  • Outline the payoffs to this engage-early approach. Early engagement will inform the research questions you ask, how you ask them, and how you ‘frame’ the results. It should also help produce more academic publications (which is still the key consideration for many academics), partly because this early approach will help us speak with some authority about policy and policymaking in many countries.
  • Describe the complications of engaging with different policymakers in many ‘venues’ in different countries: you would expect very different questions to arise, and perhaps struggle to manage competing audience demands.
  • Raise practical questions about the research audience, including: should we interview key advocacy groups and private sources of funding for applied research, as well as policymakers, when refining questions? I ask this question partly because it can be more effective to communicate evidence via policy influencers rather than try to engage directly with policymakers.
  • Raise ethical questions, including: what if policymaker interviewees want the ‘wrong’ questions answered? What if they are only interested in policy solutions that we think are misguided, either because the evidence-base is limited (and yet they seek a magic bullet) or their aims are based primarily on ideology (an allegedly typical dilemma regards left-wing academics providing research for right-wing governments)?

Overall, you can see the potential problems: you ‘enter’ the political arena to find that it is highly political! You find that policymakers are mostly interested in (what you believe are) ineffective or inappropriate solutions and/ or they think about the problem in ways that make you, say, uncomfortable. So, should you engage in a critical way, risking exclusion from the ‘coproduction’ of policy, or in a pragmatic way, to ‘coproduce’ knowledge and maximise your chances of impact in government?

The case study of territorial inequalities is a key source of such dilemmas …

…partly because it is difficult to tell how policymakers define and want to solve such policy problems. When defining ‘territorial inequalities’, they can refer broadly to geographical spread, such as within the EU Member States, or even within regions of states. They can focus on economic inequalities, inequalities linked strongly to gender, race or ethnicity, mental health, disability, and/ or inequalities spread across generations. They can focus on indicators of inequalities in areas such as health and education outcomes, housing tenure and quality, transport, and engagement with social work and criminal justice. While policymakers might want to address all such issues, they also prioritise the problems they want to solve and the policy instruments they are prepared to use.

When considering solutions, they can choose from three basic categories:

  1. Tax and spending to redistribute income and wealth, perhaps treating economic inequalities as the source of most others (such as health and education inequalities).
  2. The provision of public services to help mitigate the effects of economic and other inequalities (such as free healthcare and education, and public transport in urban and rural areas).
  3. The adoption of ‘prevention’ strategies to engage as early as possible in people’s lives, on the assumption that key inequalities are well-established by the time children are three years old.

Based on my previous work with Emily St Denny, I’d expect that many governments express a high commitment to reduce inequalities – and it is often sincere – but without wanting to use tax/ spending as the primary means, and faced with limited evidence on the effectiveness of public services and prevention. Or, many will prefer to identify ‘evidence-based’ solutions for individuals rather than to address ‘structural’ factors such as gender, ethnicity, and class. This is when the production and use of evidence becomes overtly ‘political’, because at the heart of many of these discussions is the extent to which individuals or their environments are to blame for unequal outcomes, and if richer regions should compensate poorer regions.

‘The evidence’ will not ‘win the day’ in such debates. Rather, the choice will be between, for example: (a) pragmatism, to frame evidence to contribute to well-established beliefs, about policy problems and solutions, held by the dominant actors in each political system; and, (b) critical distance, to produce what you feel to be the best evidence generated in the right way, and challenge policymakers to explain why they won’t use it. I suspect that (a) is more effective, but (b) better reflects what most academics thought they were signing up to.

For more on IMAJINE, see New EU study looks at gap between rich and poor and The theory and practice of evidence-based policy transfer: can we learn how to reduce territorial inequalities?

For more on evidence/ policy dilemmas, see Kathryn Oliver and I have just published an article on the relationship between evidence and policy

 


Kathryn Oliver and I have just published an article on the relationship between evidence and policy

Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy?

“There is extensive health and public health literature on the ‘evidence-policy gap’, exploring the frustrating experiences of scientists trying to secure a response to the problems and solutions they raise and identifying the need for better evidence to reduce policymaker uncertainty. We offer a new perspective by using policy theory to propose research with greater impact, identifying the need to use persuasion to reduce ambiguity, and to adapt to multi-level policymaking systems”.

We use this table to describe how the policy process works, how effective actors respond, and the dilemmas that arise for advocates of scientific evidence: should they act this way too?

We summarise this argument in two posts for:

The Guardian If scientists want to influence policymaking, they need to understand it

Sax Institute The evidence policy gap: changing the research mindset is only the beginning

The article is part of a wider body of work in which one or both of us considers the relationship between evidence and policy in different ways, including:

Paul Cairney, Kathryn Oliver, and Adam Wellstead (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review PDF

Paul Cairney (2016) The Politics of Evidence-Based Policy Making (PDF)

Oliver, K., Innvar, S., Lorenc, T., Woodman, J. and Thomas, J. (2014a) ‘A systematic review of barriers to and facilitators of the use of evidence by policymakers’ BMC health services research, 14 (1), 2. http://www.biomedcentral.com/1472-6963/14/2

Oliver, K., Lorenc, T., & Innvær, S. (2014b) ‘New directions in evidence-based policy research: a critical analysis of the literature’, Health Research Policy and Systems, 12, 34 http://www.biomedcentral.com/content/pdf/1478-4505-12-34.pdf

Paul Cairney (2016) Evidence-based best practice is more political than it looks in Evidence and Policy

Many of my blog posts explore how people like scientists or researchers might understand and respond to the policy process:

The Science of Evidence-based Policymaking: How to Be Heard

When presenting evidence to policymakers, engage with the policy process that exists, not the process you wish existed

Policy Concepts in 1000 Words: ‘Evidence Based Policymaking’

‘Evidence-based Policymaking’ and the Study of Public Policy

How far should you go to secure academic ‘impact’ in policymaking?

Political science improves our understanding of evidence-based policymaking, but does it produce better advice?

Psychology Based Policy Studies: 5 heuristics to maximise the use of evidence in policymaking

What 10 questions should we put to evidence for policy experts?

Why doesn’t evidence win the day in policy and policymaking?

We all want ‘evidence based policy making’ but how do we do it?

How can political actors take into account the limitations of evidence-based policy-making? 5 key points

The Politics of Evidence Based Policymaking: 3 messages

The politics of evidence-based best practice: 4 messages

The politics of implementing evidence-based policies

There are more posts like this on my EBPM page

I am also guest editing a series of articles for the Open Access journal Palgrave Communications on the ‘politics of evidence-based policymaking’ and we are inviting submissions throughout 2017.

There are more details on that series here.

And finally …

… if you’d like to read about the policy theories underpinning these arguments, see Key policy theories and concepts in 1000 words and 500 words.



How far should you go to secure academic ‘impact’ in policymaking? From ‘honest brokers’ to ‘research purists’ and Machiavellian manipulators

Long read for Political Studies Association annual conference 2017 panel Rethinking Impact: Narratives of Research-Policy Relations. There is a paper too, but I’ve hidden it in the text like an Easter Egg hunt.

I’ve watched a lot of film and TV dramas over the decades. Many have the same basic theme, characters, and moral:

  1. There is a villain getting away with something, such as cheating at sport or trying to evict people to make money on a property deal.
  2. There are some characters who complain that life is unfair and there’s nothing they can do about it.
  3. A hero emerges to inspire the other characters to act as a team/ fight the system and win the day. Think of a range from Wyldstyle to Michael Corleone.

For many scientists right now, the villains are people like Trump or Farage, Trump’s election and Brexit symbolise an unfairness on a grand scale, and there’s little they can do about it in a ‘post-truth’ era in which people have had enough of facts and experts. Or, when people try to mobilise, they are unsure about what to do or how far they are willing to go to win the day.

These issues are playing out in different ways, from the March for Science to the conferences informing debates on modern principles of government-science advice (see INGSA). Yet, the basic question is the same when scientists are trying to re-establish a particular role for science in the world: can you present science as (a) a universal principle and (b) an unequivocal resource for good, producing (c) evidence so pure that it speaks for itself, regardless of (d) the context in which specific forms of scientific evidence are produced and used?

Of course not. Instead, we are trying to privilege the role of science and scientific evidence in politics and policymaking without always acknowledging that these activities are political acts:

(a) selling scientific values rather than self-evident truths, and

(b) using particular values to cement the status of particular groups at the expense of others, either within the scientific profession (in which some disciplines and social groups win systematically) or within society (in which scientific experts generally enjoy privileged positions in policymaking arenas).

Politics is about exercising power to win disputes, from visible acts to win ‘key choices’, to less visible acts to keep issues off agendas and reinforce the attitudes and behaviours that systematically benefit some groups at the expense of others.

To deny this link between science, politics and power – in the name of ‘science’ – is (a) silly, and (b) not scientific, since there is a wealth of policy science out there which highlights this relationship.

Instead, academic and working scientists should make better use of their political-thinking-time to consider this basic dilemma regarding political engagement: how far are you willing to go to make an impact and get what you want?  Here are three examples.

  1. How energetically should you give science advice?

My impression is that most scientists feel most comfortable with the unfortunate idea of separating facts from values (rejected by Douglas), and living life as ‘honest brokers’ rather than ‘issue advocates’ (a pursuit described by Pielke and critiqued by Jasanoff). For me, this is generally a cop-out since it puts the responsibility on politicians to understand the implications of scientific evidence, as if they were self-evident, rather than on scientists to explain the significance in a language familiar to their audience.

On the other hand, the alternative is not really clear. ‘Getting your hands dirty’, to maximise the uptake of evidence in politics, is a great metaphor but a hopeless blueprint, especially when you, as part of a notional ‘scientific community’, face trade-offs between doing what you think is the right thing and getting what you want.

There are 101 examples of these individual choices that make up one big engagement dilemma. One of my favourite examples from table 1 is as follows:

One argument stated frequently is that, to be effective in policy, you should put forward scientists with a particular background trusted by policymakers: white men in their 50s with international reputations and strong networks in their scientific field. This way, they resemble the profile of key policymakers who tend to trust people already familiar to them. Another is that we should widen out science and science advice, investing in a new and diverse generation of science-policy specialists, to address the charge that science is an elite endeavour contributing to inequalities.

  2. How far should you go to ensure that the ‘best’ scientific evidence underpins policy?

Kathryn Oliver and I identify the dilemmas that arise when principles of evidence-production meet (a) principles of governance and (b) real world policymaking. Should scientists learn how to be manipulative, to combine evidence and emotional appeals to win the day? Should they reject other forms of knowledge, and particular forms of governance, if they think these get in the way of the use of the best evidence in policymaking?

Cairney Oliver 2017 table 1

  3. Is it OK to use psychological insights to manipulate policymakers?

Richard Kwiatkowski and I mostly discuss how to be manipulative if you make that leap. Or, to put it less dramatically, how to identify relevant insights from psychology, apply them to policymaking, and decide how best to respond. Here, we propose five heuristics for engagement:

  1. developing heuristics to respond positively to ‘irrational’ policymaking
  2. tailoring framing strategies to policymaker bias
  3. identifying the right time to influence individuals and processes
  4. adapting to real-world (dysfunctional) organisations rather than waiting for an orderly process to appear, and
  5. recognising that the biases we ascribe to policymakers are present in ourselves and our own groups

Then there is the impact agenda, which describes something very different

I say these things to link to our PSA panel, in which Christina Boswell and Katherine Smith sum up (in their abstract) the difference between the ways in which we are expected to demonstrate academic impact, and the practices that might actually produce real impact:

“Political scientists are increasingly exhorted to ensure their research has policy ‘impact’, most notably in the form of REF impact case studies, and ‘pathways to impact’ plans in ESRC funding. Yet the assumptions underpinning these frameworks are frequently problematic. Notions of ‘impact’, ‘engagement’ and ‘knowledge exchange’ are typically premised on simplistic and linear models of the policy process, according to which policy-makers are keen to ‘utilise’ expertise to produce more effective policy interventions”.

I then sum up the same thing but with different words in my abstract:

“The impact agenda prompts strategies which reflect the science literature on ‘barriers’ between evidence and policy: produce more accessible reports, find the right time to engage, encourage academic-practitioner workshops, and hope that policymakers have the skills to understand and motive to respond to your evidence. Such strategies are built on the idea that scientists serve to reduce policymaker uncertainty, with a linear connection between evidence and policy. Yet, the literature informed by policy theory suggests that successful actors combine evidence and persuasion to reduce ambiguity, particularly when they know where the ‘action’ is within complex policymaking systems”.

The implications for the impact agenda are interesting, because there is a big difference between (a) the fairly banal ways in which we might make it easier for policymakers to see our work, and (b) the more exciting and sinister-looking ways in which we might make more persuasive cases. Yet, our incentive remains to produce the research and play it safe, producing examples of ‘impact’ that, on the whole, seem more reportable than remarkable.


Using psychological insights in politics: can we do it without calling our opponents mental, hysterical, or stupid?

One of the most dispiriting parts of fierce political debate is the casual use of mental illness or old and new psychiatric terms to undermine an opponent: she is mad, he is crazy, she is a nutter, they are wearing tin foil hats, get this guy a straitjacket and the men in white coats because he needs to lie down in a dark room, she is hysterical, his position is bipolar, and so on. This kind of statement reflects badly on the campaigner rather than their opponent.

I say this because, while doing some research for a paper on the psychology of politics and policymaking (this time with Richard Kwiatkowski, as part of this special collection), I have found potentially useful concepts that seem difficult to insulate from such political posturing. There is great potential to use them cynically against opponents rather than benefit from their insights.

The obvious ‘live’ examples relate to ‘rational’ versus ‘irrational’ policymaking. For example, one might argue that, while scientists develop facts and evidence rationally, using tried and trusted and systematic methods, politicians act irrationally, based on their emotions, ideologies, and groupthink. So, we as scientists are the arbiters of good sense and they are part of a pathological political process that contributes to ‘post truth’ politics.

The obvious problem with such accounts is that we all combine cognitive and emotional processes to think and act. We are all subject to bias in the gathering and interpretation of evidence. So, the more positive, but less tempting, option is to consider how this process works – when both competing sides act ‘rationally’ and emotionally – and what we can realistically do to mitigate the worst excesses of such exchanges. Otherwise, we will not get beyond demonising our opponents and romanticising our own cause. It gives us the warm and fuzzies on twitter and in academic conferences but contributes little to political conversations.

A less obvious example comes from modern work on the links between genes and attitudes. There is now a research agenda which uses surveys of adult twins to compare the effect of genes and environment on political attitudes. For example, Oskarsson et al (2015: 650) argue that existing studies ‘report that genetic factors account for 30–50% of the variation in issue orientations, ideology, and party identification’. One potential mechanism is cognitive ability: put simply, and rather cautiously and speculatively, with a million caveats, people with lower cognitive ability are more likely to see ‘complexity, novelty, and ambiguity’ as threatening and to respond with fear, risk aversion, and conservatism (2015: 652).

My immediate thought, when reading this stuff, is about how people would use it cynically, even at this relatively speculative stage in testing and evidence gathering: my opponent’s genes make him stupid, which makes him fearful of uncertainty and ambiguity, and therefore anxious about change and conservative in politics (in other words, the Yoda hypothesis applied only to stupid people). It’s not his fault, but his stupidity is an obstacle to progressive politics. If you add in some psychological biases, in which people inflate their own sense of intelligence and underestimate that of their opponents, you have evidence-informed, really shit political debate! ‘My opponent is stupid’ seems a bit better than ‘my opponent is mental’ but only in the sense that eating a cup of cold sick is preferable to eating shit.

I say this as we try to produce some practical recommendations (for scientists and advocates of EBPM) to engage with politicians to improve the use of evidence in policy. I’ll let you know if it goes beyond a simple maxim: adapt to their emotional and cognitive biases, but don’t simply assume they’re stupid.

See also: the many commentaries on how stupid it is to treat your political opponents as stupid

Stop Calling People “Low Information Voters”
