
The future of public health policymaking after COVID-19: lessons from Health in All Policies

Paul Cairney, Emily St Denny, Heather Mitchell 

This post summarises new research on the health equity strategy Health in All Policies. As our previous post suggests, it is common to hope that a major event will create a ‘window of opportunity’ for such strategies to flourish, but the current COVID-19 experience suggests otherwise. If so, what do HIAP studies tell us about how to respond, and do they offer any hope for future strategies? The full report is on Open Research Europe, accompanied by a brief interview on its contribution to the Horizon 2020 project – IMAJINE – on spatial justice.

COVID-19 should have prompted governments to treat health improvement as fundamental to public policy

Many governments had made strong rhetorical commitments to public health strategies focused on preventing a pandemic of non-communicable diseases (NCDs). To do so, they would address the ‘social determinants’ of health and health inequalities, defined by the WHO as ‘the unfair and avoidable differences in health status’ that are ‘shaped by the distribution of money, power and resources’ and ‘the conditions in which people are born, grow, live, work and age’.

COVID-19 reinforces the impact of the social determinants of health. Health inequalities result from factors such as income and social and environmental conditions, which influence people’s ability to protect and improve their health. COVID-19 had a visibly disproportionate impact on people with (a) underlying health conditions associated with NCDs, and (b) less ability to live and work safely.

Yet, the opposite happened. The COVID-19 response side-lined health improvement

Health departments postponed health improvement strategies and moved resources to health protection.

This experience shows that the evidence does not speak for itself

The evidence on social determinants is clear to public health specialists, but the idea of social determinants is less well known or convincing to policymakers.

It also challenges the idea that the logic of health improvement is irresistible

Health in All Policies (HIAP) is the main vehicle for health improvement policymaking, underpinned by: a commitment to health equity by addressing the social determinants of health; the recognition that the most useful health policies are not controlled by health departments; the need for collaboration across (and outside) government; and, the search for high level political commitment to health improvement.

Its logic is undeniable to HIAP advocates, but not policymakers. A government’s public commitment to HIAP does not lead inevitably to the roll-out of a fully-formed HIAP model. There is a major gap between the idea of HIAP and its implementation. It is difficult to generate HIAP momentum, and it can be lost at any time.

Instead, we need to generate more realistic lessons from health improvement and promotion policy

However, most HIAP research does not provide these lessons. Instead, it combines:

  1. functional logic (here is what we need)
  2. programme logic (here is what we think we need to do to achieve it), and
  3. hope.

Policy theory-informed empirical studies of policymaking could help produce a more realistic agenda, but very few HIAP studies seem to exploit their insights.

To that end, this review identifies lessons from studies of HIAP and policymaking

It summarises a systematic qualitative review of HIAP research. It includes 113 articles (2011-2020) that refer to policymaking theories or concepts while discussing HIAP.

We produced these conclusions from pre-COVID-19 studies of HIAP and policymaking, but our new policymaking context – and its ironic impact on HIAP – is impossible to ignore.

It suggests that HIAP advocates produced a 7-point playbook for the wrong game

The seven most common pieces of advice add up to a plausible but incomplete strategy:

  1. adopt a HIAP model and toolkit
  2. raise HIAP awareness and support in government
  3. seek win-win solutions with partners
  4. avoid the perception of ‘health imperialism’ when fostering intersectoral action
  5. find HIAP policy champions and entrepreneurs
  6. use HIAP to support the use of health impact assessments (HIAs)
  7. challenge the traditional cost-benefit analysis approach to valuing HIAP.

Yet, two emerging pieces of advice highlight the limits to the current playbook and the search for its replacement:

  1. treat HIAP as a continuous commitment to collaboration and health equity, not a uniform model; and,
  2. address the contradictions between HIAP aims.

As a result, most country studies report a major, unexpected, and disappointing gap between HIAP commitment and actual outcomes

These general findings are apparent in almost all relevant studies. They stand out in the ‘best case’ examples where: (a) there is high political commitment and strategic action (such as South Australia), or (b) political and economic conditions are conducive to HIAP (such as Nordic countries).

These studies show that the HIAP playbook has unanticipated results, such as when the win-win strategy leads to HIAP advocates giving ground but receiving little in return.

HIAP strategies to challenge the status quo are also overshadowed by more important factors, including (a) a far higher commitment to existing healthcare policies and the core business of government, and (b) state retrenchment. Additional studies of decentralised HIAP models find major gaps between (a) national strategic commitment (backed by national legislation) and (b) municipal government progress.

Some studies acknowledge the need to use policymaking research to produce new ways to encourage and evaluate HIAP success

Studies of South Australia situate HIAP in a complex policymaking system in which the link between policy activity and outcomes is not linear.  

Studies of Nordic HIAP show that a commitment to municipal responsibility and stakeholder collaboration rules out the adoption of a national uniform HIAP model.

However, most studies do not use policymaking research effectively or appropriately

Almost all HIAP studies only scratch the surface of policymaking research; the few that try to synthesise its insights do so at the cost of clarity.

Most HIAP studies use policy theories to:

  1. produce practical advice (such as to learn from ‘policy entrepreneurs’), or
  2. supplement their programme logic (to describe what they think causes policy change and better health outcomes).

Most policy theories were not designed for this purpose.

Policymaking research helps primarily to explain the HIAP ‘implementation gap’

Its main lesson is that policy outcomes are beyond the control of policymakers and HIAP advocates. This explanation does not show how to close implementation gaps.

Its practical lessons come from critical reflection on dilemmas and politics, not the reinvention of a playbook

It prompts advocates to:

  • Treat HIAP as a political project, not a technical exercise or puzzle to be solved.
  • Re-examine the likely impact of a focus on intersectoral action and collaboration, to recognise the impact of imbalances of power and the logic of policy specialisation.
  • Revisit the meaning-in-practice of the vague aims that they take for granted without explaining, such as co-production, policy learning, and organisational learning.
  • Engage with key trade-offs, such as between a desire for uniform outcomes (to produce health equity) and acceptance of major variations in HIAP policy and policymaking.
  • Avoid reinventing phrases or strategies when facing obstacles to health improvement.

We describe these points in more detail here:

Our Open Research Europe article (peer reviewed) The future of public health policymaking… (europa.eu)

Paul summarises the key points as part of a HIAP panel: Health in All Policies in times of COVID-19

ORE blog on the wider context of this work: forthcoming


Filed under agenda setting, COVID-19, Evidence Based Policymaking (EBPM), Public health, public policy

Policy Analysis in 750 Words: what you need as an analyst versus policymaking reality

This post forms one part of the Policy Analysis in 750 words series overview. Note for the eagle eyed: you are not about to experience déjà vu. I’m just using the same introduction.

When describing ‘the policy sciences’, Lasswell distinguishes between:

  1. ‘knowledge of the policy process’, to foster policy studies (the analysis of policy)
  2. ‘knowledge in the process’, to foster policy analysis (analysis for policy)

The lines between each approach are blurry, and each element makes less sense without the other. However, the distinction is crucial to help us overcome the major confusion associated with this question:

Does policymaking proceed through a series of stages?

The short answer is no.

The longer answer is that you can find about 40 blog posts (of 500 and 1000 words) which compare (a) a stage-based model called the policy cycle, and (b) the many, many policy concepts and theories that describe a far messier collection of policy processes.

[Image: the policy cycle]

In a nutshell, most policy theorists reject this image because it oversimplifies a complex policymaking system. The image provides a great way to introduce policy studies, and serves a political purpose, but it does more harm than good:

  1. Descriptively, it is profoundly inaccurate (unless you imagine thousands of policy cycles interacting with each other to produce less orderly behaviour and less predictable outputs).
  2. Prescriptively, it gives you rotten advice about the nature of your policymaking task (for more on these points, see this chapter, article, article, and series).

Why does the stages/ policy cycle image persist? Two relevant explanations

 

  1. It arose from a misunderstanding in policy studies

In another nutshell, Chris Weible and I argue (in a secret paper) that the stages approach represents a good idea gone wrong:

  • If you trace it back to its origins, you will find Lasswell’s description of decision functions: intelligence, recommendation, prescription, invocation, application, appraisal and termination.
  • These functions correspond reasonably well to a policy cycle’s stages: agenda setting, formulation, legitimation, implementation, evaluation, and maintenance, succession or termination.
  • However, Lasswell was imagining functional requirements, while the cycle seems to describe actual stages.

In other words, if you take Lasswell’s list of what policy analysts/ policymakers need to do, multiply it by the number of actors (spread across many organisations or venues) trying to do it, then you get the multi-centric policy processes described by modern theories. If, instead, you strip all that activity down into a single cycle, you get the wrong idea.

  2. It is a functional requirement of policy analysis

This description should seem familiar, because the classic policy analysis texts appear to describe a similar series of required steps, such as:

  1. define the problem
  2. identify potential solutions
  3. choose the criteria to compare them
  4. evaluate them in relation to their predicted outcomes
  5. recommend a solution
  6. monitor its effects
  7. evaluate past policy to inform current policy.

However, these texts also provide a heavy dose of caution about your ability to perform these steps (compare Bardach, Dunn, Meltzer and Schwartz, Mintrom, Thissen and Walker, Weimer and Vining).

In addition, studies of policy analysis in action suggest that:

  • an individual analyst’s need for simple steps, to turn policymaking complexity into useful heuristics and pragmatic strategies,

should not be confused with

  • an accurate description of actual policy processes.

What you need versus what you can expect

Overall, this discussion of policy studies and policy analysis reminds us of a major difference between:

  1. Functional requirements. What you need from policymaking systems, to (a) manage your task (the 5-8 step policy analysis) and (b) understand and engage in policy processes (the simple policy cycle).
  2. Actual processes and outcomes. What policy concepts and theories tell us about bounded rationality (which limits the comprehensiveness of your analysis) and policymaking complexity (which undermines your understanding and engagement in policy processes).

Of course, I am not about to provide you with a solution to these problems.

Still, this discussion should help you worry a little bit less about the circular arguments you will find in key texts: here are some simple policy analysis steps, but policymaking is not as ‘rational’ as the steps suggest, but (unless you can think of an alternative) there is still value in the steps, and so on.

See also:

The New Policy Sciences


Filed under 750 word policy analysis, agenda setting, public policy

Making sense of policy theory

Here is my 2-pager for the ICPP Montreal conference panel called ‘Making Sense of (and Through) Policy Theory’. The panel’s description is:

The panel aims to bring authors and readers together in an open exploration of the way in which theory is used in the making and analysis of policy. There are numerous books and journal special issues about policy and theory, but they do not always explain how they see the subject and why they address it in the way they do. In this panel, the authors or editors of selected current texts will be invited to state how they see policy process theory, and how they chose to address it in their book. A selection of readers, with varying relationships to the policy process and its analysis, will be invited to review the ways in which, and the extent to which, they found these books useful in advancing their understanding of the policy process.

My contribution

In my first undergraduate year (1990), Jeremy Richardson presented an image of politics (generally written in partnership with Grant Jordan) – that I still use frequently to this day:

  • The size and scope of the state is so large that it is always in danger of becoming unmanageable. The same can be said of the crowded environment in which huge numbers of actors seek policy influence. Consequently, to all intents and purposes, policymakers manage complexity by breaking the state’s component parts into policy sectors and sub-sectors, with power spread across many parts of government.
  • Elected policymakers can only pay attention to a tiny proportion of issues for which they are responsible. So, they pay attention to a small number and ignore the rest. In effect, they delegate policymaking responsibility to other actors such as bureaucrats, often at low levels of government.
  • At this level of government and specialisation, bureaucrats rely on specialist organisations for information and advice. Those organisations trade that information/advice and other resources for access to, and influence within, the government (other resources may relate to who groups represent – such as a large, paying membership, an important profession, or a high status donor or corporation).
  • Most public policy is conducted primarily through small and specialist policy communities that process issues at a level of government not particularly visible to the public, and with minimal senior policymaker involvement.
  • This description of ‘policy communities’ suggests that senior elected politicians are less important than people think, their impact on policy is questionable, and elections and changes of government may not provide the changes in policy that many expect.
  • Initially, Jordan and Richardson were addressing the worry in the 1970s that alternating parties of government were damaging UK politics. We can still find the same kinds of contrast between the popular image of centralist, majoritarian Westminster politics and academic studies of policymaking.
  • Jordan and Richardson also described the influence of US studies of interest groups and subsystems, to suggest they were describing the UK brand of an international product.

Since then, I have been interested in the extent to which key aspects of such arguments are ‘universal’ (abstract enough to apply to all systems/ times) or specific to systems and eras. Tanya Heikkila, Matt Wood, and I have just described multi-centric policymaking, which:

  1. can be abstract enough to apply universally, since so much of policy studies is based on exploring the implications of bounded rationality and complexity, but
  2. academics make sense of these concepts in very different ways, partly to reflect their preferred approaches, and partly to describe the different ways in which policy actors deal with bounded rationality and complexity in specific contexts.

My main contribution to this discussion is a picture that looks like a turtle. I use it to describe policymaking to non-specialists and reflect on this description with other specialists.

[Image: the policy process]

It projects the sense that people combine (say) cognition and emotion to make choices, and they do so within a complex policymaking environment consisting of many actors spread across many venues, each with their own rules, networks, ways of seeing the world, and ways of responding to socio-economic factors and events. The centre of the picture does not describe a centre of government, and the lines between each factor do not imply causation.

For me, these concepts represent my attempt – while going solo in Understanding Public Policy and describing ‘evidence based policymaking’ or as co-author with Tanya Heikkila or Chris Weible – to synthesise insights from many policy theories, subject to these kinds of limitations:

  1. Different academics describe each concept in remarkably different ways.
  2. Some differences seem irreconcilable. At least, we should not take synthesis lightly.
  3. The US or Global North provides the primary lens through which to view the world of policymaking. Applying that lens to Global South countries may be useful in one sense (to analyse policymaking systematically) but damaging in another (to treat some experiences as the norm and others as outliers).
  4. White male professors seem the most likely to tell these stories of policymaking. One response, explored in the 2nd edition of UPP, is to describe the problem and commit to making continuous changes. Another is to encourage far more voices as part of a series of textbooks on policymaking. I will use part of my talk to encourage such submissions to the Palgrave series that I edit, while acknowledging that the opportunities to engage, and rewards for engagement, are not shared equally.



Filed under public policy

Understanding Public Policy 2nd edition

All going well, it will be out in November 2019. We are now at the proofing stage.

I have included below the summaries of the chapters (and each chapter should also have its own entry (or multiple entries) in the 1000 Words and 500 Words series).

[Images: 2nd edition cover and chapter summaries for chapters 1-13]


Filed under 1000 words, 500 words, agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, public policy

Taking lessons from policy theory into practice: 3 examples

Notes for ANZSOG/ ANU Crawford School/ UNSW Canberra workshop. Powerpoint here. The recording of the lecture (skip to 2m30) and Q&A is here (right click to download mp3 or dropbox link).

The context for this workshop is the idea that policy theories could be more helpful to policymakers/ practitioners if we could all communicate more effectively with each other. Academics draw general and relatively abstract conclusions from multiple cases. Practitioners draw very similar conclusions from rich descriptions of direct experience in a smaller number of cases. How can we bring together their insights and use a language that we all understand? Or, more ambitiously, how can we use policy theory-based insights to inform the early career development training that civil servants and researchers receive?

The first step is to translate policy theories into a non-technical language by trying to speak with an audience beyond our immediate peers (see for example Practical Lessons from Policy Theories).

However, translation is not enough. A second crucial step is to consider how policymakers and practitioners are likely to make sense of theoretical insights when they apply them to particular aims or responsibilities. For example:

  1. Central government policymakers may accept the descriptive accuracy of policy theories emphasising limited central control, but not the recommendation that they should let go, share power, and describe their limits to the public.
  2. Scientists may accept key limitations to ‘evidence based policymaking’ but reject the idea that they should respond by becoming better storytellers or more manipulative operators.
  3. Researchers and practitioners struggle to resolve hard choices when combining evidence and ‘coproduction’ while ‘scaling up’ policy interventions. Evidence choice is political choice. Can we do more than merely encourage people to accept this point?

I discuss these examples below because they are closest to my heart (especially example 1). Note throughout that I am presenting one interpretation about: (1) the most promising insights, and (2) their implications for practice. Other interpretations of the literature and its implications are available. They are just a bit harder to find.

Example 1: the policy cycle endures despite its descriptive inaccuracy

[Image: the policy cycle]

The policy cycle does not describe and explain the policy process well:

  • If we insist on keeping the cycle metaphor, it is more accurate to see the process as a huge set of policy cycles that connect with each other in messy and unpredictable ways.
  • The cycle approach also links strongly to the idea of ‘comprehensive rationality’ in which a small group of policymakers and analysts are in full possession of the facts and full control of the policy process. They carry out their aims through a series of stages.

Policy theories provide more descriptive and explanatory usefulness. Their insights include:

  • Limited choice. Policymakers inherit organisations, rules, and choices. Most ‘new’ choice is a revision of the old.
  • Limited attention. Policymakers must ignore almost all of the policy problems for which they are formally responsible. They pay attention to some, and delegate most responsibility to civil servants. Bureaucrats rely on other actors for information and advice, and they build relationships on trust and information exchange.
  • Limited central control. Policy may appear to be made at the ‘top’ or in the ‘centre’, but in practice policymaking responsibility is spread across many levels and types of government (many ‘centres’). ‘Street level’ actors make policy as they deliver. Policy outcomes appear to ‘emerge’ locally despite central government attempts to control their fate.
  • Limited policy change. Most policy change is minor, made and influenced by actors who interpret new evidence through the lens of their beliefs. Well-established beliefs limit the opportunities of new solutions. Governments tend to rely on trial-and-error, based on previous agreements, rather than radical policy change based on a new agenda. New solutions succeed only during brief and infrequent windows of opportunity.

However, the cycle metaphor endures because:

  • It provides a simple model of policymaking with stages that map onto important policymaking functions.
  • It provides a way to project policymaking to the public. You know how we make policy, and that we are in charge, so you know who to hold to account.

In that context, we may want to be pragmatic about our advice:

  1. One option is via complexity theory, in which scholars generally encourage policymakers to accept and describe their limits:
  • Accept routine error, reduce short-term performance management, engage more in trial and error, and ‘let go’ to allow local actors the flexibility to adapt and respond to their context.
  • However, would a government in the Westminster tradition really embrace this advice? No. They need to balance (a) pragmatic policymaking, and (b) an image of governing competence.
  2. Another option is to try to help improve an existing approach.

Further reading (blog posts):

The language of complexity does not mix well with the language of Westminster-style accountability

Making Sense of Policymaking: why it’s always someone else’s fault and nothing ever changes

Two stories of British politics: the Westminster model versus Complex Government

Example 2: how to deal with a lack of ‘evidence based policymaking’

I used to read many papers on tobacco policy, with the same basic message: we have the evidence of tobacco harm, and evidence of which solutions work, but there is an evidence-policy gap caused by too-powerful tobacco companies, low political will, and pathological policymaking. These accounts are not informed by theories of policymaking.

I then read Oliver et al’s paper on the lack of policy theory in health/ environmental scholarship on the ‘barriers’ to the use of evidence in policy. Very few articles rely on policy concepts, and most of the few rely on the policy cycle. This lack of policy theory is clear in their description of possible solutions – better communication, networking, timing, and more science literacy in government – which does not describe well the need to respond to policymaker psychology and a complex policymaking environment.

So, I wrote The Politics of Evidence-Based Policymaking and one zillion blog posts to help identify the ways in which policy theories could help explain the relationship between evidence and policy.

Since then, the highest demand to speak about the book has come from government/ public servant, NGO, and scientific audiences outside my discipline. The feedback is generally that: (a) the book’s description sums up their experience of engagement with the policy process, and (b) maybe it opens up discussion about how to engage more effectively.

But how exactly do we turn empirical descriptions of policymaking into practical advice?

For example, scientist/ researcher audiences want to know the answer to a question like: Why don’t policymakers listen to your evidence? and so I focus on three conversation starters:

  1. they have a broader view on what counts as good evidence (see ANZSOG description)
  2. they have to ignore almost all information (a nice way into bounded rationality and policymaker psychology)
  3. they do not understand or control the process in which they seek to use evidence (a way into ‘the policy process’)

[Image: Cairney (2017) image of the policy process]

We can then consider many possible responses in the sequel What can you do when policymakers ignore your evidence?

Examples include:

  • ‘How to do it’ advice. I compare tips for individuals (from experienced practitioners) with tips based on policy concepts. They are quite similar-looking tips – e.g. find out where the action is, learn the rules, tell good stories, engage allies, seek windows of opportunity – but I describe mine as 5 impossible tasks!
  • Organisational reform. I describe work with the European Commission Joint Research Centre to identify 8 skills or functions of an organisation bringing together the supply/demand of knowledge.
  • Ethical dilemmas. I use key policy theories to ask people how far they want to go to privilege evidence in policy. It’s fun to talk about these things with the type of scientist who sees any form of storytelling as manipulation.

Further reading:

Is Evidence-Based Policymaking the same as good policymaking?

A 5-step strategy to make evidence count

Political science improves our understanding of evidence-based policymaking, but does it produce better advice?

Principles of science advice to government: key problems and feasible solutions

Example 3: how to encourage realistic evidence-informed policy transfer

This focus on EBPM is useful context for discussions of ‘policy learning’ and ‘policy transfer’, and it was the focus of my ANZSOG talk entitled (rather ambitiously) ‘teaching evidence-based policy to fly’.

I’ve taken a personal interest in this one because I’m part of a project – called IMAJINE – in which we have to combine academic theory and practical responses. We are trying to share policy solutions across Europe rather than explain why few people share them!

For me, the context is potentially overwhelming.

So, when we start to focus on sharing lessons, we will have three things to discover:

  1. What is the evidence for success, and from where does it come? Governments often project success without backing it up.
  2. What story do policymakers tell about the problem they are trying to solve, the solutions they produced, and why? Two different governments may be framing and trying to solve the same problem in very different ways.
  3. Was the policy introduced in a comparable policymaking system? People tend to focus on political system comparability (e.g. is it unitary or federal?), but I think the key is in policymaking system comparability (e.g. what are the rules and dominant ideas?).

To be honest, when one of our external assessors asked me how well I thought I would do, we both smiled because the answer may be ‘not very’. In other words, the most practical lesson may be the hardest to take, although I find it comforting: the literature suggests that policymakers might ignore you for 20 years then suddenly become very (but briefly) interested in your work.

 

The slides are a bit wonky because I combined my old ppt to the Scottish Government with a new one for UNSW: Paul Cairney ANU Policy practical 22 October 2018

I wanted to compare how I describe things to (1) civil servants, (2) practitioners/ researchers, and (3) me, but who has the time/ desire to listen to 3 powerpoints in one go? If the answer is you, let me know and we’ll set up a Zoom call.


Filed under agenda setting, Evidence Based Policymaking (EBPM), IMAJINE, Policy learning and transfer

Policy in 500 words: uncertainty versus ambiguity

In policy studies, there is a profound difference between uncertainty and ambiguity:

  • Uncertainty describes a lack of knowledge or a worrying lack of confidence in one’s knowledge.
  • Ambiguity describes the ability to entertain more than one interpretation of a policy problem.

Both concepts relate to ‘bounded rationality’: policymakers do not have the ability to process all information relevant to policy problems. Instead, they employ two kinds of shortcut:

  • ‘Rational’. Pursuing clear goals and prioritizing certain sources of information.
  • ‘Irrational’. Drawing on emotions, gut feelings, deeply held beliefs, and habits.

I make an artificially binary distinction, uncertain versus ambiguous, and relate it to another binary, rational versus irrational, to point out the pitfalls of focusing too much on one aspect of the policy process:

  1. Policy actors seek to resolve uncertainty by generating more information or drawing greater attention to the available information.

Actors can try to solve uncertainty by: (a) improving the quality of evidence, and (b) making sure that there are no major gaps between the supply of and demand for evidence. Relevant debates include ‘what counts as good evidence?’ (the criteria to define scientific evidence and their relationship with other forms of knowledge, such as practitioner experience and service user feedback) and ‘what are the barriers between supply and demand?’ (the need for better ways to communicate).

  2. Policy actors seek to resolve ambiguity by focusing on one interpretation of a policy problem at the expense of another.

Actors try to resolve ambiguity by exercising power to increase attention to, and support for, their favoured interpretation of a policy problem. You will find many examples of such activity spread across the 500 and 1000 words series.

A focus on reducing uncertainty gives the impression that policymaking is a technical process in which people need to produce the best evidence and deliver it to the right people at the right time.

In contrast, a focus on reducing ambiguity gives the impression of a more complicated and political process in which actors are exercising power to compete for attention and dominance of the policy agenda. Uncertainty matters, but primarily to describe the role of a complex policymaking system in which no actor truly understands where they are or how they should exercise power to maximise their success.

Further reading:

For a longer discussion, see Fostering Evidence-informed Policy Making: Uncertainty Versus Ambiguity (PDF)

Or, if you fancy it in French: Favoriser l’élaboration de politiques publiques fondées sur des données probantes : incertitude versus ambiguïté (PDF)

Framing

The politics of evidence-based policymaking

To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty

How to communicate effectively with policymakers: combine insights from psychology and policy studies

Here is the relevant opening section in UPP:

p234 UPP ambiguity

34 Comments

Filed under 500 words, agenda setting, Evidence Based Policymaking (EBPM), public policy, Storytelling

Debating the politics of evidence-based policy

Joshua Newman has provided an interesting review of three recent books on evidence/ policy (click here). One of those books is mine: The Politics of Evidence-Based Policy Making (which you can access here).

His review is very polite, for which I thank him. I hope my brief response can be seen in a similarly positive light (well, I had hoped to make it brief). Maybe we disagree on one or two things, but often these discussions are about the things we emphasize and the way we describe similar points.

There are 5 points to which I respond because I have 5 digits on my right hand. I’d like you to think of me counting them out on my fingers. In doing so, I’ll use ‘Newman’ throughout, because that’s the academic convention, but I’d also like you to imagine me reading my points aloud and whispering ‘Joshua’ before each ‘Newman’.

  1. Do we really need to ‘take the debate forward’ so often?

I use this phrase myself, knowingly, to keep a discussion catchy, but I think it’s often misleading. I suggest you don’t get your hopes up too high when Newman raises the possibility of taking the debate forward with his concluding questions. We won’t resolve the relationship between evidence, politics & policy by pretending to reframe the same collection of questions about the prospect of political reform that people have been asking for centuries. It is useful to envisage better political systems (the subject of Newman’s concluding remarks) but I don’t think we should pretend that this is a new concern or that it will get us very far.

Indeed, my usual argument is that researchers need to do something (such as improve how we engage in the policy process) while we wait for political system reforms to happen (while doubting if they will ever happen).

Further, Newman does not propose any political reforms to address the problems he raises. Rather, for example, he draws attention to Trump to describe modern democracies as ‘not pluralist utopias’ and to identify examples in which policymakers draw primarily on beliefs, not evidence. Restating these problems does not solve them. So, what are researchers supposed to do after they grow tired of complaining that the world does not meet their hopes or expectations?

In other words, for me, (a) promoting political change and (b) acting during its absence are two sides of the same coin. We go round and round more often than we take things forward.

  2. What debate are we renaming?

Newman’s ‘we’ve heard it before’ argument seems more useful, but there is a lot to hear and relatively few people have heard it. I’d warn against the assumption that ‘I’ve heard this before’ can ever equal ‘we’ve heard it before’ (unless ‘we’ refers to a tiny group of specialists talking only to each other).

Rather, one of the most important things we can do as academics is to tell the same story to each other (to check if we understand the same story, in the same way, and if it remains useful) and to wider audiences (in a way that they can pick up and use without dedicating their career to our discipline).

Some of our most important insights endure for decades and they sometimes improve in the retelling. We apply them to new eras, and often come to the same basic conclusions, but it seems unhelpful to criticise a lack of complete novelty in individual texts (particularly when they are often designed to be syntheses). Why not use them to occasionally take a step back to discuss and clarify what we know?

Perhaps more importantly, I don’t think Newman is correct when he says that each book retells the story of the ‘research utilization’ literature. I’m retelling the story of policy theory, which describes how policymakers deal with bounded rationality in a complex policymaking environment. Policy theory’s intellectual histories often provide a very different perspective – of the policymaker trying to make good enough decisions, rather than the researcher trying to improve the uptake of their research – from the agenda inspired by Weiss et al (see for example The New Policy Sciences).

  3. Don’t just ‘get political’; understand the policy process

I draw on policy theory because it helps people understand policymaking. It would be a mistake to conclude from my book that I simply want researchers to ‘get political’. Rather, I want them to develop useful knowledge of the policy process in which they might want to engage. This knowledge is not freely available; it takes time to understand the discipline and reflect on policy dynamics.

Yet, the payoff can be profound, if only because it helps people think about the difference between two analytically separate causes of a notional ‘evidence policy gap’: (a) individuals making choices based on their beliefs and limited information (which is relatively easy to understand but also to caricature), and (b) systemic or ‘environmental’ causes (which are far more difficult to conceptualise and explain, but often more useful to understand).

  4. Don’t throw out the ‘two communities’ phrase without explaining why

Newman criticises the phrase ‘two communities’ as a description of silos in policymaking versus research, partly because (a) many policymakers use research frequently, and (b) the real divide is often between users/ non-users of research within policymaking organisations. In short, there are more than two communities.

I’d back up his published research with my anecdotal experience of giving talks to government audiences: researchers and analysts within government are often very similar in outlook to academics and they often talk in the same way as academics about the disconnect between their (original or synthetic) research and its use by their ‘operational’ colleagues.

Still, I’m not sure why Newman concludes that the ‘two communities’ phrase is ‘deeply flawed and probably counter-productive’. Yes, the world is more nuanced and less binary than ‘two communities’ suggests. Yes, the real divide may be harder to spot. Still, as Newman et al suggest: ‘Policy makers and academics should focus on bridging instruments that can bring their worlds closer together’. This bullet point from their article seems, to me, to be the point of using the phrase ‘two communities’. Maybe Caplan used the phrase differently in 1979, but to assert its historic meaning then reject the phrase’s use in modern discussion seems less useful than simply clarifying the argument in ways such as:

  • There is no simple policymaker/ academic divide, but note the major difference in requirements between (a) people who produce or distribute research without taking action, which allows them (for example) to be more comfortable with uncertainty, and (b) people who need to make choices despite having incomplete information to hand.
  • You might find a more receptive audience in one part of government (e.g. research/ analytical) than another (e.g. operational), so be careful about generalising from singular experiences.
  5. Should researchers engage in the policy process?

Newman says that each book, ‘unfairly places the burden of resolving the problem in the hands of an ill-equipped group of academics, operating outside the political system’.

I agree with Newman when he says that many researchers do not possess the skills to engage effectively in the policy process. Scientific training does not equip us with political skills. Indeed, I think you could read a few of my blog posts and conclude, reasonably, that you would want nothing more to do with the policy process because you’d be more effective by focusing on research.

The reason I put the onus back on researchers is because I am engaging with arguments like the one expressed by Newman (in other words, part of the meaning comes from the audience). Many people conclude their evidence policy discussions by identifying (or ‘reframing’) the problem primarily as the need for political reform. For me, the focus on other people changing to suit your preferences seems unrealistic and misplaced. In that context, I present the counter-argument that it may be better to adapt effectively to the policy process that exists, not the one you’d like to see. Sometimes it’s more useful to wear a coat than complain about the weather.

See also:  The Politics of Evidence 

The Politics of Evidence revisited

 

Pivot cover

2 Comments

Filed under Evidence Based Policymaking (EBPM), public policy

How can governments better collaborate to address complex problems?

Swann Kim

This is a guest post by William L. Swann (left) and Seo Young Kim (right), discussing how to use insights from the Institutional Collective Action Framework to think about how to improve collaborative governance. The full paper has been submitted to the Policy and Politics series Practical Lessons from Policy Theories.

Collective Action_1

Many public policy problems cannot be addressed effectively by a single, solitary government. Consider the problems facing the Greater Los Angeles Area, a heavily fragmented landscape of 88 cities and numerous unincorporated areas and special districts. Whether it is combatting rising homelessness, abating the country’s worst air pollution, cleaning the toxic L.A. River, or quelling gang violence, any policy alternative pursued unilaterally is limited by overlapping authority and externalities that alter the actions of other governments.

Problems of fragmented authority are not confined to metropolitan areas. They are also found in multi-level governance scenarios such as the restoration of Chesapeake Bay, as well as in international relations as demonstrated by recent global events such as “Brexit” and the U.S.’s withdrawal from the Paris Climate Agreement. In short, fragmentation problems manifest at every scale of governance, horizontally, vertically, and even functionally within governments.

What is an ‘institutional collective action’ dilemma?

In many cases governments would be better off coordinating and working together, but they face barriers that prevent them from doing so. These barriers are what the policy literature refers to as ‘institutional collective action’ (ICA) dilemmas, or collective action problems in which a government’s incentives do not align with collectively desirable outcomes. For example, all governments in a region benefit from less air pollution, but each government has an incentive to free ride and enjoy cleaner air without contributing to the cost of obtaining it.
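The free-rider logic can be sketched as a simple payoff calculation (the numbers below are hypothetical, chosen only to illustrate the dilemma; they are not part of the ICA Framework itself):

```python
# Hypothetical numbers: abatement costs the contributing government 10,
# but each contribution yields a benefit of 4 to every government in the region.
N_GOVERNMENTS = 5
COST = 10
BENEFIT_PER_CONTRIBUTOR = 4

def payoff(contributes: bool, n_other_contributors: int) -> int:
    """One government's payoff, given its own choice and the others' choices."""
    total = n_other_contributors + (1 if contributes else 0)
    return BENEFIT_PER_CONTRIBUTOR * total - (COST if contributes else 0)

# Whatever the others do, free riding pays more for each individual government...
for others in range(N_GOVERNMENTS):
    assert payoff(False, others) > payoff(True, others)

# ...yet the region as a whole is better off if everyone contributes.
all_contribute = N_GOVERNMENTS * payoff(True, N_GOVERNMENTS - 1)  # 5 * 10 = 50
all_free_ride = N_GOVERNMENTS * payoff(False, 0)                  # 5 * 0 = 0
```

The dominant individual strategy (free riding) produces the collectively worst outcome, and that gap is what ICA mechanisms – from informal networks to imposed collaboration – are meant to close.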

The ICA Framework, developed by Professor Richard Feiock, has emerged as a practical analytical instrument for understanding and improving fragmented governance. This framework assumes that governments must match the scale and coerciveness of the policy intervention (or mechanism) to the scale and nature of the policy problem to achieve efficient and desired outcomes.

For example, informal networks (a mechanism) can be highly effective at overcoming simple collective action problems. But as problems become increasingly complex, more obtrusive mechanisms, such as governmental consolidation or imposed collaboration, are needed to achieve collective goals and more efficient outcomes. The more obtrusive the mechanism, however, the more actors’ autonomy diminishes and the higher the transaction costs (monitoring, enforcement, information, and agency) of governing.

Collective Action_2

Three ways to improve institutional collaborative governance

We explored what actionable steps policymakers can take to improve their results with collaboration in fragmented systems. Our study offers three general practical recommendations based on the empirical literature that can enhance institutional collaborative governance.

First, institutional collaboration is more likely to emerge and work effectively when policymakers employ networking strategies that incorporate frequent, face-to-face interactions.

Government actors networking with popular, well-endowed actors (“bridging strategies”) as well as developing closer-knit, reciprocal ties with a smaller set of actors (“bonding strategies”) will result in more collaborative participation, especially when policymakers interact often and in-person.

Policy network characteristics are also important to consider. Research on estuary governance indicates that in newly formed, emerging networks, bridging strategies may be more advantageous, at least initially, because they can provide organizational legitimacy and access to resources. However, once collaboratives mature, developing stronger and more reciprocal bonds with fewer actors reduces the likelihood of opportunistic behavior that can hinder collaborative effectiveness.

Second, policymakers should design collaborative arrangements that reduce transaction costs which hinder collaboration.

Well-designed collaborative institutions can lower the barriers to participation and information sharing, make it easier to monitor the behaviors of partners, grant greater flexibility in collaborative work, and allow for more credible commitments from partners.

Research suggests policymakers can achieve this by

  1. identifying similarities in policy goals, politics, and constituency characteristics with institutional partners
  2. specifying rules such as annual dues, financial reporting, and making financial records reviewable by third parties to increase commitment and transparency in collaborative arrangements
  3. creating flexibility by employing adaptive agreements with service providers, especially when services have limited markets/applications and performance is difficult to measure.

Considering the context, however, is crucial. Collaboratives that thrive on informal, close-knit, reciprocal relations, for example, may be severely damaged by the introduction of monitoring mechanisms that signal distrust.

Third, institutional collaboration is enhanced by the development and harnessing of collaborative capacity.

Research suggests signaling organizational competencies and capacities, such as budget, political support, and human resources, may be more effective at lowering barriers to collaboration than ‘homophily’ (a tendency to associate with similar others in networks). Policymakers can begin building collaborative capacity by seeking political leadership involvement, granting greater managerial autonomy, and looking to higher-level governments (e.g., national, state, or provincial governments) for financial and technical support for collaboration.

What about collaboration in different institutional contexts?

Finally, we recognize that not all policymakers operate in similar institutional contexts, and collaboration can often be mandated by higher-level authorities in more centralized nations. Nonetheless, visible joint gains, economic incentives, transparent rules, and equitable distribution of joint benefits and costs are critical components of voluntary or mandated collaboration.

Conclusions and future directions

The recommendations offered here are, at best, only the tip of the iceberg of the valuable practical insight that can be gleaned from collaborative governance research. While these suggestions are consistent with empirical findings from broader public management and policy networks literatures, much could be learned from a closer inspection of the overlap between ICA studies and other streams of collaborative governance work.

Collaboration is a valuable tool of governance, and, like any tool, it should be utilized appropriately. Collaboration is not easily managed and can encounter many obstacles. We suggest that governments generally avoid collaborating unless there are joint gains that cannot be achieved alone. But the key to solving many of society’s intractable problems, or just simply improving everyday public service delivery, lies in a clearer understanding of how collaboration can be used effectively within different fragmented systems.

5 Comments

Filed under public policy

Three ways to communicate more effectively with policymakers

By Paul Cairney and Richard Kwiatkowski

Use psychological insights to inform communication strategies

Policymakers cannot pay attention to all of the things for which they are responsible, or understand all of the information they use to make decisions. Like all people, there are limits on what information they can process (Baddeley, 2003; Cowan, 2001, 2010; Miller, 1956; Rock, 2008).

They must use short cuts to gather enough information to make decisions quickly: the ‘rational’, by pursuing clear goals and prioritizing certain kinds of information, and the ‘irrational’, by drawing on emotions, gut feelings, values, beliefs, habits, schemata, scripts, and what is familiar, to make decisions quickly. Unlike most people, they face unusually strong pressures on their cognition and emotion.

Policymakers need to gather information quickly and effectively, often in highly charged political atmospheres, so they develop heuristics to allow them to make what they believe to be good choices. Perhaps their solutions seem to be driven more by their values and emotions than a ‘rational’ analysis of the evidence, often because we hold them to a standard that no human can reach.

If so, and if they have high confidence in their heuristics, they will dismiss criticism from researchers as biased and naïve. Under those circumstances, we suggest that restating the need for ‘rational’ and ‘evidence-based policymaking’ is futile, naively ‘speaking truth to power’ counterproductive, and declaring ‘policy based evidence’ defeatist.

We use psychological insights to recommend a shift in strategy for advocates of the greater use of evidence in policy. The simple recommendation, to adapt to policymakers’ ‘fast thinking’ (Kahneman, 2011) rather than bombard them with evidence in the hope that they will get round to ‘slow thinking’, is already becoming established in evidence-policy studies. However, we provide a more sophisticated understanding of policymaker psychology, to help understand how people think and make decisions as individuals and as part of collective processes. It allows us to (a) combine many relevant psychological principles with policy studies to (b) provide several recommendations for actors seeking to maximise the impact of their evidence.

To ‘show our work’, we first summarise insights from policy studies already drawing on psychology to explain policy process dynamics, and identify key aspects of the psychology literature which show promising areas for future development.

Then, we emphasise the benefit of pragmatic strategies, to develop ways to respond positively to ‘irrational’ policymaking while recognising that the biases we ascribe to policymakers are present in ourselves and our own groups. Instead of bemoaning the irrationality of policymakers, let’s marvel at the heuristics they develop to make quick decisions despite uncertainty. Then, let’s think about how to respond effectively. Instead of identifying only the biases in our competitors, and masking academic examples of group-think, let’s reject our own imagined standards of high-information-led action. This more self-aware and humble approach will help us work more successfully with other actors.

On that basis, we provide three recommendations for actors trying to engage skilfully in the policy process:

  1. Tailor framing strategies to policymaker bias. If people are cognitive misers, minimise the cognitive burden of your presentation. If policymakers combine cognitive and emotive processes, combine facts with emotional appeals. If policymakers make quick choices based on their values and simple moral judgements, tell simple stories with a hero and moral. If policymakers reflect a ‘group emotion’, based on their membership of a coalition with firmly-held beliefs, frame new evidence to be consistent with those beliefs.
  2. Identify ‘windows of opportunity’ to influence individuals and processes. ‘Timing’ can refer to the right time to influence an individual, depending on their current way of thinking, or to act while political conditions are aligned.
  3. Adapt to real-world ‘dysfunctional’ organisations rather than waiting for an orderly process to appear. Form relationships in networks, coalitions, or organisations first, then supply challenging information second. To challenge without establishing trust may be counterproductive.

These tips are designed to produce effective, not manipulative, communicators. They help foster the clearer communication of important policy-relevant evidence, rather than imply that we should bend evidence to manipulate or trick politicians. We argue that it is pragmatic to work on the assumption that people’s beliefs are honestly held, and policymakers believe that their role is to serve a cause greater than themselves. To persuade them to change course requires showing simple respect and seeking ways to secure their trust, rather than simply ‘speaking truth to power’. Effective engagement requires skilful communication and good judgement as much as good evidence.


This is the introduction to our revised and resubmitted paper to the special issue of Palgrave Communications The politics of evidence-based policymaking: how can we maximise the use of evidence in policy? Please get in touch if you are interested in submitting a paper to the series.

Full paper: Cairney Kwiatkowski Palgrave Comms resubmission CLEAN 14.7.17

2 Comments

Filed under agenda setting, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy

The impact of multi-level policymaking on the UK energy system

Cairney et al UKERC

In September, we will begin a one-year UKERC project examining current and future energy policy and multi-level policymaking and its impact on ‘energy systems’. This is no mean feat, since the meaning of policy, policymaking (or the ‘policy process’), and ‘system’ are not clear, and our descriptions of the component parts of an energy system and a complex policymaking system may differ markedly. So, one initial aim is to provide some way to turn a complex field of study into something simple enough to understand and engage with.

We do so by focusing on ‘multi-level policymaking’ – which can encompass concepts such as multi-level governance and intergovernmental relations – to reflect the fact that the responsibility for policies relevant to energy is often Europeanised, devolved, and shared between several levels of government. Brexit will have a major effect on energy and non-energy policies, and prompt the UK and devolved governments to develop new relationships, but we all need more clarity on the dynamics of current arrangements before we can talk sensibly about the future. To that end, we pursue three main work packages:

1. What is the ‘energy policymaking system’ and how does it affect the energy system?

Chaudry et al (2009: iv) define the UK energy system as ‘the set of technologies, physical infrastructure, institutions, policies and practices located in and associated with the UK which enable energy services to be delivered to UK consumers’. UK policymaking can have a profound impact, and constitutional changes might produce policy change, but their impacts require careful attention. So, we ‘map’ the policy process and the effect of policy change on energy supply and demand. Mapping sounds fairly straightforward, but it contains a series of tasks that rise in difficulty:

  1. Identify which level or type of government is responsible – ‘on paper’ and in practice – for the use of each relevant policy instrument.
  2. Identify how these actors interact to produce what we call ‘policy’, which can range from statements of intent to final outcomes.
  3. Identify an energy policy process containing many actors at many levels, the rules they follow, the networks they form, the ‘ideas’ that dominate discussion, and the conditions and events (often outside policymaker control) which constrain and facilitate action. By this stage, we need to draw on particular policy theories to identify key venues, such as subsystems, and specific collections of actors, such as advocacy coalitions, to produce a useful model of activity.

2. Who is responsible for action to reduce energy demand?

Energy demand is more challenging to policymakers than energy supply because the demand side involves millions of actors who, in the context of household energy use, also constitute the electorate. There are political tensions in making policies to reduce energy demand and carbon where this involves cost and inconvenience for private actors who do not necessarily value the societal returns achieved, and the political dynamics often differ from those of policies to regulate industrial demand. There are tensions around public perceptions of whose responsibility it is to take action – including local, devolved, national, or international government agencies – and governments look like they are trying to shift responsibility to each other or to individuals and firms.

So, there is no end of ways in which energy demand could be regulated or influenced – including energy labelling and product/building standards, emissions reduction measures, promotion of efficient generation, and buildings performance measures – but it is an area of policy which is notoriously diffuse and lacking in co-ordination. For the most part, therefore, we consider whether Brexit provides a ‘window of opportunity’ to change policy and policymaking by, for example, clarifying responsibilities and simplifying relationships.

3: Does Brexit affect UK and devolved policy on energy supply?

It is difficult for single governments to coordinate an overall energy mix to secure supply from many sources, and multi-level policymaking adds a further dimension to planning and cooperation. Yet, the effect of constitutional changes is highly uneven. For example, devolution has allowed Scotland to go its own way on renewable energy, nuclear power and fracking, but Brexit’s impact ranges from high to low. It presents new and sometimes salient challenges for cooperation to supply renewable energy but, while fracking and nuclear are often the most politically salient issues, Brexit may have relatively little impact on policymaking within the UK.

We explore the possibility that renewables policy may be most impacted by Brexit, while nuclear and fracking are examples in which Brexit may have a minimal direct impact on policy. Overall, the big debates are about the future energy mix, and how local, devolved, and UK governments balance the local environmental impacts of, and likely political opposition to, energy development against the economic and energy supply benefits.

For more details, see our 4-page summary

Powerpoint for 13.7.17

Cairney et al UKERC presentation 23.10.17

1 Comment

Filed under Fracking, public policy, UKERC

5 images of the policy process

Cairney 2017 image of the policy process

A picture tells a thousand words but, in policy studies, those words are often misleading or unclear. The most useful images can present the least useful advice, or capture a misleading metaphor. Images from the most useful theories make sense once you already know the theory, but are far more difficult to grasp initially.

So, I present two examples from each, then describe what a compromise image might look like, to combine something that is easy to pick up and use but also not misleading or merely metaphorical.

Why do we need it? It is common practice at workshops and conferences for some to present policy process images on powerpoint and for others to tweet photos of them, generally with little critical discussion of what they say and how useful they are. I’d like to see as-simple but more-useful images spread this way.

1. The policy cycle

cycle

The policy cycle is perhaps the most used and best known image. It divides the policy process into a series of stages (described in 1000 words and 500 words). It oversimplifies, and does not explain, a complex policymaking system. We would do better to imagine, for example, thousands of policy cycles interacting with each other to produce less orderly behaviour and less predictable outputs.

For students, we have dozens of concepts and theories which serve as better ways to understand policymaking.

Policymakers have more use for the cycle, to tell a story of what they’d like to do: identify aims, identify policies to achieve those aims, select a policy measure, ensure that the selection is legitimised by the population or its legislature, identify the necessary resources, implement, then evaluate the policy.

As such, this story still pops up from time to time.

Yet, most presentations from policymakers, advisers, and practitioners modify the cycle image to show how messy life really is.

Update 18.2.20: I knocked this one up in my garage for the book The Politics of Policy Analysis. See in particular what you need as an analyst versus policymaking reality, which argues that the cycle (or 5-step policy analysis) describes what policy analysts would like to do (not what happens).

cycle and cycle spirograph 18.2.20

2. The multiple streams metaphor

NASA launch

The ‘multiple streams’ approach uses metaphor to describe this messier world (described in 1000 words and 500 words). Instead of a linear cycle – in which policymakers define problems, then ask for potential solutions, then select one – we describe these ‘stages’ as independent ‘streams’. Each stream – heightened attention to a problem (problem stream), an available and feasible solution (policy stream), and the motive to select it (politics stream) – must come together during a ‘window of opportunity’ or the opportunity is lost.

Many people like MSA because it contains a flexible metaphor which is simple to pick up and use. However, it’s so flexible that I’ve seen many different ways to visualise – and make sense of – the metaphor, including literal watery streams, which suggest that the streams are hard to separate once they come together. There is the Ghostbusters metaphor, which shows that key actors (‘entrepreneurs’) help couple the streams. There is also Howlett et al’s attempt to combine the streams and cycles metaphors (reproduced here, and criticised here).

However, I’d recommend Kingdon’s space launch metaphor, in which policymakers will abort the mission unless every factor is just right.

3. The punctuated equilibrium graph

True et al figure 6.2

Punctuated equilibrium theory is one of the most important approaches to policy dynamics, now backed up with a wealth of data from the Comparative Agendas Project. The image (in True et al, 2007) describes the result of the policy process rather than the process itself. It describes government budgets in the US, although we can find very similar images from studies of budgets in many other countries and in many measures of policy change.

It sums up a profoundly important message about policy change: we find a huge number of very small changes, and a very small number of huge changes. Compare the distribution of values in this image with the ‘normal distribution’ (the dotted line). It shows a ‘leptokurtic’ distribution, with most values deviating minimally from the mean (and the mean change in each budget item is small), but with a high number of ‘outliers’.
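This pattern is easy to illustrate. The sketch below uses simulated data (my own illustrative numbers, not Comparative Agendas Project measurements): it mixes many small ‘incremental’ changes with rare large ‘punctuations’, then computes excess kurtosis to show that the mixture is strongly leptokurtic while a pure normal distribution is not.

```python
import random

random.seed(1)

# Hypothetical illustration of punctuated equilibrium: most budget changes
# are tiny adjustments; a small number are very large lurches.
def punctuated_change():
    if random.random() < 0.95:
        return random.gauss(0, 1)   # 95% small, near-zero changes
    return random.gauss(0, 15)      # 5% rare, large punctuations

changes = [punctuated_change() for _ in range(100_000)]
normal = [random.gauss(0, 1) for _ in range(100_000)]

def excess_kurtosis(xs):
    # Population excess kurtosis: 0 for a normal distribution,
    # positive for a leptokurtic (fat-tailed, sharp-peaked) one.
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / var ** 2 - 3

print(excess_kurtosis(normal))   # close to 0: mesokurtic
print(excess_kurtosis(changes))  # large and positive: leptokurtic
```

The point of the sketch is that even a small share of ‘punctuations’ is enough to produce the fat tails and sharp central peak that PET studies find in budget data.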

The image helps sum up a key aim of PET, to measure and try to explain long periods of policymaking stability, and policy continuity, disrupted by short but intense periods of instability and change. One explanation relates to ‘bounded rationality’: policymakers have to ignore almost all issues while paying attention to some. The lack of ‘macropolitical’ attention to most issues helps explain stability and continuity, while lurches of attention can help explain instability (although attention can fade before ‘institutions’ feel the need to respond).

Here I am, pointing at this graph:

4. The advocacy coalition framework ‘flow diagram’

ACF diagram

The ACF presents an ambitious image of the policy process, in which we zoom out to consider how key elements fit together in a process containing many actors and levels of government. Like many policy theories, it situates most of the ‘action’ in policy networks or subsystems, showing that some issues involve intensely politicized disputes containing many actors while others are treated as technical and processed routinely, largely by policy specialists, out of the public spotlight.

The ACF suggests that people get into politics to turn their beliefs into policy, form coalitions with people who share their beliefs, and compete with coalitions of actors who hold different beliefs. This competition takes place in a policy subsystem, in which coalitions understand new evidence through the lens of their beliefs, and exercise power to make sure that their interpretation is accepted. The other boxes describe the factors – the ‘parameters’ likely to be stable during the 10-year period of study, the partial sources of potential ‘shocks’ to the subsystem, and the need and ability of key actors to form consensus for policy change (particularly in political systems with PR elections) – which constrain and facilitate coalition action.

5. What do we need from a new image?

I recommend an image that consolidates or synthesises existing knowledge and insights. It is tempting to produce something that purports to be ‘new’ but, as with ‘new’ concepts or ‘new’ policy theories, how could we accumulate insights if everyone simply declared novelty and rejected the science of the past?

For me, the novelty should be in the presentation of the image, to help people pick up and use a wealth of policy studies which try to capture two key dynamics:

  1. Policy choice despite uncertainty and ambiguity.

Policymakers can only pay attention to a tiny proportion of issues. They use ‘rational’ and ‘irrational’ cognitive shortcuts to make decisions quickly, despite their limited knowledge of the world and the many possible ways to understand policy problems.

  2. A policy environment which constrains and facilitates choice.

Such environments are made up of:

  1. Actors (individuals and organisations) influencing policy at many levels and types of government
  2. Institutions: a proliferation of rules and norms followed by different levels or types of government
  3. Networks: relationships between policymakers and influencers
  4. Ideas: a tendency for certain beliefs or ‘paradigms’ to dominate discussion
  5. Context and events: economic, social, demographic, and technological conditions provide the context for policy choice, and routine/ unpredictable events can prompt policymaker attention to lurch at short notice.

The implications of both dynamics are fairly easy to describe in tables (for example, while describing MSA) and to cobble together quickly in a SmartArt picture:

Cairney 2017 image of the policy process

However, note at least three issues with such a visual presentation:

  1. Do we put policymakers and choice at the centre? If so, it could suggest (a bit like the policy cycle) that a small number of key actors are at the ‘centre’ of the process, when we might prefer to show that their environment, or the interaction between many actors, is more important.
  2. Do we show only the policy process or relate it to the ‘outside world’?
  3. There are many overlaps between concepts. For example, we seek to describe the use and reproduction of rules in ‘institutions’ and ‘networks’, while those rules relate strongly to ‘ideas’. Further, ‘networks’ could sum up ‘actors interacting in many levels/ types of government’. So, ideally, we’d have overlapping shapes to denote overlapping relationships and understandings, but it would really mess up the simplicity of the image.

Of course, the bigger issue is that the image I provide is really just a vehicle to put text on a screen (in the hope that it will be shared). At best it says ‘note these concepts’. It does not show causal relationships. It does not describe any substantial interaction between the concepts to show cause and effect (such as, event A prompted policy choice B).

However, if we tried to bring in that level of detail, I think we would quickly end up with the messy process already described in relation to the policy cycle. Or, we would need to provide a more specific and less generally applicable model of policymaking.

So, right now, this image is a statement of intent. I want to produce something better, but don’t yet know what ‘better’ looks like. There is no ‘general theory’ of policymaking, so can we have a general image? Or, like ‘what is policy?’ discussions, do we produce an answer largely to raise more questions?

Update: please compare with the turtle diagram, below, and explored in more depth here.

Circle image policy process 24.10.18

___

Here I am, looking remarkably pleased with my SmartArt skills


Practical Lessons from Policy Theories

These links to blog posts (the underlined headings) and tweets (with links to their full article) describe a new special issue of Policy and Politics, published in April 2018 and free to access until the end of May.

Weible Cairney abstract

Three habits of successful policy entrepreneurs

Telling stories that shape public policy

How to design ‘maps’ for policymakers relying on their ‘internal compass’

Three ways to encourage policy learning

How can governments better collaborate to address complex problems?

How do we get governments to make better decisions?

How to navigate complex policy designs

Why advocacy coalitions matter and how to think about them

None of these abstract theories provide a ‘blueprint’ for action (they were designed primarily to examine the policy process scientifically). Instead, they offer one simple insight: you’ll save a lot of energy if you engage with the policy process that exists, not the one you want to see.

Then, they describe variations on the same themes, including:

  1. There are profound limits to the power of individual policymakers: they can only process so much information, have to ignore almost all issues, and therefore tend to share policymaking with many other actors.
  2. You can increase your chances of success if you work with that insight: identify the right policymakers, the ‘venues’ in which they operate, and the ‘rules of the game’ in each venue; build networks and form coalitions to engage in those venues; shape agendas by framing problems and telling good stories, design politically feasible solutions, and learn how to exploit ‘windows of opportunity’ for their selection.

Background to the special issue

Chris Weible and I asked a group of policy theory experts to describe the ‘state of the art’ in their field and the practical lessons that they offer.

Our next presentation was at the ECPR in Oslo.

The final articles in this series are now complete, but our introduction discusses the potential for more useful contributions.

Weible Cairney next steps pic


‘Co-producing’ comparative policy research: how far should we go to secure policy impact?

See also our project website IMAJINE.

Two recent articles explore the role of academics in the ‘co-production’ of policy and/or knowledge.

Both papers suggest (I think) that academic engagement in the ‘real world’ is highly valuable, and that we should not pretend that we can remain aloof from politics when producing new knowledge (research production is political even if it is not overtly party political). They also suggest that it is fraught with difficulty and, perhaps, an often-thankless task with no guarantee of professional or policy payoffs (intrinsic motivation still trumps extrinsic motivation).

So, what should we do?

I plan to experiment a little bit while conducting some new research over the next 4 years. For example, I am part of a new project called IMAJINE, and plan to speak with policymakers, from the start to the end, about what they want from the research and how they’ll use it. My working assumption is that it will help boost the academic value and policy relevance of the research.

I have mocked up a paper abstract to describe this kind of work:

In this paper, we use policy theory to explain why the ‘co-production’ of comparative research with policymakers makes it more policy relevant: it allows researchers to frame their policy analysis with reference to the ways in which policymakers frame policy problems; and, it helps them identify which policymaking venues matter, and the rules of engagement within them.  In other words, theoretically-informed researchers can, to some extent, emulate the strategies of interest groups when they work out ‘where the action is’ and how to adapt to policy agendas to maximise their influence. Successful groups identify their audience and work out what it wants, rather than present their own fixed views to anyone who will listen.

Yet, when described so provocatively, our argument raises several practical and ethical dilemmas about the role of academic research. In abstract discussions, they include questions such as: should you engage this much with politics and policymakers, or maintain a critical distance; and, if you engage, should you simply reflect or seek to influence the policy agenda? In practice, such binary choices are artificial, prompting us to explore how to manage our engagement in politics and reflect on our potential influence.

We explore these issues with reference to a new Horizon 2020 funded project IMAJINE, which includes a work package – led by Cairney – on the use of evidence and learning from the many ways in which EU, national, and regional policymakers have tried to reduce territorial inequalities.

So, in the paper we (my future research partner and I), would:

  • Outline the payoffs to this engage-early approach. Early engagement will inform the research questions you ask, how you ask them, and how you ‘frame’ the results. It should also help produce more academic publications (which is still the key consideration for many academics), partly because this early approach will help us speak with some authority about policy and policymaking in many countries.
  • Describe the complications of engaging with different policymakers in many ‘venues’ in different countries: you would expect very different questions to arise, and perhaps struggle to manage competing audience demands.
  • Raise practical questions about the research audience, including: should we interview key advocacy groups and private sources of funding for applied research, as well as policymakers, when refining questions? I ask this question partly because it can be more effective to communicate evidence via policy influencers rather than try to engage directly with policymakers.
  • Raise ethical questions, including: what if policymaker interviewees want the ‘wrong’ questions answered? What if they are only interested in policy solutions that we think are misguided, either because the evidence-base is limited (and yet they seek a magic bullet) or their aims are based primarily on ideology (an allegedly typical dilemma regards left-wing academics providing research for right-wing governments)?

Overall, you can see the potential problems: you ‘enter’ the political arena to find that it is highly political! You find that policymakers are mostly interested in (what you believe are) ineffective or inappropriate solutions and/ or they think about the problem in ways that make you, say, uncomfortable. So, should you engage in a critical way, risking exclusion from the ‘coproduction’ of policy, or in a pragmatic way, to ‘coproduce’ knowledge and maximise your chances of impact in government?

The case study of territorial inequalities is a key source of such dilemmas …

…partly because it is difficult to tell how policymakers define and want to solve such policy problems. When defining ‘territorial inequalities’, they can refer broadly to geographical spread, such as within the EU Member States, or even within regions of states. They can focus on economic inequalities, inequalities linked strongly to gender, race or ethnicity, mental health, disability, and/ or inequalities spread across generations. They can focus on indicators of inequalities in areas such as health and education outcomes, housing tenure and quality, transport, and engagement with social work and criminal justice. While policymakers might want to address all such issues, they also prioritise the problems they want to solve and the policy instruments they are prepared to use.

When considering solutions, they can choose from three basic categories:

  1. Tax and spending to redistribute income and wealth, perhaps treating economic inequalities as the source of most others (such as health and education inequalities).
  2. The provision of public services to help mitigate the effects of economic and other inequalities (such as free healthcare and education, and public transport in urban and rural areas).
  3. The adoption of ‘prevention’ strategies to engage as early as possible in people’s lives, on the assumption that key inequalities are well-established by the time children are three years old.

Based on my previous work with Emily St Denny, I’d expect that many governments express a high commitment to reduce inequalities – and it is often sincere – but without wanting to use tax/ spending as the primary means, and faced with limited evidence on the effectiveness of public services and prevention. Or, many will prefer to identify ‘evidence-based’ solutions for individuals rather than to address ‘structural’ factors linked to factors such as gender, ethnicity, and class. This is when the production and use of evidence becomes overtly ‘political’, because at the heart of many of these discussions is the extent to which individuals or their environments are to blame for unequal outcomes, and if richer regions should compensate poorer regions.

‘The evidence’ will not ‘win the day’ in such debates. Rather, the choice will be between, for example: (a) pragmatism, to frame evidence to contribute to well-established beliefs, about policy problems and solutions, held by the dominant actors in each political system; and, (b) critical distance, to produce what you feel to be the best evidence generated in the right way, and challenge policymakers to explain why they won’t use it. I suspect that (a) is more effective, but (b) better reflects what most academics thought they were signing up to.

For more on IMAJINE, see New EU study looks at gap between rich and poor and The theory and practice of evidence-based policy transfer: can we learn how to reduce territorial inequalities?

For more on evidence/ policy dilemmas, see Kathryn Oliver and I have just published an article on the relationship between evidence and policy

 


Kathryn Oliver and I have just published an article on the relationship between evidence and policy

Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy?

“There is extensive health and public health literature on the ‘evidence-policy gap’, exploring the frustrating experiences of scientists trying to secure a response to the problems and solutions they raise and identifying the need for better evidence to reduce policymaker uncertainty. We offer a new perspective by using policy theory to propose research with greater impact, identifying the need to use persuasion to reduce ambiguity, and to adapt to multi-level policymaking systems”.

We use this table to describe how the policy process works, how effective actors respond, and the dilemmas that arise for advocates of scientific evidence: should they act this way too?

We summarise this argument in two posts for:

The Guardian If scientists want to influence policymaking, they need to understand it

Sax Institute The evidence policy gap: changing the research mindset is only the beginning

The article is part of a wider body of work in which one or both of us considers the relationship between evidence and policy in different ways, including:

Paul Cairney, Kathryn Oliver, and Adam Wellstead (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review PDF

Paul Cairney (2016) The Politics of Evidence-Based Policy Making (PDF)

Oliver, K., Innvar, S., Lorenc, T., Woodman, J. and Thomas, J. (2014a) ‘A systematic review of barriers to and facilitators of the use of evidence by policymakers’ BMC health services research, 14 (1), 2. http://www.biomedcentral.com/1472-6963/14/2

Oliver, K., Lorenc, T., & Innvær, S. (2014b) ‘New directions in evidence-based policy research: a critical analysis of the literature’, Health Research Policy and Systems, 12, 34 http://www.biomedcentral.com/content/pdf/1478-4505-12-34.pdf

Paul Cairney (2016) Evidence-based best practice is more political than it looks in Evidence and Policy

Many of my blog posts explore how people like scientists or researchers might understand and respond to the policy process:

The Science of Evidence-based Policymaking: How to Be Heard

When presenting evidence to policymakers, engage with the policy process that exists, not the process you wish existed

Policy Concepts in 1000 Words: ‘Evidence Based Policymaking’

‘Evidence-based Policymaking’ and the Study of Public Policy

How far should you go to secure academic ‘impact’ in policymaking?

Political science improves our understanding of evidence-based policymaking, but does it produce better advice?

Psychology Based Policy Studies: 5 heuristics to maximise the use of evidence in policymaking

What 10 questions should we put to evidence for policy experts?

Why doesn’t evidence win the day in policy and policymaking?

We all want ‘evidence based policy making’ but how do we do it?

How can political actors take into account the limitations of evidence-based policy-making? 5 key points

The Politics of Evidence Based Policymaking:3 messages

The politics of evidence-based best practice: 4 messages

The politics of implementing evidence-based policies

There are more posts like this on my EBPM page

I am also guest editing a series of articles for the Open Access journal Palgrave Communications on the ‘politics of evidence-based policymaking’ and we are inviting submissions throughout 2017.

There are more details on that series here.

And finally ..

… if you’d like to read about the policy theories underpinning these arguments, see Key policy theories and concepts in 1000 words and 500 words.

 

 


How far should you go to secure academic ‘impact’ in policymaking? From ‘honest brokers’ to ‘research purists’ and Machiavellian manipulators

Long read for Political Studies Association annual conference 2017 panel Rethinking Impact: Narratives of Research-Policy Relations. There is a paper too, but I’ve hidden it in the text like an Easter Egg hunt.

I’ve watched a lot of film and TV dramas over the decades. Many have the same basic theme, characters, and moral:

  1. There is a villain getting away with something, such as cheating at sport or trying to evict people to make money on a property deal.
  2. There are some characters who complain that life is unfair and there’s nothing they can do about it.
  3. A hero emerges to inspire the other characters to act as a team/ fight the system and win the day. Think of a range from Wyldstyle to Michael Corleone.

For many scientists right now, the villains are people like Trump or Farage, Trump’s election and Brexit symbolise an unfairness on a grand scale, and there’s little they can do about it in a ‘post-truth’ era in which people have had enough of facts and experts. Or, when people try to mobilise, they are unsure about what to do or how far they are willing to go to win the day.

These issues are playing out in different ways, from the March for Science to the conferences informing debates on modern principles of government-science advice (see INGSA). Yet, the basic question is the same when scientists are trying to re-establish a particular role for science in the world: can you present science as (a) a universal principle and (b) an unequivocal resource for good, producing (c) evidence so pure that it speaks for itself, regardless of (d) the context in which specific forms of scientific evidence are produced and used?

Of course not. Instead, we are trying to privilege the role of science and scientific evidence in politics and policymaking without always acknowledging that these activities are political acts:

(a) selling scientific values rather than self-evident truths, and

(b) using particular values to cement the status of particular groups at the expense of others, either within the scientific profession (in which some disciplines and social groups win systematically) or within society (in which scientific experts generally enjoy privileged positions in policymaking arenas).

Politics is about exercising power to win disputes, from visible acts to win ‘key choices’, to less visible acts to keep issues off agendas and reinforce the attitudes and behaviours that systematically benefit some groups at the expense of others.

To deny this link between science, politics and power – in the name of ‘science’ – is (a) silly, and (b) not scientific, since there is a wealth of policy science out there which highlights this relationship.

Instead, academic and working scientists should make better use of their political-thinking-time to consider this basic dilemma regarding political engagement: how far are you willing to go to make an impact and get what you want?  Here are three examples.

  1. How energetically should you give science advice?

My impression is that most scientists feel most comfortable with the unfortunate idea of separating facts from values (rejected by Douglas), and living life as ‘honest brokers’ rather than ‘issue advocates’ (a pursuit described by Pielke and critiqued by Jasanoff). For me, this is generally a cop-out since it puts the responsibility on politicians to understand the implications of scientific evidence, as if they were self-evident, rather than on scientists to explain the significance in a language familiar to their audience.

On the other hand, the alternative is not really clear. ‘Getting your hands dirty’, to maximise the uptake of evidence in politics, is a great metaphor but a hopeless blueprint, especially when you, as part of a notional ‘scientific community’, face trade-offs between doing what you think is the right thing and getting what you want.

There are 101 examples of these individual choices that make up one big engagement dilemma. One of my favourite examples from table 1 is as follows:

One argument stated frequently is that, to be effective in policy, you should put forward scientists with a particular background trusted by policymakers: white men in their 50s with international reputations and strong networks in their scientific field. This way, they resemble the profile of key policymakers who tend to trust people already familiar to them. Another is that we should widen out science and science advice, investing in a new and diverse generation of science-policy specialists, to address the charge that science is an elite endeavour contributing to inequalities.

  2. How far should you go to ensure that the ‘best’ scientific evidence underpins policy?

Kathryn Oliver and I identify the dilemmas that arise when principles of evidence-production meet (a) principles of governance and (b) real world policymaking. Should scientists learn how to be manipulative, to combine evidence and emotional appeals to win the day? Should they reject other forms of knowledge, and particular forms of governance, if they think these get in the way of the use of the best evidence in policymaking?

Cairney Oliver 2017 table 1

  3. Is it OK to use psychological insights to manipulate policymakers?

Richard Kwiatkowski and I mostly discuss how to be manipulative if you make that leap. Or, to put it less dramatically, how to identify relevant insights from psychology, apply them to policymaking, and decide how best to respond. Here, we propose five heuristics for engagement:

  1. developing heuristics to respond positively to ‘irrational’ policymaking
  2. tailoring framing strategies to policymaker bias
  3. identifying the right time to influence individuals and processes
  4. adapting to real-world (dysfunctional) organisations rather than waiting for an orderly process to appear, and
  5. recognising that the biases we ascribe to policymakers are present in ourselves and our own groups

Then there is the impact agenda, which describes something very different

I say these things to link to our PSA panel, in which Christina Boswell and Katherine Smith sum up (in their abstract) the difference between the ways in which we are expected to demonstrate academic impact, and the practices that might actually produce real impact:

“Political scientists are increasingly exhorted to ensure their research has policy ‘impact’, most notably in the form of REF impact case studies, and ‘pathways to impact’ plans in ESRC funding. Yet the assumptions underpinning these frameworks are frequently problematic. Notions of ‘impact’, ‘engagement’ and ‘knowledge exchange’ are typically premised on simplistic and linear models of the policy process, according to which policy-makers are keen to ‘utilise’ expertise to produce more effective policy interventions”.

I then sum up the same thing but with different words in my abstract:

“The impact agenda prompts strategies which reflect the science literature on ‘barriers’ between evidence and policy: produce more accessible reports, find the right time to engage, encourage academic-practitioner workshops, and hope that policymakers have the skills to understand and motive to respond to your evidence. Such strategies are built on the idea that scientists serve to reduce policymaker uncertainty, with a linear connection between evidence and policy. Yet, the literature informed by policy theory suggests that successful actors combine evidence and persuasion to reduce ambiguity, particularly when they know where the ‘action’ is within complex policymaking systems”.

The implications for the impact agenda are interesting, because there is a big difference between (a) the fairly banal ways in which we might make it easier for policymakers to see our work, and (b) the more exciting and sinister-looking ways in which we might make more persuasive cases. Yet, our incentive remains to produce the research and play it safe, producing examples of ‘impact’ that, on the whole, seem more reportable than remarkable.


How can political actors take into account the limitations of evidence-based policy-making? 5 key points

These notes are for my brief panel talk at the European Parliament-European University Institute ‘Policy Roundtable’: Evidence and Analysis in EU Policy-Making: Concepts, Practice and Governance. As you can see from the programme description, the broader theme is about how EU institutions demonstrate their legitimacy through initiatives such as stakeholder participation and evidence-based policymaking (EBPM). So, part of my talk is about what happens when EBPM does not exist.

The post is a slightly modified version of my (recorded) talk for Open Society Foundations (New York) but different audiences make sense of these same basic points in very different ways.

  1. Recognise that the phrase ‘evidence-based policy-making’ means everything and nothing

The main limitation to ‘evidence-based policy-making’ is that no-one really knows what it is or what the phrase means. So, each actor makes sense of EBPM in different ways and you can tell a lot about each actor by the way in which they answer these questions:

  • Should you use restrictive criteria to determine what counts as ‘evidence’? Some actors equate evidence with scientific evidence and adhere to specific criteria – such as evidence-based medicine’s hierarchy of evidence – to determine what is scientific. Others have more respect for expertise, professional experience, and stakeholder and service user feedback as sources of evidence.
  • Which metaphor is best: ‘evidence based’ or ‘evidence informed’? Experienced policy participants often reject ‘evidence based’ as unrealistic, preferring ‘informed’ to reflect pragmatism about mixing evidence and political calculations.
  • How far do you go to pursue EBPM? It is unrealistic to treat ‘policy’ as a one-off statement of intent by a single authoritative actor. Instead, it is made and delivered by many actors in a continuous policymaking process within a complicated policy environment (outlined in point 3). This is relevant to EU institutions with limited resources: the Commission often makes key decisions but relies on Member States to make and deliver, and the Parliament may only have the ability to monitor ‘key decisions’. It is also relevant to stakeholders trying to ensure the use of evidence throughout the process, from supranational to local action.
  • Which actors count as policymakers? Policymaking is done by ‘policymakers’, but many are unelected and the division between policymaker/ influencer is often unclear. The study of policymaking involves identifying networks of decision-making by elected and unelected policymakers and their stakeholders, while the actual practice is about deciding where to draw the line between influence and action.
  2. Respond to ‘rational’ and ‘irrational’ thought.

‘Comprehensive rationality’ describes the absence of ambiguity and uncertainty when policymakers know what problem they want to solve and how to solve it, partly because they can gather and understand all information required to measure the problem and determine the effectiveness of solutions.

Instead, we talk of ‘bounded rationality’ and how policymakers deal with it. They employ two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, habits, and familiarity to make decisions quickly.

I say ‘irrational’ provocatively, to raise a key strategic question: do you criticise emotional policymaking (describing it as ‘policy based evidence’) and try somehow to minimise it, adapt pragmatically to it, or see ‘fast thinking’ more positively in terms of ‘fast and frugal heuristics’? Regardless, policymakers will think that their heuristics make sense to them, and it can be counterproductive to simply criticise their alleged irrationality.

  3. Think about how to engage in complex systems or policy environments.

‘Policy cycle’ describes the idea that there is a core group of policymakers at the ‘centre’, making policy from the ‘top down’, and pursuing their goals in a series of clearly defined and well-ordered stages, such as: agenda setting, policy formulation, legitimation, implementation, and evaluation. In this context, one might identify how to influence a single point of central government decision-making.

However, a cycle model does not describe policymaking well. Instead, we tend to identify the role of less ordered and more unpredictable complex systems, or policy environments containing:

  • A wide range of actors (individuals and organisations) influencing policy at many levels of government. Scientists and practitioners are competing with many actors to present evidence in a particular way to secure a policymaker audience.
  • A proliferation of rules and norms maintained by different levels or types of government. Support for particular ‘evidence based’ solutions varies according to which organisation takes the lead and how it understands the problem.
  • Important relationships (‘networks’) between policymakers and powerful actors. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour particular sources of evidence and some participants over others, and there is a language – indicating what ways of thinking are in good ‘currency’ – that takes time to learn.
  • A tendency for certain ‘core beliefs’ or ‘paradigms’ to dominate discussion. Well-established beliefs provide the context for policymaking: new evidence on the effectiveness of a policy solution has to be accompanied by a shift of attention and successful persuasion.
  • Policy conditions and events that can reinforce stability or prompt policymaker attention to lurch at short notice. In some cases, social or economic ‘crises’ can prompt lurches of attention from one issue to another, and some forms of evidence can be used to encourage that shift, but major policy change is rare.

For stakeholders, an effective engagement strategy is not straightforward: it takes time to know ‘where the action is’, how and where to engage with policymakers, and with whom to form coalitions. For the Commission, it is difficult to know what will happen to policy after it is made (although we know the end point will not resemble the starting point). For the Parliament, it is difficult even to know where to look.

  4. Recognise that EBPM is only one of many legitimate ‘good governance’ principles.

There are several principles of ‘good’ policymaking, and EBPM is only one of them. Others relate to the value of pragmatism and consensus building, combining science advice with public values, improving policy delivery by generating ‘ownership’ of policy among key stakeholders, and sharing responsibility with elected national and local policymakers.

Our choices of which principles to follow, and which forms of evidence to privilege, are inextricably linked. For example, some forms of evidence gathering seem to require uniform models and limited local or stakeholder discretion to modify policy delivery. The classic example is a programme whose value is established using randomised control trials (RCTs). Others begin with local discretion, seeking evidence from stakeholders, professional groups, and service user and local practitioner experience. This principle seems to rule out the use of RCTs, at least as a source of a uniform model to be rolled out and evaluated. Of course, one can try to pursue both approaches and a compromise between them, but the outcome may not satisfy advocates of either approach to EBPM or help produce the evidence that they favour.

  5. Decide how far you’ll go to achieve EBPM.

These insights should prompt us to ask how far we are willing, and how far we should go, to promote the use of certain forms of evidence in policymaking:

  • If policymakers and the public are emotional decision-makers, should we seek to manipulate their thought processes by using simple stories with heroes, villains, and clear but rather simplistic morals?
  • If policymaking systems are so complex, should stakeholders devote huge amounts of resources to make sure they’re effective at each stage?
  • Should proponents of scientific evidence go to great lengths to make sure that EBPM is based on a hierarchy of evidence? There is a live debate on science advice to government about the extent to which scientists should be more than ‘honest brokers’.
  • Should policymakers try to direct the use of evidence in policy as well as policy itself?

Where we go from there is up to you

The value of policy theory to this topic is to help us reject simplistic models of EBPM and think through the implications of more sophisticated and complicated processes. It does not provide a blueprint for action (how could it?), but instead a series of questions that you should answer when you seek to use evidence to get what you want. They are political choices based on value judgements, not issues that can be resolved by producing more evidence.

 


Filed under Evidence Based Policymaking (EBPM), public policy

Writing a policy paper and blog post #POLU9UK

It can be quite daunting to produce a policy analysis paper or blog post for the first time. You learn about the constraints of political communication by being obliged to explain your ideas in an unusually small number of words. The short word length seems good at first, but then you realise that it makes your life harder: how can you fit all your evidence and key points in? The answer is that you can’t. You have to choose what to say and what to leave out.

You also have to make this presentation ‘not about you’. In a long essay or research report you have time to show how great you are, to a captive audience. In a policy paper, imagine that you are trying to get the attention and support of someone who may not know or care about the issue you raise. In a blog post, your audience might stop reading at any point, so every sentence counts.

There are many guides out there to help you with the practical side, including the broad guidance I give you in the module guide, and Bardach’s 8-steps. In each case, the basic advice is to (a) identify a policy problem and at least one feasible solution, and (b) tailor the analysis to your audience.


Be concise, be smart

So, for example, I ask you to keep your analysis and presentations super-short on the assumption that you have to make your case quickly to people with 99 other things to do. What can you tell someone in a half-page (to get them to read all 2 pages)? Could you explain and solve a problem if you suddenly bumped into a government minister in a lift/ elevator?

It is tempting to try to tell someone everything you know, because everything is connected and to simplify is to describe a problem simplistically. Instead, be smart enough to know that such self-indulgence won’t impress your audience. They might smile politely, but their eyes are looking at the elevator lights.

Your aim is not to give a full account of a problem – it’s to get someone important to care about it.

Your aim is not to give a painstaking account of all possible solutions – it’s to give a sense that at least one solution is feasible and worth pursuing.

Your guiding statement should be: policymakers will only pay attention to your problem if they think they can solve it, and without that solution being too costly.

Be creative

I don’t like to give you too much advice because I want you to be creative about your presentation; to be confident enough to take chances and feel that I’ll reward you for making the leap. At the very least, you have three key choices to make about how far you’ll go to make a point:

  1. Who is your audience? Our discussion of the limits to centralised policymaking suggests that your most influential audience will not necessarily be a UK government minister – but who else would it be?
  2. How manipulative should you be? Our discussions of ‘bounded rationality’ and ‘evidence-based policymaking’ suggest that policymakers combine ‘rational’ and ‘irrational’ shortcuts to gather information and make choices. So, do you appeal to their desire to set goals and gather a lot of scientific information and/or make an emotional and manipulative appeal?
  3. Are you an advocate or an ‘honest broker’? Contemporary discussions of science advice to government highlight unresolved debates about the role of unelected advisors: should you simply lay out some possible solutions or advocate one solution strongly?

Be reflective

For our purposes, there are no wrong answers to these questions. Instead, I want you to make and defend your decisions. That is the aim of your policy paper ‘reflection’: to ‘show your work’.

You still have some room to be creative: tell me what you know about policy theory and British politics and how it informed your decisions. Here are some examples, but it is up to you to decide what to highlight:

  • Show how your understanding of policymaker psychology helped you decide how to present information on problems and solutions.
  • Extract insights from policy theories, such as from punctuated equilibrium theory on policymaker attention, multiple streams analysis on timing and feasibility, or the Narrative Policy Framework (NPF) on how to tell persuasive stories.
  • Explore the implications of the lack of ‘comprehensive rationality’ and absence of a ‘policy cycle’: feasibility is partly about identifying the extent to which a solution is ‘doable’ when central governments have limited powers. What ‘policy style’ or policy instruments would be appropriate for the solution you favour?

Be a blogger

With a blog post, your audience is wider. You are trying to make an argument that will capture the attention of a more general audience (interested in politics and policy, but not specialist) that might access your post from Twitter/ Facebook or via a search engine. This produces new requirements: present a ‘punchy’ title which sums up the whole argument in under 140 characters (a statement is often better than a vague question); summarise the whole argument in (say) 100 words in the first paragraph (what is the problem and solution?); and provide more information up to a maximum of 500 words. The reader can then be invited to read the whole policy analysis.
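As an aside, those numerical limits are concrete enough to check mechanically before you submit. Here is a minimal sketch in Python, assuming the limits above (under 140 characters for the title, roughly 100 words for the opening paragraph, 500 words in total); the function name and warning wording are purely illustrative, not part of any established tool:

```python
def check_blog_post(title: str, first_paragraph: str, body: str) -> list:
    """Return a list of warnings if a draft breaches the suggested limits."""
    warnings = []
    if len(title) > 140:
        warnings.append(f"Title is {len(title)} characters; aim for under 140.")
    opener_words = len(first_paragraph.split())
    if opener_words > 100:
        warnings.append(f"Opening paragraph is {opener_words} words; aim for about 100.")
    # Total length covers the opening paragraph plus the rest of the post.
    total_words = opener_words + len(body.split())
    if total_words > 500:
        warnings.append(f"Post is {total_words} words; aim for a 500-word maximum.")
    return warnings

# A draft that fits comfortably within all three limits produces no warnings.
print(check_blog_post("Short titles work", "One sentence.", "A little more."))  # → []
```

Nothing about good writing is automatable, of course – a script like this only flags length, not persuasiveness.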

The style of blog posts varies markedly, so you should consult many examples before attempting your own (compare the LSE with The Conversation and newspaper columns to get a sense of variations in style). When you read other posts, take note of their strengths and weaknesses. For example, many posts associated with newspapers introduce a personal or case study element to ground the discussion in an emotional appeal. Sometimes this works, but sometimes it causes the reader to scroll down quickly to find the main argument. Consider if it is as, or more, effective to make your argument more direct and easy to find as soon as someone clicks the link on their phone. Many academic posts are too long (well beyond your 500-word limit), take too long to get to the point, and do not make explicit recommendations, so you should not merely emulate them. You should also not just chop down your policy paper – this is about a new kind of communication.

Be reflective once again

Hopefully, by the end, you will appreciate the transferable life skills. I have generated some uncertainty about your task to reflect the sense among many actors that they don’t really know how to make a persuasive case and who to make it to. We can follow some basic Bardach-style guidance, but a lot of this kind of work relies on trial-and-error. I maintain a short word count to encourage you to get to the point, and I bang on about ‘stories’ in our module to encourage you to make a short and persuasive story to policymakers.

This process seems weird at first, but isn’t it also intuitive? For example, next time you’re in my seminar, measure how long it takes you to get bored and look forward to the weekend. Then imagine that policymakers have the same attention span as you. That’s how long you have to make your case!

See also: Professionalism online with social media

Here is the advice that my former lecturer, Professor Brian Hogwood, gave in 1992. Has the advice changed much since then?

[Four photographs of Hogwood’s 1992 handout]


Filed under Evidence Based Policymaking (EBPM), Folksy wisdom, POLU9UK

We need better descriptions than ‘evidence-based policy’ and ‘policy-based evidence’: the case of UK government ‘troubled families’ policy

Here is the dilemma for ‘evidence-based’ ‘troubled families’ (TF) policy: there are many indicators of ‘policy based evidence’ but few (if any) feasible and ‘evidence based’ alternatives.

Viewed from the outside, TF looks like a cynical attempt to produce a quick fix to the London riots, stigmatise vulnerable populations, and hoodwink the public into thinking that the central government is controlling local outcomes and generating success.

Viewed from the inside, it is a pragmatic policy solution, informed by promising evidence which needs to be sold in the right way. For the UK government there may seem to be little alternative to this policy, given the available evidence, the need to do something for the long term and to account for itself in a Westminster system in the short term.

So, in this draft paper, I outline this disconnect between interpretations of ‘evidence based policy’ and ‘policy based evidence’ to help provide some clarity on the pragmatic use of evidence in politics:

cairney-offshoot-troubled-families-ebpm-5-9-16

See also:

Governments think it’s OK to use bad evidence to make good policy: the case of the UK Government’s ‘troubled families’

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

In each of these posts, I note that it is difficult to know how, for example, social policy scholars should respond to these issues – but that policy studies help us identify a choice between strategies. In general, pragmatic strategies to influence the use of evidence in policy include: framing issues to catch the attention or manipulate policymaker biases, identifying where the ‘action’ is in multi-level policymaking systems, and forming coalitions with like-minded and well-connected actors. In other words, to influence rather than just comment on policy, we need to understand how policymakers would respond to external evaluation. So, a greater understanding of the routine motives of policymakers can help produce more effective criticism of their problematic use of evidence. In social policy, there is an acute dilemma about the choice between engagement, to influence and be influenced by policymakers, and detachment to ensure critical distance. If choosing the latter, we need to think harder about how criticism of PBE makes a difference.


Filed under agenda setting, Evidence Based Policymaking (EBPM), Prevention policy, public policy, UK politics and policy

The Politics of Evidence-based Policymaking in 2500 words

Here is a 2500 word draft of an entry to the Oxford Research Encyclopaedia (public administration and policy) on EBPM. It brings together some thoughts in previous posts and articles

Evidence-based Policymaking (EBPM) has become one of many valence terms that seem difficult to oppose: who would not want policy to be evidence based? It appears to be the most recent incarnation of a focus on ‘rational’ policymaking, in which we could ask the same question in a more classic way: who would not want policymaking to be based on reason and collecting all of the facts necessary to make good decisions?

Yet, as we know from classic discussions, there are three main issues with such an optimistic starting point. The first is definitional: valence terms only seem so appealing because they are vague. When we define key terms, and produce one definition at the expense of others, we see differences of approach and unresolved issues. The second is descriptive: ‘rational’ policymaking does not exist in the real world. Instead, we treat ‘comprehensive’ or ‘synoptic’ rationality as an ideal-type, to help us think about the consequences of ‘bounded rationality’ (Simon, 1976). Most contemporary policy theories have bounded rationality as a key starting point for explanation (Cairney and Heikkila, 2014). The third is prescriptive. Like EBPM, comprehensive rationality seems – initially – to be unequivocally good. Yet, when we identify its necessary conditions, or what we would have to do to secure this aim, we begin to question EBPM and comprehensive rationality as an ideal scenario.

‘What is evidence-based policymaking?’ is a lot like ‘what is policy?’ but more so!

Trying to define EBPM is like magnifying the problem of defining policy. As the entries in this encyclopaedia suggest, it is difficult to say what policy is and measure how much it has changed. I use the working definition, ‘the sum total of government action, from signals of intent to the final outcomes’ (Cairney, 2012: 5) not to provide something definitive, but to raise important qualifications, including: there is a difference between what people say they will do, what they actually do, and the outcome; and, policymaking is also about the power not to do something.

So, the idea of a ‘sum total’ of policy sounds intuitively appealing, but masks the difficulty of identifying the many policy instruments that make up ‘policy’ (and the absence of others), including: the level of spending; the use of economic incentives/ penalties; regulations and laws; the use of voluntary agreements and codes of conduct; the provision of public services; education campaigns; funding for scientific studies or advocacy; organisational change; and, the levels of resources/ methods dedicated to policy implementation and evaluation (2012: 26). In that context, we are trying to capture a process in which actors make and deliver ‘policy’ continuously, not identify a set-piece event providing a single opportunity to use a piece of scientific evidence to prompt a policymaker response.

Similarly, for the sake of simplicity, we refer to ‘policymakers’ but in the knowledge that it leads to further qualifications and distinctions, such as: (1) between elected and unelected participants, since people such as civil servants also make important decisions; and (2) between people and organisations, with the latter used as a shorthand to refer to a group of people making decisions collectively and subject to rules of collective engagement (see ‘institutions’). There are blurry dividing lines between the people who make policy and the people who influence it, and decisions are made by a collection of people with formal responsibility and informal influence (see ‘networks’). Consequently, we need to make clear what we mean by ‘policymakers’ when we identify how they use evidence.

A reference to EBPM provides two further definitional problems (Cairney, 2016: 3-4). The first is to define evidence beyond the vague idea of an argument backed by information. Advocates of EBPM are often talking about scientific evidence which describes information produced in a particular way. Some describe ‘scientific’ broadly, to refer to information gathered systematically using recognised methods, while others refer to a specific hierarchy of methods. The latter has an important reference point – evidence based medicine (EBM) – in which the aim is to generate the best evidence of the best interventions and exhort clinicians to use it. At the top of the methodological hierarchy are randomized control trials (RCTs) to determine the evidence, and the systematic review of RCTs to demonstrate the replicated success of interventions in multiple contexts, published in the top scientific journals (Oliver et al, 2014a; 2014b).

This reference to EBM is crucial in two main ways. First, it highlights a basic difference in attitude between the scientists proposing a hierarchy and the policymakers using a wider range of sources from a far less exclusive list of publications: ‘The tools and programs of evidence-based medicine … are of little relevance to civil servants trying to incorporate evidence in policy advice’ (Lomas and Brown 2009: 906).  Instead, their focus is on finding as much information as possible in a short space of time – including from the ‘grey’ or unpublished/non-peer reviewed literature, and incorporating evidence on factors such as public opinion – to generate policy analysis and make policy quickly. Therefore, second, EBM provides an ideal that is difficult to match in politics, proposing: “that policymakers adhere to the same hierarchy of scientific evidence; that ‘the evidence’ has a direct effect on policy and practice; and that the scientific profession, which identifies problems, is in the best place to identify the most appropriate solutions, based on scientific and professionally driven criteria” (Cairney, 2016: 52; Stoker 2010: 53).

These differences are summed up in the metaphor ‘evidence-based’ which, for proponents of EBM, suggests that scientific evidence comes first and acts as the primary reference point for a decision: how do we translate this evidence of a problem into a proportionate response, or how do we make sure that the evidence of an intervention’s success is reflected in policy? The more pragmatic phrase ‘evidence-informed’ sums up a more rounded view of scientific evidence, in which policymakers know that they have to take into account a wider range of factors (Nutley et al, 2007).

Overall, the phrases ‘evidence-based policy’ and ‘evidence-based policymaking’ are less clear than ‘policy’. This problem puts an onus on advocates of EBPM to state what they mean, and to clarify if they are referring to an ideal-type to aid description of the real world, or advocating a process that, to all intents and purposes, would be devoid of politics (see below). The latter tends to accompany often fruitless discussions about ‘policy based evidence’, which seems to describe a range of mistakes by policymakers – including ignoring evidence, using the wrong kinds, ‘cherry picking’ evidence to suit their agendas, and/ or producing a disproportionate response to evidence – without describing a realistic standard to which to hold them.

For example, Haskins and Margolis (2015) provide a pie chart of ‘factors that influence legislation’ in the US, to suggest that research contributes 1% to a final decision compared to, for example, ‘the public’ (16%), the ‘administration’ (11%), political parties (8%) and the budget (8%). Theirs is a ‘whimsical’ exercise to lampoon the lack of EBPM in government (compare with Prewitt et al’s 2012 account built more on social science studies), but it sums up a sense in some scientific circles about their frustrations with the inability of the policymaking world to keep up with science.

Indeed, there is an extensive literature in health science (Oliver, 2014a; 2014b), emulated largely in environmental studies (Cairney, 2016: 85; Cairney et al, 2016), which bemoans the ‘barriers’ between evidence and policy. Some identify problems with the supply of evidence, recommending the need to simplify reports and key messages. Others note the difficulties in providing timely evidence in a chaotic-looking process in which the demand for information is unpredictable and fleeting. A final main category relates to a sense of different ‘cultures’ in science and policymaking which can be addressed in academic-practitioner workshops (to learn about each other’s perspectives) and more scientific training for policymakers. The latter recommendation is often based on practitioner experiences and a superficial analysis of policy studies (Oliver et al, 2014b; Embrett and Randall, 2014).

EBPM as a misleading description

Consequently, such analysis tends to introduce reference points that policy scholars would describe as ideal-types. Many accounts refer to the notion of a policy cycle, in which there is a core group of policymakers at the ‘centre’, making policy from the ‘top down’, breaking down their task into clearly defined and well-ordered stages (Cairney, 2016: 16-18). The hope may be that scientists can help policymakers make good decisions by getting them as close as possible to ‘comprehensive rationality’ in which they have the best information available to inform all options and consequences. In that context, policy studies provides two key insights (2016; Cairney et al, 2016).

  1. The role of multi-level policymaking environments, not cycles

Policymaking takes place in less ordered and predictable policy environments, exhibiting:

  • a wide range of actors (individuals and organisations) influencing policy in many levels and types of government
  • a proliferation of rules and norms followed in different venues
  • close relationships (‘networks’) between policymakers and powerful actors
  • a tendency for certain beliefs or ‘paradigms’ to dominate discussion
  • policy conditions and events that can prompt policymaker attention to lurch at short notice.

A focus on this bigger picture shifts our attention from the use of scientific evidence by an elite group of elected policymakers at the ‘top’ to its use by a wide range of influential actors in a multilevel policy process. It shows scientists that they are competing with many actors to present evidence in a particular way to secure a policymaker audience. Support for particular solutions varies according to which organisation takes the lead and how it understands the problem. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour particular sources of evidence and some participants over others, and there is a language – indicating what ways of thinking are in good ‘currency’ – that takes time to learn. Well-established beliefs provide the context for policymaking: new evidence on the effectiveness of a policy solution has to be accompanied by a shift of attention and successful persuasion. In some cases, social or economic ‘crises’ can prompt lurches of attention from one issue to another, and some forms of evidence can be used to encourage that shift – but major policy change is rare.

  2. Policymakers use two ‘shortcuts’ to deal with bounded rationality and make decisions

Policymakers deal with ‘bounded rationality’ by employing two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, beliefs, habits, and familiar reference points to make decisions quickly. Consequently, the focus of policy theories is on the links between evidence, persuasion, and framing.

Framing refers to the ways in which we understand, portray, and categorise issues. Problems are multi-faceted, but bounded rationality limits the attention of policymakers, and actors compete to highlight one image at the expense of others. The outcome of this process determines who is involved (for example, portraying an issue as technical limits involvement to experts), and responsible for policy, how much attention they pay, and what kind of solution they favour. Scientific evidence plays a part in this process, but we should not exaggerate the ability of scientists to win the day with evidence. Rather, policy theories signal the strategies that actors use to increase demand for their evidence:

  • to combine facts with emotional appeals, to prompt lurches of policymaker attention from one policy image to another (True, Jones, and Baumgartner 2007)
  • to tell simple stories which are easy to understand, help manipulate people’s biases, apportion praise and blame, and highlight the moral and political value of solutions (Jones, Shanahan, and McBeth 2014)
  • to interpret new evidence through the lens of the pre-existing beliefs of actors within coalitions, some of which dominate policy networks (Weible, Heikkila, and Sabatier 2012)
  • to produce a policy solution that is feasible and exploit a time when policymakers have the opportunity to adopt it (Kingdon 1984).

Further, the impact of a framing strategy may not be immediate, even if it appears to be successful. Scientific evidence may prompt a lurch of attention to a policy problem, prompting a shift of views in one venue or the new involvement of actors from other venues. However, it can take years to produce support for an ‘evidence-based’ policy solution, built on its technical and political feasibility (will it work as intended, and do policymakers have the motive and opportunity to select it?).

EBPM as a problematic prescription

A pragmatic solution to the policy process would involve: identifying the key venues in which the ‘action’ takes place; learning the ‘rules of the game’ within key networks and institutions; developing framing and persuasion techniques; forming coalitions with allies; and engaging for the long term (Cairney, 2016: 124; Weible et al, 2012: 9-15). The alternative is to seek reforms to make EBPM in practice more like the EBM ideal.

Yet, EBM is defendable because the actors involved agree to make primary reference to scientific evidence and be guided by what works (combined with their clinical expertise and judgement). In politics, there are other – and generally more defendable – principles of ‘good’ policymaking (Cairney, 2016: 125-6). They include the need to legitimise policy: to be accountable to the public in free and fair elections, consult far and wide to generate evidence from multiple perspectives, and negotiate policy across political parties and multiple venues with a legitimate role in policymaking. In that context, we may want scientific evidence to play a major role in policy and policymaking, but pause to reflect on how far we would go to secure a primary role for unelected experts and evidence that few can understand.

Conclusion: the inescapable and desirable politics of evidence-informed policymaking

Many contemporary discussions of policymaking begin with the naïve belief in the possibility and desirability of an evidence-based policy process free from the pathologies of politics. The buzz phrase for any complaint about politicians not living up to this ideal is ‘policy based evidence’: biased politicians decide first what they want to do, then cherry pick any evidence that backs up their case. Yet, without additional thought, they put in its place a technocratic process in which unelected experts are in charge, deciding on the best evidence of a problem and its best solution.

In other words, new discussions of EBPM raise old discussions of rationality that have occupied policy scholars for many decades. The difference since the days of Simon and Lindblom (1959) is that we now have the scientific technology and methods to gather information in ways beyond the dreams of our predecessors. Yet, such advances in technology and knowledge have only increased our ability to reduce but not eradicate uncertainty about the details of a problem. They do not remove ambiguity, which describes the ways in which people understand problems in the first place, then seek information to help them understand them further and seek to solve them. Nor do they reduce the need to meet important principles in politics, such as to sell or justify policies to the public (to respond to democratic elections) and address the fact that there are many venues of policymaking at multiple levels (partly to uphold a principled commitment, in many political systems, to devolve or share power). Policy theories do not tell us what to do about these limits to EBPM, but they help us to separate pragmatism from often-misplaced idealism.

References

Cairney, Paul (2012) Understanding Public Policy (Basingstoke: Palgrave)

Cairney, Paul (2016) The Politics of Evidence-based Policy Making (Basingstoke: Palgrave)

Cairney, Paul and Heikkila, Tanya (2014) ‘A Comparison of Theories of the Policy Process’ in Sabatier, P. and Weible, C. (eds.) Theories of the Policy Process 3rd edition (Chicago: Westview Press)

Cairney, Paul, Oliver, Kathryn and Wellstead, Adam (2016) 'To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty', Public Administration Review, Early view, DOI: 10.1111/puar.12555

Embrett, M. and Randall, G. (2014) ‘Social determinants of health and health equity policy research: Exploring the use, misuse, and nonuse of policy analysis theory’, Social Science and Medicine, 108, 147-55

Haskins, Ron and Margolis, Greg (2015) Show Me the Evidence: Obama’s fight for rigor and results in social policy (Washington DC: Brookings Institution Press)

Kingdon, J. (1984) Agendas, Alternatives and Public Policies 1st ed. (New York, NY: Harper Collins)

Lindblom, C. (1959) ‘The Science of Muddling Through’, Public Administration Review, 19: 79–88

Lomas J. and Brown A. (2009) ‘Research and advice giving: a functional view of evidence-informed policy advice in a Canadian ministry of health’, Milbank Quarterly, 87, 4, 903–926

McBeth, M., Jones, M. and Shanahan, E. (2014) ‘The Narrative Policy Framework’ in Sabatier, P. and Weible, C. (eds.) Theories of the Policy Process 3rd edition (Chicago: Westview Press)

Nutley, S., Walter, I. and Davies, H. (2007) Using evidence: how research can inform public services (Bristol: The Policy Press)

Oliver, K., Innvar, S., Lorenc, T., Woodman, J. and Thomas, J. (2014a) ‘A systematic review of barriers to and facilitators of the use of evidence by policymakers’ BMC health services research, 14 (1), 2. http://www.biomedcentral.com/1472-6963/14/2

Oliver, K., Lorenc, T., & Innvær, S. (2014b) ‘New directions in evidence-based policy research: a critical analysis of the literature’, Health Research Policy and Systems, 12, 34 http://www.biomedcentral.com/content/pdf/1478-4505-12-34.pdf

Prewitt, Kenneth, Schwandt, Thomas A. and Straf, Miron L. (eds.) (2012) Using Science as Evidence in Public Policy (Washington, DC: National Academies Press) http://www.nap.edu/catalog.php?record_id=13460

Simon, H. (1976) Administrative Behavior, 3rd ed. (London: Macmillan)

Stoker, G. (2010) ‘Translating experiments into policy’, The ANNALS of the American Academy of Political and Social Science, 628, 1, 47-58

True, J. L., Jones, B. D. and Baumgartner, F. R. (2007) 'Punctuated Equilibrium Theory' in P. Sabatier (ed.) Theories of the Policy Process, 2nd ed. (Cambridge, MA: Westview Press)

Weible, C., Heikkila, T., deLeon, P. and Sabatier, P. (2012) ‘Understanding and influencing the policy process’, Policy Sciences, 45, 1, 1–21

 


Policy bubbles and emotional policymaking

I am at a workshop today on policy 'bubbles', or (real and perceived) disproportionate policy responses. For Moshe Maor, a policy bubble describes an over-reaction to a problem, and a negative policy bubble describes an under-reaction.

For Maor, this focus on bubbles is one way into our increasing focus on the role of emotion in policymaking: we pay disproportionate attention to problems, and try to solve some but not others, based on the ways in which we engage emotionally with information.

This focus on psychology is, I think, gaining a lot of traction in political science, and it is crucial to explaining, for example, processes associated with 'evidence-based policymaking'.

In taking this agenda forward, there remain some outstanding issues:

How much of the psychology literature is already reflected in policy studies? For example, see the social construction of target populations (emotion-driven treatment of social groups), the Advocacy Coalition Framework (ACF, on loss aversion and the 'devil shift'), and the Narrative Policy Framework (NPF, on telling stories to exploit cognitive biases).

What insights remain untapped from key fields such as organisational psychology? I’ll say more about this in a forthcoming post.

How can we study the psychology of policymaking? Most policy theory begins with some reference to bounded rationality, including punctuated equilibrium theory (PET) and its identification of disproportionate information processing (policymakers pay disproportionate attention to some issues and ignore the rest). The approach is largely deductive, then empirical: we make some logical steps about the implications of bounded rationality, then study the process in that light.

Similarly, I think most studies of emotion and policymaking take insights from psychology (e.g. people value losses more than gains, or they make moral judgements then seek evidence to justify them) and apply them indirectly to policymaking (asking, for example, what is the effect of prospect theory on the behaviour of coalitions).

Can we do more, by studying the actions of policymakers directly rather than merely interpreting them? The problem, of course, is that few policymakers may be keen to engage in the types of study (e.g. experiments with control groups) that psychologists have used to establish phenomena such as fluency effects.

How does policymaker psychology fit into broader explanations of policymaking? The psychology of policymakers is one part of the story. The other is the system or environment in which they operate. So, we have some choices to make about future studies. Some might ‘zoom in’ to focus on emotionally-driven policymaking in key actors, perhaps at the centre of government.

Others may 'zoom out'. The latter may involve ascribing the same basic thought processes to a large number of actors, examining those processes at a relatively abstract level. This is the necessary consequence of trying to account for the effects of a very large number of actors, and of taking into account the role of a policymaking environment, only some of which is under the control of policymakers.

Can we really demonstrate disproportionate policy action? The idea of a proportionate policy response interests me, because I think it is always in the eye of the beholder. We make moral and other personal evaluative statements when we describe a proportionate solution in relation to the size of the problem.

For example, in tobacco policy, a well-established argument in public health is that a proportionate policy response to the health effects of smoking and passive smoking (a) has been 20-30 years behind the evidence in ‘leading countries’, and (b) has yet to happen in ‘laggard’ countries. The counterargument is that the identification of a problem does not necessitate the favoured public health solution (comprehensive tobacco control, towards the ‘endgame’ of zero smoking) because it involves major limits to personal liberties and choice.

Is emotion-driven policymaking necessarily a bad thing?

[excerpt from my 2014 PSA paper] This is partly the focus of Alter and Oppenheimer (2008) when they argue that policymakers spend disproportionate amounts of money on risks with which they are familiar, at the expense of spending money on things with more negative effects, producing a 'dramatic misallocation of funds'. They draw on Sunstein (2002), who suggests that emotional bases for attention to environmental problems from the 1970s prompted many regulations to be disproportionate to the risk involved. Further, Slovic's work suggests that people's feelings towards a risk may even be influenced by the way in which it is described, for example as a percentage versus a 1 in X probability (Slovic, 2010: xxii).
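Slovic's point about presentation format is easy to demonstrate, because the two framings are arithmetically identical. The short Python sketch below (my own illustration, not from any cited study) converts the same probability into both formats:

```python
def as_percentage(probability: float) -> str:
    """Express a probability (between 0 and 1) as a percentage string."""
    return f"{probability * 100:g}%"

def as_one_in_x(probability: float) -> str:
    """Express the same probability as a '1 in X' frequency string."""
    return f"1 in {round(1 / probability):,}"

# The same underlying risk, in two emotionally distinct framings:
risk = 0.001  # e.g. the chance of an adverse event
print(as_percentage(risk))  # 0.1%
print(as_one_in_x(risk))    # 1 in 1,000
```

Both strings describe exactly the same risk; the research Slovic summarises suggests that readers nonetheless respond to them differently, with the frequency format typically felt as more alarming.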

Haidt (2001: 815) argues that a focus on psychology can be used to improve policymaking: the identification of the ‘intuitive basis of moral judgment’ can be used to help policymakers ‘avoid mistakes’ or allow people to develop ‘programs’ or an ‘environment’ to ‘improve the quality of moral judgment and behavior’. Similarly, Alter and Oppenheimer (2009: 232) worry about medical and legal judgements swayed by fluid diagnoses and stories.

These studies compare with arguments focusing on the positive role of emotions in decision-making, either individually (see Constantinescu, 2012, drawing on Frank, 1988 and Elster, 2000 on the decisions of judges) or as part of social groups, with emotional responses providing useful information in the form of social cues (Van Kleef et al, 2010).

Policy theory does not shy away from these issues. For example, Schneider and Ingram (2014) argue that the outcomes of social construction are often dysfunctional and not based on a well-reasoned, goal-oriented strategy: ‘Studies have shown that rules, tools, rationales and implementation structures inspired by social constructions send dysfunctional messages and poor choices may hamper the effectiveness of policy’. However, part of the value of policy theory is to show that policy results from the interaction of large numbers of people and institutions. So, the poor actions of one policymaker would not be the issue; we need to know more about the cumulative effect of individual emotional decision-making within collective decision-making – not only in discrete organisations, but also in networks and systems.

And finally: if it is a bad thing, should we do something about it?

Our choice is to find it interesting and then go home (this might appeal to academics) or to try to limit the damage/maximise the benefits of policymaker psychology for policy and society (this might appeal to practitioners). There is no obvious way to do something, though, is there?
