Tag Archives: public policy

Can A Government Really Take Control Of Public Policy?

This post first appeared on the MIHE blog to help sell my book.

During elections, many future leaders give the impression that they will take control of public policy. They promise major policy change and give little indication that anything might stand in their way.

This image has been a major feature of Donald Trump’s rhetoric about his US Presidency. It has also been a feature of campaigns for the UK’s withdrawal from the European Union (‘Brexit’), presented as a way for its leaders to take back control of policy and policymaking. According to this narrative, Brexit would allow (a) the UK government to make profound changes to immigration and spending, and (b) Parliament and the public to hold the UK government directly to account, in contrast to a distant EU policy process less subject to direct British scrutiny.

Such promises are built on the false image of a single ‘centre’ of government, in which a small number of elected policymakers take responsibility for policy outcomes. This way of thinking is rejected continuously in the modern literature. Instead, policymaking is ‘multi-centric’: responsibility for policy outcomes is spread across many levels and types of government (‘centres’), and shared with organisations outside of government, to the extent that it is not possible to simply know who is in charge and whom to blame. This arrangement helps explain why leaders promise major policy change but most outcomes represent a minor departure from the status quo.

Some studies of politics relate this arrangement to the choice to share power across many centres. In the US, a written constitution ensures power sharing across different branches (executive, legislative, judicial) and between federal and state or local jurisdictions. In the UK, central government has long shared power with EU, devolved, and local policymaking organisations.

However, policy theories show that most aspects of multi-centric governance are necessary. The public policy literature provides many ways to describe such policy processes, but two are particularly useful.

The first approach is to explain the diffusion of power with reference to an enduring logic of policymaking, as follows:

  • The size and scope of the state is so large that it is always in danger of becoming unmanageable. Policymakers manage complexity by breaking the state’s component parts into policy sectors and sub-sectors, with power spread across many parts of government.
  • Elected policymakers can only pay attention to a tiny proportion of issues for which they are responsible. They pay attention to a small number and ignore the rest. They delegate policymaking responsibility to other actors such as bureaucrats, often at low levels of government.
  • At this level of government and specialisation, bureaucrats rely on specialist organisations for information and advice. Those organisations trade that information/advice and other resources for access to, and influence within, the government.
  • Most public policy is conducted primarily through small and specialist ‘policy communities’ that process issues at a level of government not particularly visible to the public, and with minimal senior policymaker involvement.

This description suggests that senior elected politicians are less important than people think, their impact on policy is questionable, and elections may not provide major changes in policy. Most decisions are taken in their name but without their intervention.

A second, more general, approach is to show that elected politicians deal with such limitations by combining cognition and emotion to make choices quickly. Although such actions allow them to be decisive, they occur within a policymaking environment over which governments have limited control. Government bureaucracies only have the coordinative capacity to direct policy outcomes in a small number of high priority areas. In most other cases, policymaking is spread across many venues, each with its own rules, networks, ways of seeing the world, and ways of responding to socio-economic factors and events.

In that context, we should always be sceptical when election candidates and referendum campaigners (or, in many cases, leaders of authoritarian governments) make such promises about political leadership and government control.

A more sophisticated knowledge of policy processes allows us to identify the limits to the actions of elected policymakers, and develop a healthier sense of pragmatism about the likely impact of government policy. The question of our age is not: how can governments take back control? Rather, it is: how can we hold policymakers to account in a complex system over which they have limited knowledge and even less control?

Filed under public policy, UK politics and policy

Institutionalising preventive health: what are the key issues?

By Paul Cairney and John Boswell. This post first appeared on the Public Health Reform Scotland blog.

On the 17th May, Professor Paul Cairney (University of Stirling) and Dr John Boswell (University of Southampton) led a discussion on ‘institutionalising’ preventive health with key people working with the Scottish Government and COSLA to reform public health in Scotland, including members of the Programme Board, the Oversight Board, Commission leads and members of the senior teams in NHS Health Scotland and Public Health and Intelligence. They drew on their published work, co-authored with Dr Emily St Denny (University of Stirling), to examine the role of evidence in policy and the lessons from comparable experiences in other public health agencies (in England, New Zealand and Australia).

This post summarises their presentation, reflections from the panel, group-work in the afternoon, and post-event feedback.

The Academic Argument

Governments face two major issues when they try to improve population health and reduce health inequalities:

  1. Should they ‘mainstream’ policies – to help prevent ill health and reduce health inequalities – across government and/ or set up a dedicated government agency?
  2. Should an agency ‘speak truth to power’ and seek a high profile to set the policy agenda?

Our research provides three messages to inform policy and practice:

  1. When governments have tried to mainstream ‘preventive’ policies, they have always struggled to explain what prevention means and to reform services to make them more preventive than reactive.
  2. Public health agencies could set a clearer and more ambitious policy agenda. However, successful agencies keep a low profile and make realistic demands for policy change. In the short term, they measure success according to their own survival and their ability to maintain the positive attention of policymakers.
  3. Advocates of policy change often describe ‘evidence based policy’ as the answer. However, a comparison between (a) specific tobacco policy change and (b) very general prevention policy shows that the latter’s ambiguity hinders the use of evidence for policy. Governments use three different models of evidence-informed policy. These models are internally consistent but they draw on assumptions and practices that are difficult to mix and match. Effective evidence use requires clear aims driven by political choice.

Overall, we warn against treating any response – (a) the idiom ‘prevention is better than cure’, (b) setting up a public health agency, or (c) seeking ‘evidence based policy’ – as a magic bullet. Major public health changes require policymakers to define their aims, and agencies to endure long enough to influence policy and encourage the consistent use of models of evidence-informed policy.

The Panel Discussion

The panel discussion produced a series of positive and sensible suggestions about the way forward, including the need to:

  • Make a strong political case for the idea of a ‘social return on investment’, in which every £1 spent on preventive work produces far more valuable long-term returns (a simple worked illustration follows this list).
  • Establish respect for the work of a public health agency in a political context.
  • Build on the fact that the broad argument for prevention has been won within Scottish central and local government.
  • Ensure a shift in culture, to maximise partnership working and foster leadership skills among a larger number of people than would be associated with a hierarchical model of leadership.
  • Take forward work by the Christie Commission on reforming public services (such as to ‘empower individuals and communities’, ‘integrate service provision’, ‘prevent negative outcomes from arising’, and ‘become more efficient’).
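
To make the ‘social return on investment’ point more concrete, here is a minimal worked illustration using hypothetical numbers rather than Scottish data: SROI is commonly expressed as the present value of long-term social benefits divided by the amount invested. If £1 spent on preventive work is expected to avert £3 of future treatment and crisis-response costs in present-value terms, the SROI is 3:1 – £3 of social value for every £1 invested. The political difficulty is that the £1 must be spent now, while the benefits are uncertain, arrive later, and often accrue to a different budget.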

However, we noted that Christie – and the Scottish Government’s ‘decisive shift to prevention’ – took place eight years ago. We also describe (in Why Isn’t Government Policy More Preventive?) a historic tendency for the ‘same cycle to be repeated without resolution’: an ‘initial period of enthusiasm and activity’ is replaced in a few years by ‘disenchantment and inactivity’.

In that context, our challenge is: what will make the difference this time?

The Group Discussion

The group discussion took on a ‘world café’ format in which people moved around the room, providing ideas according to theme. The main questions – and three key answers per question – were as follows:

How can we engage well with members of the public?

  1. Establish a brand, digital presence, public role, and approach to ‘social marketing’.
  2. Choose a consistent model of ‘co-production’ based on what you want from your relationship with service users.
  3. Choose how to balance the need to give consistent population-wide advice with the need to tailor advice to specific communities.

How can we encourage and maintain a public health community?

  1. Address perceptions of power and status in the NHS and local government.
  2. Clarify what evidence counts, and how to gather and use it.
  3. Balance the need for modest ‘quick wins’ (for the endurance of Public Health Scotland – PHS) with the need to maintain an ambitious advocacy-focused agenda (for community morale).

How can the NHS and local government work well in partnership?

  1. Address immediate important issues: contracts of employment, union recognition and support, location.
  2. Identify cross-system partnership issues: the boundaries between NHS/ Local authority work, working with local governments directly or via COSLA, how to balance your time between core work and partnership work, and how to work with each other’s stakeholders.
  3. Address the possible tensions between national NHS work and local variation and accountability.

How can PHS keep public health high on the ministerial agenda?

  1. Use advocacy to generate public attention to evidence-informed policy solutions.
  2. Frame solutions in different ways to different audiences, to appeal to national ministers and local politicians.
  3. Generate an understanding of how to work closely with stakeholders and policymakers without undermining an image of PHS independence.

How can PHS focus on the bigger picture?

  1. Develop a strategy to stay informed about, and seek to influence, policies reserved to the UK.
  2. Develop a more detailed ‘health in all policies’ strategy: clarify aims, identify key policymakers, develop a strategy to influence policymakers beyond ‘health’.
  3. Develop a strategy to deal with a complex media landscape: from personal relationships with key journalists to less personal messaging for social media.

Post Event Feedback

Feedback from the event was generally positive. Attendees appreciated the time and space to come together with PHS team leaders to discuss next steps. The feedback suggests that the academic presentation helped challenge or shape group assumptions, by:

  • Questioning whether attendees agreed on key issues. What is prevention? What counts as good evidence? What models of evidence-informed policy should we recommend? From whom should we learn?
  • Shifting attitudes about what counts as agency success (survival!) and what strategies help achieve it (such as working by stealth rather than always speaking truth to power).

Next Steps

From this discussion, it is clear that Public Health Scotland will happen, and that its general remit and ambition are broadly settled. However, ensuring that PHS becomes successful requires grappling with the inevitable dilemmas that confront policymakers – and advisers to policymakers – in such complex terrain. Perhaps the key theme of the reflective discussion was the role of clear choices to address important trade-offs:

  1. balancing the imperative to speak ‘uncomfortable truths’ with the need to retain the trust and attention of government
  2. pursuing evidence-informed policymaking but with sufficient flexibility to enable cooperation across different approaches
  3. choosing with whom to collaborate to maximise impact but maintain credibility
  4. working out how to retain long-term support from the public health community in the face of short-term disagreements and disappointments
  5. deciding whether to work for the public (in the background) or with the public (in the foreground) in pursuit of preventive aims.

Some of these strategic choices are more pressing than others. Some can be resolved decisively while others will require an ongoing balancing act. However, each choice requires a commitment to realistic and continuous dialogue and reflection on (a) what PHS can seek to achieve, and (b) what it can realistically expect central and local governments to do.

Filed under Public health, public policy, Scottish politics

Managing expectations about the use of evidence in policy

Notes for the #transformURE event hosted by Nuffield, 25th September 2018

I like to think that I can talk with authority on two topics that, much like a bottle of Pepsi and a pack of Mentos, you should generally keep separate:

  1. When talking at events on the use of evidence in policy, I say that you need to understand the nature of policy and policymaking to understand the role of evidence in it.
  2. When talking with students, we begin with the classic questions ‘what is policy?’ and ‘what is the policy process?’, and I declare that we don’t know the answer. We define policy to show the problems with all definitions of policy, and we discuss many models and theories that each capture only one part of the process. There is no ‘general theory’ of policymaking.

The problem, when you put together those statements, is that you need to understand the role of evidence within a policy process that we don’t really understand.

It’s an OK conclusion if you just want to declare that the world is complicated, but not if you seek ways to change it or operate more effectively within it.

Put less gloomily:

  • We have ways to understand key parts of the policy process. They are not ready-made to help us understand evidence use, but we can use them intelligently.
  • Most policy theories exist to explain policy dynamics, not to help us adapt effectively to them, but we can derive general lessons with often-profound implications.

Put even less gloomily, it is not too difficult to extract/ synthesise key insights from policy theories, explain their relevance, and use them to inform discussions about how to promote your preferred form of evidence use.

The only remaining problem is that, although the resultant advice looks quite straightforward, it is far easier said than done. The proposed actions are more akin to the Labours of Hercules than [PAC: insert reference to something easier].

They include:

  1. Find out where the ‘action’ is, so that you can find the right audience for your evidence. Why? There are many policymakers and influencers spread across many levels and types of government.
  2. Learn and follow the ‘rules of the game’. Why? Each policymaking venue has its own rules of engagement and evidence gathering, and the rules are often informal and unwritten.
  3. Gain access to ‘policy networks’. Why? Most policy is processed at a low level of government, beyond the public spotlight, between relatively small groups of policymakers and influencers. They build up trust as they work together, learning who is reliable and authoritative, and converging on how to use evidence to understand the nature and solution to policy problems.
  4. Learn the language. Why? Each venue has its own language to reflect dominant ideas, beliefs, or ways to understand a policy problem. In some arenas, there is a strong respect for a ‘hierarchy’ of evidence. In others, the key reference point may be value for money. In some cases, the language reflects the closing-off of some policy solutions (such as redistributing resources from one activity to another).
  5. Exploit windows of opportunity. Why? Events, and changes in socioeconomic conditions, often prompt shifts of attention to policy issues. ‘Policy entrepreneurs’ lie in wait for the right time to exploit a shift in the motive and opportunity of a policymaker to pay attention to and try to solve a problem.

So far so good, until you consider the effort it would take to achieve any of these things: you may need to devote the best part of your career to these tasks with no guarantee of success.

Put more positively, it is better to be equipped with these insights, and to appreciate the limits to our actions, than to think we can use top tips to achieve ‘research impact’ in a more straightforward way.

Kathryn Oliver and I describe these ‘how to’ tips in this post and, in this article in Political Studies Review, use a wider focus on policymaking environments to produce a more realistic sense of what individual researchers – and research-producing organisations – could achieve.

There is some sensible-enough advice out there for individuals – produce good evidence, communicate it well, form relationships with policymakers, be available, and so on – but I would exercise caution when it begins to recommend being ‘entrepreneurial’. The opportunities to be entrepreneurial are not shared equally, most entrepreneurs fail, and we can likely better explain their success with reference to their environment than their skill.

Filed under agenda setting, Evidence Based Policymaking (EBPM), public policy, UK politics and policy

Why don’t policymakers listen to your evidence?

Since 2016, my most common academic presentation to interdisciplinary scientist/ researcher audiences is a variant of the question, ‘why don’t policymakers listen to your evidence?’

I tend to provide three main answers.

1. Many policymakers have many different ideas about what counts as good evidence

Few policymakers know or care about the criteria developed by some scientists to describe a hierarchy of scientific evidence. For some scientists, at the top of this hierarchy are the randomised controlled trial (RCT) and the systematic review of RCTs, with expertise much further down the list, followed by practitioner experience and service user feedback near the bottom.

Yet, most policymakers – and many academics – prefer a wider range of sources of information, combining their own experience with information ranging from peer reviewed scientific evidence and the ‘grey’ literature, to public opinion and feedback from consultation.

While it may be possible to persuade some central government departments or agencies to privilege scientific evidence, they also pursue other key principles, such as fostering consensus-driven policymaking or shifting from centralist to localist practices.

Consequently, they often only recommend interventions rather than impose one uniform evidence-based position. If local actors favour a different policy solution, the same type of evidence may have more or less effect in different parts of government.

2. Policymakers have to ignore almost all evidence and almost every decision taken in their name

Many scientists articulate the idea that policymakers and scientists should cooperate to use the best evidence to determine ‘what works’ in policy (in forums such as INGSA, European Commission, OECD). Their language is often reminiscent of 1950s discussions of the pursuit of ‘comprehensive rationality’ in policymaking.

The key difference is that EBPM is often described as an ideal by scientists, to be compared with the more disappointing processes they find when they engage in politics. In contrast, ‘comprehensive rationality’ is an ideal-type, used to describe what cannot happen, and the practical implications of that impossibility.

The ideal-type involves a core group of elected policymakers at the ‘top’, identifying their values or the problems they seek to solve, and translating their policies into action to maximise benefits to society, aided by neutral organisations gathering all the facts necessary to produce policy solutions. Yet, in practice, they are unable to: separate values from facts in any meaningful way; rank policy aims in a logical and consistent manner; gather information comprehensively, or possess the cognitive ability to process it.

Instead, Herbert Simon famously described policymakers addressing ‘bounded rationality’ by using ‘rules of thumb’ to limit their analysis and produce ‘good enough’ decisions. More recently, punctuated equilibrium theory uses bounded rationality to show that policymakers can only pay attention to a tiny proportion of their responsibilities, which limits their control of the many decisions made in their name.

More recent discussions focus on the ‘rational’ short cuts that policymakers use to identify good enough sources of information, combined with the ‘irrational’ ways in which they use their beliefs, emotions, habits, and familiarity with issues to identify policy problems and solutions (see this post on the meaning of ‘irrational’). Or, they explore how individuals communicate their narrow expertise within a system of which they have almost no knowledge. In each case, ‘most members of the system are not paying attention to most issues most of the time’.

This scarcity of attention helps explain, for example, why policymakers ignore most issues in the absence of a focusing event, why policymaking organisations routinely miss key elements when they search for information, and why organisations fail to respond proportionately to events or changing circumstances.

In that context, attempts to describe a policy agenda focusing merely on ‘what works’ are based on misleading expectations. Rather, we can describe key parts of the policymaking environment – such as institutions, policy communities/ networks, or paradigms – as a reflection of the ways in which policymakers deal with their bounded rationality and lack of control of the policy process.

3. Policymakers do not control the policy process (in the way that a policy cycle suggests)

Scientists often appear to be drawn to the idea of a linear and orderly policy cycle with discrete stages – such as agenda setting, policy formulation, legitimation, implementation, evaluation, policy maintenance/ succession/ termination – because it offers a simple and appealing model which gives clear advice on how to engage.

Indeed, the stages approach began partly as a proposal to make the policy process more scientific and based on systematic policy analysis. It offers an idea of how policy should be made: elected policymakers in central government, aided by expert policy analysts, make and legitimise choices; skilful public servants carry them out; and, policy analysts assess the results with the aid of scientific evidence.

Yet, few policy theories describe this cycle as useful, while most – including the advocacy coalition framework and the multiple streams approach – are based on a rejection of the explanatory value of orderly stages.

Policy theories also suggest that the cycle provides misleading practical advice: you will generally not find an orderly process with a clearly defined debate on problem definition, a single moment of authoritative choice, and a clear chance to use scientific evidence to evaluate policy before deciding whether or not to continue. Instead, the cycle exists as a story for policymakers to tell about their work, partly because it is consistent with the idea of elected policymakers being in charge and accountable.

Some scholars also question the appropriateness of a stages ideal, since it suggests that there should be a core group of policymakers making policy from the ‘top down’ and obliging others to carry out their aims, which does not leave room for, for example, the diffusion of power in multi-level systems, or the use of ‘localism’ to tailor policy to local needs and desires.

Now go to:

What can you do when policymakers ignore your evidence?

Further Reading

The politics of evidence-based policymaking

The politics of evidence-based policymaking: maximising the use of evidence in policy

Images of the policy process

How to communicate effectively with policymakers

Special issue in Policy and Politics called ‘Practical lessons from policy theories’, which includes how to be a ‘policy entrepreneur’.

See also the 750 Words series to explore the implications for policy analysis

Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, Public health, public policy

Policy in 500 words: uncertainty versus ambiguity

In policy studies, there is a profound difference between uncertainty and ambiguity:

  • Uncertainty describes a lack of knowledge or a worrying lack of confidence in one’s knowledge.
  • Ambiguity describes the ability to entertain more than one interpretation of a policy problem.

Both concepts relate to ‘bounded rationality’: policymakers do not have the ability to process all information relevant to policy problems. Instead, they employ two kinds of shortcut:

  • ‘Rational’. Pursuing clear goals and prioritizing certain sources of information.
  • ‘Irrational’. Drawing on emotions, gut feelings, deeply held beliefs, and habits.

I make an artificially binary distinction, uncertain versus ambiguous, and relate it to another binary, rational versus irrational, to point out the pitfalls of focusing too much on one aspect of the policy process:

  1. Policy actors seek to resolve uncertainty by generating more information or drawing greater attention to the available information.

Actors can try to solve uncertainty by: (a) improving the quality of evidence, and (b) making sure that there are no major gaps between the supply of and demand for evidence. Relevant debates include ‘what counts as good evidence?’ (focusing on the criteria used to define scientific evidence and its relationship with other forms of knowledge, such as practitioner experience and service user feedback) and ‘what are the barriers between supply and demand?’ (focusing on the need for better ways to communicate).

  2. Policy actors seek to resolve ambiguity by focusing on one interpretation of a policy problem at the expense of another.

Actors try to solve ambiguity by exercising power to increase attention to, and support for, their favoured interpretation of a policy problem. You will find many examples of such activity spread across the 500 and 1000 Words series.

A focus on reducing uncertainty gives the impression that policymaking is a technical process in which people need to produce the best evidence and deliver it to the right people at the right time.

In contrast, a focus on reducing ambiguity gives the impression of a more complicated and political process in which actors are exercising power to compete for attention and dominance of the policy agenda. Uncertainty matters, but primarily to describe the role of a complex policymaking system in which no actor truly understands where they are or how they should exercise power to maximise their success.

Further reading:

For a longer discussion, see Fostering Evidence-informed Policy Making: Uncertainty Versus Ambiguity (PDF)

Or, if you fancy it in French: Favoriser l’élaboration de politiques publiques fondées sur des données probantes : incertitude versus ambiguïté (PDF)

Framing

The politics of evidence-based policymaking

To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty

How to communicate effectively with policymakers: combine insights from psychology and policy studies

The relevant opening section is on p. 234 of UPP, on ambiguity.

Filed under 500 words, agenda setting, Evidence Based Policymaking (EBPM), public policy, Storytelling

What do we need to know about the politics of evidence-based policymaking?

Today, I’m helping to deliver a new course – Engaging Policymakers Training Programme – piloted by the Alliance for Useful Evidence and UCL. Right now, it’s for UCL staff (and mostly early career researchers). My bit is about how we can better understand the policy process so that we can engage in it more effectively. I have reproduced the brief guide below (for my two 2-hour sessions as part of a wider block). If anyone else is delivering something similar, please let me know. We could compare notes.

This module will be delivered in two parts to combine theory and practice

Part 1: What do we need to know about the politics of evidence-based policymaking?

Policy theories provide a wealth of knowledge about the role of evidence in policymaking systems. They prompt us to understand and respond to two key dynamics:

  1. Policymaker psychology. Policymakers combine rational and irrational shortcuts to gather information and make good enough decisions quickly. To appeal to rational shortcuts and minimise cognitive load, we reduce uncertainty by providing syntheses of the available evidence. To appeal to irrational shortcuts and engage emotional interest, we reduce ambiguity by telling stories or framing problems in specific ways.
  2. Complex policymaking environments. These processes take place in the context of a policy environment out of the control of individual policymakers. That environment consists of many actors spread across many levels and types of government, who engage with institutions and networks (each with its own informal and formal rules), respond to socioeconomic conditions and events, and learn how to engage with dominant ideas or beliefs about the nature of the policy problem. In other words, there is no policy cycle or obvious stage in which to get involved.

In this seminar, we discuss how to respond effectively to these dynamics. We focus on unresolved issues:

  1. Effective engagement with policymakers requires storytelling skills, but do we possess them?
  2. It requires a combination of evidence and emotional appeals, but is it ethical to do more than describe the evidence?
  3. The absence of a policy cycle, and the presence of an ever-shifting context, requires us to engage for the long term, to form alliances, learn the rules, and build up trust in the messenger. However, do we have the time, and how should we invest it?

The format will be relatively informal. Cairney will begin by making some introductory points (not a powerpoint driven lecture) and encourage participants to relate the three questions to their research and engagement experience.

Gateway to further reading:

  • Paul Cairney and Richard Kwiatkowski (2017) ‘How to communicate effectively with policymakers: combine insights from psychology and policy studies’, Palgrave Communications
  • Paul Cairney and Kathryn Oliver (2017) ‘Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy?’ Health Research Policy and Systems (HARPS), DOI: 10.1186/s12961-017-0192-x
  • Paul Cairney, Kathryn Oliver, and Adam Wellstead (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review, Early View (forthcoming) DOI:10.1111/puar.12555 PDF

Part 2: How can we respond pragmatically and effectively to the politics of EBPM?

In this seminar, we move from abstract theory and general advice to concrete examples and specific strategies. Each participant should come prepared to speak about their research and present a theoretically informed policy analysis in 3 minutes (without the aid of powerpoint). Their analysis should address:

  1. What policy problem does my research highlight?
  2. What are the most technically and politically feasible solutions?
  3. How should I engage in the policy process to highlight these problems and solutions?

After each presentation, each participant should be prepared to ask questions about the problem raised and the strategy to engage. Finally, to encourage learning, we will reflect on the memorability and impact of presentations.

Powerpoint: Paul Cairney A4UE UCL 2017

Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy

How to write theory-driven policy analysis

Writing theory driven policy analysis 10.11.17

(or right click to download this lecture which accompanies my MPP)

Here is a guide to writing theory-driven policy analysis. Your aim is to identify a policy problem and solution, know your audience, and account for the complexity of policymaking.

At first, it may seem like a daunting task to put together policy analysis and policy theory. On its own, policy analysis seems difficult but relatively straightforward: use evidence to identify and measure a policy problem, compare the merits of one or more solutions, and make a recommendation on the steps to take us from policy to action.

However, policy process research tells us that people will engage emotionally with that evidence, and that policymakers operate in a complex system of which they have very limited knowledge and control.

So, how can we produce a policy analysis paper to which people will pay attention, and respond positively and effectively, under such circumstances? I focus on developing the critical analysis that will help you produce effective and feasible analysis. To do so, I show how policy analysis forms part of a collection of exercises to foster analysis informed by theory and reflection.

Aims of this document:

  1. Describe the context. There are two fields of study – theory and analysis – which do not always speak to each other. Theory can inform analysis, but it is not always clear how. I show the payoff to theory-driven policy analysis and the difference between it and regular analysis. Note the two key factors that policy analysis should address: your audience will engage emotionally with your analysis, and the feasibility of your solutions depends on the complexity of the policy environment.
  2. Describe how the coursework helps you combine policy theory and policy analysis. Policy analysis is one of four tasks. There is a reflection, to let you ‘show your work’: how your knowledge of policy theory guides your description of a problem and feasible solutions. The essay allows you to expand on theory, to describe how and why policy changes (and therefore what a realistic policy analysis would look like). The blogs encourage new communication skills. In one, you explore how you would expect a policymaker or influencer to sell the recommendations in your policy analysis. In another, you explain complex concepts to a non-academic audience.

Background notes.

I have written this document as if part of a book to be called Teaching Public Policy and co-authored with Dr Emily St Denny.

For that audience, I have two aims: (1) to persuade policy scholars-as-teachers to adopt this kind of coursework in their curriculum; and, (2) to show students how to complete it effectively.

If you prefer shorter advice, see Writing a policy paper and blog post and Writing an essay on politics, policymaking, and policy change.

If you are interested in more background reading, see: The New Policy Sciences (by Paul Cairney and Chris Weible) which describes the need to combine policy theory-driven research with policy analysis; and, Practical Lessons from Policy Theories which describes eight attempts by scholars to translate policy theory into lessons that can be used for policy analysis.

The theories make more sense if you have read the corresponding 1000 Words posts (based on Cairney, 2012). Some of the forthcoming text will look familiar if you read my blog because I am consolidating several individual posts into an overall discussion.

I’m not quite there yet (the chapter is a first draft, a bit scrappy at times, and longer than a chapter should be), so all comments welcome (in the comments bit).

Writing theory driven policy analysis 10.11.17

Cairney 2017 image of the policy process

Filed under public policy

#EU4Facts: 3 take-home points from the JRC annual conference

See EU4FACTS: Evidence for policy in a post-fact world

The JRC’s annual conference has become a key forum in which to discuss the use of evidence in policy. At this scale, in which many hundreds of people attend plenary discussions, it feels like an annual mass rally for science; a ‘call to arms’ to protect the role of science in the production of evidence, and the protection of evidence in policy deliberation. There is not much discussion of storytelling, but we tell each other a fairly similar story about our fears for the future unless we act now.

Last year, the main story was of fear for the future of heroic scientists: the rise of Trump and the Brexit vote prompted many discussions of post-truth politics and reduced trust in experts. An immediate response was to describe attempts to come together, and stick together, to support each other’s scientific endeavours during a period of crisis. There was little call for self-analysis and reflection on the contribution of scientists and experts to barriers between evidence and policy.

This year was a bit different. There was the same concern for reduced trust in science, evidence, and/ or expertise, and some references to post-truth politics and populism, but with some new voices describing the positive value of politics, often when discussing the need for citizen engagement, and of the need to understand the relationship between facts, values, and politics.

For example, a panel on psychology opened up the possibility that we might consider our own politics and cognitive biases while we identify them in others, and one panellist spoke eloquently about the importance of narrative and storytelling in communicating to audiences such as citizens and policymakers.

A focus on narrative is not new, but it provides a challenging agenda when interacting with a sticky story of scientific objectivity. For the unusually self-reflective, it also reminds us that our annual discussions are not particularly scientific; the usual rules to assess our statements do not apply.

As in studies of policymaking, we can say that there is high support for such stories when they remain vague and driven more by emotion than the pursuit of precision. When individual speakers try to make sense of the same story, they do it in different – and possibly contradictory – ways. As in policymaking, the need to deliver something concrete helps focus the mind, and prompts us to make choices between competing priorities and solutions.

I describe these discussions in two ways: tables, in which I try to boil down each speaker’s speech into a sentence or two (you can get their full details in the programme and the speaker bios); and a synthetic discussion of the top 3 concerns, paraphrasing and combining arguments from many speakers:

1. What are facts?

The key distinction began as one between politics, values, and facts, which is impossible to maintain in practice.

Yet, subsequent discussion revealed a more straightforward distinction between facts and opinion, ‘fake news’, and lies. The latter sums up an ever-present fear of the diminishing role of science in an alleged ‘post truth’ era.

2. What exactly is the problem, and what is its cause?

The tables below provide a range of concerns about the problem, from threats to democracy to the need to communicate science more effectively. A theme of growing importance is the need to deal with the cognitive biases and informational shortcuts of people receiving evidence: communicate with reference to values, beliefs, and emotions; build up trust in your evidence via transparency and reliability; and, be prepared to discuss science with citizens and to be accountable for your advice. There was less discussion of the cognitive biases of the suppliers of evidence.

3. What is the role of scientists in relation to this problem?

Not all speakers described scientists as the heroes of this story:

  • Some described scientists as the good people acting heroically to change minds with facts.
  • Some described their potential to co-produce important knowledge with citizens (although primarily with like-minded citizens who learn the value of scientific evidence?).
  • Some described the scientific ego as a key barrier to action.
  • Some identified their low confidence to engage, their uncertainty about what to do with their evidence, and/ or their scientist identity which involves defending science as a cause/profession and drawing the line between providing information and advocating for policy. This hope to be an ‘honest broker’ was pervasive in last year’s conference.
  • Some (rightly) rejected the idea of separating facts/ values and science/ politics, since evidence is never context free (and gathering evidence without thought to context is amoral).

Often in such discussions it is difficult to know if some scientists are naïve actors or sophisticated political strategists, because their public statements could be identical. For the former, an appeal to objective facts and the need to privilege science in EBPM may be sincere. Scientists are, and should be, separate from/ above politics. For the latter, the same appeal – made again and again – may be designed to energise scientists and maximise the role of science in politics.

Yet, energy is only the starting point, and it remains unclear how exactly scientists should communicate and how to ‘know your audience’: would many scientists know who to speak to, in governments or the Commission, if they had something profoundly important to say?

Keynotes and introductory statements from panel chairs
Vladimír Šucha: We need to understand the relationship between politics, values, and facts. Facts are not enough. To make policy effectively, we need to combine facts and values.
Tibor Navracsics: Politics is swayed more by emotions than carefully considered arguments. When making policy, we need to be open and inclusive of all stakeholders (including citizens), communicate facts clearly and at the right time, and be aware of our own biases (such as groupthink).
Sir Peter Gluckman: ‘Post-truth’ politics is not new, but it is pervasive and easier to achieve via new forms of communication. People rely on like-minded peers, religion, and anecdote as forms of evidence underpinning their own truth. When describing the value of science, to inform policy and political debate, note that it is more than facts; it is a mode of thinking about the world, and a system of verification to reduce the effect of personal and group biases on evidence production. Scientific methods help us define problems (e.g. in discussion of cause/ effect) and interpret data. Science advice involves expert interpretation, knowledge brokerage, a discussion of scientific consensus and uncertainty, and standing up for the scientific perspective.
Carlos Moedas: Safeguard trust in science by (1) explaining the process you use to come to your conclusions; (2) providing safe and reliable places for people to seek information (e.g. when they Google); and (3) making sure that science is robust and scientific bodies have integrity (such as when dealing with a small number of rogue scientists).
Pascal Lamy: 1. ‘Deep change or slow death’: we need to involve more citizens in the design of publicly financed projects such as major investments in science. Many scientists complain that there is already too much political interference, drowning scientists in extra work. However, we will face a major backlash – akin to the backlash against ‘globalisation’ – if we do not subject key debates on the future of science and technology-driven change (e.g. on AI, vaccines, drone weaponry) to democratic processes involving citizens. 2. The world changes rapidly, and evidence gathering is context-dependent, so we need to monitor regularly the fitness of our scientific measures (of e.g. trade).
Jyrki Katainen: ‘Wicked problems’ have no perfect solution, so we need the courage to choose the best imperfect solution. Technocratic policymaking is not the solution; it does not meet the democratic test. We need the language of science to be understandable to citizens: ‘a new age of reason reconciling the head and heart’.

Panel: Why should we trust science?
Jonathan Kimmelman: Some experts make outrageous and catastrophic claims. We need a toolbox to decide which experts are most reliable, by comparing their predictions with actual outcomes. Prompt them to make precise probability statements and test them (a worked example of such scoring follows this panel). Only those who are willing to be held accountable should be involved in science advice.
Johannes Vogel: We should devote 15% of science funding to public dialogue. Scientific discourse and a science-literate population are crucial for democracy. EU Open Society Policy is a good model for stakeholder inclusiveness.
Tracey Brown: Create a more direct link between society and evidence production, to ensure discussions involve more than the ‘usual suspects’. An ‘evidence transparency framework’ helps create a space in which people can discuss facts and values. ‘Be open, speak human’ describes showing people how you make decisions. How can you expect the public to trust you if you don’t trust them enough to tell them the truth?
Francesco Campolongo: Jean-Claude Juncker’s starting point is that Commission proposals and activities should be ‘based on sound scientific evidence’. Evidence comes in many forms. For example, economic models provide simplified versions of reality to make decisions. Economic calculations inform profoundly important policy choices, so we need to make the methodology transparent, communicate probability, and be self-critical and open to change.
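
A brief aside on Kimmelman’s suggestion to test experts’ probability statements against outcomes: one standard way to do this (my illustration, not a method named at the conference) is a proper scoring rule such as the Brier score, the squared gap between the stated probability and what actually happened. An expert who gives an event a 90% chance scores (0.9 − 1)² = 0.01 if the event occurs and (0.9 − 0)² = 0.81 if it does not; averaging these scores across many predictions, and comparing averages across experts, provides the kind of accountability toolbox he describes, with lower scores indicating better-calibrated advice.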

Panel: the politician’s perspective
Janez Potočnik: The shift of the JRC’s remit allowed it to focus on advocating science for policy rather than policy for science. Still, such arguments need to be backed by an economic argument (this policy will create growth and jobs). A narrow focus on facts and data ignores the context in which we gather facts, such as a system which undervalues human capital and the environment.
Máire Geoghegan-Quinn: Policy should be ‘solidly based on evidence’ and we need well-communicated science to change the hearts and minds of people who would otherwise rely on their beliefs. Part of the solution is to get, for example, kids to explain what science means to them.

Panel: Redesigning policymaking using behavioural and decision science
Steven Sloman: The world is complex. People overestimate their understanding of it, and this illusion is burst when they try to explain its mechanisms. People who know the least feel the strongest about issues, but if you ask them to explain the mechanisms their strength of feeling falls. Why? People confuse their knowledge with that of their community. The knowledge is not in their heads, but communicated across groups. If people around you feel they understand something, you feel like you understand, and people feel protective of the knowledge of their community. Implications? 1. Don’t rely on ‘bubbles’; generate more diverse and better coordinated communities of knowledge. 2. Don’t focus on giving people full information; focus on the information they need at the point of decision.
Stephan Lewandowsky: 97% of scientists agree that human-caused climate change is a problem, but the public thinks it’s roughly 50-50. We have a false-balance problem. One solution is to ‘inoculate’ people against its cause (science denial). We tell people the real figures and facts, warn them of the rhetorical techniques employed by science denialists (e.g. use of false experts on smoking), and mock the false balance argument. This allows you to reframe the problem as an investment in the future, not a cost now (and find other ways to present facts in a non-threatening way). In our lab, it usually ‘neutralises’ misinformation, although with the risk that a ‘corrective message’ to challenge beliefs can entrench them.
Françoise Waintrop: It is difficult to experiment when public policy is handed down from on high. Or, experimentation is alien to established ways of thinking. However, our 12 new public innovation labs across France allow us to immerse ourselves in the problem (to define it well) and nudge people to action, working with their cognitive biases.
Simon Kuper: Stories combine facts and values. To change minds: persuade the people who are listening, not the sceptics; find go-betweens to link suppliers and recipients of evidence; speak in stories, not jargon; don’t overpromise the role of scientific evidence; and, never suggest science will side-line human beings (e.g. when technology costs jobs).

Panel: The way forward
Jean-Eric Paquet: We describe ‘fact based evidence’ rather than ‘science based’. A key aim is to generate ‘ownership’ of policy by citizens. Politicians are more aware of their cognitive biases than we technocrats are.
Anne Bucher: In the European Commission we used evidence initially to make the EU more accountable to the public, via systematic impact assessment and quality control. It was a key motivation for better regulation. We now focus more on generating inclusive and interactive ways to consult stakeholders.
Ann Mettler: Evidence-based policymaking is at the heart of democracy. How else can you legitimise your actions? How else can you prepare for the future? How else can you make things work better? Yet, a lot of our evidence presentation is so technical; even difficult for specialists to follow. The onus is on us to bring it to life, to make it clearer to the citizen and, in the process, defend scientists (and journalists) during a period in which Western democracies seem to be at risk from anti-democratic forces.
Mariana Kotzeva: Our facts are now considered from an emotional and perception point of view. The process does not just involve our comfortable circle of experts; we are now challenged to explain our numbers. Attention to our numbers can be unpredictable (e.g. on migration). We need to build up trust in our facts, partly to anticipate or respond to the quick spread of poor facts.
Rush Holt: In society we can find the erosion of the feeling that science is relevant to ‘my life’, and few US policymakers ask ‘what does science say about this?’ partly because scientists set themselves above politics. Politicians have had too many bad experiences with scientists who might say ‘let me explain this to you in a way you can understand’. Policy is not about science based evidence; more about asking a question first, then asking what evidence you need. Then you collect evidence in an open way to be verified.

Phew!

That was 10 hours of discussion condensed into one post. If you can handle more discussion from me, see:

Psychology and policymaking: Three ways to communicate more effectively with policymakers

The role of evidence in policy: EBPM and How to be heard  

Practical Lessons from Policy Theories

The generation of many perspectives to help us understand the use of evidence

How to be an ‘entrepreneur’ when presenting evidence

Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling

Evidence based policymaking: 7 key themes

I looked back at my blog posts on the politics of ‘evidence based policymaking’ and found that I wrote quite a lot (particularly from 2016). Here is a list based on 7 key themes.

1. Use psychological insights to influence the use of evidence

My most-current concern. The same basic theme is that (a) people (including policymakers) are ‘cognitive misers’ seeking ‘rational’ and ‘irrational’ shortcuts to gather information for action, so you won’t get far if you (b) bombard them with information, or (c) call them idiots.

Three ways to communicate more effectively with policymakers (shows how to use psychological insights to promote evidence in policymaking)

Using psychological insights in politics: can we do it without calling our opponents mental, hysterical, or stupid? (yes)

The Psychology of Evidence Based Policymaking: Who Will Speak For the Evidence if it Doesn’t Speak for Itself? (older paper, linking studies of psychology with studies of EBPM)

Older posts on the same theme:

Is there any hope for evidence in emotional debates and chaotic government? (yes)

We are in danger of repeating the same mistakes if we bemoan low attention to ‘facts’

These complaints about ignoring science seem biased and naïve – and too easy to dismiss

How can we close the ‘cultural’ gap between the policymakers and scientists who ‘just don’t get it’?

2. How to use policy process insights to influence the use of evidence

I try to simplify key insights about the policy process to show how to use evidence in it. One key message is to give up on the idea of an orderly policy process described by the policy cycle model. What should you do if a far more complicated process exists?

Why don’t policymakers listen to your evidence?

The Politics of Evidence Based Policymaking: 3 messages (3 ways to say that you should engage with the policy process that exists, not a mythical process that will never exist)

Three habits of successful policy entrepreneurs (shows how entrepreneurs are influential in politics)

Why doesn’t evidence win the day in policy and policymaking? and What does it take to turn scientific evidence into policy? Lessons for illegal drugs from tobacco and There is no blueprint for evidence-based policy, so what do you do? (3 posts describing the conditions that must be met for evidence to ‘win the day’)

Writing for Impact: what you need to know, and 5 ways to know it (explains how our knowledge of the policy process helps communicate to policymakers)

How can political actors take into account the limitations of evidence-based policy-making? 5 key points (presentation to European Parliament-European University Institute ‘Policy Roundtable’ 2016)

Evidence Based Policy Making: 5 things you need to know and do (presentation to Open Society Foundations New York 2016)

What 10 questions should we put to evidence for policy experts? (part of a series of videos produced by the European Commission)

3. How to combine principles on ‘good evidence’, ‘good governance’, and ‘good practice’

My argument here is that EBPM is about deciding at the same time what is: (1) good evidence, and (2) a good way to make and deliver policy. If you just focus on one at a time – or consider one while ignoring the other – you cannot produce a defensible way to promote evidence-informed policy delivery.

Kathryn Oliver and I have just published an article on the relationship between evidence and policy (summary of and link to our article on this very topic)

We all want ‘evidence based policy making’ but how do we do it? (presentation to the Scottish Government in 2016)

The ‘Scottish Approach to Policy Making’: Implications for Public Service Delivery

The politics of evidence-based best practice: 4 messages

The politics of implementing evidence-based policies

Policy Concepts in 1000 Words: the intersection between evidence and policy transfer

Key issues in evidence-based policymaking: comparability, control, and centralisation

The politics of evidence and randomised control trials: the symbolic importance of family nurse partnerships

What Works (in a complex policymaking system)?

How Far Should You Go to Make Sure a Policy is Delivered?

4. Face up to your need to make profound choices to pursue EBPM

These posts have arisen largely from my attendance at academic-practitioner conferences on evidence and policy. Many participants tell the same story about the primacy of scientific evidence challenged by post-truth politics and emotional policymakers. I don’t find this argument convincing or useful. So, in many posts, I challenge these participants to think about more pragmatic ways to sum up and do something effective about their predicament.

Political science improves our understanding of evidence-based policymaking, but does it produce better advice? (shows how our knowledge of policymaking clarifies dilemmas about engagement)

The role of ‘standards for evidence’ in ‘evidence informed policymaking’ (argues that a strict adherence to scientific principles may help you become a good researcher but not an effective policy influencer)

How far should you go to secure academic ‘impact’ in policymaking? From ‘honest brokers’ to ‘research purists’ and Machiavellian manipulators (you have to make profound ethical and strategic choices when seeking to maximise the use of evidence in policy)

Principles of science advice to government: key problems and feasible solutions (calling yourself an ‘honest broker’ while complaining about ‘post-truth politics’ is a cop out)

What sciences count in government science advice? (political science, obvs)

I know my audience, but does my other audience know I know my audience? (compares the often profoundly different ways in which scientists and political scientists understand and evaluate EBPM – this matters because, for example, we rarely discuss power in scientist-led debates)

Is Evidence-Based Policymaking the same as good policymaking? (no)

Idealism versus pragmatism in politics and policymaking: … evidence-based policymaking (how to decide between idealism and pragmatism when engaging in politics)

Realistic ‘realist’ reviews: why do you need them and what might they look like? (if you privilege impact you need to build policy relevance into systematic reviews)

‘Co-producing’ comparative policy research: how far should we go to secure policy impact? (describes ways to build evidence advocacy into research design)

The Politics of Evidence (review of – and link to – Justin Parkhurst’s book on the ‘good governance’ of evidence production and use)


5. For students and researchers wanting to read/ hear more

These posts are relatively theory-heavy, linking quite clearly to the academic study of public policy. Hopefully they provide a simple way into the policy literature which can, at times, be dense and jargony.

‘Evidence-based Policymaking’ and the Study of Public Policy

Policy Concepts in 1000 Words: ‘Evidence Based Policymaking’

Practical Lessons from Policy Theories (series of posts on the policy process, offering potential lessons for advocates of evidence use in policy)

Writing a policy paper and blog post 

12 things to know about studying public policy

Can you want evidence based policymaking if you don’t really know what it is? (defines each word in EBPM)

Can you separate the facts from your beliefs when making policy? (no, very no)

Policy Concepts in 1000 Words: Success and Failure (Evaluation) (using evidence to evaluate policy is inevitably political)

Policy Concepts in 1000 Words: Policy Transfer and Learning (so is learning from the experience of others)

Four obstacles to evidence based policymaking (EBPM)

What is ‘Complex Government’ and what can we do about it? (read about it)

How Can Policy Theory Have an Impact on Policy Making? (on translating policy theories into useful advice)

The role of evidence in UK policymaking after Brexit (argues that many challenges/ opportunities for evidence advocates will not change after Brexit)

Why is there more tobacco control policy than alcohol control policy in the UK? (it’s not just because there is more evidence of harm)

Evidence Based Policy Making: If You Want to Inject More Science into Policymaking You Need to Know the Science of Policymaking and The politics of evidence-based policymaking: focus on ambiguity as much as uncertainty and Revisiting the main ‘barriers’ between evidence and policy: focus on ambiguity, not uncertainty and The barriers to evidence based policymaking in environmental policy (early versions of what became the chapters of the book)

6. Using storytelling to promote evidence use

This is increasingly a big interest for me. Storytelling is key to the effective conduct and communication of scientific research. Let’s not pretend we’re objective people just stating the facts (which is the least convincing story of all). So far, so good, except to say that the evidence on the impact of stories (for policy change advocacy) is limited. The major complication is that (a) the story you want to tell and have people hear interacts with (b) the story that your audience members tell themselves.

Combine Good Evidence and Emotional Stories to Change the World

Storytelling for Policy Change: promise and problems

Is politics and policymaking about sharing evidence and facts or telling good stories? Two very silly examples from #SP16

7. The major difficulties in using evidence for policy to reduce inequalities

These posts show how policymakers think about how to combine (a) often-patchy evidence with (b) their beliefs and (c) an electoral imperative to produce policies on inequalities, prevention, and early intervention. I suggest that it’s better to understand and engage with this process than complain about policy-based-evidence from the side-lines. If you do the latter, policymakers will ignore you.

The UK government’s imaginative use of evidence to make policy 

What do you do when 20% of the population causes 80% of its problems? Possibly nothing.

The theory and practice of evidence-based policy transfer: can we learn how to reduce territorial inequalities?

We need better descriptions than ‘evidence-based policy’ and ‘policy-based evidence’: the case of UK government ‘troubled families’ policy

How can you tell the difference between policy-based-evidence and evidence-based-policymaking?

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

Key issues in evidence-based policymaking: comparability, control, and centralisation

The politics of evidence and randomised control trials: the symbolic importance of family nurse partnerships

Two myths about the politics of inequality in Scotland

Social investment, prevention and early intervention: a ‘window of opportunity’ for new ideas?

A ‘decisive shift to prevention’: how do we turn an idea into evidence based policy?

Can the Scottish Government pursue ‘prevention policy’ without independence?

Note: these issues are discussed in similar ways in many countries. One example that caught my eye today:


All of this discussion can be found under the EBPM category: https://paulcairney.wordpress.com/category/evidence-based-policymaking-ebpm/

See also the special issue on maximizing the use of evidence in policy


3 Comments

Filed under agenda setting, Evidence Based Policymaking (EBPM), Prevention policy, public policy, Storytelling, UK politics and policy

Telling Stories that Shape Public Policy

This is a guest post by Michael D. Jones (left) and Deserai Anderson Crow (right), discussing how to use insights from the Narrative Policy Framework to think about how to tell effective stories to achieve policy goals. The full paper has been submitted to the series for Policy and Politics called Practical Lessons from Policy Theories.

Imagine. You are an ecologist. You recently discovered that a chemical that is discharged from a local manufacturing plant is threatening a bird that locals love to watch every spring. Now, imagine that you desperately want your research to be relevant and make a difference to help save these birds. All of your training gives you depth of expertise that few others possess. Your training also gives you the ability to communicate and navigate things such as probabilities, uncertainty, and p-values with ease.

But as NPR’s Robert Krulwich argues, focusing on this very specialized training when you communicate policy problems could lead you in the wrong direction. While being true to the science and best practices of your training, you must also be able to tell a compelling story. Perhaps combine your scientific findings with a story about the little old ladies who feed the birds in their backyards on spring mornings, emphasizing the beauty and majesty of these avian creatures, their role in the community, and how the toxic chemicals are not just a threat to the birds, but are also a threat to the community’s understanding of itself and its sense of place. The latest social science is showing that if you tell a good story, your policy communications are likely to be more effective.

Why focus on stories?

The world is complex. We are bombarded with information as we move through our lives and we seek patterns within that information to simplify complexity and reduce ambiguity, so that we can make sense of the world and act within it.

The primary means by which human beings render complexity understandable and reduce ambiguity is through the telling of stories. We “fit” the world around us, and the myriad objects and people therein, into story patterns. We are by nature storytelling creatures. And if it is true of us as individuals, then we can also safely assume that storytelling matters for public policy, where complexity and ambiguity abound.

Based on our (hopefully) forthcoming article (which has a heavy debt to Jones and Peterson, 2017 and Catherine Smith’s popular textbook), here we offer some abridged advice synthesizing some of the most current social science findings about how best to engage in public policy storytelling. We break it down into five easy steps and offer a short discussion of likely intervention points within the policy process.

The 5 Steps of Good Policy Narrating

  1. Tell a Story: Remember, facts never speak for themselves. If you are presenting best practices, relaying scientific information, or detailing cost/benefit analyses, you are telling or contributing to a story.  Engage your storytelling deliberately.
  2. Set the Stage: Policy narratives have a setting and in this setting you will find specific evidence, geography, legal parameters, and other policy consequential items and information.  Think of these setting items as props.  Not all stages can hold every relevant prop.  Be true to science; be true to your craft, but set your stage with props that maximize the potency of your story, which always includes making your setting amenable to your audience.
  3. Establish the Plot: In public policy, plots usually define the problem (and policies do not exist without at least a potential problem). Define your problem. Doing so determines the causes, which establishes blame.
  4. Cast the Characters:  Having established a plot and defined your problem, the roles you will need your characters to play become apparent. Determine who the victim is (who is harmed by the problem), who is responsible (the villain) and who can bring relief (the hero). Cast characters your audience will appreciate in their roles.
  5. Clearly Specify the Moral: Postmodern films might get away without having a point.  Policy narratives usually do not. Let your audience know what the solution is.

Public Policy Intervention Points

There are crucial points in the policy process where actors can use narratives to achieve their goals. We call these “intervention points” and all intervention points should be viewed as opportunities to tell a good policy story, although each will have its own constraints.

These intervention points include the most formal types of policy communication such as crafting of legislation or regulation, expert testimony or statements, and evaluation of policies. They also include less formal communications through the media and by citizens to government.

Each of these interventions can frequently be dry and jargon-laden, but it’s important to remember that by employing effective narratives within any of them, you are much more likely to see your policy goals met.

When considering how to construct your story within one or more of the various intervention points, we urge you to first consider several aspects of your role as a narrator.

  1. Who are you and what are your goals? Are you an outsider trying to effect change to solve a problem or push an agency to do something it might not be inclined to do?  Are you an insider trying to evaluate and improve policy making and implementation? Understanding your role and your goals is essential to both selecting an appropriate intervention point and optimizing your narrative therein.
  2. Carefully consider your audience. Who are they and what is their posture towards your overall goal? Understanding your audience’s values and beliefs is essential for avoiding invoking defensiveness.
  3. Consider the intervention point itself – what is the best way to reach your audience? What are the rules for the type of communication you plan to use? For example, media communications can be done with lengthy press releases, interviews with the press, or in the confines of a simple tweet.  All of these methods have both formal and informal constraints that will determine what you can and can’t do.

Without deliberate consideration of your role, audience, the intervention point, and how your narrative links all of these pieces together, you are relying on chance to tell a compelling policy story.

On the other hand, thoughtful and purposeful storytelling that remains true to you, your values, your craft, and your best understanding of the facts, can allow you to be both the ecologist and the bird lover.


2 Comments

Filed under public policy, Storytelling

How can governments better collaborate to address complex problems?


This is a guest post by William L. Swann (left) and Seo Young Kim (right), discussing how to use insights from the Institutional Collective Action Framework to think about how to improve collaborative governance. The full paper has been submitted to the series for Policy and Politics called Practical Lessons from Policy Theories.


Many public policy problems cannot be addressed effectively by a single, solitary government. Consider the problems facing the Greater Los Angeles Area, a heavily fragmented landscape of 88 cities and numerous unincorporated areas and special districts. Whether it is combatting rising homelessness, abating the country’s worst air pollution, cleaning the toxic L.A. River, or quelling gang violence, any policy alternative pursued unilaterally is limited by overlapping authority and externalities that alter the actions of other governments.

Problems of fragmented authority are not confined to metropolitan areas. They are also found in multi-level governance scenarios such as the restoration of Chesapeake Bay, as well as in international relations as demonstrated by recent global events such as “Brexit” and the U.S.’s withdrawal from the Paris Climate Agreement. In short, fragmentation problems manifest at every scale of governance, horizontally, vertically, and even functionally within governments.

What is an ‘institutional collective action’ dilemma?

In many cases governments would be better off coordinating and working together, but they face barriers that prevent them from doing so. These barriers are what the policy literature refers to as ‘institutional collective action’ (ICA) dilemmas, or collective action problems in which a government’s incentives do not align with collectively desirable outcomes. For example, all governments in a region benefit from less air pollution, but each government has an incentive to free ride and enjoy cleaner air without contributing to the cost of obtaining it.

The ICA Framework, developed by Professor Richard Feiock, has emerged as a practical analytical instrument for understanding and improving fragmented governance. This framework assumes that governments must match the scale and coerciveness of the policy intervention (or mechanism) to the scale and nature of the policy problem to achieve efficient and desired outcomes.

For example, informal networks (a mechanism) can be highly effective at overcoming simple collective action problems. But as problems become increasingly complex, more obtrusive mechanisms, such as governmental consolidation or imposed collaboration, are needed to achieve collective goals and more efficient outcomes. The more obtrusive the mechanism, however, the more actors’ autonomy diminishes and the higher the transaction costs (monitoring, enforcement, information, and agency) of governing.


Three ways to improve institutional collaborative governance

We explored what actionable steps policymakers can take to improve their results with collaboration in fragmented systems. Our study offers three general practical recommendations based on the empirical literature that can enhance institutional collaborative governance.

First, institutional collaboration is more likely to emerge and work effectively when policymakers employ networking strategies that incorporate frequent, face-to-face interactions.

Government actors networking with popular, well-endowed actors (“bridging strategies”) as well as developing closer-knit, reciprocal ties with a smaller set of actors (“bonding strategies”) will result in more collaborative participation, especially when policymakers interact often and in-person.

Policy network characteristics are also important to consider. Research on estuary governance indicates that in newly formed, emerging networks, bridging strategies may be more advantageous, at least initially, because they can provide organizational legitimacy and access to resources. However, once collaboratives mature, developing stronger and more reciprocal bonds with fewer actors reduces the likelihood of opportunistic behavior that can hinder collaborative effectiveness.

Second, policymakers should design collaborative arrangements that reduce transaction costs which hinder collaboration.

Well-designed collaborative institutions can lower the barriers to participation and information sharing, make it easier to monitor the behaviors of partners, grant greater flexibility in collaborative work, and allow for more credible commitments from partners.

Research suggests policymakers can achieve this by

  1. identifying similarities in policy goals, politics, and constituency characteristics with institutional partners
  2. specifying rules such as annual dues, financial reporting, and making financial records reviewable by third parties to increase commitment and transparency in collaborative arrangements
  3. creating flexibility by employing adaptive agreements with service providers, especially when services have limited markets/applications and performance is difficult to measure.

Considering the context, however, is crucial. Collaboratives that thrive on informal, close-knit, reciprocal relations, for example, may be severely damaged by the introduction of monitoring mechanisms that signal distrust.

Third, institutional collaboration is enhanced by the development and harnessing of collaborative capacity.

Research suggests signaling organizational competencies and capacities, such as budget, political support, and human resources, may be more effective at lowering barriers to collaboration than ‘homophily’ (a tendency to associate with similar others in networks). Policymakers can begin building collaborative capacity by seeking political leadership involvement, granting greater managerial autonomy, and looking to higher-level governments (e.g., national, state, or provincial governments) for financial and technical support for collaboration.

What about collaboration in different institutional contexts?

Finally, we recognize that not all policymakers operate in similar institutional contexts, and collaboration can often be mandated by higher-level authorities in more centralized nations. Nonetheless, visible joint gains, economic incentives, transparent rules, and equitable distribution of joint benefits and costs are critical components of voluntary or mandated collaboration.

Conclusions and future directions

The recommendations offered here are, at best, only the tip of the iceberg of the valuable practical insight that can be gleaned from collaborative governance research. While these suggestions are consistent with empirical findings from broader public management and policy networks literatures, much could be learned from a closer inspection of the overlap between ICA studies and other streams of collaborative governance work.

Collaboration is a valuable tool of governance, and, like any tool, it should be utilized appropriately. Collaboration is not easily managed and can encounter many obstacles. We suggest that governments generally avoid collaborating unless there are joint gains that cannot be achieved alone. But the key to solving many of society’s intractable problems, or just simply improving everyday public service delivery, lies in a clearer understanding of how collaboration can be used effectively within different fragmented systems.

5 Comments

Filed under public policy

Why Advocacy Coalitions Matter and How to Think about Them


This is a guest post by Professor Chris Weible (left) and Professor Karin Ingold (right), discussing how to use insights from the Advocacy Coalition Framework to think about how to engage in policymaking. The full paper has been submitted to the series for Policy and Politics called Practical Lessons from Policy Theories.

There are many ways that people relate to their government.  People may vote for their formal representatives through elections.  Through referendums and initiatives, people can vote directly to shape public policy.  More indirectly, people can be represented informally via political parties, interest groups, and associations.

This blog addresses another extremely important way to relate to government: via “advocacy coalitions.”

What are advocacy coalitions?

Advocacy coalitions are alliances of people around a shared policy goal. People associated with the same advocacy coalition have similar ideologies and worldviews and wish to change a given policy (concerning health, environmental, or many other issues) in the same direction.

Advocacy coalitions can include anyone regularly seeking to influence a public policy, such as elected and government officials, members of political parties or interest groups, scientists, journalists, or members of trade unions and not-for-profit/ ‘third sector’ organizations.

The coalition is an informal network of allies that usually operate against an opposing coalition consisting of people who advocate for different policy directions.  As one coalition tries to outmaneuver the other, the result is a game of political one-upmanship of making and unmaking public policies that can last years to decades.

Political debates over normative issues endure for a long time; advocacy coalitions can span levels of government from local to national; and they integrate traditional points of influence in a political system, from electoral politics to regulatory decision-making.

How to think about coalitions and their settings

Consider the context in which political debates over policy issues occur. Context might include the socio-cultural norms and rules that shape which strategies might be effective and how useful political resources will be.

The ACF elevates the importance of context from an overlooked set of opportunities and constraints to a set of factors that should be considered as conditioning political behavior.  We can develop coalition strategies and identify key political resources, but their utility and effectiveness will be contextually driven and will change over time. That is, what works for political influence today might not work in the future.

How to become involved in an advocacy coalition?

People engage in politics differently based on a range of factors, including how important the issue is to them, their available time, skills, and resources, and general motivations.

  • People with less time or knowledge can engage in coalitions as “auxiliary participants.”
  • Individuals for whom an issue is of high relevance, or those who see their major expertise in a specific subsystem, might want to shape coalition politics and strategies decisively and become “principal participants.”
  • People wanting to mitigate conflict might choose to play a “policy broker” role.
  • People championing ideas can play the role of “policy entrepreneurs.”
  • General citizens can see themselves playing the role of a “political soldier” contributing to their cause when called upon by the leaders of any coalition.

How do coalitions form and maintain themselves? 

Underlying the coalition concept is an assumption that people are most responsive to threats to what they care about. Coalitions form because of these threats, which might come from a rival’s proposed policy solutions, a particular characterization of problems, or major events (e.g., a disaster). Because people are motivated by fundamental values, the chronic presence of threats from opponents is another reason that coalitions persist. People stay mobilized because they know that, if they disengage, people with whom they disagree may influence societal outcomes.

How to identify an advocacy coalition?

There is no single way to identify a coalition, but here are four strategies to try.

  1. Look for people holding formal elected or unelected positions in government with authority and an interest to affect a public policy issue.
  2. Identify people from outside of government participating in the policy process (e.g., rulemaking, legislative hearings, etc.).
  3. Identify people with influential reputations who often seek to influence government through more informal means (e.g., blogging).
  4. Uncover those individuals who are not currently mobilized but who might be in the future, for example by identifying who is threatened or who could benefit from the policy decision.

These four strategies emphasize formal competences and informal relations, and the motivations that actors might have to participate in an issue.

This blog is more about how to think about relations between people and government, and less about identifying concrete strategies for influencing government. Political strategies are not applicable all the time, and they vary in their degree of success based on a gamut of factors.

The best we can do is to offer ways of thinking about political engagement, such as the ideas summarized here, and then trust people to assess their current situation and act in effective ways.

2 Comments

Filed under public policy

The Politics of Evidence

This is a draft of my review of Justin Parkhurst (2017) The Politics of Evidence (Routledge, Open Access)

Justin Parkhurst’s aim is to identify key principles to take forward the ‘good governance of evidence’. The good governance of scientific evidence in policy and policymaking requires us to address two fundamentally important ‘biases’:

  1. Technical bias. Some organisations produce bad evidence, some parts of government cherry-pick, manipulate, or ignore evidence, and some politicians misinterpret the implications of evidence when calculating risk. Sometimes, these things are done deliberately for political gain. Sometimes they result from cognitive biases that lead us to interpret evidence in problematic ways. For example, you can seek evidence that confirms your position, and/ or only believe the evidence that confirms it.
  2. Issue bias. Some evidence advocates use the mantra of ‘evidence based policy’ to depoliticise issues or downplay the need to resolve conflicts over values. They also focus on the problems most conducive to study via their most respected methods such as randomised control trials (RCTs). Methodological rigour trumps policy relevance and simple experiments trump the exploration of complex solutions. So, we lose sight of the unintended consequences of producing the ‘best’ evidence to address a small number of problems, and of making choices about the allocation of research resources and attention. Again, this can be deliberate or caused by cognitive biases, such as the tendency to seek simpler and more answerable questions rather than complex questions with no obvious answer.

To address both problems, Parkhurst seeks pragmatic ways to identify principles to decide what counts as ‘good evidence to inform policy’ and ‘what constitutes the good use of evidence within a policy process’:

‘it is necessary to consider how to establish evidence advisory systems that promote the good governance of evidence – working to ensure that rigorous, sys­tematic and technically valid pieces of evidence are used within decision-making processes that are inclusive of, representative of and accountable to the multiple social interests of the population served’ (p8).

Parkhurst identifies some ways in which to bring evidence and policy closer together. First, to produce evidence more appropriate for, or relevant to, policymaking (‘good evidence for policy’):

  1. Relate evidence more closely to policy goals.
  2. Modify research approaches and methods to answer policy relevant questions.
  3. Ensure that the evidence relates to the local or relevant context.

Second, to produce the ‘good use of evidence’, combine three forms of ‘legitimacy’:

  1. Input, to ensure democratic representative bodies have the final say.
  2. Throughput, to ensure widespread deliberation.
  3. Output, to ensure proper consideration of the use of the most systematic, unbiased and rigorously produced scientific evidence relevant to the problem.

In the final chapter, Parkhurst suggests that these aims can be pursued in many ways depending on how governments want to design evidence advisory systems, but that it’s worth drawing on the examples of good practice he identifies. Parkhurst also explores the role for Academies of science, or initiatives such as the Cochrane Collaboration, to provide independent advice. He then outlines the good governance of evidence built on key principles: appropriate evidence, accountability in evidence use, transparency, and contestability (to ensure sufficient debate).

The overall result is a book full of interesting discussion and very sensible, general advice for people new to the topic of evidence and policy. This is no mean feat: most readers will seek a clearly explained and articulate account of the subject, and they get it here.

For me, the most interesting thing about Parkhurst’s book is the untold story, or often-implicit reasoning behind the way in which it is framed. We can infer that it is not a study aimed primarily at a political science or social science audience, because most of that audience would take its starting point for granted: the use of evidence is political, and politics involves values. Yet, Parkhurst feels the need to remind the reader of this point, in specific (“it is worth noting that the US presidency is a decidedly political role”, p43) and general circumstances (‘the nature of policymaking is inherently political’, p65). Throughout, the audience appears to be academics who begin with a desire for ‘evidence based policy’ without fully thinking through the implications: the lack of a magic bullet of evidence to solve a policy problem, how we might maintain a political system conducive to democratic principles and good evidence use, how we might design a system to reduce key ‘barriers’ between the supply of evidence by scientists and its demand by policymakers, and why few such designs have taken off.

In other words, the book appeals primarily to scientists trained outside social science, some of whom think about politics in their spare time, or experience it in dispiriting encounters with policymakers. It appeals to that audience with a statement on the crucial role of high quality evidence in policymaking, highlights barriers to its use, tells scientists that they might be part of the problem, but then provides them with the comforting assurance that we can design better systems to overcome at least some of those barriers. For people trained in policy studies, this concluding discussion seems like a tall order, and I think most would read it with great scepticism.

Policy scientists might also be sceptical about the extent to which scientists from other fields think this way about hierarchies of scientific evidence and the desire to depoliticise politics with a primary focus on ‘what works’. Yet, I too hear this language regularly in interdisciplinary workshops (often while standing next to Justin!), and it is usually accompanied by descriptions of the pathology of policymaking, the rise of post-truth politics and rejection of experts, and the need to focus on the role of objective facts in deciding what policy solutions work best. Indeed, I was impressed recently by the skilled way in which another colleague prepared this audience for some provocative remarks when he suggested that the production and use of evidence is about power, not objectivity. OMG: who knew that policymaking was political and about power?!

So, the insights from this book are useful to a large audience of scientists while, for a smaller audience of policy scientists, they remind us that there is an audience out there for many of the statements that many of us would take for granted. Some evidence advocates use the language of ‘evidence based policymaking’ strategically, to get what they want. Others appear to use it because they believe it can exist. Keep this in mind when you read the book.


4 Comments

Filed under Evidence Based Policymaking (EBPM)

Three ways to communicate more effectively with policymakers

By Paul Cairney and Richard Kwiatkowski

Use psychological insights to inform communication strategies

Policymakers cannot pay attention to all of the things for which they are responsible, or understand all of the information they use to make decisions. Like all people, there are limits on what information they can process (Baddeley, 2003; Cowan, 2001, 2010; Miller, 1956; Rock, 2008).

They must use short cuts to gather enough information to make decisions quickly: the ‘rational’, by pursuing clear goals and prioritizing certain kinds of information, and the ‘irrational’, by drawing on emotions, gut feelings, values, beliefs, habits, schemata, scripts, and what is familiar, to make decisions quickly. Unlike most people, they face unusually strong pressures on their cognition and emotion.

Policymakers need to gather information quickly and effectively, often in highly charged political atmospheres, so they develop heuristics to allow them to make what they believe to be good choices. Their solutions may therefore seem to be driven more by their values and emotions than by a ‘rational’ analysis of the evidence, often because we hold them to a standard that no human can reach.

If so, and if they have high confidence in their heuristics, they will dismiss criticism from researchers as biased and naïve. Under those circumstances, we suggest that restating the need for ‘rational’ and ‘evidence-based policymaking’ is futile, naively ‘speaking truth to power’ counterproductive, and declaring ‘policy based evidence’ defeatist.

We use psychological insights to recommend a shift in strategy for advocates of the greater use of evidence in policy. The simple recommendation, to adapt to policymakers’ ‘fast thinking’ (Kahneman, 2011) rather than bombard them with evidence in the hope that they will get round to ‘slow thinking’, is already becoming established in evidence-policy studies. However, we provide a more sophisticated understanding of policymaker psychology, to help understand how people think and make decisions as individuals and as part of collective processes. It allows us to (a) combine many relevant psychological principles with policy studies to (b) provide several recommendations for actors seeking to maximise the impact of their evidence.

To ‘show our work’, we first summarise insights from policy studies already drawing on psychology to explain policy process dynamics, and identify key aspects of the psychology literature which show promising areas for future development.

Then, we emphasise the benefit of pragmatic strategies, to develop ways to respond positively to ‘irrational’ policymaking while recognising that the biases we ascribe to policymakers are present in ourselves and our own groups. Instead of bemoaning the irrationality of policymakers, let’s marvel at the heuristics they develop to make quick decisions despite uncertainty. Then, let’s think about how to respond effectively. Instead of identifying only the biases in our competitors, and masking academic examples of group-think, let’s reject our own imagined standards of high-information-led action. This more self-aware and humble approach will help us work more successfully with other actors.

On that basis, we provide three recommendations for actors trying to engage skilfully in the policy process:

  1. Tailor framing strategies to policymaker bias. If people are cognitive misers, minimise the cognitive burden of your presentation. If policymakers combine cognitive and emotive processes, combine facts with emotional appeals. If policymakers make quick choices based on their values and simple moral judgements, tell simple stories with a hero and moral. If policymakers reflect a ‘group emotion’, based on their membership of a coalition with firmly-held beliefs, frame new evidence to be consistent with those beliefs.
  2. Identify ‘windows of opportunity’ to influence individuals and processes. ‘Timing’ can refer to the right time to influence an individual, depending on their current way of thinking, or to act while political conditions are aligned.
  3. Adapt to real-world ‘dysfunctional’ organisations rather than waiting for an orderly process to appear. Form relationships in networks, coalitions, or organisations first, then supply challenging information second. To challenge without establishing trust may be counterproductive.

These tips are designed to produce effective, not manipulative, communicators. They help foster the clearer communication of important policy-relevant evidence, rather than imply that we should bend evidence to manipulate or trick politicians. We argue that it is pragmatic to work on the assumption that people’s beliefs are honestly held, and policymakers believe that their role is to serve a cause greater than themselves. To persuade them to change course requires showing simple respect and seeking ways to secure their trust, rather than simply ‘speaking truth to power’. Effective engagement requires skilful communication and good judgement as much as good evidence.


This is the introduction to our revised and resubmitted paper to the special issue of Palgrave Communications The politics of evidence-based policymaking: how can we maximise the use of evidence in policy? Please get in touch if you are interested in submitting a paper to the series.

Full paper: Cairney Kwiatkowski Palgrave Comms resubmission CLEAN 14.7.17

2 Comments

Filed under agenda setting, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy

The impact of multi-level policymaking on the UK energy system


In September, we will begin a one-year UKERC project examining current and future energy policy and multi-level policymaking and its impact on ‘energy systems’. This is no mean feat, since the meanings of ‘policy’, ‘policymaking’ (or the ‘policy process’), and ‘system’ are not clear, and our description of the component parts of an energy system and a complex policymaking system may differ markedly. So, one initial aim is to provide some way to turn a complex field of study into something simple enough to understand and engage with.

We do so by focusing on ‘multi-level policymaking’ – which can encompass concepts such as multi-level governance and intergovernmental relations – to reflect the fact that the responsibility for policies relevant to energy is often Europeanised, devolved, and shared between several levels of government. Brexit will have a major effect on energy and non-energy policies, and prompt the UK and devolved governments to develop new relationships, but we all need more clarity on the dynamics of current arrangements before we can talk sensibly about the future. To that end, we pursue three main work packages:

1. What is the ‘energy policymaking system’ and how does it affect the energy system?

Chaudry et al (2009: iv) define the UK energy system as ‘the set of technologies, physical infrastructure, institutions, policies and practices located in and associated with the UK which enable energy services to be delivered to UK consumers’. UK policymaking can have a profound impact, and constitutional changes might produce policy change, but their impacts require careful attention. So, we ‘map’ the policy process and the effect of policy change on energy supply and demand. Mapping sounds fairly straightforward but contains a series of tasks whose level of difficulty rises each time:

  1. Identify which level or type of government is responsible – ‘on paper’ and in practice – for the use of each relevant policy instrument.
  2. Identify how these actors interact to produce what we call ‘policy’, which can range from statements of intent to final outcomes.
  3. Identify an energy policy process containing many actors at many levels, the rules they follow, the networks they form, the ‘ideas’ that dominate discussion, and the conditions and events (often outside policymaker control) which constrain and facilitate action. By this stage, we need to draw on particular policy theories to identify key venues, such as subsystems, and specific collections of actors, such as advocacy coalitions, to produce a useful model of activity.

2. Who is responsible for action to reduce energy demand?

Energy demand is more challenging to policymakers than energy supply because the demand side involves millions of actors who, in the context of household energy use, also constitute the electorate. There are political tensions in making policies to reduce energy demand and carbon where this involves cost and inconvenience for private actors who do not necessarily value the societal returns achieved, and the political dynamics often differ from those of policies to regulate industrial demand. There are also tensions around public perceptions of whose responsibility it is to take action – local, devolved, national, or international government agencies – and governments can look like they are trying to shift responsibility to each other, or to individuals and firms.

So, there is no end of ways in which energy demand could be regulated or influenced – including energy labelling and product/building standards, emissions reduction measures, promotion of efficient generation, and buildings performance measures – but it is an area of policy which is notoriously diffuse and lacking in co-ordination. For the most part, therefore, we consider whether Brexit provides a ‘window of opportunity’ to change policy and policymaking by, for example, clarifying responsibilities and simplifying relationships.

3: Does Brexit affect UK and devolved policy on energy supply?

It is difficult for single governments to coordinate an overall energy mix to secure supply from many sources, and multi-level policymaking adds a further dimension to planning and cooperation. Yet, the effect of constitutional changes is highly uneven. For example, devolution has allowed Scotland to go its own way on renewable energy, nuclear power and fracking, but Brexit’s impact ranges from high to low. It presents new and sometimes salient challenges for cooperation to supply renewable energy but, while fracking and nuclear are often the most politically salient issues, Brexit may have relatively little impact on policymaking within the UK.

We explore the possibility that renewables policy may be most impacted by Brexit, while nuclear and fracking are examples in which Brexit may have a minimal direct impact on policy. Overall, the big debates are about the future energy mix, and how local, devolved, and UK governments balance the local environmental impacts of, and likely political opposition to, energy development against the economic and energy supply benefits.

For more details, see our 4-page summary

Powerpoint for 13.7.17

Cairney et al UKERC presentation 23.10.17

1 Comment

Filed under Fracking, public policy, UKERC

Policy concepts in 1000 or 500 words

Imagine that your audience is a group of scientists who have read everything and are only interested in something new. You need a new theory, method, study, or set of results to get their attention.

Let’s say that audience is a few hundred people, or half a dozen in each subfield. It would be nice to impress them, perhaps with some lovely jargon and in-jokes, but almost no-one else will know or care what you are talking about.

Imagine that your audience is a group of budding scientists, researchers, students, practitioners, or knowledge-aware citizens who are new to the field and only interested in what they can pick up and use (without devoting their life to each subfield). Novelty is no longer your friend. Instead, your best friends are communication, clarity, synthesis, and a constant reminder not to take your knowledge and frame of reference for granted.

Let’s say that audience is a few gazillion people. If you want to impress them, imagine that you are giving them one of the first – if not the first – ways of understanding your topic. Reduce the jargon. Explain your problem and why people should care about how you try to solve it. Clear and descriptive titles. No more in-jokes (just stick with the equivalent of ‘I went to the doctor because a strawberry was growing in my arse, and she gave me some cream for it’).

At least, that’s what I’ve been telling myself lately. As things stand, my most-read post of all time is destined to be on the policy cycle, and most people read it because it’s the first entry on a google search. Most readers of that post may never read anything else I’ve written (over a million words, if I cheat a bit with the calculation). They won’t care that there are a dozen better ways to understand the policy process. I have one shot to make it interesting, to encourage people to read more. The same goes for the half-dozen other concepts (including multiple streams, punctuated equilibrium theory, the Advocacy Coalition Framework) which I explain to students first because I now do well in google search (go on, give it a try!).

I also say this because I didn’t anticipate this outcome when I wrote those posts. Now, a few years on, I’m worried that they are not very good. They were summaries of chapters from Understanding Public Policy, rather than first-principles discussions, and lots of people have told me that UPP is a little bit complicated for the casual reader. So, when revising it, I hope to make it better, and by better I mean to appeal to a wider audience without dumping the insights. I have begun by trying to write 500-word posts as, I hope, improvements on the 1000-word versions. However, I am also open to advice on the originals. Which ones work, and which ones don’t? Where are the gaps in exposition? Where are the gaps in content?

This post is 500 words.

https://paulcairney.wordpress.com/1000-words/

https://paulcairney.wordpress.com/500-words/

Leave a comment

Filed under 1000 words, 500 words, Uncategorized

Three habits of successful policy entrepreneurs

This post is one part of a series – called Practical Lessons from Policy Theories – and it summarizes this article (PDF).

‘Policy entrepreneurs’ invest their time wisely for future reward, and possess key skills that help them adapt particularly well to their environments. They are the agents for policy change who possess the knowledge, power, tenacity, and luck to be able to exploit key opportunities. They draw on three strategies:

1. Don’t focus on bombarding policymakers with evidence.

Scientists focus on producing more evidence to reduce uncertainty, but can put people off with too much information. Entrepreneurs tell a good story, grab the audience’s interest, and then the audience demands information.

(Table 1 in the full paper)

2. By the time people pay attention to a problem it’s too late to produce a solution.

So, you produce your solution then chase problems.

(Table 2 in the full paper)

3. When your environment changes, your strategy changes.

For example, at the US federal level, you’re in the sea: a surfer waiting for the big wave. At the smaller subnational level, on a low-attention and low-budget issue, you can be Poseidon moving the ‘streams’. At the US federal level, you need to ‘soften up’ solutions over a long time to generate support. At the subnational level, or in other countries, you have more opportunity to import and adapt ready-made solutions.

(Table 3 in the full paper)

It all adds up to one simple piece of advice – timing and luck matter when making a policy case – but policy entrepreneurs know how to influence timing and help create their own luck.

Full paper: Three habits of successful policy entrepreneurs

(Note: the previous version was friendlier and more focused on entrepreneurs)

For more on ‘multiple streams’ see:

Paul Cairney and Michael Jones (2016) ‘Kingdon’s Multiple Streams Approach: What Is the Empirical Impact of this Universal Theory?’ Policy Studies Journal, 44, 1, 37-58 PDF (Annex to Cairney Jones 2016) (special issue of PSJ)

Paul Cairney and Nikos Zahariadis (2016) ‘Multiple streams analysis: A flexible metaphor presents an opportunity to operationalize agenda setting processes’ in Zahariadis, N. (eds) Handbook of Public Policy Agenda-Setting (Cheltenham: Edward Elgar) PDF see also

I use a space launch metaphor in the paper. If you prefer different images, have a look at 5 images of the policy process. If you prefer a watery metaphor (it’s your life, I suppose), click Policy Concepts in 1000 Words: Multiple Streams Analysis

For more on entrepreneurs:

18 Comments

Filed under agenda setting, Evidence Based Policymaking (EBPM), Folksy wisdom, public policy, Storytelling

Practical Lessons from Policy Theories

These links to blog posts (the underlined headings) and tweets (with links to their full article) describe a new special issue of Policy and Politics, published in April 2018 and free to access until the end of May.


Three habits of successful policy entrepreneurs

Telling stories that shape public policy

How to design ‘maps’ for policymakers relying on their ‘internal compass’

Three ways to encourage policy learning

How can governments better collaborate to address complex problems?

How do we get governments to make better decisions?

How to navigate complex policy designs

Why advocacy coalitions matter and how to think about them

None of these abstract theories provide a ‘blueprint’ for action (they were designed primarily to examine the policy process scientifically). Instead, they offer one simple insight: you’ll save a lot of energy if you engage with the policy process that exists, not the one you want to see.

Then, they describe variations on the same themes, including:

  1. There are profound limits to the power of individual policymakers: they can only process so much information, have to ignore almost all issues, and therefore tend to share policymaking with many other actors.
  2. You can increase your chances of success if you work with that insight: identify the right policymakers, the ‘venues’ in which they operate, and the ‘rules of the game’ in each venue; build networks and form coalitions to engage in those venues; shape agendas by framing problems and telling good stories, design politically feasible solutions, and learn how to exploit ‘windows of opportunity’ for their selection.

Background to the special issue

Chris Weible and I asked a group of policy theory experts to describe the ‘state of the art’ in their field and the practical lessons that they offer.

Our next presentation was at the ECPR in Oslo.

The final articles in this series are now complete, but our introduction discusses the potential for more useful contributions.


20 Comments

Filed under agenda setting, Evidence Based Policymaking (EBPM), public policy

I know my audience, but does my other audience know I know my audience?

‘Know your audience’ is a key phrase for anyone trying to convey a message successfully. To ‘know your audience’ is to understand the rules they use to make sense of your message, and therefore the adjustments you have to make to produce an effective message. Simple examples include:

  • The sarcasm rules. The first rule is fairly explicit. If you want to insult someone’s shirt, you (a) say ‘nice shirt, pal’, but also (b) use facial expressions or unusual speech patterns to signal that you mean the opposite of what you are saying. Otherwise, you’ve inadvertently paid someone a compliment, which is just not on. The second rule is implicit. Sarcasm is sometimes OK – as a joke or as some nice passive aggression – and a direct insult (‘that shirt is shite, pal’) as a joke is harder to pull off.
  • The joke rule. If you say that you went to the doctor because a strawberry was growing out of your arse and the doctor gave you some cream for it, you’d expect your audience to know you were joking because it’s such a ridiculous scenario and there’s a pun. Still, there’s a chance that, if you say it quickly, with a straight face, your audience is not expecting a joke, and/ or your audience’s first language is not English, your audience will take you seriously, if only for a second. It’s hilarious if your audience goes along with you, and a bit awkward if your audience asks kindly about your welfare.
  • Keep it simple stupid. If someone says KISS, or some modern equivalent – ‘it’s the economy, stupid’ – the rule is that, generally, they are not calling you stupid (even though the insertion of the comma, in modern phrases, makes it look like they are). They are referring to the value of a simple design or explanation that as many people as possible can understand. If your audience doesn’t know the phrase, they may think you’re calling them stupid, stupid.

These rules can be analysed from various perspectives: linguistics, focusing on how and why rules of language develop; and philosophy, to help articulate how and why rules matter in sense making.

There is also a key role for psychological insights, since – for example – a lot of these rules relate to the routine ways in which people engage emotionally with the ‘signals’ or information they receive.

Think of the simple example of twitter engagement, in which people with emotional attachments to one position over another (say, pro- or anti- Brexit), respond instantly to a message (say, pro- or anti- Brexit). While some really let themselves down when they reply with their own tweet, and others don’t say a word, neither audience is immune from that emotional engagement with information. So, to ‘know your audience’ is to anticipate and adapt to the ways in which they will inevitably engage ‘rationally’ and ‘irrationally’ with your message.

I say this partly because I’ve been messing around with some simple ‘heuristics’ built on insights from psychology, including Psychology Based Policy Studies: 5 heuristics to maximise the use of evidence in policymaking.

Two audiences in the study of ‘evidence based policymaking’

I also say it because I’ve started to notice a big unintended consequence of knowing my audience: my one audience doesn’t like the message I’m giving the other. It’s a bit like gossip: maybe you only get away with it if only one audience is listening. If they are both listening, one audience seems to appreciate some new insights, while the other wonders if I’ve ever read a political science book.

The problem here is that two audiences have different rules to understand the messages that I help send. Let’s call them ‘science’ and ‘political science’ (please humour me – you’ve come this far). Then, let’s make some heroic binary distinctions in the rules each audience would use to interpret similar issues in a very different way.

I could go on with these provocative distinctions, but you get the idea. A belief taken for granted in one field will be treated as controversial in another. In one day, you can go to one workshop and hear the story of objective evidence, post-truth politics, and irrational politicians with low political will to select evidence-based policies, then go to another workshop and hear the story of subjective knowledge claims.

Or, I can give the same presentation and get two very different reactions. If these are the expectations of each audience, they will interpret and respond to my messages in very different ways.

So, imagine I use some psychology insights to appeal to the ‘science’ audience. I know that, to keep it on side and receptive to my ideas, I should begin by being sympathetic to its aims. My implicit story is along the lines of: ‘if you believe in the primacy of science and seek evidence-based policy, here is what you need to do: adapt to irrational policymaking and find out where the action is in a complex policymaking system’. Then, if I’m feeling energetic and provocative, I’ll slip in some discussion of knowledge claims, saying something like ‘politicians (and, by the way, some other scholars) don’t share your views on the hierarchy of evidence’, or inviting my audience to reflect on how far they’d go to override the beliefs of other people (such as the local communities or service users most affected by the evidence-based policies that seem most effective).

The problem with this story is that key parts are implicit and, by appearing to go along with one audience, I provoke a reaction in the other: don’t you know that many people have valid knowledge claims? Politics is about values and power, don’t you know?

So, that’s where I am right now. I feel like I ‘know my audience’, but I am struggling to explain to my original political science audience that I need to describe its insights in a very particular way to gain any traction with my other, science, audience. ‘Know your audience’ can only take you so far unless your other audience knows that you are engaged in knowing your audience.

If you want to know more, see:

Kathryn Oliver and I have just published an article on the relationship between evidence and policy

How far should you go to secure academic ‘impact’ in policymaking? From ‘honest brokers’ to ‘research purists’ and Machiavellian manipulators

Why doesn’t evidence win the day in policy and policymaking?

The Science of Evidence-based Policymaking: How to Be Heard

When presenting evidence to policymakers, engage with the policy process that exists, not the process you wish existed

Filed under Academic innovation or navel gazing, agenda setting, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling

Kathryn Oliver and I have just published an article on the relationship between evidence and policy

Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy?

“There is extensive health and public health literature on the ‘evidence-policy gap’, exploring the frustrating experiences of scientists trying to secure a response to the problems and solutions they raise and identifying the need for better evidence to reduce policymaker uncertainty. We offer a new perspective by using policy theory to propose research with greater impact, identifying the need to use persuasion to reduce ambiguity, and to adapt to multi-level policymaking systems”.

In the article, we use a table to describe how the policy process works, how effective actors respond to it, and the dilemma that arises for advocates of scientific evidence: should they act this way too?

We summarise this argument in two posts for:

The Guardian – If scientists want to influence policymaking, they need to understand it

Sax Institute – The evidence policy gap: changing the research mindset is only the beginning

The article is part of a wider body of work in which one or both of us considers the relationship between evidence and policy in different ways, including:

Paul Cairney, Kathryn Oliver, and Adam Wellstead (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review PDF

Paul Cairney (2016) The Politics of Evidence-Based Policy Making (PDF)

Oliver, K., Innvær, S., Lorenc, T., Woodman, J. and Thomas, J. (2014a) ‘A systematic review of barriers to and facilitators of the use of evidence by policymakers’, BMC Health Services Research, 14 (1), 2. http://www.biomedcentral.com/1472-6963/14/2

Oliver, K., Lorenc, T., & Innvær, S. (2014b) ‘New directions in evidence-based policy research: a critical analysis of the literature’, Health Research Policy and Systems, 12, 34 http://www.biomedcentral.com/content/pdf/1478-4505-12-34.pdf

Paul Cairney (2016) ‘Evidence-based best practice is more political than it looks’, Evidence and Policy

Many of my blog posts explore how people like scientists or researchers might understand and respond to the policy process:

The Science of Evidence-based Policymaking: How to Be Heard

When presenting evidence to policymakers, engage with the policy process that exists, not the process you wish existed

Policy Concepts in 1000 Words: ‘Evidence Based Policymaking’

‘Evidence-based Policymaking’ and the Study of Public Policy

How far should you go to secure academic ‘impact’ in policymaking?

Political science improves our understanding of evidence-based policymaking, but does it produce better advice?

Psychology Based Policy Studies: 5 heuristics to maximise the use of evidence in policymaking

What 10 questions should we put to evidence for policy experts?

Why doesn’t evidence win the day in policy and policymaking?

We all want ‘evidence based policy making’ but how do we do it?

How can political actors take into account the limitations of evidence-based policy-making? 5 key points

The Politics of Evidence Based Policymaking: 3 messages

The politics of evidence-based best practice: 4 messages

The politics of implementing evidence-based policies

There are more posts like this on my EBPM page

I am also guest editing a series of articles for the Open Access journal Palgrave Communications on the ‘politics of evidence-based policymaking’ and we are inviting submissions throughout 2017.

There are more details on that series here.

And finally …

… if you’d like to read about the policy theories underpinning these arguments, see Key policy theories and concepts in 1000 words and 500 words.

Filed under Evidence Based Policymaking (EBPM), public policy