Cool guy on a skateboard

At the end of this post is my favourite picture of the year.

I took it by mistake in California.

I was trying to take a free picture of crap Spiderman.

If you go down W Hollywood, you can pay $10 for a selfie with yer da dressed up as Spiderman so that he can chat up Catwoman.

[photo]

I was trying to get a free picture of Spiderman’s dad without attracting their attention.

So I asked my son to pretend to take a picture of him, to zoom into the background.

This is the first go, before we got the good pic.

[photo]

In the better pic, there is a lot going on.

  1. Spiderman’s dad saying ‘hot enough for you?’ to Catwoman

[photo]

2. The boy looking glaikit (for photographic purposes of course).

[photo]

3. And this cool guy – possibly Jesus – going past on a skateboard.

[photo]

You can also buy crêpes.

[photo]

Leave a comment

Filed under Uncategorized

Garden of Earthly Delights

If you want to watch a 20-minute Chinese video equivalent of a Bosch painting, with commentary on politics and the environment, I recommend Zhou Xiaohu’s Garden of Earthly Delights. You can see it at the White Rabbit Gallery if you don’t mind the flight to Sydney, or here if you don’t mind a few Youku ads:

http://v.youku.com/v_show/id_XMTYxOTM2ODQ3Ng.html

The part that stuck with me was the discussion of the impossibility of producing adequate rules to settle debates. The first two stills provide a nice antidote to the ‘debate me’ fools you see on Twitter or at makeshift stalls on university campuses:

[still: garden 1]

[still: garden 2]

The rest of the stills rehearse the discussions you might have in philosophy of research workshops after debating some clown on a camping chair next to your seminar room:

[stills: garden 3, garden 4, garden 5]

The final segment does an even better job than Question Time of ridiculing the idea that such debates are anything more than performances for our supporters. Farage and co come close, but there is nothing quite like the image of a giraffe and a weird mask debating through megaphones to highlight the futility of discussion under certain circumstances.

[still: garden 6]

Then some figures come along and ruin the environment while some other figures do a bit of dancing. It’s an excellent way to sum up how doomed we all are.

See also:

Policy Concepts in 1000 Words: Combining Theories


Evidence-based policymaking and the ‘new policy sciences’

[image: policy process, round 2, 25.10.18]

[I wasn’t happy with the first version, so this is version 2]

In the ‘new policy sciences’, Chris Weible and I advocate:

  • a return to Lasswell’s vision of combining policy analysis (to recommend policy change) and policy theory (to explain policy change), but
  • focusing on a far larger collection of actors (beyond a small group at the centre),
  • recognising new developments in studies of the psychology of policymaker choice, and
  • building into policy analysis the recognition that any policy solution is introduced in a complex policymaking environment over which no-one has control.

However, there is a lot of policy theory out there, and we can’t put policy theory together like Lego to produce consistent insights to inform policy analysis.

Rather, each concept in my image of the policy process represents its own literature: see these short explainers on the psychology of policymaking, actors spread across multi-level governance, institutions, networks, ideas, and socioeconomic factors/ events.

What the explainers don’t really project is the sense of debate within the literature about how best to conceptualise each concept. You can pick up their meaning in a few minutes but would need a few years to appreciate the detail and often-fundamental debate.

Ideally, we would put all of the concepts together to help explain policymaker choice within a complex policymaking environment (how else could I put up the image and present it as one source of accumulated wisdom from policy studies?). Peter John describes such accounts as ‘synthetic’. I have also co-authored work with Tanya Heikkila – in 2014 and 2017 – to compare the different ways in which ‘synthetic’ theories conceptualise the policy process.

However, note the difficulty of putting together a large collection of separate and diverse literatures into one simple model (e.g. while doing a PhD).

On that basis, I’d encourage you to think of these attempts to synthesise as stories. I tell these stories a lot, but someone else could describe theory very differently (perhaps by relying on fewer male authors or US-derived theories, in which there are very specific reference points and positivism is represented well).

The example of EBPM

I have given a series of talks to explain why we should think of ‘evidence-based policymaking’ as a myth or political slogan, not an ideal scenario or something to expect from policymaking in the real world. They usually involve encouraging framing and storytelling rather than expecting evidence to speak for itself, and rejecting the value of simple models like the policy cycle. I then put up an image of my own and encourage people to think about the implications of each concept:

[slide: simple advice from hexagon image of the policy process, 24.10.18]

I describe the advice as simple-sounding and feasible at first glance, but actually a series of Herculean* tasks:

  • There are many policymakers and influencers spread across government, so find out where the action is, or the key venues in which people are making authoritative decisions.
  • Each venue has its own ‘institutions’ – the formal and written, or informal and unwritten rules of policymaking – so learn the rules of each venue in which you engage.
  • Each venue is guided by a fundamental set of ideas – as paradigms, core beliefs, monopolies of understanding – so learn that language.
  • Each venue has its own networks – the relationships between policy makers and influencers – so build trust and form alliances within networks.
  • Policymaking attention is often driven by changes in socioeconomic factors, or routine/ non-routine events, so be prepared to exploit the ‘windows of opportunity’ to present your solution during heightened attention to a policy problem.

Further, policy theories/ studies help us understand the context in which people make such choices. For example, consider the story that Kathryn Oliver and I tell about the role of evidence in policymaking environments:

If there are so many potential authoritative venues, devote considerable energy to finding where the ‘action’ is (and someone specific to talk to). Even if you find the right venue, you will not know the unwritten rules unless you study them intensely. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour some sources of evidence. Research advocates can be privileged insiders in some venues and excluded completely in others. If your evidence challenges an existing paradigm, you need a persuasion strategy good enough to prompt a shift of attention to a policy problem and a willingness to understand that problem in a new way. You can try to find the right time to use evidence to exploit a crisis leading to major policy change, but the opportunities are few and chances of success low. In that context, policy studies recommend investing your time over the long term – to build up alliances, trust in the messenger, knowledge of the system, and to seek ‘windows of opportunity’ for policy change – but offer no assurances that any of this investment will ever pay off.

As described, this focus on the new policy sciences and synthesising insights helps explain why ‘the politics of evidence-based policymaking’ is equally important to civil servants (my occasional audience) as researchers (my usual audience).

To engage in skilled policy analysis, and give good advice, is to recognise the ways in which policymakers combine cognition and emotion to engage with evidence, and the ways in which they must navigate a complex policymaking environment when designing or selecting technically and politically feasible solutions. To give good advice is to recognise what you want policymakers to do, but also that they are not in control of the consequences.

From one story to many?

However, I tell these stories without my audience having the time to look further into each theory and its individual insights. If they do have a little more time, I go into the possible contribution of individual insights to debate.

For example, they adapt insights from psychology in different ways …

  • PET shows the overall effect of policymaker psychology on policy change: they combine cognition and emotion to pay disproportionate attention to a small number of issues (contributing to major change) and ignore the rest (contributing to ‘hyperincremental’ change).
  • The IAD focuses partly on the rules and practices that actors develop to build up trust in each other.
  • The ACF describes actors going into politics to turn their beliefs into policy, forming coalitions with people who share their beliefs, then often romanticising their own cause and demonising their opponents.
  • The NPF describes the relative impact of stories on audiences who use cognitive shortcuts to (for example) identify with a hero and draw a simple moral.
  • SCPD describes policymakers drawing on gut feeling to identify good and bad target populations.
  • Policy learning involves using cognition and emotion to acquire new knowledge and skills.

… even though the pace of change in psychological research often seems faster than the ways in which policy studies can incorporate new and reliable insights.

They also present different conceptions of the policymaking environment in which actors make choices. See this post for more on this discussion in relation to EBPM.

My not-brilliant conclusion is that:

  1. Policy theory/ policy studies has a lot to offer other disciplines and professions, particularly in fields like EBPM in which we need to account for politics and, more importantly, policymaking systems, but
  2. Beware any policy theory story that presents the source literature as coherent and consistent.
  3. Rather, any story of the field involves a series of choices about what counts as a good theory and good insight.
  4. In other words, the exhortation to think more about what counts as ‘good evidence’ applies just as much to political science as any other.

Postscript: well, that is the last of the posts for my ANZSOG talks. If I’ve done this properly, there should now be a loop of talks. It should be possible to go back to the first one and see it as a sequel to this one!

Or, for more on theory-informed policy analysis – in other words, where the ‘new policy sciences’ article is taking us – here is how I describe it to students doing a policy analysis paper (often for the first time).

Or, have a look at the earlier discussion of images of the policy process. You may have noticed that there is a different image in this post (knocked up in my shed at the weekend). It’s because I am experimenting with shapes. Does the image with circles look more relaxing? Does the hexagonal structure look complicated even though it is designed to simplify? Does it matter? I think so. People engage emotionally with images. They share them. They remember them. So, I need an image more memorable than the policy cycle.


Paul Cairney Brisbane EBPM New Policy Sciences 25.10.18


*I welcome suggestions on another word to describe almost-impossibly-hard

2 Comments

Filed under agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, Psychology Based Policy Studies, public policy

Evidence-based policymaking and the ‘new policy sciences’

[image: circle image of the policy process, 24.10.18]

I have given a series of talks to explain why we should think of ‘evidence-based policymaking’ as a myth or political slogan, not an ideal scenario or something to expect from policymaking in the real world. They usually involve encouraging framing and storytelling rather than expecting evidence to speak for itself, and rejecting the value of simple models like the policy cycle. I then put up an image of my own and encourage people to think about the implications of each concept:

[slide: simple advice from hexagon image of the policy process, 24.10.18]

I describe the advice as simple-sounding and feasible at first glance, but actually a series of Herculean* tasks:

  • There are many policymakers and influencers spread across government, so find out where the action is, or the key venues in which people are making authoritative decisions.
  • Each venue has its own ‘institutions’ – the formal and written, or informal and unwritten rules of policymaking – so learn the rules of each venue in which you engage.
  • Each venue is guided by a fundamental set of ideas – as paradigms, core beliefs, monopolies of understanding – so learn that language.
  • Each venue has its own networks – the relationships between policy makers and influencers – so build trust and form alliances within networks.
  • Policymaking attention is often driven by changes in socioeconomic factors, or routine/ non-routine events, so be prepared to exploit the ‘windows of opportunity’ to present your solution during heightened attention to a policy problem.

In most cases, we don’t have time to discuss a more fundamental issue (at least for researchers using policy theory and political science concepts):

From where did these concepts come, and how well do we know them?

To cut a long story short, each concept represents its own literature: see these short explainers on the psychology of policymaking, actors spread across multi-level governance, institutions, networks, ideas, and socioeconomic factors/ events. What the explainers don’t really project is the sense of debate within the literature about how best to conceptualise each concept. You can pick up their meaning in a few minutes but would need a few years to appreciate the detail and often-fundamental debate.

Ideally, we would put all of the concepts together to help explain policymaker choice within a complex policymaking environment (how else could I put up the image and present it as one source of accumulated wisdom from policy studies?). Peter John describes such accounts as ‘synthetic’. I have also co-authored work with Tanya Heikkila – in 2014 and 2017 – to compare the different ways in which ‘synthetic’ theories conceptualise the policy process. However, note the difficulty of putting together a large collection of separate and diverse literatures into one simple model (e.g. while doing a PhD).

The new policy sciences

More recently, in the ‘new policy sciences’, Chris Weible and I present a more provocative story of these efforts, in which we advocate:

  • a return to Lasswell’s vision of combining policy analysis (to recommend policy change) and policy theory (to explain policy change), but
  • focusing on a far larger collection of actors (beyond a small group at the centre),
  • recognising new developments in studies of the psychology of policymaker choice, and
  • building into policy analysis the recognition that any policy solution is introduced in a complex policymaking environment over which no-one has control.

This focus on psychology is not new …

  • PET shows the overall effect of policymaker psychology on policy change: they combine cognition and emotion to pay disproportionate attention to a small number of issues (contributing to major change) and ignore the rest (contributing to ‘hyperincremental’ change).
  • The IAD focuses partly on the rules and practices that actors develop to build up trust in each other.
  • The ACF describes actors going into politics to turn their beliefs into policy, forming coalitions with people who share their beliefs, then often romanticising their own cause and demonising their opponents.
  • The NPF describes the relative impact of stories on audiences who use cognitive shortcuts to (for example) identify with a hero and draw a simple moral.
  • SCPD describes policymakers drawing on gut feeling to identify good and bad target populations.
  • Policy learning involves using cognition and emotion to acquire new knowledge and skills.

… but the pace of change in psychological research often seems faster than the ways in which policy studies can incorporate new and reliable insights.

Perhaps more importantly, policy studies help us understand the context in which people make such choices. For example, consider the story that Kathryn Oliver and I tell about the role of evidence in policymaking environments:

If there are so many potential authoritative venues, devote considerable energy to finding where the ‘action’ is (and someone specific to talk to). Even if you find the right venue, you will not know the unwritten rules unless you study them intensely. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour some sources of evidence. Research advocates can be privileged insiders in some venues and excluded completely in others. If your evidence challenges an existing paradigm, you need a persuasion strategy good enough to prompt a shift of attention to a policy problem and a willingness to understand that problem in a new way. You can try to find the right time to use evidence to exploit a crisis leading to major policy change, but the opportunities are few and chances of success low. In that context, policy studies recommend investing your time over the long term – to build up alliances, trust in the messenger, knowledge of the system, and to seek ‘windows of opportunity’ for policy change – but offer no assurances that any of this investment will ever pay off.

Then, have a look at this discussion of ‘synthetic’ policy theories, designed to prompt people to consider how far they would go to get their evidence into policy.

Theory-driven policy analysis

As described, this focus on the new policy sciences helps explain why ‘the politics of evidence-based policymaking’ is equally important to civil servants (my occasional audience) as researchers (my usual audience).

To engage in skilled policy analysis, and give good advice, is to recognise the ways in which policymakers combine cognition and emotion to engage with evidence, and the ways in which they must navigate a complex policymaking environment when designing or selecting technically and politically feasible solutions. To give good advice is to recognise what you want policymakers to do, but also that they are not in control of the consequences.

Epilogue

Well, that is the last of the posts for my ANZSOG talks. If I’ve done this properly, there should now be a loop of talks. It should be possible to go back to the first one in Auckland and see it as a sequel to this one in Brisbane!

Or, for more on theory-informed policy analysis – in other words, where the ‘new policy sciences’ article is taking us – here is how I describe it to students doing a policy analysis paper (often for the first time).

Or, have a look at the earlier discussion of images of the policy process. You may have noticed that there is a different image in this post (knocked up in my shed at the weekend). It’s because I am experimenting with shapes. Does the image with circles look more relaxing? Does the hexagonal structure look complicated even though it is designed to simplify? Does it matter? I think so. People engage emotionally with images. They share them. They remember them. So, I need an image more memorable than the policy cycle.


Paul Cairney Brisbane EBPM New Policy Sciences 25.10.18


*I welcome suggestions on another word to describe almost-impossibly-hard

2 Comments

Filed under agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, Psychology Based Policy Studies, public policy, Storytelling

Theory and Practice: How to Communicate Policy Research beyond the Academy

Notes for my first talk at the University of Queensland, Wednesday 24th October, 12.30pm, Graduate Centre, room 402.

Here is the powerpoint that I tend to use to inform discussions with civil servants (CS). I first used it for discussion with CS in the Scottish and UK governments, followed by remarkably similar discussions in parts of New Zealand and Australian government. Partly, it provides a way into common explanations for gaps between the supply of, and demand for, research evidence. However, it also provides a wider context within which to compare abstract and concrete reasons for those gaps, which inform a discussion of possible responses at individual, organisational, and systemic levels. Some of the gap is caused by a lack of effective communication, but we should also discuss the wider context in which such communication takes place.

I begin by telling civil servants about the message I give to academics about why policymakers might ignore their evidence:

  1. There are many claims to policy relevant knowledge.
  2. Policymakers have to ignore most evidence.
  3. There is no simple policy cycle in which we all know at what stage to provide what evidence.

[slide 3, 24.10.18]

In such talks, I go into different images of policymaking, comparing the simple policy cycle with images of ‘messy’ policymaking, then introducing my own image which describes the need to understand the psychology of choice within a complex policymaking environment.

Under those circumstances, key responses include:

  • framing evidence in terms of the ways in which your audience understands policy problems
  • engaging in networks to identify and exploit the right time to act, and
  • venue shopping to find sympathetic audiences in different parts of political systems.

However, note the context of those discussions. I tend to be speaking with scientific researcher audiences to challenge some preconceptions about: what counts as good evidence, how much evidence we can reasonably expect policymakers to process, and how easy it is to work out where and when to present evidence. It’s generally a provocative talk, to identify the massive scale of the evidence-to-policy task, not a simple ‘how to do it’ guide.

In that context, I suggest to civil servants that many academics might be interested in more CS engagement, but might be put off by the overwhelming scale of their task, and – even if they remained undeterred – would face some practical obstacles:

  1. They may not know where to start: who should they contact to start making connections with policymakers?
  2. The incentives and rewards for engagement may not be clear. The UK’s ‘impact’ agenda has changed things, but not to the extent that any engagement is good engagement. Researchers need to tell a convincing story that they made an impact on policy/ policymakers with their published research, so there is a notional tipping point of engagement in which it reaches a scale that makes it worth doing.
  3. The costs are clearer. For example, any time spent doing engagement is time away from writing grant proposals and journal articles (in other words, the things that still make careers).
  4. The rewards and costs are not spread evenly. Put most simply, white male professors may have the most opportunities and face the fewest penalties for engagement in policymaking and social media. Or, the opportunities and rewards may vary markedly by discipline. In some, engagement is routine. In others, it is time away from core work.

In that context, I suggest that CS should:

  • provide clarity on what they expect from academics, and when they need information
  • describe what they can offer in return (which might be as simple as a written and signed acknowledgement of impact, or formal inclusion on an advisory committee).
  • show some flexibility: you may have a tight deadline, but can you reasonably expect an academic to drop what they are doing at short notice?
  • engage routinely with academics, to help form networks and identify the right people at the right time.

These introductory discussions provide a way into common descriptions of the gap between academics and policymakers:

  • Technical languages/ jargon to describe their work
  • Timescales to supply and demand information
  • Professional incentives (such as to value scientific novelty in academia but evidential synthesis in government)
  • Comfort with uncertainty (often, scientists project relatively high uncertainty and don’t want to get ahead of the evidence; often policymakers need to project certainty and decisiveness)
  • Assessments of the relative value of scientific evidence compared to other forms of policy-relevant information
  • Assessments of the role of values and beliefs (some scientists want to draw the line between providing evidence and advice; some policymakers want them to go much further)

To discuss possible responses, I use the European Commission Joint Research Centre’s ‘knowledge management for policy’ project, in which they identify the 8 core skills of organisations bringing together the suppliers and demanders of policy-relevant knowledge:

[Figure 1]

However, I also use the following table to highlight some caution about the things we can achieve with general skills development and organisational reforms. Sometimes, the incentives to engage will remain low. Further, engagement is no guarantee of agreement.

In a nutshell, the table provides three very different models of ‘evidence-informed policymaking’ when we combine political choices about what counts as good evidence, and what counts as good policymaking (discussed at length in teaching evidence-based policy to fly). Discussion and clearer communication may help clarify our views on what makes a good model, but I doubt it will produce any agreement on what to do.

[Table 1: 3 ideal types of EBBP]

In the latter part of the talk, I go beyond that powerpoint into two broad examples of practical responses:

  1. Storytelling

The Narrative Policy Framework describes the ‘science of stories’: we can identify stories with a 4-part structure (setting, characters, plot, moral) and measure their relative impact. Jones/ Crow and Crow/ Jones provide an accessible way into these studies. Also look at Davidson’s article on the ‘grey literature’ as a rich source of stories on stories.

On one hand, I think that storytelling is a great possibility for researchers: it helps them produce a core – and perhaps emotionally engaging – message that they can share with a wider audience. Indeed, I’d see it as an extension of the process that academics are used to: identifying an audience and framing an argument according to the ways in which that audience understands the world.

On the other hand, it is important to not get carried away by the possibilities:

  • My reading of the NPF empirical work is that the most impactful stories reinforce the beliefs of the audience – to mobilise them to act – rather than change their minds.
  • Also look at the work of the Frameworks Institute, which experiments with individual versus thematic stories because people react to them in very different ways. Some might empathise with an individual story; some might judge harshly. For example, they discuss stories about low-income families and healthy eating, in which they use the theme of a maze to help people understand the lack of good choices available to people in areas with limited access to healthy food.

See: Storytelling for Policy Change: promise and problems

  2. Evidence for advocacy

The article I co-authored with Oxfam staff helps identify the lengths to which we might think we have to go to maximise the impact of research evidence. Their strategies include:

  1. Identifying the policy change they would like to see.
  2. Identifying the powerful actors they need to influence.
  3. A mixture of tactics: insider, outsider, and supporting others by, for example, boosting local civil society organisations.
  4. A mix of ‘evidence types’ for each audience.

[image: Oxfam table 2]

  5. Wider public campaigns to address the political environment in which policymakers consider choices
  6. Engaging stakeholders in the research process (often called the ‘co-production of knowledge’)
  7. Framing: personal stories, ‘killer facts’, visuals, credible messenger
  8. Exploiting ‘windows of opportunity’
  9. Monitoring, learning, trial and error

In other words, a source of success stories may provide a model for engagement or the sense that we need to work with others to engage effectively. Clear communication is one thing. Clear impact at a significant scale is another.

See: Using evidence to influence policy: Oxfam’s experience


1 Comment

Filed under agenda setting, Evidence Based Policymaking (EBPM)

Taking lessons from policy theory into practice: 3 examples

Notes for ANZSOG/ ANU Crawford School/ UNSW Canberra workshop. Powerpoint here. The recording of the lecture (skip to 2m30) and Q&A is here (right click to download mp3 or dropbox link):

The context for this workshop is the idea that policy theories could be more helpful to policymakers/ practitioners if we could all communicate more effectively with each other. Academics draw general and relatively abstract conclusions from multiple cases. Practitioners draw very similar conclusions from rich descriptions of direct experience in a smaller number of cases. How can we bring together their insights and use a language that we all understand? Or, more ambitiously, how can we use policy theory-based insights to inform the early career development training that civil servants and researchers receive?

The first step is to translate policy theories into a non-technical language by trying to speak with an audience beyond our immediate peers (see for example Practical Lessons from Policy Theories).

However, translation is not enough. A second crucial step is to consider how policymakers and practitioners are likely to make sense of theoretical insights when they apply them to particular aims or responsibilities. For example:

  1. Central government policymakers may accept the descriptive accuracy of policy theories emphasising limited central control, but not the recommendation that they should let go, share power, and describe their limits to the public.
  2. Scientists may accept key limitations to ‘evidence based policymaking’ but reject the idea that they should respond by becoming better storytellers or more manipulative operators.
  3. Researchers and practitioners struggle to resolve hard choices when combining evidence and ‘coproduction’ while ‘scaling up’ policy interventions. Evidence choice is political choice. Can we do more than merely encourage people to accept this point?

I discuss these examples below because they are closest to my heart (especially example 1). Note throughout that I am presenting one interpretation about: (1) the most promising insights, and (2) their implications for practice. Other interpretations of the literature and its implications are available. They are just a bit harder to find.

Example 1: the policy cycle endures despite its descriptive inaccuracy

[image: the policy cycle]

The policy cycle does not describe and explain the policy process well:

  • If we insist on keeping the cycle metaphor, it is more accurate to see the process as a huge set of policy cycles that connect with each other in messy and unpredictable ways.
  • The cycle approach also links strongly to the idea of ‘comprehensive rationality’ in which a small group of policymakers and analysts are in full possession of the facts and full control of the policy process. They carry out their aims through a series of stages.

Policy theories provide more descriptive and explanatory usefulness. Their insights include:

  • Limited choice. Policymakers inherit organisations, rules, and choices. Most ‘new’ choice is a revision of the old.
  • Limited attention. Policymakers must ignore almost all of the policy problems for which they are formally responsible. They pay attention to some, and delegate most responsibility to civil servants. Bureaucrats rely on other actors for information and advice, and they build relationships on trust and information exchange.
  • Limited central control. Policy may appear to be made at the ‘top’ or in the ‘centre’, but in practice policymaking responsibility is spread across many levels and types of government (many ‘centres’). ‘Street level’ actors make policy as they deliver. Policy outcomes appear to ‘emerge’ locally despite central government attempts to control their fate.
  • Limited policy change. Most policy change is minor, made and influenced by actors who interpret new evidence through the lens of their beliefs. Well-established beliefs limit the opportunities of new solutions. Governments tend to rely on trial-and-error, based on previous agreements, rather than radical policy change based on a new agenda. New solutions succeed only during brief and infrequent windows of opportunity.

However, the cycle metaphor endures because:

  • It provides a simple model of policymaking with stages that map onto important policymaking functions.
  • It provides a way to project policymaking to the public. You know how we make policy, and that we are in charge, so you know who to hold to account.

In that context, we may want to be pragmatic about our advice:

  1. One option is via complexity theory, in which scholars generally encourage policymakers to accept and describe their limits:
  • Accept routine error, reduce short-term performance management, engage more in trial and error, and ‘let go’ to allow local actors the flexibility to adapt and respond to their context.
  • However, would a government in the Westminster tradition really embrace this advice? No. They need to balance (a) pragmatic policymaking, and (b) an image of governing competence.
  2. Another option is to try to help improve an existing approach.

Further reading (blog posts):

The language of complexity does not mix well with the language of Westminster-style accountability

Making Sense of Policymaking: why it’s always someone else’s fault and nothing ever changes

Two stories of British politics: the Westminster model versus Complex Government

Example 2: how to deal with a lack of ‘evidence based policymaking’

I used to read many papers on tobacco policy, with the same basic message: we have the evidence of tobacco harm, and evidence of which solutions work, but there is an evidence-policy gap caused by too-powerful tobacco companies, low political will, and pathological policymaking. These accounts are not informed by theories of policymaking.

I then read Oliver et al’s paper on the lack of policy theory in health/ environmental scholarship on the ‘barriers’ to the use of evidence in policy. Very few articles rely on policy concepts, and most of the few rely on the policy cycle. This lack of policy theory is clear in their description of possible solutions – better communication, networking, timing, and more science literacy in government – which does not describe well the need to respond to policymaker psychology and a complex policymaking environment.

So, I wrote The Politics of Evidence-Based Policymaking and one zillion blog posts to help identify the ways in which policy theories could help explain the relationship between evidence and policy.

Since then, the highest demand to speak about the book has come from government/ public servant, NGO, and scientific audiences outside my discipline. The feedback is generally that: (a) the book’s description sums up their experience of engagement with the policy process, and (b) it may open up discussion about how to engage more effectively.

But how exactly do we turn empirical descriptions of policymaking into practical advice?

For example, scientist/ researcher audiences want to know the answer to a question like: Why don’t policymakers listen to your evidence? and so I focus on three conversation starters:

  1. they have a broader view on what counts as good evidence (see ANZSOG description)
  2. they have to ignore almost all information (a nice way into bounded rationality and policymaker psychology)
  3. they do not understand or control the process in which they seek to use evidence (a way into ‘the policy process’)

Cairney 2017 image of the policy process

We can then consider many possible responses in the sequel What can you do when policymakers ignore your evidence?

Examples include:

  • ‘How to do it’ advice. I compare tips for individuals (from experienced practitioners) with tips based on policy concepts. They are quite similar-looking tips – e.g. find out where the action is, learn the rules, tell good stories, engage allies, seek windows of opportunity – but I describe mine as 5 impossible tasks!
  • Organisational reform. I describe work with the European Commission Joint Research Centre to identify 8 skills or functions of an organisation bringing together the supply/demand of knowledge.
  • Ethical dilemmas. I use key policy theories to ask people how far they want to go to privilege evidence in policy. It’s fun to talk about these things with the type of scientist who sees any form of storytelling as manipulation.

Further reading:

Is Evidence-Based Policymaking the same as good policymaking?

A 5-step strategy to make evidence count

Political science improves our understanding of evidence-based policymaking, but does it produce better advice?

Principles of science advice to government: key problems and feasible solutions

Example 3: how to encourage realistic evidence-informed policy transfer

This focus on EBPM is useful context for discussions of ‘policy learning’ and ‘policy transfer’, and it was the focus of my ANZSOG talk entitled (rather ambitiously) ‘teaching evidence-based policy to fly’.

I’ve taken a personal interest in this one because I’m part of a project – called IMAJINE – in which we have to combine academic theory and practical responses. We are trying to share policy solutions across Europe rather than explain why few people share them!

For me, the context is potentially overwhelming.

So, when we start to focus on sharing lessons, we will have three things to discover:

  1. What is the evidence for success, and from where does it come? Governments often project success without backing it up.
  2. What story do policymakers tell about the problem they are trying to solve, the solutions they produced, and why? Two different governments may be framing and trying to solve the same problem in very different ways.
  3. Was the policy introduced in a comparable policymaking system? People tend to focus on political system comparability (e.g. is it unitary or federal?), but I think the key is in policymaking system comparability (e.g. what are the rules and dominant ideas?).

To be honest, when one of our external assessors asked me how well I thought I would do, we both smiled because the answer may be ‘not very’. In other words, the most practical lesson may be the hardest to take, although I find it comforting: the literature suggests that policymakers might ignore you for 20 years then suddenly become very (but briefly) interested in your work.


The slides are a bit wonky because I combined my old ppt to the Scottish Government with a new one for UNSW: Paul Cairney ANU Policy practical 22 October 2018

I wanted to compare how I describe things to (1) civil servants, (2) practitioners/ researchers, and (3) me, but who has the time/ desire to listen to 3 powerpoints in one go? If the answer is you, let me know and we’ll set up a Zoom call.

2 Comments

Filed under agenda setting, Evidence Based Policymaking (EBPM), IMAJINE, Policy learning and transfer

Prevention is better than cure, so why aren’t we doing more of it?

This post provides a generous amount of background for my ANZSOG talk Prevention is better than cure, so why aren’t we doing more of it? If you read all of it, it’s a long read. If not, it’s a short read before the long read. Here is the talk’s description:

‘Does this sound familiar? A new government comes into office, promising to shift the balance in social and health policy from expensive remedial, high dependency care to prevention and early intervention. They commit to better policy-making; they say they will join up policy and program delivery, devolving responsibility to the local level and focusing on long term outcomes rather than short term widgets; and that they will ensure policy is evidence-based.  And then it all gets too hard, and the cycle begins again, leaving some exhausted and disillusioned practitioners in its wake. Why does this happen repeatedly, across different countries and with governments of different persuasions, even with the best will in the world?’ 

  • You’ll see from the question that I am not suggesting that all prevention or early intervention policies fail. Rather, I use policy theories to provide a general explanation for a major gap between the (realistic) expectations expressed in prevention strategies and the actual outcomes. We can then talk about how to close that gap.
  • You’ll also see the phrase ‘even with the best will in the world’, which I think is key to this talk. No-one needs me to rehearse the usually-vague and often-stated ways to explain failed prevention policies, including the ‘wickedness’ of policy problems, or the ‘pathology’ of public policy. Rather, I show that such policies may ‘fail’ even when there is wide and sincere cross-party agreement about the need to shift from reactive to more prevention policy design. I also suggest that the general explanation for failure – low ‘political will’ – is often damaging to the chances for future success.
  • Let’s start by defining prevention policy and policymaking.

When engaged in ‘prevention’, governments seek to:

  1. Reform policy.

Prevention policy is really a collection of policies designed to intervene as early as possible in people’s lives to improve their wellbeing and reduce inequalities and/or demand for acute services. The aim is to move from reactive to preventive public services, intervening earlier in people’s lives to address a wide range of longstanding problems – including crime and anti-social behaviour, ill health and unhealthy behaviour, low educational attainment, unemployment and low employability – before they become too severe.

  2. Reform policymaking.

Preventive policymaking describes the ways in which governments reform their practices to support prevention policy, including a commitment to:

  • ‘join up’ government departments and services to solve ‘wicked problems’ that transcend one area
  • give more responsibility for service design to local public bodies, stakeholders, ‘communities’ and service users
  • produce long term aims for outcomes, and
  • reduce short term performance targets in favour of long term outcomes agreements.
  3. Ensure that policy is ‘evidence based’.

Three general reasons why ‘prevention’ policies never seem to succeed.

  1. Policymakers don’t know what prevention means.

They express a commitment to prevention before defining it fully. When they start to make sense of prevention, they find out how difficult it is to pursue, and how many controversial choices it involves (see also uncertainty versus ambiguity).

  2. They engage in a policymaking system that is too complex to control.

They try to share responsibility with many actors and coordinate action to direct policy outcomes, without the ability to design those relationships and control policy outcomes.

Yet, they also need to demonstrate to the electorate that they are in control, and find out how difficult it is to localise and centralise policy.

  3. They are unable and unwilling to produce ‘evidence based policymaking’.

Policymakers seek cognitive shortcuts (and their organisational equivalents) to gather enough information to make ‘good enough’ decisions. When they seek evidence on prevention, they find that it is patchy, inconclusive, often counter to their beliefs, and not a ‘magic bullet’ to help justify choices.

Throughout this process, their commitment to prevention policy can be sincere but unfulfilled. They do not articulate fully what prevention means or appreciate the scale of their task. When they try to deliver prevention strategies, they face several problems that, on their own, would seem daunting:

  • Many of the problems they seek to ‘prevent’ are ‘wicked’, or difficult to define and seemingly impossible to solve, such as poverty, unemployment, low quality housing and homelessness, crime, and health and education inequalities.
  • They face stark choices on how far they should go to shift the balance between state and market, redistribute wealth and income, distribute public resources, and intervene in people’s lives to change their behaviour and ways of thinking.
  • Their focus on the long term faces major competition from more salient short-term policy issues that prompt them to maintain ‘reactive’ public services.
  • Their often-sincere desire to ‘localise’ policymaking often gives way to national electoral politics, in which central governments face pressure to make policy from the ‘top’ and be decisive.
  • Their pursuit of ‘evidence based’ policymaking often reveals a lack of evidence about which policy interventions work and the extent to which they can be ‘scaled up’ successfully.

These problems will not be overcome if policymakers and influencers misdiagnose them:

  • If policy influencers make the simplistic assumption that this problem is caused by low political will, they will provide bad advice.
  • If new policymakers truly think that the problem was the low commitment and competence of their predecessors, they will begin with the same high hopes about the impact they can make, only to become disenchanted when they see the difference between their abstract aims and real world outcomes.
  • Poor explanation of limited success contributes to the high potential for (a) an initial period of enthusiasm and activity, replaced by (b) disenchantment and inactivity, and (c) for this cycle to be repeated without resolution.

Let’s add more detail to these general explanations:

1. What makes prevention so difficult to define?

When viewed as a simple slogan, ‘prevention’ seems like an intuitively appealing aim. It can generate cross-party consensus, bringing together groups on the ‘left’, seeking to reduce inequalities, and on the ‘right’, seeking to reduce economic inactivity and the cost of services.

Such consensus is superficial and illusory. When making a detailed strategy, prevention is open to many interpretations by many policymakers. Imagine the many types of prevention policy and policymaking that we could produce:

  1. What problem are we trying to solve?

Prevention policymaking represents a heroic solution to several crises: major inequalities, underfunded public services, and dysfunctional government.

  2. On what measures should we focus?

On which inequalities should we focus primarily? Wealth, occupation, income, race, ethnicity, gender, sexuality, disability, mental health.

On which measures of inequality? Economic, health, healthy behaviour, education attainment, wellbeing, punishment.

  3. On what solution should we focus?

To reduce poverty and socioeconomic inequalities, improve national quality of life, reduce public service costs, or increase value for money?

  4. Which ‘tools’ or policy instruments should we use?

Redistributive policies to address ‘structural’ causes of poverty and inequality?

Or, individual-focused policies to: (a) boost the mental ‘resilience’ of public service users, (b) oblige, or (c) exhort people to change behaviour.

  5. How do we intervene as early as possible in people’s lives?

Primary prevention. Focus on the whole population to stop a problem occurring by investing early and/or modifying the social or physical environment. Akin to whole-population immunizations.

Secondary prevention. Focus on at-risk groups to identify a problem at a very early stage to minimise harm.

Tertiary prevention. Focus on affected groups to stop a problem getting worse.

  6. How do we pursue ‘evidence based policymaking’? Three ideal-types:

Using randomised control trials and systematic review to identify the best interventions?

Storytelling to share best governance practice?

‘Improvement’ methods to experiment on a small scale and share best practice?

  7. How does evidence gathering connect to long-term policymaking?

Does a national strategy drive long-term outcomes?

Does central government produce agreements with or targets for local authorities?

  8. Is preventive policymaking a philosophy or a profound reform process?

How serious are national governments – about localism, service user-driven public services, and joined up or holistic policymaking – when their elected policymakers are held to account for outcomes?

  9. What is the nature of state intervention?

It may be punitive or supportive. See: How would Lisa Simpson and Monty Burns make progressive social policy?

2. Making ‘hard choices’: what problems arise when politics meets policymaking?

When policymakers move from idiom and broad philosophy towards specific policies and practices, they find a range of obstacles, including:

The scale of the task becomes overwhelming, and not suited to electoral cycles.

Developing policy and reforming policymaking takes time, and the effect may take a generation to see.

There is competition for policymaking resources such as attention and money.

Prevention is general, long-term, and low salience. It competes with salient short-term problems that politicians feel compelled to solve first.

Prevention is akin to capital investment with no guarantee of a return. Reductions in funding for ‘fire-fighting’ or ‘frontline’ services, to pay for prevention initiatives, are hard to sell. Governments invest in small steps, and investment is vulnerable when money is needed quickly to fund public service crises.

The benefits are difficult to measure and see.

Short-term impacts are hard to measure, long-term impacts are hard to attribute to a single intervention, and prevention does not necessarily save money (or provide ‘cashable’ savings).

Reactive policies have a more visible impact, such as to reduce hospital waiting times or increase the number of teachers or police officers.

Problems are ‘wicked’.

Getting to the ‘root causes’ of problems is not straightforward; policymakers often have no clear sense of the cause of problems or effect of solutions. Few aspects of prevention in social policy resemble disease prevention, in which we know the cause of many diseases, how to screen for them, and how to prevent them in a population.

Performance management is not conducive to prevention.

Performance management systems encourage public sector managers to focus on their services’ short-term and measurable targets over shared aims with public service partners or the wellbeing of their local populations.

Performance management is about setting priorities when governments have too many aims to fulfil. When central governments encourage local governing bodies to form long-term partnerships to address inequalities and meet short-term targets, the latter come first.

Governments face major ethical dilemmas.

Political choices co-exist with normative judgements concerning the role of the state and personal responsibility, often undermining cross-party agreement.

One aspect of prevention may undermine the other.

A cynical view of prevention initiatives is that they represent a quick political fix rather than a meaningful long-term solution:

  • Central governments describe prevention as the solution to public sector costs while also delegating policymaking responsibility to, and reducing the budgets of, local public bodies.
  • Then, public bodies prioritise their most pressing statutory responsibilities.

Someone must be held to account.

If everybody is involved in making and shaping policy, it becomes unclear who can be held to account over the results. This outcome is inconsistent with Westminster-style democratic accountability in which we know who is responsible and therefore who to praise or blame.

3. ‘The evidence’ is not a ‘magic bullet’

In a series of other talks, I identify the reasons why ‘evidence based policymaking’ (EBPM) does not describe the policy process well.

Elsewhere, I also suggest that it is more difficult for evidence to ‘win the day’ in the broad area of prevention policy compared to the more specific field of tobacco control.

Generally speaking, a good simple rule about EBPM is that there is never a ‘magic bullet’ to take the place of judgement. Politics is about making choices which benefit some while others lose out. You can use evidence to help clarify those choices, but not produce a ‘technical’ solution.

A further rule with ‘wicked’ problems is that the evidence is not good enough even to generate clarity about the cause of the problem. Or, you simply find out things you don’t want to hear.

Early intervention in ‘families policies’ seems to be a good candidate for the latter, for three main reasons:

  1. Very few interventions live up to the highest evidence standards

There are two main types of relevant ‘evidence based’ interventions in this field.

The first are ‘family intervention projects’ (FIPs). They generally focus on low income, often lone parent, families at risk of eviction linked to factors such as antisocial behaviour, and provide two forms of intervention:

  • intensive 24/7 support, including after school clubs for children and parenting skills classes, and treatment for addiction or depression in some cases, in dedicated core accommodation with strict rules on access and behaviour
  • an outreach model of support and training.

The evidence of success comes from evaluation plus a counterfactual: this intervention is expensive, but we think that it would have cost far more money and heartache if we had not intervened. There is generally no randomised control trial (RCT) to establish the cause of improved outcomes, or demonstrate that those outcomes would not have happened without this intervention.

The second are projects imported from other countries (primarily the US and Australia) based on their reputation for success built on RCT evidence. There is more quantitative evidence of success, but it is still difficult to know if the project can be transferred effectively and if its success can be replicated in another country with very different political drivers, problems, and services.

2. The evidence on ‘scaling up’ for primary prevention is relatively weak

Kenneth Dodge (2009) sums up a general problem:

  • there are few examples of taking effective specialist projects ‘to scale’
  • there are major issues around ‘fidelity’ to the original project when you scale up (including the need to oversee a major expansion in well-trained practitioners)
  • it is difficult to predict the effect of a programme, which showed promise when applied to one population, to a new and different population.

3. The evidence on secondary early intervention is also weak

This point about different populations with different motivations is demonstrated in a more recent (published 2014) study by Stephen Scott et al of two Incredible Years interventions – to address ‘oppositional defiant disorder symptoms and antisocial personality character traits’ in children aged 3-7 (for a wider discussion of such programmes see the Early Intervention Foundation’s Foundations for life: what works to support parent child interaction in the early years?).

They highlight a classic dilemma in early intervention: the evidence of effectiveness is only clear when children have been clinically referred (‘indicated approach’), but unclear when children have been identified as high risk using socioeconomic predictors (‘selective approach’):

An indicated approach is simpler to administer, as there are fewer children with severe problems, they are easier to identify, and their parents are usually prepared to engage in treatment; however, the problems may already be too entrenched to treat. In contrast, a selective approach targets milder cases, but because problems are less established, whole populations have to be screened and fewer cases will go on to develop serious problems.

For our purposes, this may represent the most inconvenient form of evidence on early intervention: you can intervene early on the back of very limited evidence of likely success, or have a far higher likelihood of success when you intervene later, when you are running out of time to call it ‘early intervention’.

Conclusion: vague consensus is no substitute for political choice

Governments begin with the sense that they have found the solution to many problems, only to find that they have to make and defend highly ‘political’ choices.

For example, see the UK government’s ‘imaginative’ use of evidence to make families policy. In a nutshell, it chose to play fast and loose with evidence, and demonise 117,000 families, to provide political cover for a redistribution of resources to family intervention projects.

We can, with good reason, object to this style of politics. However, we would also have to produce a feasible alternative.

For example, the Scottish Government has taken a different approach (perhaps closer to what one might often expect in New Zealand), but it still needs to produce and defend a story about its choices, and it faces almost the same constraints as the UK. Its self-described ‘decisive shift’ to prevention was not a decisive shift to prevention.

Overall, prevention is no different from any other policy area, except that it has proven to be much more complicated and difficult to sustain than most others. Prevention is part of an excellent idiom but not a magic bullet for policy problems.

Further reading:

Prevention

See also

What do you do when 20% of the population causes 80% of its problems? Possibly nothing.

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

1 Comment

Filed under Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy