
Evidence-based policymaking and the ‘new policy sciences’

[Image: policy process, round 2, 25.10.18]

[I wasn’t happy with the first version, so this is version 2]

In the ‘new policy sciences’, Chris Weible and I advocate:

  • a return to Lasswell’s vision of combining policy analysis (to recommend policy change) and policy theory (to explain policy change), but
  • focusing on a far larger collection of actors (beyond a small group at the centre),
  • recognising new developments in studies of the psychology of policymaker choice, and
  • building into policy analysis the recognition that any policy solution is introduced in a complex policymaking environment over which no-one has control.

However, there is a lot of policy theory out there, and we can’t put policy theory together like Lego to produce consistent insights to inform policy analysis.

Rather, each concept in my image of the policy process represents its own literature: see these short explainers on the psychology of policymaking, actors spread across multi-level governance, institutions, networks, ideas, and socioeconomic factors/ events.

What the explainers don’t really project is the sense of debate within the literature about how best to conceptualise each concept. You can pick up their meaning in a few minutes but would need a few years to appreciate the detail and often-fundamental debate.

Ideally, we would put all of the concepts together to help explain policymaker choice within a complex policymaking environment (how else could I put up the image and present it as one source of accumulated wisdom from policy studies?). Peter John describes such accounts as ‘synthetic’. I have also co-authored work with Tanya Heikkila – in 2014 and 2017 – to compare the different ways in which ‘synthetic’ theories conceptualise the policy process.

However, note the difficulty of putting together a large collection of separate and diverse literatures into one simple model (e.g. while doing a PhD).

On that basis, I’d encourage you to think of these attempts to synthesise as stories. I tell these stories a lot, but someone else could describe theory very differently (perhaps by relying on fewer male authors, or fewer US-derived theories in which there are very specific reference points and positivism is well represented).

The example of EBPM

I have given a series of talks to explain why we should think of ‘evidence-based policymaking’ as a myth or political slogan, not an ideal scenario or something to expect from policymaking in the real world. They usually involve encouraging framing and storytelling rather than expecting evidence to speak for itself, and rejecting the value of simple models like the policy cycle. I then put up an image of my own and encourage people to think about the implications of each concept:

[Slide: simple advice from hexagon image of the policy process, 24.10.18]

I describe the advice as simple-sounding and feasible at first glance, but actually a series of Herculean* tasks:

  • There are many policymakers and influencers spread across government, so find out where the action is, or the key venues in which people are making authoritative decisions.
  • Each venue has its own ‘institutions’ – the formal and written, or informal and unwritten rules of policymaking – so learn the rules of each venue in which you engage.
  • Each venue is guided by a fundamental set of ideas – as paradigms, core beliefs, monopolies of understanding – so learn that language.
  • Each venue has its own networks – the relationships between policy makers and influencers – so build trust and form alliances within networks.
  • Policymaking attention is often driven by changes in socioeconomic factors, or routine/ non-routine events, so be prepared to exploit the ‘windows of opportunity’ to present your solution during heightened attention to a policy problem.

Further, policy theories/ studies help us understand the context in which people make such choices. For example, consider the story that Kathryn Oliver and I tell about the role of evidence in policymaking environments:

If there are so many potential authoritative venues, devote considerable energy to finding where the ‘action’ is (and someone specific to talk to). Even if you find the right venue, you will not know the unwritten rules unless you study them intensely. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour some sources of evidence. Research advocates can be privileged insiders in some venues and excluded completely in others. If your evidence challenges an existing paradigm, you need a persuasion strategy good enough to prompt a shift of attention to a policy problem and a willingness to understand that problem in a new way. You can try to find the right time to use evidence to exploit a crisis leading to major policy change, but the opportunities are few and chances of success low. In that context, policy studies recommend investing your time over the long term – to build up alliances, trust in the messenger, and knowledge of the system, and to seek ‘windows of opportunity’ for policy change – but offer no assurances that any of this investment will ever pay off.

As described, this focus on the new policy sciences and synthesising insights helps explain why ‘the politics of evidence-based policymaking’ is as important to civil servants (my occasional audience) as to researchers (my usual audience).

To engage in skilled policy analysis, and give good advice, is to recognise that policymakers combine cognition and emotion to engage with evidence, and that they must navigate a complex policymaking environment when designing or selecting technically and politically feasible solutions. To give good advice is to recognise what you want policymakers to do, but also that they are not in control of the consequences.

From one story to many?

However, I tell these stories without my audience having the time to look further into each theory and its individual insights. If they do have a little more time, I go into the possible contribution of individual insights to debate.

For example, the theories adapt insights from psychology in different ways …

  • Punctuated equilibrium theory (PET) shows the overall effect of policymaker psychology on policy change: they combine cognition and emotion to pay disproportionate attention to a small number of issues (contributing to major change) and ignore the rest (contributing to ‘hyperincremental’ change).
  • The institutional analysis and development (IAD) framework focuses partly on the rules and practices that actors develop to build up trust in each other.
  • The advocacy coalition framework (ACF) describes actors going into politics to turn their beliefs into policy, forming coalitions with people who share their beliefs, then often romanticising their own cause and demonising their opponents.
  • The narrative policy framework (NPF) describes the relative impact of stories on audiences who use cognitive shortcuts to (for example) identify with a hero and draw a simple moral.
  • Social construction and policy design (SCPD) describes policymakers drawing on gut feeling to identify good and bad target populations.
  • Policy learning involves using cognition and emotion to acquire new knowledge and skills.

… even though the pace of change in psychological research often seems faster than the pace at which policy studies can incorporate new and reliable insights.

They also present different conceptions of the policymaking environment in which actors make choices. See this post for more on this discussion in relation to EBPM.

My not-brilliant conclusion is that:

  1. Policy theory/ policy studies has a lot to offer other disciplines and professions, particularly in fields like EBPM in which we need to account for politics and, more importantly, policymaking systems, but
  2. Beware any policy theory story that presents the source literature as coherent and consistent.
  3. Rather, any story of the field involves a series of choices about what counts as a good theory and good insight.
  4. In other words, the exhortation to think more about what counts as ‘good evidence’ applies just as much to political science as to any other discipline.

Postscript: well, that is the last of the posts for my ANZSOG talks. If I’ve done this properly, there should now be a loop of talks. It should be possible to go back to the first one and see it as a sequel to this one!

Or, for more on theory-informed policy analysis – in other words, where the ‘new policy sciences’ article is taking us – here is how I describe it to students doing a policy analysis paper (often for the first time).

Or, have a look at the earlier discussion of images of the policy process. You may have noticed that there is a different image in this post (knocked up in my shed at the weekend). It’s because I am experimenting with shapes. Does the image with circles look more relaxing? Does the hexagonal structure look complicated even though it is designed to simplify? Does it matter? I think so. People engage emotionally with images. They share them. They remember them. So, I need an image more memorable than the policy cycle.

 

Paul Cairney Brisbane EBPM New Policy Sciences 25.10.18

 

 

 

*I welcome suggestions on another word to describe almost-impossibly-hard


Filed under agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, Psychology Based Policy Studies, public policy


Prevention is better than cure, so why aren’t we doing more of it?

This post provides a generous amount of background for my ANZSOG talk Prevention is better than cure, so why aren’t we doing more of it? If you read all of it, it’s a long read. If not, it’s a short read before the long read. Here is the talk’s description:

‘Does this sound familiar? A new government comes into office, promising to shift the balance in social and health policy from expensive remedial, high dependency care to prevention and early intervention. They commit to better policy-making; they say they will join up policy and program delivery, devolving responsibility to the local level and focusing on long term outcomes rather than short term widgets; and that they will ensure policy is evidence-based.  And then it all gets too hard, and the cycle begins again, leaving some exhausted and disillusioned practitioners in its wake. Why does this happen repeatedly, across different countries and with governments of different persuasions, even with the best will in the world?’ 

  • You’ll see from the question that I am not suggesting that all prevention or early intervention policies fail. Rather, I use policy theories to provide a general explanation for a major gap between the (realistic) expectations expressed in prevention strategies and the actual outcomes. We can then talk about how to close that gap.
  • You’ll also see the phrase ‘even with the best will in the world’, which I think is key to this talk. No-one needs me to rehearse the usually-vague and often-stated ways to explain failed prevention policies, including the ‘wickedness’ of policy problems, or the ‘pathology’ of public policy. Rather, I show that such policies may ‘fail’ even when there is wide and sincere cross-party agreement about the need to shift from reactive to more prevention policy design. I also suggest that the general explanation for failure – low ‘political will’ – is often damaging to the chances for future success.
  • Let’s start by defining prevention policy and policymaking.

When engaged in ‘prevention’, governments seek to:

  1. Reform policy.

Prevention policy is really a collection of policies designed to intervene as early as possible in people’s lives to improve their wellbeing and reduce inequalities and/or demand for acute services. The aim is to move from reactive to preventive public services, intervening earlier in people’s lives to address a wide range of longstanding problems – including crime and anti-social behaviour, ill health and unhealthy behaviour, low educational attainment, unemployment and low employability – before they become too severe.

  2. Reform policymaking.

Preventive policymaking describes the ways in which governments reform their practices to support prevention policy, including a commitment to:

  • ‘join up’ government departments and services to solve ‘wicked problems’ that transcend one area
  • give more responsibility for service design to local public bodies, stakeholders, ‘communities’ and service users
  • produce long term aims for outcomes, and
  • reduce short term performance targets in favour of long term outcomes agreements.

  3. Ensure that policy is ‘evidence based’.

Three general reasons why ‘prevention’ policies never seem to succeed.

  1. Policymakers don’t know what prevention means.

They express a commitment to prevention before defining it fully. When they start to make sense of prevention, they find out how difficult it is to pursue, and how many controversial choices it involves (see also uncertainty versus ambiguity).

  2. They engage in a policymaking system that is too complex to control.

They try to share responsibility with many actors and coordinate action to direct policy outcomes, without the ability to design those relationships and control policy outcomes.

Yet, they also need to demonstrate to the electorate that they are in control, and find out how difficult it is to localise and centralise policy.

  3. They are unable and unwilling to produce ‘evidence based policymaking’.

Policymakers seek cognitive shortcuts (and their organisational equivalents) to gather enough information to make ‘good enough’ decisions. When they seek evidence on prevention, they find that it is patchy, inconclusive, often counter to their beliefs, and not a ‘magic bullet’ to help justify choices.

Throughout this process, their commitment to prevention policy can be sincere but unfulfilled. They do not articulate fully what prevention means or appreciate the scale of their task. When they try to deliver prevention strategies, they face several problems that, on their own, would seem daunting. Many of the problems they seek to ‘prevent’ are ‘wicked’, or difficult to define and seemingly impossible to solve, such as poverty, unemployment, low quality housing and homelessness, crime, and health and education inequalities. They face stark choices on how far they should go to shift the balance between state and market, redistribute wealth and income, distribute public resources, and intervene in people’s lives to change their behaviour and ways of thinking. Their focus on the long term faces major competition from more salient short-term policy issues that prompt them to maintain ‘reactive’ public services. Their often-sincere desire to ‘localise’ policymaking often gives way to national electoral politics, in which central governments face pressure to make policy from the ‘top’ and be decisive. Their pursuit of ‘evidence based’ policymaking often reveals a lack of evidence about which policy interventions work and the extent to which they can be ‘scaled up’ successfully.

These problems will not be overcome if policymakers and influencers misdiagnose them:

  • If policy influencers make the simplistic assumption that this problem is caused by low political will, they will provide bad advice.
  • If new policymakers truly think that the problem was the low commitment and competence of their predecessors, they will begin with the same high hopes about the impact they can make, only to become disenchanted when they see the difference between their abstract aims and real world outcomes.
  • Poor explanation of limited success contributes to the high potential for (a) an initial period of enthusiasm and activity, replaced by (b) disenchantment and inactivity, and (c) for this cycle to be repeated without resolution.

Let’s add more detail to these general explanations:

1. What makes prevention so difficult to define?

When viewed as a simple slogan, ‘prevention’ seems like an intuitively appealing aim. It can generate cross-party consensus, bringing together groups on the ‘left’, seeking to reduce inequalities, and on the ‘right’, seeking to reduce economic inactivity and the cost of services.

Such consensus is superficial and illusory. When making a detailed strategy, prevention is open to many interpretations by many policymakers. Imagine the many types of prevention policy and policymaking that we could produce:

  1. What problem are we trying to solve?

Prevention policymaking represents a heroic solution to several crises: major inequalities, underfunded public services, and dysfunctional government.

  2. On what measures should we focus?

On which inequalities should we focus primarily? Wealth, occupation, income, race, ethnicity, gender, sexuality, disability, or mental health?

On which measures of inequality? Economic, health, healthy behaviour, education attainment, wellbeing, or punishment?

  3. On what solution should we focus?

To reduce poverty and socioeconomic inequalities, improve national quality of life, reduce public service costs, or increase value for money?

  4. Which ‘tools’ or policy instruments should we use?

Redistributive policies to address ‘structural’ causes of poverty and inequality?

Or, individual-focused policies to: (a) boost the mental ‘resilience’ of public service users, (b) oblige people to change their behaviour, or (c) exhort them to do so.

  5. How do we intervene as early as possible in people’s lives?

Primary prevention. Focus on the whole population to stop a problem occurring by investing early and/or modifying the social or physical environment. Akin to whole-population immunizations.

Secondary prevention. Focus on at-risk groups to identify a problem at a very early stage to minimise harm.

Tertiary prevention. Focus on affected groups to stop a problem getting worse.

  6. How do we pursue ‘evidence based policymaking’? Three ideal types:

Using randomised control trials and systematic review to identify the best interventions?

Storytelling to share best governance practice?

‘Improvement’ methods to experiment on a small scale and share best practice?

  7. How does evidence gathering connect to long-term policymaking?

Does a national strategy drive long-term outcomes?

Does central government produce agreements with or targets for local authorities?

  8. Is preventive policymaking a philosophy or a profound reform process?

How serious are national governments – about localism, service user-driven public services, and joined up or holistic policymaking – when their elected policymakers are held to account for outcomes?

  9. What is the nature of state intervention?

It may be punitive or supportive. See: How would Lisa Simpson and Monty Burns make progressive social policy?

2. Making ‘hard choices’: what problems arise when politics meets policymaking?

When policymakers move from idiom and broad philosophy towards specific policies and practices, they find a range of obstacles, including:

The scale of the task becomes overwhelming, and not suited to electoral cycles.

Developing policy and reforming policymaking takes time, and the effect may take a generation to see.

There is competition for policymaking resources such as attention and money.

Prevention is general, long-term, and low salience. It competes with salient short-term problems that politicians feel compelled to solve first.

Prevention is akin to capital investment with no guarantee of a return. Reductions in funding for ‘fire-fighting’ or ‘frontline’ services to pay for prevention initiatives are hard to sell. Governments invest in small steps, and investment is vulnerable when money is needed quickly to fund public service crises.

The benefits are difficult to measure and see.

Short-term impacts are hard to measure, long-term impacts are hard to attribute to a single intervention, and prevention does not necessarily save money (or provide ‘cashable’ savings).

Reactive policies have a more visible impact, such as to reduce hospital waiting times or increase the number of teachers or police officers.

Problems are ‘wicked’.

Getting to the ‘root causes’ of problems is not straightforward; policymakers often have no clear sense of the cause of problems or effect of solutions. Few aspects of prevention in social policy resemble disease prevention, in which we know the cause of many diseases, how to screen for them, and how to prevent them in a population.

Performance management is not conducive to prevention.

Performance management systems encourage public sector managers to focus on their services’ short-term and measurable targets over shared aims with public service partners or the wellbeing of their local populations.

Performance management is about setting priorities when governments have too many aims to fulfil. When central governments encourage local governing bodies to form long-term partnerships to address inequalities and meet short-term targets, the latter come first.

Governments face major ethical dilemmas.

Political choices co-exist with normative judgements concerning the role of the state and personal responsibility, often undermining cross-party agreement.

One aspect of prevention may undermine the other.

A cynical view of prevention initiatives is that they represent a quick political fix rather than a meaningful long-term solution:

  • Central governments describe prevention as the solution to public sector costs while also delegating policymaking responsibility to, and reducing the budgets of, local public bodies.
  • Then, public bodies prioritise their most pressing statutory responsibilities.

Someone must be held to account.

If everybody is involved in making and shaping policy, it becomes unclear who can be held to account over the results. This outcome is inconsistent with Westminster-style democratic accountability in which we know who is responsible and therefore who to praise or blame.

3. ‘The evidence’ is not a ‘magic bullet’

In a series of other talks, I identify the reasons why ‘evidence based policymaking’ (EBPM) does not describe the policy process well.

Elsewhere, I also suggest that it is more difficult for evidence to ‘win the day’ in the broad area of prevention policy compared to the more specific field of tobacco control.

Generally speaking, a good simple rule about EBPM is that there is never a ‘magic bullet’ to take the place of judgement. Politics is about making choices which benefit some while others lose out. You can use evidence to help clarify those choices, but not produce a ‘technical’ solution.

A further rule with ‘wicked’ problems is that the evidence is not good enough even to generate clarity about the cause of the problem. Or, you simply find out things you don’t want to hear.

Early intervention in ‘families policies’ seems to be a good candidate for the latter, for three main reasons:

1. Very few interventions live up to the highest evidence standards

There are two main types of relevant ‘evidence based’ interventions in this field.

The first are ‘family intervention projects’ (FIPs). They generally focus on low income, often lone parent, families at risk of eviction linked to factors such as antisocial behaviour, and provide two forms of intervention:

  • intensive 24/7 support, including after school clubs for children and parenting skills classes, and treatment for addiction or depression in some cases, in dedicated core accommodation with strict rules on access and behaviour
  • an outreach model of support and training.

The evidence of success comes from evaluation plus a counterfactual: this intervention is expensive, but we think that it would have cost far more money and heartache if we had not intervened. There is generally no randomised control trial (RCT) to establish the cause of improved outcomes, or demonstrate that those outcomes would not have happened without this intervention.

The second are projects imported from other countries (primarily the US and Australia) based on their reputation for success built on RCT evidence. There is more quantitative evidence of success, but it is still difficult to know if the project can be transferred effectively and if its success can be replicated in another country with very different political drivers, problems, and services.

2. The evidence on ‘scaling up’ for primary prevention is relatively weak

Kenneth Dodge (2009) sums up a general problem:

  • there are few examples of taking effective specialist projects ‘to scale’
  • there are major issues around ‘fidelity’ to the original project when you scale up (including the need to oversee a major expansion in well-trained practitioners)
  • it is difficult to predict the effect of a programme that showed promise in one population when it is applied to a new and different population.

3. The evidence on secondary early intervention is also weak

This point about different populations with different motivations is demonstrated in a more recent (published 2014) study by Stephen Scott et al of two Incredible Years interventions – to address ‘oppositional defiant disorder symptoms and antisocial personality character traits’ in children aged 3-7 (for a wider discussion of such programmes see the Early Intervention Foundation’s Foundations for life: what works to support parent child interaction in the early years?).

They highlight a classic dilemma in early intervention: the evidence of effectiveness is only clear when children have been clinically referred (‘indicated approach’), but unclear when children have been identified as high risk using socioeconomic predictors (‘selective approach’):

An indicated approach is simpler to administer, as there are fewer children with severe problems, they are easier to identify, and their parents are usually prepared to engage in treatment; however, the problems may already be too entrenched to treat. In contrast, a selective approach targets milder cases, but because problems are less established, whole populations have to be screened and fewer cases will go on to develop serious problems.

For our purposes, this may represent the most inconvenient form of evidence on early intervention: you can intervene early on the back of very limited evidence of likely success, or have a far higher likelihood of success when you intervene later, when you are running out of time to call it ‘early intervention’.

Conclusion: vague consensus is no substitute for political choice

Governments begin with the sense that they have found the solution to many problems, only to find that they have to make and defend highly ‘political’ choices.

For example, see the UK government’s ‘imaginative’ use of evidence to make families policy. In a nutshell, it chose to play fast and loose with evidence, and demonise 117,000 families, to provide political cover for a redistribution of resources to family intervention projects.

We can, with good reason, object to this style of politics. However, we would also have to produce a feasible alternative.

For example, the Scottish Government has taken a different approach (perhaps closer to what one might often expect in New Zealand), but it still needs to produce and defend a story about its choices, and it faces almost the same constraints as the UK. Its self-described ‘decisive shift’ to prevention was not a decisive shift to prevention.

Overall, prevention is no different from any other policy area, except that it has proven to be much more complicated and difficult to sustain than most others. Prevention is part of an excellent idiom but not a magic bullet for policy problems.

Further reading:

Prevention

See also

What do you do when 20% of the population causes 80% of its problems? Possibly nothing.

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

1 Comment

Filed under Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy

Talks and blogs: ANZSOG trip

I took a trip to New Zealand and Australia as a guest of ANZSOG. Here is a list of dates and titles for each talk. There is usually a link in the name of the city to the advert for the talk. There is also a link in each title to a blog post on the talk. Some of the talks were recorded and I will add them when I get them. In the meantime, there is also a selection of tweets at the end to prove that I’m not making up the trip.

  1. Auckland 11 October University of Auckland (mostly academic audience) Why don’t policymakers listen to my evidence (powerpoint)
  2. Auckland 12 October Auckland Art Gallery (NZ public managers from Auckland council and State sector agencies)  Teaching evidence-based policy to fly: transferring sound policies across the world (not recorded)
  3. Auckland 12 October Working with complexity – evidence based practice and practice based evidence (including Auckland Co-design Lab and Southern Initiative, roundtable discussion of this kind of thing)
  4. Wellington October 15 The politics of evidence-based policy: the expectations of academics and policymakers and ways to meet in the middle (Policy Project, Department of the Prime Minister and Cabinet, discussion based on this kind of old post, new post and powerpoint presentation )
  5. Wellington 15 October (200 NZ civil servants from Wellington-based departments and agencies) Prevention is better than cure: so why aren’t we doing more of it? (ppt)
  6. Wellington 16 October Victoria University (academic audience) The politics of evidence-based policy
  7. Wellington 17 October Victoria University (academic audience) Blogging and reaching out (see discussion of previous event and Q. Should PhD students blog? A. Yes)
  8. Melbourne 18 October (Department of Premier and Cabinet, Victoria State Government) The politics of evidence-based policy making (DPC)
  9. Sydney UNSW 19 October Why don’t policymakers listen to your evidence? (ppt presentation then roundtable on impact – videos below)
  10. Canberra (ANU) 22 October Taking lessons from policy theory into practice (lecture and discussion). Powerpoint here. Audio (skip to 2m30; or right-click to download, or use the dropbox link)
  11. Canberra (ANU) 22 October podcast ‘Why prevention policies fail
  12. Canberra (UNSW) 23 October Why might policymakers listen to your evidence? (PhD workshop, discussing elements of a ppt I used for the SGSSS and this powerpoint presentation that I’ve used with UK and Scottish government audiences and in Wellington)
  13. Brisbane 24 October University of Queensland Theory and Practice: How to Communicate Policy Research beyond the Academy
  14. Brisbane 25 October University of Queensland Evidence-based policymaking and the new policy sciences (see ppt and article with Chris Weible)

UQ isn’t really a big tweety place, so here is a picture of a lovely tree there:

2 Comments

Filed under public policy

Managing expectations about the use of evidence in policy

Notes for the #transformURE event hosted by Nuffield, 25th September 2018

I like to think that I can talk with authority on two topics that, much like a bottle of Pepsi and a pack of Mentos, you should generally keep separate:

  1. When talking at events on the use of evidence in policy, I say that you need to understand the nature of policy and policymaking to understand the role of evidence in it.
  2. When talking with students, we begin with the classic questions ‘what is policy?’ and ‘what is the policy process’, and I declare that we don’t know the answer. We define policy to show the problems with all definitions of policy, and we discuss many models and theories that only capture one part of the process. There is no ‘general theory’ of policymaking.

The problem, when you put together those statements, is that you need to understand the role of evidence within a policy process that we don’t really understand.

It’s an OK conclusion if you just want to declare that the world is complicated, but not if you seek ways to change it or operate more effectively within it.

Put less gloomily:

  • We have ways to understand key parts of the policy process. They are not ready-made to help us understand evidence use, but we can use them intelligently.
  • Most policy theories exist to explain policy dynamics, not to help us adapt effectively to them, but we can derive general lessons with often-profound implications.

Put even less gloomily, it is not too difficult to extract/ synthesise key insights from policy theories, explain their relevance, and use them to inform discussions about how to promote your preferred form of evidence use.

The only remaining problem is that, although the resultant advice looks quite straightforward, it is far easier said than done. The proposed actions are more akin to the Labours of Hercules than [PAC: insert reference to something easier].

They include:

  1. Find out where the ‘action’ is, so that you can find the right audience for your evidence. Why? There are many policymakers and influencers spread across many levels and types of government.
  2. Learn and follow the ‘rules of the game’. Why? Each policymaking venue has its own rules of engagement and evidence gathering, and the rules are often informal and unwritten.
  3. Gain access to ‘policy networks’. Why? Most policy is processed at a low level of government, beyond the public spotlight, between relatively small groups of policymakers and influencers. They build up trust as they work together, learning who is reliable and authoritative, and converging on how to use evidence to understand the nature and solution to policy problems.
  4. Learn the language. Why? Each venue has its own language to reflect dominant ideas, beliefs, or ways to understand a policy problem. In some arenas, there is a strong respect for a ‘hierarchy’ of evidence. In others, the key reference point may be value for money. In some cases, the language reflects the closing-off of some policy solutions (such as redistributing resources from one activity to another).
  5. Exploit windows of opportunity. Why? Events, and changes in socioeconomic conditions, often prompt shifts of attention to policy issues. ‘Policy entrepreneurs’ lie in wait for the right time to exploit a shift in the motive and opportunity of a policymaker to pay attention to and try to solve a problem.

So far so good, until you consider the effort it would take to achieve any of these things: you may need to devote the best part of your career to these tasks with no guarantee of success.

Put more positively, it is better to be equipped with these insights, and to appreciate the limits to our actions, than to think we can use top tips to achieve ‘research impact’ in a more straightforward way.

Kathryn Oliver and I describe these ‘how to’ tips in this post and, in a forthcoming article in Political Studies Review, use a wider focus on policymaking environments to produce a more realistic sense of what individual researchers – and research-producing organisations – could achieve.

There is some sensible-enough advice out there for individuals – produce good evidence, communicate it well, form relationships with policymakers, be available, and so on – but I would exercise caution when it begins to recommend being ‘entrepreneurial’. The opportunities to be entrepreneurial are not shared equally, most entrepreneurs fail, and we can likely better explain their success with reference to their environment than their skill.


3 Comments

Filed under agenda setting, Evidence Based Policymaking (EBPM), public policy, UK politics and policy

Emotion and reason in politics: the rational/ irrational distinction

In ‘How to communicate effectively with policymakers’, Richard Kwiatkowski and I use the distinction between ‘rational’ and ‘irrational’ cognitive shortcuts ‘provocatively’. I sort of wish we had been more direct, because I have come to realise that:

  1. My attempts to communicate with sarcasm and facial gestures may only ever appeal to a niche audience, and
  2. even if I use scare quotes – around a word like ‘irrational’ – to denote the word’s questionable use, it’s not always clear what I’m questioning, because
  3. you need to know the story behind someone’s discussion to know what they are questioning.*

So, here are some of the reference points I’m using when I tell a story about ‘irrationality’:

1. I’m often invited to be the type of guest speaker that challenges the audience, it is usually a scientific audience, and the topic is usually evidence based policymaking.

So, when I say ‘irrational’, I’m speaking to (some) scientists who think of themselves as rational and policymakers as irrational, and use this problematic distinction to complain about policy-based evidence, post-truth politics, and perhaps even the irrationality of voters for Brexit. Action based on this way of thinking would be counterproductive. In that context, I use the word ‘irrational’ as a way into some more nuanced discussions including:

  • all humans combine cognition and emotion to make choices; and,
  • emotions are one of many sources of ‘fast and frugal heuristics’ that help us make some decisions very quickly and often very well.

In other words, it is silly to complain that some people are irrational, when we are all making choices this way, and such decision-making is often a good thing.

2. This focus on scientific rationality is part of a wider discussion of what counts as good evidence or valuable knowledge. Examples include:

  • Policy debates on the value of bringing together many people with different knowledge claims – such as through user and practitioner experience – to ‘co-produce’ evidence.
  • Wider debates on the ‘decolonization of knowledge’ in which narrow ‘Western’ scientific principles help exclude the voices of many populations by undermining their claims to knowledge.

3. A focus on rationality versus irrationality is still used to maintain sexist and racist caricatures or stereotypes, and therefore dismiss people based on a misrepresentation of their behaviour.

I thought that, by now, we’d be done with dismissing women as emotional or hysterical, but apparently not. Indeed, as some recent racist and sexist coverage of Serena Williams demonstrates, the idea that black women are not rational is still tolerated in mainstream discussion.

4. Part of the reason that we can only conclude that people combine cognition and emotion, without being able to separate their effects in a satisfying way, is that the distinction is problematic.

It is difficult to demonstrate empirically. It is also difficult to assign some behaviours to one camp or the other, such as when we consider moral reasoning based on values and logic.

To sum up, I’ve been using the rational/irrational distinction explicitly to make a simple point that is relevant to the study of politics and policymaking:

  • All people use cognitive shortcuts to help them ignore almost all information about the world, to help them make decisions efficiently.
  • If you don’t understand and act on this simple insight, you’ll waste your time by trying to argue someone into submission or giving them a 500-page irrelevant report when they are looking for one page written in a way that makes sense to them.

Most of the rest has been mostly implicit, and communicated non-verbally, which is great when you want to keep a presentation brief and light, but not if you want to acknowledge nuance and more serious issues.


*which is why I’m increasingly interested in Riker’s idea of heresthetics, in which the starting point of a story is crucial. We can come to very different conclusions about a problem and its solution by choosing different starting points, to accentuate one aspect of a problem and downplay another, even when our beliefs and preferences remain basically the same.


1 Comment

Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling

How far should you go to privilege evidence? 2. Policy theories, scenarios, and ethical dilemmas

If you have read Why don’t policymakers listen to your evidence? and What can you do when policymakers ignore your evidence? then join me as we get into the thornier dilemmas in this punchline post. Maybe you already appreciate the importance of bounded rationality and policymaking complexity. Maybe you’ve read the gazillion posts which think through the relationship between ‘evidence based policymaking’ and policy theories and now hope that there’s nothing more to say. Well, you hope in vain.

This post compares some general bland advice, based on these posts, to the dilemmas that you might encounter if you really take key parts of these theories to heart.


My first aim is to compare the ‘how to’ advice that you might take from policy theories versus the grey literature.

Policy concepts describe a wider context in which to produce practical advice:

  • If there are so many potential authoritative venues, devote considerable energy to finding where the ‘action’ is.
  • Even if you find the right venue, you will not know the unwritten rules unless you study them intensely.
  • Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour some sources of evidence.
  • Research advocates can be privileged insiders in some venues and excluded completely in others.
  • If your evidence challenges an existing paradigm, you need a persuasion strategy good enough to prompt a shift of attention to a policy problem and a willingness to understand that problem in a new way.
  • You can try to find the right time to use evidence to exploit a crisis leading to major policy change, but the opportunities are few and chances of success low.
  • And so on.

In that context, theory-informed studies recommend investing your time over the long term – to build up alliances, trust in the messenger, and knowledge of the system, and to seek ‘windows of opportunity’ for policy change – but offer no assurances that any of this investment will ever pay off.

The despair does not stop there. More specific theories and studies help us combine practical considerations with the ethical dilemmas that evidence advocates face when trying to be effective in a highly political policymaking environment.

First, refresh your memory of key images of the policy process. Or, if you are on a PC, you can keep the two tabs open for comparison.

Second, consider the following staircase analogy in which the ethical dilemmas – regarding how far you should go to get attention for your evidence – seem to become more problematic with each upwards step:

Step 1: Change levels of attention to issues, not minds.

The narrative policy framework (NPF) suggests that ‘narratives’ – consisting of a setting, characters, plot, and moral – can produce a measurable policy impact, but primarily to reinforce the beliefs of policy actors. The existing beliefs of the audience often seem more important than the skills of the storyteller. Therefore, to maximise the impact of evidence, (a) tell a story which appeals to the biases of your audiences, and (b) employ ‘heresthetic’ strategies in which we try to increase the salience of one belief at the expense of another rather than ask someone to change their belief entirely.

Step 2: Engage only with actors who share your beliefs.

The advocacy coalition framework (ACF) suggests that actors enter politics to turn their beliefs into policy. In highly salient issues, coalition actors romanticise their own cause and demonise their opponents. This competition extends to the use of evidence: each coalition may demand different evidence, or interpret the same evidence differently, to support their own cause. If so, the most feasible strategy may be to treat evidence as a resource to support the coalitions which support your cause, and to engage minimally with competitor coalitions who seek to ignore or discredit your evidence. Only in less salient issues will we find a ‘brokerage’ role for scientists.

Step 3: Exercise power to limit debate and dominate policymaker attention.

Punctuated equilibrium theory (PET) suggests that policy actors frame issues to limit external attention. If they can define a problem successfully as solved, bar the technical details relating to regulation and implementation, they can help reduce external attention and privilege the demand for evidence from scientific experts.

Step 4. Frame evidence to be consistent with objectionable beliefs.

Social construction and policy design theory (SCPD) suggests that, when dealing with salient issues, policymakers exploit social stereotypes strategically, or rely on their emotions, to define target populations as deserving of government benefits or punishments. Some populations can challenge (or exploit the rewards of) their image, but many are powerless to respond. Or, in lower salience issues, there is more scope for bureaucrats and experts to contain discussion to small groups (as in the discussion of PET). In both cases, many social groups become disenchanted with politics because they are punished by government policy and excluded from debate. To find an influential audience for evidence, one may be most effective by framing evidence to be sympathetic to stereotype-led or other forms of misleading political strategy.

The main role of these discussions is to expose the assumptions that we make about the primacy of research evidence and the lengths to which we are willing to go to privilege its use. Policy studies suggest that the most effective ways to privilege research evidence may be to:

  • manipulate the order in which we consider issues and make choices
  • refuse to engage in debate with our competitors
  • frame issues to minimise attention or maximise the convergence between evidence and the rhetorical devices of cynical politicians.

However, they also expose stark ethical dilemmas regarding the consequences for democracy. Put simply, the most effective evidence advocacy strategies may be inconsistent with wider democratic principles and key initiatives such as participatory policymaking.

If so, these discussions prompt us to consider the ways in which we can value research evidence up to a certain point, to produce more ‘co-productive’ strategies which balance efforts to limit participation (to privilege expertise) and encourage it (to privilege deliberative and participatory forms of democracy). This approach is more honest and realistic than the more common story that science is, by its very nature, the antidote to populist or dysfunctional politics.

[If you came here in error, or to continue your adventure, go to page 100]

See also:

EBPM key themes

Policy theories in 1000 words (or short podcasts) and 500 words

8 Comments

Filed under Evidence Based Policymaking (EBPM), public policy