Category Archives: Public health

How far should you go to secure academic ‘impact’ in policymaking? From ‘honest brokers’ to ‘research purists’ and Machiavellian manipulators

Long read for Political Studies Association annual conference 2017 panel Rethinking Impact: Narratives of Research-Policy Relations. There is a paper too, but I’ve hidden it in the text like an Easter Egg hunt.

I’ve watched a lot of film and TV dramas over the decades. Many have the same basic theme, characters, and moral:

  1. There is a villain getting away with something, such as cheating at sport or trying to evict people to make money on a property deal.
  2. There are some characters who complain that life is unfair and there’s nothing they can do about it.
  3. A hero emerges to inspire the other characters to act as a team/ fight the system and win the day. Think of a range from Wyldstyle to Michael Corleone.

For many scientists right now, the villains are people like Trump or Farage; Trump’s election and Brexit symbolise an unfairness on a grand scale; and there’s little they can do about it in a ‘post-truth’ era in which people have had enough of facts and experts. Or, when people try to mobilise, they are unsure about what to do or how far they are willing to go to win the day.

These issues are playing out in different ways, from the March for Science to the conferences informing debates on modern principles of government-science advice (see INGSA). Yet, the basic question is the same when scientists are trying to re-establish a particular role for science in the world: can you present science as (a) a universal principle and (b) an unequivocal resource for good, producing (c) evidence so pure that it speaks for itself, regardless of (d) the context in which specific forms of scientific evidence are produced and used?

Of course not. Instead, we are trying to privilege the role of science and scientific evidence in politics and policymaking without always acknowledging that these activities are political acts:

(a) selling scientific values rather than self-evident truths, and

(b) using particular values to cement the status of particular groups at the expense of others, either within the scientific profession (in which some disciplines and social groups win systematically) or within society (in which scientific experts generally enjoy privileged positions in policymaking arenas).

Politics is about exercising power to win disputes, from visible acts to win ‘key choices’, to less visible acts to keep issues off agendas and reinforce the attitudes and behaviours that systematically benefit some groups at the expense of others.

To deny this link between science, politics and power – in the name of ‘science’ – is (a) silly, and (b) not scientific, since there is a wealth of policy science out there which highlights this relationship.

Instead, academic and working scientists should make better use of their political-thinking-time to consider this basic dilemma regarding political engagement: how far are you willing to go to make an impact and get what you want?  Here are three examples.

  1. How energetically should you give science advice?

My impression is that most scientists feel most comfortable with the unfortunate idea of separating facts from values (rejected by Douglas), and living life as ‘honest brokers’ rather than ‘issue advocates’ (a pursuit described by Pielke and critiqued by Jasanoff). For me, this is generally a cop-out since it puts the responsibility on politicians to understand the implications of scientific evidence, as if they were self-evident, rather than on scientists to explain the significance in a language familiar to their audience.

On the other hand, the alternative is not really clear. ‘Getting your hands dirty’, to maximise the uptake of evidence in politics, is a great metaphor but a hopeless blueprint, especially when you, as part of a notional ‘scientific community’, face trade-offs between doing what you think is the right thing and getting what you want.

There are 101 examples of these individual choices that make up one big engagement dilemma. One of my favourite examples from table 1 is as follows:

One argument stated frequently is that, to be effective in policy, you should put forward scientists with a particular background trusted by policymakers: white men in their 50s with international reputations and strong networks in their scientific field. This way, they resemble the profile of key policymakers who tend to trust people already familiar to them. Another is that we should widen out science and science advice, investing in a new and diverse generation of science-policy specialists, to address the charge that science is an elite endeavour contributing to inequalities.

  2. How far should you go to ensure that the ‘best’ scientific evidence underpins policy?

Kathryn Oliver and I identify the dilemmas that arise when principles of evidence-production meet (a) principles of governance and (b) real world policymaking. Should scientists learn how to be manipulative, to combine evidence and emotional appeals to win the day? Should they reject other forms of knowledge, and particular forms of governance, if they think they get in the way of the use of the best evidence in policymaking?

Cairney Oliver 2017 table 1

  3. Is it OK to use psychological insights to manipulate policymakers?

Richard Kwiatkowski and I mostly discuss how to be manipulative if you make that leap. Or, to put it less dramatically, how to identify relevant insights from psychology, apply them to policymaking, and decide how best to respond. Here, we propose five heuristics for engagement:

  1. developing heuristics to respond positively to ‘irrational’ policymaking
  2. tailoring framing strategies to policymaker bias
  3. identifying the right time to influence individuals and processes
  4. adapting to real-world (dysfunctional) organisations rather than waiting for an orderly process to appear, and
  5. recognising that the biases we ascribe to policymakers are present in ourselves and our own groups

Then there is the impact agenda, which describes something very different

I say these things to link to our PSA panel, in which Christina Boswell and Katherine Smith sum up (in their abstract) the difference between the ways in which we are expected to demonstrate academic impact, and the practices that might actually produce real impact:

“Political scientists are increasingly exhorted to ensure their research has policy ‘impact’, most notably in the form of REF impact case studies, and ‘pathways to impact’ plans in ESRC funding. Yet the assumptions underpinning these frameworks are frequently problematic. Notions of ‘impact’, ‘engagement’ and ‘knowledge exchange’ are typically premised on simplistic and linear models of the policy process, according to which policy-makers are keen to ‘utilise’ expertise to produce more effective policy interventions”.

I then sum up the same thing but with different words in my abstract:

“The impact agenda prompts strategies which reflect the science literature on ‘barriers’ between evidence and policy: produce more accessible reports, find the right time to engage, encourage academic-practitioner workshops, and hope that policymakers have the skills to understand and motive to respond to your evidence. Such strategies are built on the idea that scientists serve to reduce policymaker uncertainty, with a linear connection between evidence and policy. Yet, the literature informed by policy theory suggests that successful actors combine evidence and persuasion to reduce ambiguity, particularly when they know where the ‘action’ is within complex policymaking systems”.

The implications for the impact agenda are interesting, because there is a big difference between (a) the fairly banal ways in which we might make it easier for policymakers to see our work, and (b) the more exciting and sinister-looking ways in which we might make more persuasive cases. Yet, our incentive remains to produce the research and play it safe, producing examples of ‘impact’ that, on the whole, seem more reportable than remarkable.


Filed under Evidence Based Policymaking (EBPM), Public health, public policy

Why doesn’t evidence win the day in policy and policymaking?

Politics has a profound influence on the use of evidence in policy, but we need to look ‘beyond the headlines’ for a sense of perspective on its impact.

It is tempting for scientists to identify the pathological effect of politics on policymaking, particularly after high profile events such as the ‘Brexit’ vote in the UK and the election of Donald Trump as US President. We have allegedly entered an era of ‘post-truth politics’ in which ideology and emotion trump evidence and expertise (a story told many times at events like this), particularly when issues are salient.

Yet, most policy is processed out of this public spotlight, because the flip side of high attention to one issue is minimal attention to most others. Science has a crucial role in this more humdrum day-to-day business of policymaking, which is far more important than it is visible. Indeed, this lack of public visibility can help many actors secure a privileged position in the policy process (and further exclude citizens).

In some cases, experts are consulted routinely. There is often a ‘logic’ of consultation with the ‘usual suspects’, including the actors most able to provide evidence-informed advice. In others, scientific evidence is often so taken for granted that it is part of the language in which policymakers identify problems and solutions.

In that context, we need better explanations of an ‘evidence-policy’ gap than the pathologies of politics and egregious biases of politicians.

To understand this process, and the apparent contradiction between excluded and privileged experts, consider the role of evidence in politics and policymaking from three different perspectives.

The perspective of scientists involved primarily in the supply of evidence

Scientists produce high quality evidence, only for politicians to ignore it or, even worse, distort its message to support their ideologically-driven policies. If they expect ‘evidence-based policymaking’ they soon become disenchanted and conclude that ‘policy-based evidence’ is more likely. This perspective has long been expressed in scientific journals and commentaries, but has taken on new significance following ‘Brexit’ and Trump.

The perspective of elected politicians

Elected politicians are involved primarily in managing government and maximising public and organisational support for policies. So, scientific evidence is one piece of a large puzzle. They may begin with a manifesto for government and, if elected, feel an obligation to carry it out. Evidence may play a part in that process but the search for evidence on policy solutions is not necessarily prompted by evidence of policy problems.

Further, ‘evidence based policy’ is one of many governance principles that politicians should feel the need to juggle. For example, in Westminster systems, ministers may try to delegate policymaking to foster ‘localism’ and/ or pragmatic policymaking, but also intervene to appear to be in control of policy, to foster a sense of accountability built on an electoral imperative. The likely mix of delegation and intervention seems almost impossible to predict, and this dynamic has a knock-on effect for evidence-informed policy. In some cases, central governments roll out the same basic policy intervention and limit local discretion; in others, they identify broad outcomes and invite other bodies to gather evidence on how best to meet them. These differences in approach can have profound consequences on the models of evidence-informed policy available to us (see the example of Scottish policymaking).

Political science and policy studies provide a third perspective

Policy theories help us identify the relationship between evidence and policy by showing that a modern focus on ‘evidence-based policymaking’ (EBPM) is one of many versions of the same fairy tale – about ‘rational’ policymaking – that have developed in the post-war period. We talk about ‘bounded rationality’ to identify key ways in which policymakers or organisations could not achieve ‘comprehensive rationality’:

  1. They cannot separate values and facts.
  2. They have multiple, often unclear, objectives which are difficult to rank in any meaningful way.
  3. They have to use major shortcuts to gather a limited amount of information in a limited time.
  4. They can’t make policy from the ‘top down’ in a cycle of ordered and linear stages.

Limits to ‘rational’ policymaking: two shortcuts to make decisions

We can sum up the first three bullet points with one statement: policymakers have to try to evaluate and solve many problems without the ability to understand what they are, how they feel about them as a whole, and what effect their actions will have.

To do so, they use two shortcuts: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, habits, and the familiar to make decisions quickly.

Consequently, the focus of policy theories is on the links between evidence, persuasion, and framing issues to produce or reinforce a dominant way to define policy problems. Successful actors combine evidence and emotional appeals or simple stories to capture policymaker attention, and/ or help policymakers interpret information through the lens of their strongly-held beliefs.

Scientific evidence plays its part, but scientists often make the mistake of trying to bombard policymakers with evidence when they should be trying to (a) understand how policymakers understand problems, so that they can anticipate their demand for evidence, and (b) frame their evidence according to the cognitive biases of their audience.

Policymaking in ‘complex systems’ or multi-level policymaking environments

Policymaking takes place in less ordered, less hierarchical, and less predictable environments than suggested by the image of the policy cycle. Such environments are made up of:

  1. a wide range of actors (individuals and organisations) influencing policy at many levels of government
  2. a proliferation of rules and norms followed by different levels or types of government
  3. close relationships (‘networks’) between policymakers and powerful actors
  4. a tendency for certain beliefs or ‘paradigms’ to dominate discussion
  5. shifting policy conditions and events that can prompt policymaker attention to lurch at short notice.

These five properties – plus a ‘model of the individual’ built on a discussion of ‘bounded rationality’ – make up the building blocks of policy theories (many of which I summarise in 1000 Word posts). I say this partly to aid interdisciplinary conversation: of course, each theory has its own literature and jargon, and it is difficult to compare and combine their insights, but if you are trained in a different discipline it’s unfair to ask you to devote years of your life to studying policy theory to end up at this point.

To show that policy theories have a lot to offer, I have been trying to distil their collective insights into a handy guide – using this same basic format – that you can apply to a variety of different situations, from explaining painfully slow policy change in some areas but dramatic change in others, to highlighting ways in which you can respond effectively.

We can use this approach to help answer many kinds of questions. With my Southampton gig in mind, let’s use some examples from public health and prevention.

Why doesn’t evidence win the day in tobacco policy?

My colleagues and I try to explain why it takes so long for the evidence on smoking and health to have a proportionate impact on policy. Usually, at the back of my mind, is a public health professional audience trying to work out why policymakers don’t act quickly or effectively enough when presented with unequivocal scientific evidence. More recently, they wonder why there is such uneven implementation of a global agreement – the WHO Framework Convention on Tobacco Control – that almost every country in the world has signed.

We identify three conditions under which evidence will ‘win the day’:

  1. Actors are able to use scientific evidence to persuade policymakers to pay attention to, and shift their understanding of, policy problems. In leading countries, it took decades to command attention to the health effects of smoking, reframe tobacco primarily as a public health epidemic (not an economic good), and generate support for the most effective evidence-based solutions.
  2. The policy environment becomes conducive to policy change. A new and dominant frame helps give health departments (often in multiple venues) a greater role; health departments foster networks with public health and medical groups at the expense of the tobacco industry; and they emphasise the socioeconomic conditions supportive of tobacco control – reductions in smoking prevalence, in opposition to tobacco control, and in the economic benefits of tobacco.
  3. Actors exploit ‘windows of opportunity’ successfully. A supportive frame and policy environment maximises the chances of high attention to a public health epidemic and provides policymakers with the motive and opportunity to select relatively restrictive policy instruments.

So, scientific evidence is a necessary but insufficient condition for major policy change. Key actors do not simply respond to new evidence: they use it as a resource to further their aims, to frame policy problems in ways that will generate policymaker attention, and underpin technically and politically feasible solutions that policymakers will have the motive and opportunity to select. This remains true even when the evidence seems unequivocal and when countries have signed up to an international agreement which commits them to major policy change. Such commitments can only be fulfilled over the long term, when actors help change the policy environment in which these decisions are made and implemented. So far, this change has not occurred in most countries (or, in other aspects of public health in the UK, such as alcohol policy).

Why doesn’t evidence win the day in prevention and early intervention policy?

UK and devolved governments draw on health and economic evidence to make a strong and highly visible commitment to preventive policymaking, in which the aim is to intervene earlier in people’s lives to improve wellbeing and reduce socioeconomic inequalities and/ or public sector costs. This agenda has existed in one form or another for decades without the same signs of progress we now associate with areas like tobacco control. Indeed, the comparison is instructive, since prevention policy rarely meets the three conditions outlined above:

  1. Prevention is a highly ambiguous term and many actors make sense of it in many different ways. There is no equivalent to a major shift in problem definition for prevention policy as a whole, and little agreement on how to determine the most effective or cost-effective solutions.
  2. A supportive policy environment is far harder to identify. Prevention policy cross-cuts many policymaking venues at many levels of government, with little evidence of ‘ownership’ by key venues. Consequently, there are many overlapping rules on how and from whom to seek evidence. Networks are diffuse and hard to manage. There is no dominant way of thinking across government (although the Treasury’s ‘value for money’ focus is a key currency across departments). There are many socioeconomic indicators of policy problems but little agreement on how to measure them or which measures to privilege (particularly when predicting future outcomes).
  3. The ‘window of opportunity’ was to adopt a vague solution to an ambiguous policy problem, providing a limited sense of policy direction. There have been several ‘windows’ for more specific initiatives, but their links to an overarching policy agenda are unclear.

These limitations help explain slow progress in key areas. The absence of an unequivocal frame, backed strongly by key actors, leaves policy change vulnerable to successful opposition, especially in areas where early intervention has major implications for redistribution (taking from existing services to invest in others) and personal freedom (encouraging or obliging behavioural change). The vagueness and long term nature of policy aims – to solve problems that often seem intractable – make them uncompetitive, and often undermined by more specific short term aims with a measurable pay-off (as when, for example, funding for public health loses out to funding to shore up hospital management). It is too easy to reframe existing policy solutions as preventive if the definition of prevention remains slippery, and too difficult to demonstrate the population-wide success of measures generally applied to high risk groups.

What happens when attitudes to two key principles – evidence based policy and localism – play out at the same time?

A lot of discussion of the politics of EBPM assumes that there is something akin to a scientific consensus on which policymakers do not act proportionately. Yet, in many areas – such as social policy and social work – there is great disagreement on how to generate and evaluate the best evidence. Broadly speaking, a hierarchy of evidence built on ‘evidence based medicine’ – which has randomised control trials and their systematic review at the top, and practitioner knowledge and service user feedback at the bottom – may be completely subverted by other academics and practitioners. This disagreement helps produce a spectrum of ways in which we might roll out evidence based interventions, from an RCT-driven roll-out of the same basic intervention to a storytelling-driven pursuit of tailored responses built primarily on governance principles (such as to co-produce policy with users).

At the same time, governments may be wrestling with their own governance principles, including EBPM but also regarding the most appropriate balance between centralism and localism.

If you put both concerns together, you have a variety of possible outcomes (and a temptation to ‘let a thousand flowers bloom’) and a set of competing options (outlined in table 1), all under the banner of ‘evidence based’ policymaking.

Table 1 Three ideal types EBBP

What happens when a small amount of evidence goes a very long way?

So, even if you imagine a perfectly sincere policymaker committed to EBPM, you’d still not be quite sure what they took it to mean in practice. If you assume this commitment is a bit less sincere, and you add in the need to act quickly to use the available evidence and satisfy your electoral audience, you get all sorts of responses based in some part on a reference to evidence.

One fascinating case is the UK Government’s ‘troubled families’ programme, which combined bits and pieces of evidence with ideology and a Westminster-style accountability imperative, to produce:

  • The argument that the London riots were caused by family breakdown and bad parenting.
  • The use of proxy measures to identify the most troubled families
  • The use of superficial performance management to justify notionally extra expenditure for local authorities
  • The use of evidence in a problematic way, from exaggerating the success of existing ‘family intervention projects’ to sensationalising neuroscientific images related to brain development in deprived children …

normal brain

…but also

In other words, some governments feel the need to dress up their evidence-informed policies in a language appropriate to Westminster politics. Unless we understand this language, and the incentives for elected policymakers to use it, we will fail to understand how to act effectively to influence those policymakers.

What can you do to maximise the use of evidence?

When you ask the generic question you can generate a set of transferable strategies to engage in policymaking:

How to be heard

EBPM: 5 things to do

Yet, as these case studies of public health and social policy suggest, the question lacks sufficient meaning when applied to real world settings. Would you expect the advice that I give to (primarily) natural scientists (primarily in the US) to be identical to advice for social scientists in specific fields (in, say, the UK)?

No, you’d expect me to end with a call for more research! See for example this special issue in which many scholars from many disciplines suggest insights on how to maximise the use of evidence in policy.

Palgrave C special


Filed under Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, tobacco, tobacco policy

What do you do when 20% of the population causes 80% of its problems? Possibly nothing.

Caspi et al. abstract

Avshalom Caspi and colleagues have used the 45-year ‘Dunedin’ study in New Zealand to identify the ‘large economic burden’ associated with ‘a small segment of the population’. They don’t quite achieve the 20%-causes-80% mark, but suggest that 22% of the population account disproportionately for the problems that most policymakers would like to solve, including unhealthy, economically inactive, and criminal behaviour. Most importantly, they discuss some success in predicting such outcomes from a 45-minute diagnostic test of 3-year-olds.

Of course, any such publication will prompt major debates about how we report, interpret, and deal with such information, and these debates tend to get away from the original authors as soon as they publish and others report (follow the tweet thread).

This is true even though the authors have gone to unusual lengths to show the many ways in which you could interpret their figures. Theirs is a politically aware report, using some of the language of elected politicians but challenging simple responses. You can see this in their discussion which has a lengthy list of points about the study’s limitations.

The ambiguity dilemma: more evidence does not produce more agreement

‘The most costly adults in our cohort started the race of life from a starting block somewhere behind the rest, and while carrying a heavy handicap in brain health’.

The first limitation is that evidence does not help us adjudicate between competing attempts to define the problem. For some, it reinforces the idea of an ‘underclass’ or small collection of problem/ troubled families that should be blamed for society’s ills (it’s the fault of families and individuals). For others, it reinforces the idea that socio-economic inequalities harm the life chances of people as soon as they are born (it is out of the control of individuals).

The intervention dilemma: we know more about the problem than its solution

The second limitation is that this study tells us a lot about a problem but not its solution. Perhaps there is some common ground on the need to act, and to invest in similar interventions, but:

  1. The evidence on the effectiveness of solutions is not as strong or systematic as this new evidence on the problem.
  2. There are major dilemmas involved in ‘scaling up’ such solutions and transferring them from one area to another.
  3. The overall ‘tone’ of debate still matters to policy delivery, to determine for example if any intervention should be punitive and compulsory (you will cause the problem, so you have to engage with the solution) or supportive and voluntary (you face disadvantages, so we’ll try to help you if you let us).

The moral dilemma: we may only pay attention to the problem if there is a feasible solution

Prevention and early intervention policy agendas often seem to fail because the issues they raise seem too difficult to solve. Governments make the commitment to ‘prevention’ in the abstract but ‘do not know what it means or appreciate the scale of their task’.

A classic policymaker heuristic described by Kingdon is that policymakers only pay attention to problems they think they can solve. So, they might initially show enthusiasm, only to lose interest when problems seem intractable or there is high opposition to specific solutions.

This may be true of most policies, but prevention and early intervention also seem to magnify the big moral question that can stop policy in its tracks: to what extent is it appropriate to intervene in people’s lives to change their behaviour?

Some may vocally oppose interventions based on their concern about the controlling nature of the state, particularly when it intervenes to prevent (say, criminal) behaviour that will not necessarily occur. It may be easier to make the case for intervening to help children, but difficult to look like you are not second guessing their parents.

Others may quietly oppose interventions based on an unresolved economic question: does it really save money to intervene early? Put bluntly, a key ‘economic burden’ relates to population longevity; the ‘20%’ may cause economic problems in their working years but die far earlier than the 80%. Put less bluntly by the authors:

‘This is an important question because the health-care burden of developed societies concentrates in older age groups. To the extent that factors such as smoking, excess weight and health problems during midlife foretell health-care burden and social dependency, findings here should extend to later life (keeping in mind that midlife smoking, weight problems and health problems also forecast premature mortality)’.

So, policymakers initially find ‘early intervention’ to be a valence issue only in the abstract – who wouldn’t want to intervene as early as possible in a child’s life to protect them or improve their life chances? – but not when they try to deliver concrete policies.

The evidence-based policymaking dilemma

Overall, we are left with the sense that even the best available evidence of a problem may not help us solve it. Choosing to do nothing may be just as ‘evidence based’ as choosing a solution with minimal effects. Choosing to do something requires us to use far more limited evidence of solution effectiveness and to act in the face of high uncertainty. Add into the mix that prevention policy does not seem to be particularly popular and you might wonder why any policymaker would want to do anything with the best evidence of a profound societal problem.

 


Filed under Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy

There is no blueprint for evidence-based policy, so what do you do?

In my speech to COPOLAD I began by stating that, although we talk about our hopes for evidence-based policy and policymaking (EBP and EBPM), we don’t really know what it is.

I also argued that EBPM is not like our image of evidence-based medicine (EBM), in which there is a clear idea of: (a) which methods/ evidence counts, and (b) the main aim, to replace bad interventions with good.

In other words, in EBPM there is no blueprint for action, either in the abstract or in specific cases of learning from good practice.

To me, this point is underappreciated in the study of EBPM: we identify the politics of EBPM, to highlight the pathologies of/ ‘irrational’ side to policymaking, but we don’t appreciate the more humdrum limits to EBPM even when the political process is healthy and policymakers are fully committed to something more ‘rational’.

Examples from best practice

The examples from our next panel session* demonstrated these limitations to EBPM very well.

The panel contained four examples of impressive policy developments with the potential to outline good practice on the application of public health and harm reduction approaches to drugs policy (including the much-praised Portuguese model).

However, it quickly became apparent that no country-level experience translated into a blueprint for action, for some of the following reasons:

  • It is not always clear what problems policymakers have been trying to solve.
  • It is not always clear how their solutions, in this case, interact with all other relevant policy solutions in related fields.
  • It is difficult to demonstrate clear evidence of success, either before or after the introduction of policies. Instead, most policies are built on initial deductions from relevant evidence, followed by trial-and-error and some evaluations.

In other words, we note routinely the high-level political obstacles to policy emulation, but these examples demonstrate the problems that would still exist even if those initial obstacles were overcome.

A key solution is easier said than done: if providing lessons to others, describe your experience systematically, in a form that sets out the steps needed to turn your model into action (and in a form that we can compare with other experiences). To that end, providers of lessons might note:

  • The problem they were trying to solve (and how they framed it to generate attention, support, and action, within their political systems)
  • The detailed nature of the solution they selected (and the conditions under which it became possible to select that intervention)
  • The evidence they used to guide their initial policies (and how they gathered it)
  • The evidence they collected to monitor the delivery of the intervention, evaluate its impact (was it successful?), and identify cause and effect (why was it successful?)

Realistically this is when the process least resembles (the ideal of) EBM because few evaluations of success will be based on a randomised control trial or some equivalent (and other policymakers may not draw primarily on RCT evidence even when it exists).

Instead, as with much harm reduction and prevention policy, a lot of the justification for success will be based on a counterfactual (what would have happened if we did not intervene?), which is itself based on:

(a) the belief that our object of policy is a complex environment containing many ‘wicked problems’, in which the effects of one intervention cannot be separated easily from those of another (which makes it difficult, and perhaps even inappropriate, to rely on RCTs)

(b) an assessment of the unintended consequence of previous (generally more punitive) policies.

So, the first step to ‘evidence-based policymaking’ is to make a commitment to it. The second is to work out what it is. The third is to do it in a systematic way that allows others to learn from your experience.

The latter may be more political than it looks: few countries (or, at least, the people seeking re-election within them) will want to tell the rest of the world: we innovated and we don’t think it worked.

*I also discuss this problem of evidence-based best practice within single countries

 


Filed under Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, tobacco policy

What does it take to turn scientific evidence into policy? Lessons for illegal drugs from tobacco

This post contains preliminary notes for my keynote speech ‘The politics of evidence-based policymaking’ for the COPOLAD annual conference, ‘From evidence to practice: challenges in the field of drugs policies’ (14th June). I may amend them in the run up to the speech (and during their translation into Spanish).

COPOLAD (Cooperation Programme on Drugs Policies) is a ‘partnership cooperation programme between the European Union, Latin America and the Caribbean countries aiming at improving the coherence, balance and impact of drugs policies, through the exchange of mutual experiences, bi-regional coordination and the promotion of multisectoral, comprehensive and coordinated responses’. It is financed by the EU.

My aim is to draw on policy studies, and the case study of tobacco/ public health policy, to identify four lessons:

  1. ‘Evidence-based policymaking’ is difficult to describe and understand, but we know it’s a highly political process which differs markedly from ‘evidence based medicine’.
  2. Actors focus as much on persuasion to reduce ambiguity as scientific evidence to reduce uncertainty. They also develop strategies to navigate complex policymaking ‘systems’ or ‘environments’.
  3. Tobacco policy demonstrates three conditions for the proportionate uptake of evidence: it helps ‘reframe’ a policy problem; it is used in an environment conducive to policy change; and, policymakers exploit ‘windows of opportunity’ for change.
  4. Even the ‘best cases’ of tobacco control highlight a gap of 20-30 years between the production of scientific evidence and a proportionate policy response. In many countries it could be 50. I’ll use this final insight to identify some scenarios on how evidence might be used in areas, such as drugs policy, in which many of the ‘best case’ conditions are not met.

‘Evidence-based policymaking’ is highly political and difficult to understand

Evidence-based policymaking (EBPM) is so difficult to understand that we don’t know how to define it or each word in it! People use phrases like ‘policy-based evidence’, to express cynicism about the sincere use of evidence to guide policy, or ‘evidence informed policy’, to highlight its often limited impact. It is more important to try to define each element of EBPM – to identify what counts as evidence, what is policy, who are the policymakers, and what an ‘evidence-based’ policy would look like – but this is easier said than done.

In fact, it is far easier to say what EBPM is not:

It is not ‘comprehensively rational’

‘Comprehensive rationality’ describes, in part, the absence of ambiguity and uncertainty:

  • Policymakers translate their values into policy in a straightforward manner – they know what they want and about the problem they seek to solve.
  • Policymakers and governments can gather and understand all information required to measure the problem and determine the effectiveness of solutions.

Instead, we talk of ‘bounded rationality’ and how policymakers deal with it. They employ two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, habits, and what is familiar to them, to make decisions quickly.

It does not take place in a policy cycle with well-ordered stages

‘Policy cycle’ describes the idea that there is a core group of policymakers at the ‘centre’, making policy from the ‘top down’, and pursuing their goals in a series of clearly defined and well-ordered stages, such as: agenda setting, policy formulation, legitimation, implementation, and evaluation.

It does not describe or explain policymaking well. Instead, we tend to identify the role of environments or systems.

When describing less ordered and predictable policy environments, we describe:

  • a wide range of actors (individuals and organisations) influencing policy at many levels of government
  • a proliferation of rules and norms followed by different levels or types of government
  • important relationships (‘networks’) between policymakers and powerful actors (with material resources, or the ability to represent a profession or social group)
  • a tendency for certain ‘core beliefs’ or ‘paradigms’ to dominate discussion
  • shifting policy conditions and events that can prompt policymaker attention to lurch at short notice.

When describing complex policymaking systems we show that, for example, (a) the same inputs of evidence or policy activity can have no, or a huge, effect, and (b) policy outcomes often ‘emerge’ in the absence of central government control (which makes it difficult to know how, and to whom, to present evidence or try to influence).

It does not resemble ‘evidence based medicine’ or the public health culture

In health policy we can identify an aim, associated with ‘evidence-based medicine’ (EBM), to:

(a) gather the best evidence on the effectiveness of policy interventions, based on a hierarchy of research methods which favours, for example, the systematic review of randomised control trials (RCTs)

(b) ensure that this evidence has a direct impact on healthcare and public health, to exhort practitioners to replace bad interventions with good, as quickly as possible.

Instead, (a) policymakers can ignore the problems raised by scientific evidence for long periods of time, only for (b) their attention to lurch, prompting them to beg, borrow, or steal information quickly from readily available sources. This can involve many sources of evidence (such as the ‘grey literature’) that some scientists would not describe as reliable.

Actors focus as much on persuasion to reduce ambiguity as scientific evidence to reduce uncertainty.

In that context, ‘evidence-based policymaking’ is about framing problems and adapting to complexity.

Framing refers to the ways in which policymakers understand, portray, and categorise issues. Problems are multi-faceted, but bounded rationality limits the attention of policymakers, and actors compete to highlight one ‘image’ at the expense of others. The outcome of this process determines who is involved (for example, portraying an issue as technical limits involvement to experts), who is responsible for policy, how much attention they pay, their demand for evidence on policy solutions, and what kind of solution they favour.

Scientific evidence plays a part in this process, but we should not exaggerate the ability of scientists to win the day with reference to evidence. Rather, policy theories signal the strategies that actors adopt to increase demand for their evidence:

  • to combine facts with emotional appeals, to prompt lurches of policymaker attention from one policy image to another (punctuated equilibrium theory)
  • to tell simple stories which are easy to understand, help manipulate people’s biases, apportion praise and blame, and highlight the moral and political value of solutions (narrative policy framework)
  • to interpret new evidence through the lens of the pre-existing beliefs of actors within coalitions, some of which dominate policy networks (advocacy coalition framework)
  • to produce a policy solution that is feasible and exploit a time when policymakers have the opportunity to adopt it (multiple streams analysis).

This takes place in complex ‘systems’ or ‘environments’

A focus on this bigger picture shifts our attention from the use of evidence by an elite group of elected policymakers at the ‘top’ to its use by a wide range of influential actors in a multi-level policy process. It shows actors that:

  • They are competing with many others to present evidence in a particular way to secure a policymaker audience.
  • Support for particular solutions varies according to which organisation takes the lead and how it understands the problem.
  • Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour particular sources of evidence and some participants over others
  • There is a language – indicating which ideas, beliefs, or ways of thinking are most accepted by policymakers and their stakeholders – that takes time to learn.
  • Well-established beliefs provide the context for policymaking: new evidence on the effectiveness of a policy solution has to be accompanied by a shift of attention and successful persuasion.
  • In some cases, social or economic ‘crises’ can prompt lurches of attention from one issue to another, and some forms of evidence can be used to encourage that shift. However, major policy shifts are rare.

In other words, successful actors develop pragmatic strategies based on the policy process that exists, not the process they’d like to see.

We argue that successful actors: identify where the ‘action is’ (in networks and organisations in several levels of government); learn and follow the ‘rules of the game’ within networks to improve strategies and help build up trust; form coalitions with actors with similar aims and beliefs; and, frame the evidence to appeal to the biases, beliefs, and priorities of policymakers.

Tobacco policy demonstrates three conditions for the proportionate uptake of evidence

Case studies allow us to turn this general argument into insights generated from areas such as public health.

There are some obvious and important differences between tobacco and (illegal) drugs policies, but an initial focus on tobacco allows us to consider the conditions that might have to be met to use the best evidence on a problem to promote (what we consider to be) a proportionate and effective solution.

We can then use the experience of a ‘best case scenario’ to identify the issues that we face in less ideal circumstances (first in tobacco, and second in drugs).

With colleagues, I have been examining these dynamics in tobacco policy.

Our studies help us identify the conditions under which scientific evidence, on the size of the tobacco problem and the effectiveness of solutions, translates into a public policy response that its advocates would consider to be proportionate.

  1. Actors are able to use scientific evidence to persuade policymakers to pay attention to, and shift their understanding of, policy problems.

Although scientific evidence helps reduce uncertainty, it does not reduce ambiguity. Rather, there is high competition to define problems, and the result of this competition helps determine the demand for subsequent evidence.

In tobacco, the evidence on smoking and then passive smoking helped raise attention to public health, but it took decades to translate into a proportionate response, even in ‘leading’ countries such as the UK.

The comparison with ‘laggard’ countries is crucial to show that the same evidence can produce a far more limited response, as policymakers compare the public health imperative with other ‘frames’, relating to their beliefs on personal responsibility, civil liberties, and the economic consequences of tobacco controls.

  2. The policy environment becomes conducive to policy change.

Public health debates take place in environments more or less conducive to policy change. In the UK, actors used scientific evidence to help reframe the problem. Then, this new understanding helped give the Department of Health a greater role; the health department fostered networks with public health and medical groups at the expense of the industry; and, while pursuing policy change, policymakers emphasised the reductions in opposition to tobacco control, in smoking prevalence, and in the economic benefits of tobacco.

In many other countries, these conditions are far less apparent: there are multiple tobacco frames (including economic and civil liberties); economic and trade departments are still central to policy; the industry remains a key player; and, policymakers pay more attention to opposition to tobacco controls (such as bans on smoking in public places) and their potential economic consequences.

Further, differences between countries have largely endured despite the fact that most countries are parties to the WHO Framework Convention on Tobacco Control (FCTC). In other words, a commitment to evidence-based ‘policy transfer’ does not necessarily produce actual policy change.

  3. Actors generate and exploit ‘windows of opportunity’ for major policy change.

Even in favourable policy environments, it is not inevitable that major policy changes will occur. Rather, the UK’s experience of key policy instruments – such as legislation to ban smoking in public places (a major commitment of the FCTC) – shows the high level of serendipity involved in the confluence of three necessary but insufficient conditions:

  1. high policymaker attention to tobacco as a policy problem
  2. the production of solutions, introducing partial or comprehensive bans on smoking in public places, that are technically and politically feasible
  3. the willingness and ability of policymakers to choose the more restrictive solution.

In many other countries, there has been no such window of opportunity, or only an opportunity for a far weaker regulation.

So, this condition – the confluence of three ‘streams’ during a ‘window of opportunity’ – shows the major limits to the effect of scientific evidence. The evidence on the health effects of passive smoking has been available since the 1980s, but it only contributed to comprehensive smoking bans in the UK in the mid-2000s, and such bans remain unlikely in many other countries.

Comparing ‘best case’ and ‘worst case’ scenarios for policy change

These discussions help us clarify the kinds of conditions that need to be met to produce major ‘evidence based’ policy change, even when policymakers have made a commitment to it, or are pursuing an international agreement.

I provide a notional spectrum of ‘best’ and ‘worst’ case scenarios in relation to these conditions:

  1. Actors agree on how to gather and interpret scientific evidence.
  • Best case: governments fund effective ways to gather and interpret the most relevant evidence on the size of policy problems and the effectiveness of solutions. Policymakers can translate large amounts of evidence on complex situations into simple and effective stories (that everyone can understand) to guide action. This includes evidence of activity in one’s own country, and of transferable success from others.
  • Worst case: governments do not know the size of the problem or what solutions have the highest impacts. They rely on old stories that reinforce ineffective action, and do not know how to learn from the experience of other regions (note the ‘not invented here’ issue).
  2. Actors ‘frame’ the problem simply and/or unambiguously.
  • Best case: governments maintain a consensus on how best to understand the cause of a policy problem and therefore which evidence to gather and solutions to seek.
  • Worst case: governments juggle many ‘frames’, there is unresolved competition to define the problem, and the best sources of evidence and solutions remain unclear.
  3. A new policy frame is not undermined by the old way of thinking about, and doing, things.
  • Best case: the new frame sets the agenda for actors in existing organisations and networks; there is no inertia linked to the old way of thinking about and doing things.
  • Worst case: there is a new policy, but it is undermined by old beliefs, rules, pre-existing commitments (for example, we talk of ‘path dependence’ and ‘inheritance before choice’), or actors opposed to the new policy.
  4. There is a clear ‘delivery chain’ from policy choice to implementation.
  • Best case: policymakers agree on a solution, they communicate their aims well, and they secure the cooperation of the actors crucial to policy delivery in many levels and types of government.
  • Worst case: policymakers communicate an ambiguous message and/ or the actors involved in policy delivery pursue different – and often contradictory – ways to try to solve the same problem.

In international cooperation, it is natural to anticipate and try to minimise at least some of these worst case scenarios. Problems are more difficult to solve when they are transnational. Our general sense of uncertainty and complexity is more apparent when there are many governments involved and we cannot rely on a single authoritative actor to solve problems. Each country (and regions within it) has its own beliefs and ways of doing things, and it is not easy to simply emulate another country (even if we think it is successful and know why). Some countries do not have access to the basic information (for example, on health and mortality, alongside statistics on criminal justice) that others take for granted when they monitor the effectiveness of policies.

Further, these obstacles exist in now-relatively-uncontroversial issues, such as tobacco, in which there is an international consensus on the cause of the problem and the appropriateness and effectiveness of public solutions. It is natural to anticipate further problems when we also apply public health (and, in this case, ‘harm reduction’) measures to more controversial areas such as illegal drugs.


Filed under agenda setting, Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, tobacco, tobacco policy, UK politics and policy

The politics of evidence and randomised control trials: the symbolic importance of family nurse partnerships

I have reblogged this post on EBPM and the Family Nurse Partnership, with an update, at the bottom, on its first RCT-based evaluation (which did not recommend continuing the programme in its current form).

Paul Cairney: Politics & Public Policy

We await the results of the randomised control trial (RCT) on family nurse partnerships in England. While it looks like an innocuous review of an internationally well-respected programme, and will likely receive minimal media attention, I think it has high-stakes symbolic value in relation to the role of RCTs in British government.

EBM versus EBPM?

We know a lot about the use of evidence in politics – and we hear that politicians play fast and loose with it. We also know that some professions have a very clear idea about what counts as evidence, and that this view is not shared by politicians and policymakers. Somehow, ‘politics’ gets in the way of the good production and use of evidence.

A key example is the ideal of ‘Evidence Based Medicine’ (EBM), which is associated with a hierarchy of evidence in which the status of the RCT is only exceeded by…



Filed under Public health, public policy, UK politics and policy

The politics of evidence-based policymaking: focus on ambiguity as much as uncertainty

There is now a large literature on the gaps between the production of scientific evidence and a policy or policymaking response. However, the literature in key fields – such as health and environmental sciences – does not use policy theory to help explain the gap. In this book, and in work that I am developing with Kathryn Oliver and Adam Wellstead, I explain why this matters by identifying the difference between empirical uncertainty and policy ambiguity. Both concepts relate to ‘bounded rationality’: policymakers do not have the ability to consider all evidence relevant to policy problems. Instead, they employ two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, and habits to make decisions quickly. This takes place in a complex policymaking system in which policymaker attention can lurch from issue to issue, policy is made routinely in subsystems, and the ‘rules of the game’ take time to learn.

The key problem in the health and environmental sciences is that studies focus only on the first short cut. They identify the problem of uncertainty that arises when policymakers have incomplete information, and seek to solve it by improving the supply of information and encouraging academic-practitioner networks and workshops. They ignore the importance of a wider process of debate, coalition formation, lobbying, and manipulation, to reduce ambiguity and establish a dominant way to frame policy problems. Further, while scientific evidence cannot solve the problem of ambiguity, persuasion and framing can help determine the demand for scientific evidence.

Therefore, the second solution is to engage in a process of framing and persuasion by, for example, forming coalitions with actors with the same aims or beliefs, and accompanying scientific information with simple stories to exploit or adapt to the emotional and ideological biases of policymakers. This is less about packaging information to make it simpler to understand, and more about responding to the ways in which policymakers think – in general, and in relation to emerging issues – and, therefore, how they demand information.

In the book, I present this argument in three steps. First, I bring together a range of insights from policy theory, to show the huge amount of accumulated knowledge of policymaking on which other scientists and evidence advocates should draw. Second, I discuss two systematic reviews – one by Oliver et al, and one that Wellstead and I developed – of the literature on ‘barriers’ to evidence and policy in health and environmental studies. They show that the vast majority of studies in each field employ minimal policy theory and present solutions which focus only on uncertainty. Third, I identify the practical consequences for actors trying to maximize the uptake of scientific evidence within government.

My conclusion has profound implications for the role of science and scientific experts in policymaking. Scientists have a stark choice: to produce information and accept that it will have a limited impact (but that scientists will maintain an often-useful image of objectivity), or to go beyond one’s comfort zone, and expertise, to engage in a normative enterprise that can increase impact at the expense of objectivity.


Filed under Evidence Based Policymaking (EBPM), Public health, public policy