Tag Archives: EBP

How far should you go to secure academic ‘impact’ in policymaking? From ‘honest brokers’ to ‘research purists’ and Machiavellian manipulators

Long read for Political Studies Association annual conference 2017 panel Rethinking Impact: Narratives of Research-Policy Relations. There is a paper too, but I’ve hidden it in the text like an Easter Egg hunt.

I’ve watched a lot of film and TV dramas over the decades. Many have the same basic theme, characters, and moral:

  1. There is a villain getting away with something, such as cheating at sport or trying to evict people to make money on a property deal.
  2. There are some characters who complain that life is unfair and there’s nothing they can do about it.
  3. A hero emerges to inspire the other characters to act as a team/ fight the system and win the day. Think of a range from Wyldstyle to Michael Corleone.

For many scientists right now, the villains are people like Trump or Farage; Trump’s election and Brexit symbolise an unfairness on a grand scale; and there is little they can do about it in a ‘post-truth’ era in which people have had enough of facts and experts. Or, when people try to mobilise, they are unsure about what to do or how far they are willing to go to win the day.

These issues are playing out in different ways, from the March for Science to the conferences informing debates on modern principles of government-science advice (see INGSA). Yet, the basic question is the same when scientists are trying to re-establish a particular role for science in the world: can you present science as (a) a universal principle and (b) an unequivocal resource for good, producing (c) evidence so pure that it speaks for itself, regardless of (d) the context in which specific forms of scientific evidence are produced and used?

Of course not. Instead, we are trying to privilege the role of science and scientific evidence in politics and policymaking without always acknowledging that these activities are political acts:

(a) selling scientific values rather than self-evident truths, and

(b) using particular values to cement the status of particular groups at the expense of others, either within the scientific profession (in which some disciplines and social groups win systematically) or within society (in which scientific experts generally enjoy privileged positions in policymaking arenas).

Politics is about exercising power to win disputes, from visible acts to win ‘key choices’, to less visible acts to keep issues off agendas and reinforce the attitudes and behaviours that systematically benefit some groups at the expense of others.

To deny this link between science, politics and power – in the name of ‘science’ – is (a) silly, and (b) not scientific, since there is a wealth of policy science out there which highlights this relationship.

Instead, academic and working scientists should make better use of their political-thinking-time to consider this basic dilemma regarding political engagement: how far are you willing to go to make an impact and get what you want?  Here are three examples.

  1. How energetically should you give science advice?

My impression is that most scientists feel most comfortable with the unfortunate idea of separating facts from values (rejected by Douglas), and living life as ‘honest brokers’ rather than ‘issue advocates’ (a pursuit described by Pielke and critiqued by Jasanoff). For me, this is generally a cop-out since it puts the responsibility on politicians to understand the implications of scientific evidence, as if they were self-evident, rather than on scientists to explain the significance in a language familiar to their audience.

On the other hand, the alternative is not really clear. ‘Getting your hands dirty’, to maximise the uptake of evidence in politics, is a great metaphor but a hopeless blueprint, especially when you, as part of a notional ‘scientific community’, face trade-offs between doing what you think is the right thing and getting what you want.

There are 101 examples of these individual choices that make up one big engagement dilemma. One of my favourite examples from table 1 is as follows:

One argument stated frequently is that, to be effective in policy, you should put forward scientists with a particular background trusted by policymakers: white men in their 50s with international reputations and strong networks in their scientific field. This way, they resemble the profile of key policymakers who tend to trust people already familiar to them. Another is that we should widen out science and science advice, investing in a new and diverse generation of science-policy specialists, to address the charge that science is an elite endeavour contributing to inequalities.

  2. How far should you go to ensure that the ‘best’ scientific evidence underpins policy?

Kathryn Oliver and I identify the dilemmas that arise when principles of evidence-production meet (a) principles of governance and (b) real-world policymaking. Should scientists learn how to be manipulative, to combine evidence and emotional appeals to win the day? Should they reject other forms of knowledge, and particular forms of governance, if they think these get in the way of the use of the best evidence in policymaking?

Cairney Oliver 2017 table 1

  3. Is it OK to use psychological insights to manipulate policymakers?

Richard Kwiatkowski and I mostly discuss how to be manipulative if you make that leap. Or, to put it less dramatically, how to identify relevant insights from psychology, apply them to policymaking, and decide how best to respond. Here, we propose five heuristics for engagement:

  1. developing heuristics to respond positively to ‘irrational’ policymaking
  2. tailoring framing strategies to policymaker bias
  3. identifying the right time to influence individuals and processes
  4. adapting to real-world (dysfunctional) organisations rather than waiting for an orderly process to appear, and
  5. recognising that the biases we ascribe to policymakers are present in ourselves and our own groups

Then there is the impact agenda, which describes something very different

I say these things to link to our PSA panel, in which Christina Boswell and Katherine Smith sum up (in their abstract) the difference between the ways in which we are expected to demonstrate academic impact, and the practices that might actually produce real impact:

“Political scientists are increasingly exhorted to ensure their research has policy ‘impact’, most notably in the form of REF impact case studies, and ‘pathways to impact’ plans in ESRC funding. Yet the assumptions underpinning these frameworks are frequently problematic. Notions of ‘impact’, ‘engagement’ and ‘knowledge exchange’ are typically premised on simplistic and linear models of the policy process, according to which policy-makers are keen to ‘utilise’ expertise to produce more effective policy interventions”.

I then sum up the same thing but with different words in my abstract:

“The impact agenda prompts strategies which reflect the science literature on ‘barriers’ between evidence and policy: produce more accessible reports, find the right time to engage, encourage academic-practitioner workshops, and hope that policymakers have the skills to understand and motive to respond to your evidence. Such strategies are built on the idea that scientists serve to reduce policymaker uncertainty, with a linear connection between evidence and policy. Yet, the literature informed by policy theory suggests that successful actors combine evidence and persuasion to reduce ambiguity, particularly when they know where the ‘action’ is within complex policymaking systems”.

The implications for the impact agenda are interesting, because there is a big difference between (a) the fairly banal ways in which we might make it easier for policymakers to see our work, and (b) the more exciting and sinister-looking ways in which we might make more persuasive cases. Yet, our incentive remains to produce the research and play it safe, producing examples of ‘impact’ that, on the whole, seem more reportable than remarkable.


Filed under Evidence Based Policymaking (EBPM), Public health, public policy

The politics of implementing evidence-based policies

This post by me and Kathryn Oliver appeared in the Guardian political science blog on 27.4.16: If scientists want to influence policymaking, they need to understand it. It builds on this discussion of ‘evidence based best practice’ in Evidence and Policy. There is further reading at the end of the post.

Three things to remember when you are trying to close the ‘evidence-policy gap’

Last week, a major new report on The Science of Using Science: Researching the Use of Research Evidence in Decision-Making suggested that there is very limited evidence of ‘what works’ to turn scientific evidence into policy. There are many publications out there on how to influence policy, but few are proven to work.

This is because scientists think about how to produce the best possible evidence rather than how different policymakers use evidence differently in complex policymaking systems (what the report describes as the ‘capability, motivation, and opportunity’ to use evidence). For example, scientists identify, from their perspective, a cultural gap between them and policymakers. This story tells us that we need to overcome differences in the languages used to communicate findings, the timescales to produce recommendations, and the incentives to engage.

This scientist perspective tends to assume that there is one arena in which policymakers and scientists might engage. Yet, the action takes place in many venues at many levels involving many types of policymaker. So, if we view the process from many different perspectives we see new ways in which to understand the use of evidence.

Examples from the delivery of health and social care interventions show us why we need to understand policymaker perspectives. We identify three main issues to bear in mind.

First, we must choose what counts as ‘the evidence’. In some academic disciplines there is a strong belief that some kinds of evidence are better than others: the best evidence is gathered using randomised control trials and accumulated in systematic reviews. In others, these ideas have limited appeal or are rejected outright, in favour of (say) practitioner experience and service user-based feedback as the knowledge on which to base policies. Most importantly, policymakers may not care about these debates; they tend to beg, borrow, or steal information from readily available sources.

Second, we must choose the lengths to which we are prepared to go to ensure that scientific evidence is the primary influence on policy delivery. When we open up the ‘black box’ of policymaking we find a tendency of central governments to juggle many models of government – sometimes directing policy from the centre but often delegating delivery to public, third, and private sector bodies. Those bodies can retain some degree of autonomy during service delivery, often based on governance principles such as ‘localism’ and the need to include service users in the design of public services.

This presents a major dilemma for scientists because policy solutions based on RCTs are likely to come with conditions that limit local discretion. For example, a condition of the UK government’s licence of the ‘family nurse partnership’ is that there is ‘fidelity’ to the model, to ensure the correct ‘dosage’ and that an RCT can establish its effect. It contrasts with approaches that focus on governance principles, such as ‘my home life’, in which evidence – as practitioner stories – may or may not be used by new audiences. Policymakers may not care about the profound differences underpinning these approaches, preferring to use a variety of models in different settings rather than use scientific principles to choose between them.

Third, scientists must recognise that these choices are not ours to make. We have our own ideas about the balance between maintaining evidential hierarchies and governance principles, but have no ability to impose these choices on policymakers.

This point has profound consequences for the ways in which we engage in strategies to create impact. A research design to combine scientific evidence and governance seems like a good idea that few pragmatic scientists would oppose. However, this decision does not come close to settling the matter because these compromises look very different when designed by scientists or policymakers.

Take for example the case of ‘improvement science’ in which local practitioners are trained to use evidence to experiment with local pilots and learn and adapt to their experiences. Improvement science-inspired approaches have become very common in health sciences, but in many examples the research agenda is set by research leads and it focuses on how to optimise delivery of evidence-based practice.

In contrast, models such as the Early Years Collaborative reverse this emphasis, using scholarship as one of many sources of information (based partly on scepticism about the practical value of RCTs) and focusing primarily on the assets of practitioners and service users.

Consequently, improvement science appears to offer pragmatic solutions to the gap between divergent approaches, but only because it means different things to different people. Its adoption is only one step towards negotiating the trade-offs between RCT-driven and story-telling approaches.

These examples help explain why we know so little about how to influence policy. They take us beyond the bland statement – there is a gap between evidence and policy – trotted out whenever scientists try to maximise their own impact. The alternative is to try to understand the policy process, and the likely demand for and uptake of evidence, before working out how to produce evidence that would fit into the process. This different mind-set requires a far more sophisticated knowledge of the policy process than we see in most studies of the evidence-policy gap. Before trying to influence policymaking, we should try to understand it.

Further reading

The initial further reading uses this table to explore three ways in which policymakers, scientists, and other groups have tried to resolve the problems we discuss:

Table 1 Three ideal types EBBP

  1. This academic journal article (in Evidence and Policy) highlights the dilemmas faced by policymakers when they have to make two choices at once, to decide: (1) what is the best evidence, and (2) how strongly they should insist that local policymakers use it. It uses the case study of the ‘Scottish Approach’ to show that it often seems to favour one approach (‘approach 3’) but actually maintains three approaches. What interests me is the extent to which each approach contradicts the other. We might then consider the cause: is it an explicit decision to ‘let a thousand flowers bloom’ or an unintended outcome of complex government?
  2. I explore some of the scientific issues in more depth in posts which explore: the political significance of the family nurse partnership (as a symbol of the value of randomised control trials in government), and the assumptions we make about levels of control in the use of RCTs in policy.
  3. For local governments, I outline three ways to gather and use evidence of best practice (for example, on interventions to support prevention policy).
  4. For students and fans of policy theory, I show the links between the use of evidence and policy transfer.

You can also explore these links to discussions of EBPM, policy theory, and specific policy fields such as prevention.

  1. My academic articles on these topics
  2. The Politics of Evidence Based Policymaking
  3. Key policy theories and concepts in 1000 words
  4. Prevention policy

 


Filed under ESRC Scottish Centre for Constitutional Change, Evidence Based Policymaking (EBPM), public policy

The politics of evidence-based best practice: 4 messages

Well, it’s really a set of messages, geared towards slightly different audiences, and summed up by this table:

Table 1 Three ideal types EBBP

  1. This academic journal article (in Evidence and Policy) highlights the dilemmas faced by policymakers when they have to make two choices at once, to decide: (1) what is the best evidence, and (2) how strongly they should insist that local policymakers use it. It uses the case study of the ‘Scottish Approach’ to show that it often seems to favour one approach (‘approach 3’) but actually maintains three approaches. What interests me is the extent to which each approach contradicts the other. We might then consider the cause: is it an explicit decision to ‘let a thousand flowers bloom’ or an unintended outcome of complex government?
  2. I explore some of the scientific issues in more depth in posts which explore: the political significance of the family nurse partnership (as a symbol of the value of randomised control trials in government), and the assumptions we make about levels of control in the use of RCTs in policy.
  3. For local governments, I outline three ways to gather and use evidence of best practice (for example, on interventions to support prevention policy).
  4. For students and fans of policy theory, I show the links between the use of evidence and policy transfer.

Further reading (links):

My academic articles on these topics

The Politics of Evidence Based Policymaking

Key policy theories and concepts in 1000 words

Prevention policy


Filed under 1000 words, ESRC Scottish Centre for Constitutional Change, Evidence Based Policymaking (EBPM), Prevention policy, Scottish politics, UK politics and policy

The Politics of Evidence Based Policymaking: 3 messages

Really, it’s three different ways to make the same argument in the number of words that suits you:

  1. Guardian post (700 words): ‘When presenting evidence to policymakers, scientists and other experts need to engage with the policy process that exists, not the one we wish existed’
  2. Public Administration Review article (3000 words) To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty (free version)
  3. Book (40,000 words) The Politics of Evidence Based Policymaking (free version)

For even more words, see my EBPM page


Filed under Evidence Based Policymaking (EBPM), public policy

‘Evidence-based Policymaking’ and the Study of Public Policy

This post accompanies a 40 minute lecture (download) which considers ‘evidence-based policymaking’ (EBPM) through the lens of policy theory. The theory is important, to give us a language with which to understand EBPM as part of a wider discussion of the policy process, while the lens of EBPM allows us to think through the ‘real world’ application of concepts and theories.

To that end, I’ll make three key points:

  1. Definitions and clarity are important. ‘Evidence-based policymaking’, ‘evidence-based policy’ and related phrases such as ‘policy based evidence’ are used incredibly loosely in public debates. A focus on basic questions in policy studies – what is policy, and how can we measure policy change? – helps us clarify the issues, reject superficial debates on ‘evidence-based policy versus policy-based evidence’, and in some cases identify the very different assumptions people make about how policymaking works and should work.
  2. Realistic models are important. Discussing EBPM helps us identify the major flaws in simple models of policymaking such as the ‘policy cycle’. I’ll discuss the insights we gain by considering how policy scholars describe the implications of policymaker ‘bounded rationality’ and policymaking complexity.
  3. Realistic strategies are important. There is a lot of academic discussion of the need to overcome ‘barriers’ between evidence and policy. It is often atheoretical, producing naïve recommendations about improving the supply of evidence and training policymakers to understand it. I identify two more useful (but potentially controversial) strategies: be manipulative and learn where the ‘action’ is.

Definitions and clarity are important, so what is ‘evidence-based policymaking’?

What is Policy? It is incredibly difficult to say what policy is and measure how much it has changed. I use the working definition, ‘the sum total of government action, from signals of intent to the final outcomes’ to raise important qualifications: (a) it is problematic to conflate what people say they will do and what they actually do; (b) a policy outcome can be very different from the intention; (c) policy is made routinely through cooperation between elected and unelected policymakers and actors with no formal role in the process; (d) policymaking is also about the power not to do something. It is also important to identify the many components or policy instruments that make up policies, including: the level of spending; the use of economic incentives/ penalties; regulations and laws; the use of voluntary agreements and codes of conduct; the provision of public services; education campaigns; funding for scientific studies or advocacy; organisational change; and, the levels of resources/ methods dedicated to policy implementation (2012a: 26).

In that context, we are trying to capture a process in which actors make and deliver ‘policy’ continuously, not identify a set-piece event which provides a single opportunity to use a piece of scientific evidence to prompt a policymaker response.

Who are the policymakers? The intuitive definition is ‘people who make policy’, but there are two important distinctions: (1) between elected and unelected participants, since people such as civil servants also make important decisions; (2) between people and organisations, with the latter used as a shorthand to refer to a group of people making decisions collectively. There are blurry dividing lines between the people who make and influence policy. Terms such as ‘policy community’ suggest that policy decisions are made by a collection of people with formal responsibility and informal influence. Consequently, we need to make clear what we mean by ‘policymakers’ when we identify how they use evidence.

What is evidence? We can define evidence as an argument backed by information. Scientific evidence describes information produced in a particular way. Some describe ‘scientific’ broadly, to refer to information gathered systematically using recognised methods, while others refer to a specific hierarchy of scientific methods, with randomized control trials (RCTs) and the systematic review of RCTs at the top. This is a crucial point:

policymakers will seek many kinds of information that many scientists would not consider to be ‘the evidence’.

This discussion helps identify two key points of potential confusion when people discuss EBPM:

  1. When you describe ‘evidence-based policy’ and EBPM you need to clarify what the policy is and who is making it. This is not just about some elected politicians making announcements.
  2. When you describe ‘evidence’ you need to clarify what counts as evidence and what an ‘evidence-based’ policy response would look like. This point is at the heart of often fruitless discussions about ‘policy based evidence’, which seems to describe almost a dozen alleged mistakes by policymakers (relating to ignoring evidence, using the wrong kinds, and/ or producing a disproportionate response).

Realistic models are important, so what is wrong with the policy cycle?

One traditional way to understand policymaking in the ‘real world’ is to compare it to an ideal-type: what happens when the conditions of the ideal-type are not met? We do this in particular with the ‘policy cycle’ and ‘comprehensive rationality’.

So, consider this modified ideal-type of EBPM:

  • There is a core group of policymakers at the ‘centre’, making policy from the ‘top down’, breaking down their task into clearly defined and well-ordered stages;
  • Scientists are in a privileged position to help those policymakers make good decisions by getting them as close as possible to the ideal of ‘comprehensive rationality’ in which they have the best information available to inform all options and consequences.

So far, so good (although you might stop to consider who is best placed to provide evidence, and who – or which methods of evidence gathering – should be privileged or excluded), but what happens when we move away from the ideal-type? Here are two insights from a forthcoming paper (Cairney Oliver Wellstead 26.1.16).

Lessons from policy theory: 1. Identify multi-level policymaking environments

First, policymaking takes place in a less ordered and predictable policy environment, exhibiting:

  • a wide range of actors (individuals and organisations) influencing policy at many levels of government
  • a proliferation of rules and norms followed by different levels or types of government
  • close relationships (‘networks’) between policymakers and powerful actors
  • a tendency for certain beliefs or ‘paradigms’ to dominate discussion
  • shifting policy conditions and events that can prompt policymaker attention to lurch at short notice.

A focus on this bigger picture shifts our attention from the use of scientific evidence by an elite group of elected policymakers at the ‘top’ to its use by a wide range of influential actors in a multi-level policy process. It shows scientists and practitioners that they are competing with many actors to present evidence in a particular way to secure a policymaker audience. Support for particular solutions varies according to which organisation takes the lead and how it understands the problem. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour particular sources of evidence and some participants over others, and there is a language – indicating what ways of thinking are in good ‘currency’ (such as ‘value for money’) – that takes time to learn. Well-established beliefs provide the context for policymaking: new evidence on the effectiveness of a policy solution has to be accompanied by a shift of attention and successful persuasion. In some cases, social or economic ‘crises’ can prompt lurches of attention from one issue to another, and some forms of evidence can be used to encourage that shift. In this context, too many practitioner studies analyse, for example, a single central government decision rather than the longer-term process. Overcoming barriers to influence in that small part of the process will not provide an overall solution.

Lessons from policy theory: 2. Policymakers use two ‘shortcuts’ to make decisions

How do policymakers deal with their ‘bounded rationality’? They employ two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, habits, and the familiar to make decisions quickly. Consequently, the focus of policy theories is on the links between evidence, persuasion, and framing (in the wider context of a tendency for certain beliefs to dominate discussion).

Framing refers to the ways in which we understand, portray, and categorise issues. Problems are multi-faceted, but bounded rationality limits the attention of policymakers, and actors compete to highlight one image at the expense of others. The outcome of this process determines who is involved (for example, portraying an issue as technical limits involvement to experts), who is responsible for policy, how much attention they pay, and what kind of solution they favour. For example, tobacco control is more likely when policymakers view it primarily as a public health epidemic rather than an economic good, while ‘fracking’ policy depends on its primary image as a new oil boom or environmental disaster (I discuss both examples in depth here).

Scientific evidence plays a part in this process, but we should not exaggerate the ability of scientists to win the day with reference to evidence. Rather, policy theories signal the strategies that practitioners may have to adopt to increase demand for their evidence:

  • to combine facts with emotional appeals, to prompt lurches of policymaker attention from one policy image to another (punctuated equilibrium theory)
  • to tell simple stories which are easy to understand, help manipulate people’s biases, apportion praise and blame, and highlight the moral and political value of solutions (narrative policy framework)
  • to interpret new evidence through the lens of the pre-existing beliefs of actors within coalitions, some of which dominate policy networks (advocacy coalition framework)
  • to produce a policy solution that is feasible and exploit a time when policymakers have the opportunity to adopt it (multiple streams analysis).

Further, the impact of a framing strategy may not be immediate, even if it appears to be successful. Scientific evidence may prompt a lurch of attention to a policy problem, prompting a shift of views in one venue or the new involvement of actors from other venues. However, for example, it can take years to produce support for an ‘evidence-based’ policy solution, built on its technical and political feasibility (will it work as intended, and do policymakers have the motive and opportunity to select it?).

This discussion helps identify two key points of potential confusion when people discuss the policy cycle and comprehensive rationality:

  1. These concepts are there to help us understand what doesn’t happen. What are the real world implications of the limits to these models?
  2. They do not help you give good advice to people trying to influence the policy process. A focus on going through policymaking ‘stages’ and improving ‘rationality’ is always relevant when you give advice to policymakers. However unrealistic these models are, you would still want to gather the maximum information and go through a process of stages. This is very different from (a) giving advice on how to influence the process, or (b) evaluating the pros and cons of a political system with reference to ideal-types.

Realistic strategies are important, so how far should you go to overcome ‘barriers’ between evidence and policy?

You can’t take the politics out of EBPM. Even the selection of ‘the evidence’ is political (should evidence be scientific, and what counts as scientific evidence?).

Further, providers of scientific evidence face major dilemmas when they seek to maximise the ‘impact’ of their research. Armed with this knowledge of the policy process, how should you seek to engage and influence decisions made within it?

If you are interested in this final discussion, please see the short video here and the follow up blog post: Political science improves our understanding of evidence-based policymaking, but does it produce better advice?

See also:

To bridge the divide between evidence and policy: reduce ambiguity as much as uncertainty

 


Filed under 1000 words, Evidence Based Policymaking (EBPM), public policy

Is Evidence-Based Policymaking the same as good policymaking?

Evidence based policymaking (EBPM) is a great idea, isn’t it? Who could object to it, apart from the enemies of science? Well, I’m sort-of going to object in two ways, by arguing: that we partly like it so much because it’s a vague idea and we don’t know what it means; and that when we are clearer on its meaning, one type of EBPM seems very problematic indeed.

Carol Weiss gives us a menu of sensible EBPM options from which to choose. Evidence can be used:

  • to inform solutions to a problem identified by policymakers
  • as one of many sources of information used by policymakers, alongside ‘stakeholder’ advice and professional and service user experience
  • as a resource used selectively by politicians, with entrenched positions, to bolster their case
  • as a tool of government, to show it is acting (by setting up a scientific study), or to measure how well policy is working
  • as a source of ‘enlightenment’, shaping how people think over the long term.

In other words, these options provide a description of the use of evidence within an often messy political system: scientists may have a role, but they struggle for attention alongside many other people. This is where a separate definition of EBPM comes in, often as a prescription for the policy process: there should be a much closer link between the process in which scientists identify major policy problems with evidence, and the process in which politicians make policy decisions. We should seek to close the ‘evidence-policy gap’. The evidence should come first and we should bemoan the inability of policymakers to act accordingly.

Most policy-science studies of EBPM would reject this idea on descriptive grounds – as a rather naïve view of the policy process. In that sense, the call for EBPM is a revival of the idea of ‘comprehensive rationality’ in policymaking – which describes an ‘ideal-type’ and, in one sense, an optimal policy process. We assume that the values of society are reflected in the values of policymakers, and that a small number of policymakers control the policy process from its centre. Then, we highlight the conditions that would have to be met to allow those policymakers to use the government machine to turn those aims into policies – we can separate facts from values, organisations can rank a government’s preferences, the policy process is ‘linear’ and separated into clear stages, and analysis of the world is comprehensive. The point of this ideal-type is that it doesn’t exist. Instead, policy theory is about providing more realistic descriptions of the world.

On that basis, we might argue that scientists should quit moaning about the real world and start adapting to it. Stop bemoaning the pathologies of public policy – and some vague notion of the ‘lack of political will’ – and hoping for something better. If the policy process is messy and unpredictable, be pragmatic about how to engage. Balance the desire to produce a direct evidence-policy effect with a realisation that we need to frame the evidence to make it attractive to actors with very different ideas and incentives to act. Accept that policymakers seek many other legitimate sources of information and knowledge, and do not recognise, in the same way, your evidential hierarchies favouring randomised controlled trials (RCTs) and systematic reviews.

Even so, should we still secretly fantasise about the idealistic prescriptive side? Is EBPM an ‘ideal’, or something to aspire to even though it is unrealistic? Not if it means something akin to comprehensive rationality. Look again at the assumptions – one of which is that a small number of policymakers control the policy process from its centre. In this scenario, EBPM is about closing the evidence-policy gap by providing a clear link between scientists and politicians who centralise policymaking and make policy from the top down, with little role for debate, consultation and other forms of knowledge (one might call this ‘leadership’, often in the face of public opinion). This raises a potentially fundamental tension between EBPM and other sources of ‘good’ policymaking. What if the acceptance of one form of EBPM undermines the other roles of government?

A government may legitimately adopt a ‘bottom up’ approach to policymaking and delivery – consulting widely with a range of interest groups and public bodies to inform its aims, and working in partnership with those groups to deliver policy (perhaps by using long-term, ‘co-produced’ outcomes, rather than top-down and short-term targets, to measure success). This approach has important benefits: it generates wide ‘ownership’ of a policy solution and allows governments to gather useful feedback on the effects of policy instruments (which is important since, in practice, it may be impossible to separate the effect of an instrument from the effect of the way in which it was implemented).

If so, it would be difficult to maintain a separate EBPM process in which the central government commissions and receives the evidence which directly informs its aims, to be carried out elsewhere. If a government is committed to a bottom-up policy style, it seems inevitable that it would adopt the same approach to evidence – sharing it with a wide range of bodies and ‘co-producing’ a response. The use of evidence then becomes much less like a linear and simple process, and much more like a complicated and interactive one, in which many actors negotiate the practical implications of scientific evidence – considering it alongside other sources of policy-relevant information. This has the potential to take us away from the idea of evidence-driven policy, grounded in external scientific standards and an ‘objective’ hierarchy of methods, towards treating evidence as a resource used by actors within political systems, who draw on different ideas about the hierarchy of evidential sources. As such, ‘the evidence’ is not a resource controlled by the scientists producing the information.

From there, we might ask: is this still EBPM? Well, this takes us back to what it means. If it means that a ‘scientific consensus’ should have super-direct policy effects, then no. If it means that scientists provide information to inform the deliberations of policymakers, who claim a legitimate policymaking role, and engage in other forms of ‘good’ policymaking – by consulting widely and generating a degree of societal, governmental and/or practitioner consensus – then yes.

Full paper: Cairney PSA 2014 EBPM 28.2.14

See also: Policy Concepts in 1000 Words: ‘Evidence Based Policymaking’

Policy Concepts in 1000 Words: Bounded Rationality and Incrementalism
