In other posts on evidence-based policymaking, I am critical of the idea that the main barriers to getting evidence into policy are the presentation of scientific evidence, its timing, and the scientific skills of policymakers. You may overcome these barriers without closing the ‘evidence-policy gap’: for example, you may spend too much effort trying to reduce scientific uncertainty about the size of a policy problem without addressing ambiguity and the tendency of policymakers to consider only a limited range of solutions.
In this post, I try to reframe this discussion by describing the EBPM process as a series of political choices made as much by scientists as by policymakers. The choices associated primarily with policymakers are also made by academics, and they relate to inescapable trade-offs rather than policymaking problems that can somehow be solved with more evidence.
In this context, a key role of policy analysis is to improve policymaking by clarifying key elements of the process in which we produce, understand, and use evidence on policy solutions[i] (in part by encouraging scientists to understand the choices of policymakers by reflecting on their own).
The three Cs: comparability, control, and centralisation
When we focus on the evidence underpinning policy solutions or ‘interventions’, three key terms stand out, and each raises issues on which we can identify a notional spectrum of approaches. I’ll describe them as ideal-types for now:
In randomised controlled trials (RCTs) we conduct experiments to determine the effect of an intervention by giving the dose to one group and a placebo to the other. This takes place in a controlled environment in which we are able to isolate the effect of an intervention by making sure that the only systematic difference between the two groups is the (non)introduction of the intervention.
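The logic of this design can be sketched in a few lines of code. This is a toy simulation with invented numbers (the sample size, baseline outcome rate, and effect size are all assumptions, not drawn from any real trial); it simply shows how random assignment lets the difference in outcome rates between the two arms estimate the effect of the intervention.

```python
import random

def simulate_rct(n=1000, base_rate=0.30, effect=0.10, seed=42):
    """Simulate a simple two-arm trial: randomly assign each
    participant to treatment or control, then compare outcome
    rates. All parameters are illustrative assumptions."""
    rng = random.Random(seed)
    treated, control = [], []
    for _ in range(n):
        arm = treated if rng.random() < 0.5 else control
        # The outcome probability differs only by the treatment
        # effect, mimicking a controlled environment where the
        # intervention is the sole systematic difference.
        p = base_rate + (effect if arm is treated else 0.0)
        arm.append(1 if rng.random() < p else 0)
    rate = lambda xs: sum(xs) / len(xs)
    return rate(treated) - rate(control)  # estimated effect

print(round(simulate_rct(), 3))
```

With a large enough sample, the estimate clusters around the true effect; the point of the controlled environment is that nothing other than the intervention systematically separates the arms.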
A key tenet of policy analysis is that it is difficult if not impossible to control real-world environments and, therefore, to be sure that an intervention works as intended.
Indeed, some scholars argue that complex policymaking systems defy control and, therefore, are not conducive to the use of RCTs. Instead, we use other methods such as case studies to highlight the interaction between a large number of factors which combine to produce an outcome (the result of which cannot be linked simply to the independent effects of each factor).
As a result, we have our first notional spectrum: at one end is complete confidence in RCTs to conduct policy-relevant experiments; at the other is a complete lack of confidence in their value.
This choice about how to address the issue of control feeds directly into our understanding of comparability. If you have complete confidence in RCTs you can produce a series of studies in different times/places/populations, based on the understanding that the results are directly comparable. You might even say that: if it worked there, and there, and there, it will work here. If you have no confidence in RCTs, you seek other ways to consider comparability, such as by producing case study analysis that is detailed enough to allow you to highlight patterns which might be apparent in multiple cases.
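The stakes of that comparability judgement can be illustrated with another toy simulation (again, the effect sizes and outcome rates are invented for the sketch): the phrase ‘it worked there, and there, and there’ assumes the underlying effect travels across contexts, but if the effect itself differs by place, a string of identical trials need not produce comparable results.

```python
import random

def trial_estimate(true_effect, base_rate=0.30, n=1000, seed=1):
    """One simulated two-arm trial; returns the estimated effect
    (difference in outcome rates). All numbers are invented."""
    rng = random.Random(seed)
    treated = [1 if rng.random() < base_rate + true_effect else 0
               for _ in range(n)]
    control = [1 if rng.random() < base_rate else 0
               for _ in range(n)]
    return sum(treated) / n - sum(control) / n

# Let the underlying effect differ by context: the same trial
# design then yields different answers in different places.
contexts = {"place A": 0.12, "place B": 0.08, "place C": 0.00}
for place, effect in contexts.items():
    print(place, round(trial_estimate(effect), 3))
```

The design of each trial is identical; only the (unobserved) context-specific effect differs, which is exactly the gap between ‘it worked there’ and ‘it will work here’.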
The third issue, centralisation, relates to governance: if a national central government identifies effective policy solutions, should it roll them out uniformly across the country or allow local public bodies the chance to tailor solutions to their areas? At one end of another notional spectrum is the idea that governments should seek uniformity, to ensure ‘fidelity’ to a successful programme which requires a specific dosage, or simply to minimise a ‘postcode lottery’ in which people receive different levels of service in different parts of the country. At the other end is a preference for localism, built on arguments including: one size does not fit all; local authorities have their own mandates to make local policy choices; a policy will not succeed without consultation and local ‘ownership’; and/or local public bodies need the ability to adapt to quickly changing local circumstances.
What happens when the three Cs come together?
Although we can separate these terms and their related choices analytically, in practice people make these choices simultaneously, or a choice in one category influences their choices in the others.
In effect, two fundamental debates play out at the same time: epistemological and methodological disagreements on the nature of good evidence; and practical disagreements about the best way for national policymakers to translate evidence into local policy and practice. Such disagreements present a major dilemma for policymakers, which cannot be solved by scientists or with reference to evidence. Instead, it involves political choices about which forms of evidence to prioritise and how to combine evidence with governance choices to inform practice.
Our new spectrum of choice may involve a range of options between the following extremes: oblige policy emulation/uniformity and roll out policy interventions that require ‘fidelity’ to the policy intervention (minimal discretion to adapt interventions to local circumstances); or encourage policy inspiration, as people tell detailed stories of their experiences and invite others to learn from them.
These approaches to policymaking are tied strongly to approaches to evidence gathering, such as when programmes based on RCTs require fidelity to ensure the correct dosage of an intervention during a continuous process of policy delivery and evaluation. They are also influenced by the need to adapt policies to local circumstances, to address (a) the ‘not invented here’ problem, in which local policymakers are sceptical about importing policies that were not developed in their area, and (b) normative arguments about the relative benefits of centralisation and localism, regarding the extent to which we should value policy flexibility, local political autonomy, and the generation of normative principles guiding service delivery (e.g. including service users and communities in the design or ‘co-production’ of policy) as much as alleged effectiveness.
What is the value of such discussions?
First, elected policymakers are often portrayed as the villains of this piece because, for example, they don’t understand RCTs and the need for RCT-driven evaluations, they don’t recognise a hierarchy of evidence in which the systematic review of RCTs represents the gold standard, and/or they are unwilling to overcome ethical dilemmas about who gets to be in or out of the control group of a promising intervention.
Yet there are also academics who remain sceptical of the value of RCTs, have different views on the hierarchy of evidence (many scholars value practitioner experience and service user feedback more highly), and/or promote different ways to gather and use comparable policy-relevant evidence (see for example this entertaining exchange between Axford and Pawson).
Second, EBPM is not just about the need for policymakers to understand how evidence is produced and should be used. It is also about the need for academics to reflect on, for example:
- the assumptions they make about the best ways to gather evidence and put the results into practice (in a political environment where other people may not share, or even know about, their understanding of the world).
- the difference between the identification of evidence on the success of an intervention, in one place and one point in time (or several such instances), and the political choice to roll it out, based on the assumption that national governments are best placed to spread that success throughout the country.
Third, I have largely discussed extremes or ideal-types. In practice, people may be willing to compromise or produce pragmatic responses to the need to adapt research methods to real world situations. In that context, this kind of discussion should help clarify why scientists may need to work with policymakers or practitioners to produce a solution that each actor can support.
Further reading: EBPM and best practice 5.11.15
[i] Note: we can generate and use evidence on two elements – (1) the nature of a problem, and (2) the value of possible solutions – in very different ways. A conflation of the two leads to a lot of confused debate about how evidence-based a policy or policymaking process tends to be.