This post by me and Kathryn Oliver appeared in the Guardian political science blog on 27.4.16: If scientists want to influence policymaking, they need to understand it. It builds on this discussion of ‘evidence based best practice’ in Evidence and Policy. There is further reading at the end of the post.
Three things to remember when you are trying to close the ‘evidence-policy gap’
Last week, a major new report on The Science of Using Science: Researching the Use of Research Evidence in Decision-Making suggested that there is very limited evidence of ‘what works’ when it comes to turning scientific evidence into policy. There are many publications out there on how to influence policy, but few have been shown to work.
This is because scientists focus on how to produce the best possible evidence rather than on how different policymakers use evidence in different ways within complex policymaking systems (what the report describes as the ‘capability, motivation, and opportunity’ to use evidence). For example, scientists identify, from their own perspective, a cultural gap between themselves and policymakers. This story tells us that we need to overcome differences in the languages used to communicate findings, the timescales on which recommendations are produced, and the incentives to engage.
This scientist perspective tends to assume that there is one arena in which policymakers and scientists might engage. Yet the action takes place in many venues, at many levels, involving many types of policymaker. So, if we view the process from these different perspectives, we see new ways to understand the use of evidence.
Examples from the delivery of health and social care interventions show us why we need to understand policymaker perspectives. We identify three main issues to bear in mind.
First, we must choose what counts as ‘the evidence’. In some academic disciplines there is a strong belief that some kinds of evidence are better than others: the best evidence is gathered using randomised controlled trials and accumulated in systematic reviews. In others, these ideas have limited appeal or are rejected outright, in favour of (say) practitioner experience and service user-based feedback as the knowledge on which to base policies. Most importantly, policymakers may not care about these debates; they tend to beg, borrow, or steal information from readily available sources.
Second, we must choose the lengths to which we are prepared to go to ensure that scientific evidence is the primary influence on policy delivery. When we open up the ‘black box’ of policymaking we find a tendency of central governments to juggle many models of government – sometimes directing policy from the centre but often delegating delivery to public, third, and private sector bodies. Those bodies can retain some degree of autonomy during service delivery, often based on governance principles such as ‘localism’ and the need to include service users in the design of public services.
This presents a major dilemma for scientists because policy solutions based on RCTs are likely to come with conditions that limit local discretion. For example, a condition of the UK government’s licence for the ‘family nurse partnership’ is that there is ‘fidelity’ to the model, to ensure the correct ‘dosage’ and to allow an RCT to establish its effect. It contrasts with approaches that focus on governance principles, such as ‘my home life’, in which evidence – as practitioner stories – may or may not be used by new audiences. Policymakers may not care about the profound differences underpinning these approaches, preferring to use a variety of models in different settings rather than use scientific principles to choose between them.
Third, as scientists we must recognise that these choices are not ours to make. We have our own ideas about the balance between maintaining evidential hierarchies and governance principles, but have no ability to impose these choices on policymakers.
This point has profound consequences for the ways in which we engage in strategies to create impact. A research design that combines scientific evidence with these governance principles seems like a good idea that few pragmatic scientists would oppose. However, such a decision does not come close to settling the matter, because these compromises look very different when designed by scientists than when designed by policymakers.
Take, for example, the case of ‘improvement science’, in which local practitioners are trained to use evidence to experiment with local pilots and to learn from and adapt to their experiences. Improvement science-inspired approaches have become very common in the health sciences, but in many examples the research agenda is set by research leads and focuses on how to optimise the delivery of evidence-based practice.
In contrast, models such as the Early Years Collaborative reverse this emphasis, using scholarship as one of many sources of information (based partly on scepticism about the practical value of RCTs) and focusing primarily on the assets of practitioners and service users.
Consequently, improvement science appears to offer pragmatic solutions to the gap between divergent approaches, but only because it means different things to different people. Its adoption is only one step towards negotiating the trade-offs between RCT-driven and story-telling approaches.
These examples help explain why we know so little about how to influence policy. They take us beyond the bland statement – there is a gap between evidence and policy – trotted out whenever scientists try to maximise their own impact. The alternative is to try to understand the policy process, and the likely demand for and uptake of evidence, before working out how to produce evidence that would fit into that process. This different mind-set requires a far more sophisticated knowledge of the policy process than we see in most studies of the evidence-policy gap. Before trying to influence policymaking, we should try to understand it.
Further reading
The initial further reading uses this table to explore three ways in which policymakers, scientists, and other groups have tried to resolve the problems we discuss:
- This academic journal article (in Evidence and Policy) highlights the dilemmas faced by policymakers when they have to make two choices at once: (1) what is the best evidence, and (2) how strongly they should insist that local policymakers use it. It uses the case study of the ‘Scottish Approach’ to show that it often seems to favour one approach (‘approach 3’) but actually maintains three approaches. What interests me is the extent to which each approach contradicts the others. We might then consider the cause: is it an explicit decision to ‘let a thousand flowers bloom’ or an unintended outcome of complex government?
- I explore some of the scientific issues in more depth in posts which explore: the political significance of the family nurse partnership (as a symbol of the value of randomised controlled trials in government), and the assumptions we make about levels of control in the use of RCTs in policy.
- For local governments, I outline three ways to gather and use evidence of best practice (for example, on interventions to support prevention policy).
- For students and fans of policy theory, I show the links between the use of evidence and policy transfer.
You can also explore these links to discussions of EBPM, policy theory, and specific policy fields such as prevention:
- My academic articles on these topics
- The Politics of Evidence Based Policymaking
- Key policy theories and concepts in 1000 words
- Prevention policy