Paul Cairney is Professor of Politics and Public Policy at the University of Stirling (email@example.com). This post will appear in The Guardian’s Political Science blog. It is based on his book The Politics of Evidence Based Policymaking, launched by the Alliance for Useful Evidence and developed on his EBPM webpage.
‘Evidence-based policymaking’ is now central to the agenda of scientists: academics need to demonstrate that they are making an ‘impact’ on policy, and scientists want to close the ‘evidence-policy gap’. The live debate on energy policy is one of many examples in which scientists bemoan a tendency for policymakers to produce ideological rather than ‘evidence based’ decisions, and seek ways to change their minds.
Yet they will fail if they do not understand how the policy process works. Understanding it requires us to reject two romantic notions: (1) that policymakers will ever think like scientists; and (2) that there is a clearly identifiable point of decision at which scientists can contribute evidence to key policymakers and make a demonstrable impact.
To better understand how policymakers think, we need a full account of ‘bounded rationality’. This phrase partly describes the fact that policymakers can gather only limited information before they make decisions quickly. They will have made a choice before you have a chance to say ‘more research is needed’! To decide quickly, they use two shortcuts: ‘rational’ ways to gather the best evidence on solutions to meet their goals, and ‘irrational’ ways – including drawing on emotions and gut feeling – to identify problems even more quickly.
This insight shows us one potential flaw in academic strategies. The most common response to bounded rationality in scientific articles is to focus on the supply of evidence: develop a hierarchy of evidence which privileges the systematic review of randomised controlled trials, generate knowledge, and present it in a form that is understandable to policymakers. Yet we need to pay more attention to the demand for evidence, which follows lurches of policymaker attention, often driven by quick and emotional decisions. For example, there is no point in taking the time to make evidence-based solutions easier to understand if policymakers are not (or are no longer) interested. Instead, successful advocates recognise the value of emotional appeals and simple stories to generate attention to a problem.
To identify when and how to contribute evidence, we need to understand the complicated environment in which policy takes place. There is no ‘policy cycle’ in which to inject scientific evidence at the point of decision. Rather, the policy process is messy and often unpredictable, and better described as a complex system in which, for example, the same injection of evidence can have no effect or a major effect. It contains: many actors presenting evidence to influence policymakers at many levels and types of government; networks which are often close-knit and difficult to access, because bureaucracies have operating procedures that favour particular sources of evidence and some participants over others; and a language within policymaking institutions indicating which ways of thinking are in good ‘currency’ (such as ‘value for money’). Social or economic ‘crises’ can prompt lurches of attention from one issue to another, or even prompt policymakers to change completely the ways in which they understand a policy problem. However, while lurches of attention are common, changes to well-established ways of thinking in government are rare, or take place only in the long term.
This insight shows us a second potential flaw in academic strategies: the idea that research ‘impact’ can be described as a set-piece event, separable from the policy process as a whole. Contrast it with the kind of advice – develop a long-term strategy – that we would generate from policy studies: invest the time to find out (a) where the ‘action’ is, and (b) how you can boost your influence as part of a coalition of like-minded actors looking for opportunities to raise attention to problems and push your solutions.
Unfortunately, these insights mostly help us identify what not to do. Further, the alternatives may be difficult to accept (how many scientists would make manipulative or emotional appeals to generate attention to their research?) or to deliver (who has the time to conduct research and seek meaningful influence?). However, by engaging with the practical and ethical dilemmas that the policy process creates for advocates of scientific evidence, we can help produce strategies better suited to the complex real world than to a simple process that we wish existed.