The European Commission’s Joint Research Centre’s Science Hub is producing a series of videos on evidence and policy, based on 10 questions. Here are my answers. The video will follow later (in the meantime, at the end you can find two people saying ‘would that it were so simple’):
- Who are you?
Paul Cairney, Professor of Politics and Public Policy, University of Stirling. I write about public policy, applying theoretical insight to issues such as ‘the politics of EBPM’.
- How did you become interested in evidence for policy?
It was always in the back of my mind because it is the latest version of a long-standing interest (in policy studies) about the absence of ‘comprehensive rationality’: what do policymakers do when they can’t consider all information, and what are the consequences for politics and policy? Do they use ‘irrational’ shortcuts? Does their attention tend to lurch? Does policy become incremental or ‘punctuated’? There are many different answers, explored in this ‘1000 Words series’.
- Why is evidence-informed policy important?
It’s part of the broader importance of inclusive policymaking based on a diversity of voices and the generation of knowledge about how the world works (alongside a debate about how it should work).
- What is the most common misconception about evidence-informed policy?
I think that many scientists are too quick to dismiss politics – and to identify ‘policy-based evidence’ driven by ideological and emotional politicians – rather than to understand the ever-present limits to the use of evidence in policy. I think many also exaggerate the lack of scientific influence on policy by focusing on the most salient issues.
- What are the most common mistakes made by researchers or policymakers?
The classic mistake by researchers is to think that you make a good argument by bombarding people with a lot of information without thinking about how they’ll receive it. An important mistake that policymakers can make is to rely too much on the experts they know and trust, rather than seeking ways to identify diverse and ‘state of the art’ sources of information.
- What is the single most important piece of advice for researchers/scientists who want to have policy impact?
Think about your audience and how they demand information: get their attention with a simple story, describe the problem in ways they understand (and think about the world), and show that your solution is technically and politically feasible.
- How do you change minds with facts and evidence?
Engage for the long term, recognising your ‘enlightenment’ role. Something dramatic would have to happen to change minds immediately and dramatically – it would be akin to a religious conversion. Or, in politics, it’s about finding a sympathetic audience (different minds) in another policymaking venue or hoping for a change of government. In other words, this is about the power of participants as much as the power of evidence and ideas.
- How should you communicate uncertainty about the evidence?
Since I study politics, I’d focus on the political choices here. You can communicate uncertainty in academic journals via ‘limitations’ sections and expect robust challenge on your evidence from your peers. In politics, if you show uncertainty – and your competitor does not – you may be at a disadvantage, and may need to do some soul-searching about how much uncertainty to hold back. The rules change as soon as you become a scientist and advocate.
- How do you measure the policy impact of evidence?
In ways that are not conducive to ‘impact’ measurement by research bodies! For example, with colleagues, I tracked how evidence of the harms of smoking influenced policy. In ‘leading countries’ it took 2-3 decades, and depended on three conditions: (1) key actors ‘frame’ the evidence to set a policy agenda; (2) the policy environment is generally conducive to evidence-informed change; and (3) key actors exploit ‘windows of opportunity’ for each policy change. In most countries, policy change of this scale has not happened. In such cases, we can never say that evidence simply wins the day.
- Who or What are your “must-reads”?
I partly took more notice of this topic after reading two articles by Kathryn Oliver and colleagues:
Oliver, K., Innvær, S., Lorenc, T., Woodman, J. and Thomas, J. (2014a) ‘A systematic review of barriers to and facilitators of the use of evidence by policymakers’, BMC Health Services Research, 14 (1), 2. http://www.biomedcentral.com/1472-6963/14/2
Oliver, K., Lorenc, T. and Innvær, S. (2014b) ‘New directions in evidence-based policy research: a critical analysis of the literature’, Health Research Policy and Systems, 12, 34. http://www.biomedcentral.com/content/pdf/1478-4505-12-34.pdf
I was struck by the argument here that policymakers often fund sophisticated models for evidence-based policymaking but don’t understand or use them:
Nilsson, M., Jordan, A., Turnpenny, J., Hertin, J., Nykvist, B. and Russel, D. (2008) ‘The use and non-use of policy appraisal tools in public policy making: an analysis of three European countries and the European Union’, Policy Sciences, 41, 4, 335-55.
It’s also worth reading this account, which shows that policymakers don’t have the same respect for a ‘hierarchy’ of evidence/methods as many scientists:
Bédard, P. and Ouimet, M. (2012) ‘Cognizance and consultation of randomized controlled trials among ministerial policy analysts’, Review of Policy Research, 29, 5, 625-644.
For more information, start with my EBPM page.