This post forms one part of the Policy Analysis in 750 words series overview.
Think of two visions for policy analysis. It should be primarily:
- ‘evidence based’, or
- ‘co-produced’?
These choices are not mutually exclusive, but there are key tensions between them that should not be ignored, such as when we ask:
- how many people should be involved in policy analysis?
- whose knowledge counts?
- who should control policy design?
Perhaps we can only produce a sensible combination of the two if we clarify their often very different implications for policy analysis. Let’s begin with one story for each and see where they take us.
A story of ‘evidence-based policymaking’
One story of ‘evidence based’ policy analysis is that it should be based on the best available evidence of ‘what works’.
Often, the description of the ‘best’ evidence relates to the idea that there is a notional hierarchy of evidence according to the research methods used.
At the top would be the systematic review of randomised controlled trials, and nearer the bottom would be expertise, practitioner knowledge, and stakeholder feedback.
This kind of hierarchy has major implications for policy learning and transfer, such as when importing policy interventions from abroad or ‘scaling up’ domestic projects.
Put simply, the experimental method is designed to identify the causal effect of a very narrowly defined policy intervention. Its importation or scaling up would be akin to evidence-based medicine, in which the evidence suggests the causal effect of a specific active ingredient to be administered at the correct dosage. A very strong commitment to a uniform model precludes the processes we might associate with co-production, in which many voices contribute to a policy design to suit a specific context (see also: the intersection between evidence and policy transfer).
A story of co-production in policymaking
One story of ‘co-produced’ policy analysis is that it should be ‘reflexive’ and based on respectful conversations between a wide range of policymakers and citizens.
Often, the description is of the diversity of valuable policy-relevant information, with scientific evidence considered alongside community voices and normative values.
This rejection of a hierarchy of evidence also has major implications for policy learning and transfer. Put simply, a co-production method is designed to identify the positive effect – widespread ‘ownership’ of the problem and commitment to a commonly-agreed solution – of a well-discussed intervention, often in the absence of central government control.
Its use would be akin to a collaborative governance mechanism, in which the causal mechanism is perhaps the process used to foster agreement (including to produce the rules of collective action and the evaluation of success) rather than the intervention itself. A very strong commitment to this process precludes the adoption of a uniform model that we might associate with narrowly-defined stories of evidence based policymaking.
Where can you find these stories in the 750-words series?
- Texts focusing on policy analysis as evidence-based/informed practice (albeit subject to limits) include: Weimer and Vining, Meltzer and Schwartz, Brans, Geva-May, and Howlett (compare with Mintrom, Dunn)
- Texts on being careful while gathering and analysing evidence include: Spiegelhalter
- Texts that challenge the ‘evidence based’ story include: Bacchi, T. Smith, Hindess, Stone
How can you read further?
See the EBPM page and special series ‘The politics of evidence-based policymaking: maximising the use of evidence in policy’
There are 101 approaches to co-production, but let’s see if we can get away with two categories:
- Co-producing policy (policymakers, analysts, stakeholders). Some key principles can be found in Ostrom’s work and studies of collaborative governance.
- Co-producing research to help make it more policy-relevant (academics, stakeholders). See the Social Policy and Administration special issue ‘Inside Co-production’ and Oliver et al’s ‘The dark side of coproduction’ to get started.
To compare ‘epistemic’ and ‘reflexive’ forms of learning, see Dunlop and Radaelli’s ‘The lessons of policy learning: types, triggers, hindrances and pathologies’
My interest has been to understand how governments juggle competing demands, such as to (a) centralise and localise policymaking, (b) encourage uniform and tailored solutions, and (c) embrace and reject a hierarchy of evidence. What could possibly go wrong when they entertain contradictory objectives? For example:
- Paul Cairney (2019) “The myth of ‘evidence based policymaking’ in a decentred state”, forthcoming in Public Policy and Administration (Special Issue, The Decentred State) (accepted version)
- Paul Cairney (2019) ‘The UK government’s imaginative use of evidence to make policy’, British Politics, 14, 1, 1-22 Open Access PDF
- Paul Cairney and Kathryn Oliver (2017) ‘Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy?’ Health Research Policy and Systems (HARPS), DOI: 10.1186/s12961-017-0192-x PDF
- Paul Cairney (2017) “Evidence-based best practice is more political than it looks: a case study of the ‘Scottish Approach’”, Evidence and Policy, 13, 3, 499-515 PDF