All going well, it will be out in November 2019. We are now at the proofing stage.
I thank James Georgalakis for inviting me to speak at the inaugural event of IDS’ new Evidence into Policy and Practice Series, and the audience for giving extra meaning to my story about the politics of ‘evidence-based policymaking’. The talk (using PowerPoint) and Q&A are here:
James invited me to respond to some of the challenges raised to my talk – in his summary of the event – so here it is.
I’m working on a ‘show, don’t tell’ approach, leaving some of the story open to interpretation. As a result, much of the meaning of this story – and, in particular, the focus on limiting participation – depends on the audience.
For example, consider the impact of the same story on audiences primarily focused on (a) scientific evidence and policy, or (b) participation and power.
Normally, when I talk about evidence and policy, my audience is mostly people with scientific or public health backgrounds asking: why do policymakers ignore scientific evidence? I am usually invited to ruffle feathers, mostly by challenging a – remarkably prevalent – narrative that goes like this:
In that context, I suggest that there are many claims to policy-relevant knowledge, policymakers have to ignore most information before making choices, and they are not in control of the policy process of which they are ostensibly in charge.
Limiting participation as a strategic aim
Then, I say to my audience that – if they are truly committed to maximising the use of scientific evidence in policy – they will need to consider how far they will go to get what they want. I use the metaphor of an ethical ladder in which each rung offers more influence in exchange for dirtier hands: tell stories and wait for opportunities, or demonise your opponents, limit participation, and humour politicians when they cherry-pick to reinforce emotional choices.
It’s ‘show don’t tell’ but I hope that the take-home point for most of the audience is that they shouldn’t focus so much on one aim – maximising the use of scientific evidence – to the detriment of other important aims, such as wider participation in politics beyond a reliance on a small number of experts. I say ‘keep your eyes on the prize’ but invite the audience to reflect on which prizes they should seek, and the trade-offs between them.
Limited participation – and ‘windows of opportunity’ – as an empirical finding
I did suggest that most policymaking happens away from the sphere of ‘exciting’ and ‘unruly’ politics. Put simply, people have to ignore almost every issue almost all of the time. Each time they focus their attention on one major issue, they must – by necessity – ignore almost all of the others.
For me, the political science story is largely about the pervasiveness of policy communities and policymaking out of the public spotlight.
The logic is as follows. Elected policymakers can only pay attention to a tiny proportion of their responsibilities. They delegate the rest to bureaucrats at lower levels of government. Bureaucrats lack specialist knowledge, and rely on other actors for information and advice. Those actors trade information for access. In many cases, they develop effective relationships based on trust and a shared understanding of the policy problem.
Trust often comes from a sense that everyone has proven to be reliable. For example, they follow norms or the ‘rules of the game’. One classic rule is to contain disputes within the policy community when actors don’t get what they want: if you complain in public, you draw external attention and internal disapproval; if not, you are more likely to get what you want next time.
For me, this is key context in which to describe common strategic concerns:
Where is the power analysis in all of this?
I rarely use the word power directly, partly because – like ‘politics’ or ‘democracy’ – it is an ambiguous term with many interpretations (see Box 3.1). People often use it without agreeing on its meaning and, if it means everything, maybe it means nothing.
However, you can find many aspects of power within our discussion. For example, insider and outsider strategies relate closely to Schattschneider’s classic discussion in which powerful groups try to ‘privatise’ issues and less powerful groups try to ‘socialise’ them. Agenda setting is about using resources to make sure issues do, or do not, reach the top of the policy agenda, and most do not.
These aspects of power sometimes play out in public, when:
However, they are no less important when they play out routinely:
In other words, the word ‘power’ is often hidden because the most profound forms of power often seem to be hidden.
In the context of our discussion, power comes from the ability to define some evidence as essential and other evidence as low quality or irrelevant, and therefore define some people as essential or irrelevant. It comes from defining some issues as exciting and worthy of our attention, or humdrum, specialist and only relevant to experts. It is about the subtle, unseen, and sometimes thoughtless ways in which we exercise power to harness people’s existing beliefs and dominate their attention as much as the transparent ways in which we mobilise resources to publicise issues. Therefore, to ‘maximise the use of evidence’ sounds like an innocuous collective endeavour, but it is a highly political and often hidden use of power.
The EBPM talks begin with a discussion of the same three points: what counts as evidence, why we must ignore most of it (and how), and the policy process in which policymakers use some of it. However, the framing of these points, and the ways in which we discuss the implications, varies markedly by audience. So, in this post, I provide a short discussion of the three points, then show how the audience matters (referring to the city as a shorthand for each talk).
The overall take-home points are highly practical, in the same way that critical thinking has many practical applications (in other words, I’m not offering a map, toolbox, or blueprint):
3 ways to describe the use of evidence in policymaking
However, it only remains a valence issue when we refuse to define evidence and justify what counts as good evidence. After that, you soon see the political choices emerge. A reference to evidence is often a shorthand for scientific research evidence, and good often refers to specific research methods (such as randomised control trials). Or, you find people arguing very strongly in the almost-opposite direction, criticising this shorthand as exclusionary and questioning the ability of scientists to justify claims to superior knowledge. Somewhere in the middle, we find that a focus on evidence is a good way to think about the many forms of information or knowledge on which we might make decisions, including: a wider range of research methods and analyses, knowledge from experience, and data relating to the local context with which policy would interact.
So, what begins as a valence issue becomes a gateway to many discussions about how to understand profound political choices regarding: how we make knowledge claims, how to ‘co-produce’ knowledge via dialogue among many groups, and the relationship between choices about evidence and governance.
There is far more information about the world than we are able to process. A focus on evidence gaps often gives way to the recognition that we need to find effective ways to ignore most evidence.
There are many ways to describe how individuals combine cognition and emotion to limit their attention enough to make choices, and policy studies (to all intents and purposes) describe equivalent processes – described, for example, as ‘institutions’ or rules – in organisations and systems.
One shortcut between information and choice is to set aims and priorities; to focus evidence gathering on a small number of problems or one way to define a problem, and identify the most reliable or trustworthy sources of evidence (often via evidence ‘synthesis’). Another is to make decisions quickly by relying on emotion, gut instinct, habit, and existing knowledge or familiarity with evidence.
Either way, agenda setting and problem definition are political processes that address uncertainty and ambiguity. We gather evidence to reduce uncertainty, but first we must reduce ambiguity by exercising power to define the problem we seek to solve.
Policy textbooks (well, my textbook at least!) provide a contrast between:
Overall, policy theories have much to offer people with an interest in evidence-use in policy, but primarily as a way to (a) manage expectations, to (b) produce more realistic strategies and less dispiriting conclusions. It is useful to frame our aim as to analyse the role of evidence within a policy process that (a) we don’t quite understand, rather than (b) we would like to exist.
The events themselves
Below, you will find a short discussion of the variations of audience and topic. I’ll update and reflect on this discussion (in a revised version of this post) after taking part in the events.
Social science and policy studies: knowledge claims, bounded rationality, and policy theory
For Auckland and Wellington A, I’m aiming for an audience containing a high proportion of people with a background in social science and policy studies. I describe the discussion as ‘meta’ because I am talking about how I talk about EBPM to other audiences, then inviting discussion on key parts of that talk, such as how to conceptualise the policy process and present conceptual insights to people who have no intention of taking a deep dive into policy theory.
I often use the phrase ‘I’ve read it, so you don’t have to’ partly as a joke, but also to stress the importance of disciplinary synthesis when we engage in interdisciplinary (and inter-professional) discussion. If so, it is important to discuss how to produce such ‘synthetic’ accounts.
I tend to describe key components of a policymaking environment quickly: many policy makers and influencers spread across many levels and types of government, institutions, networks, socioeconomic factors and events, and ideas. However, each of these terms represents a shorthand to describe a large and diverse literature. For example, I can describe an ‘institution’ in a few sentences, but the study of institutions contains a variety of approaches.
Academic-practitioner discussions: improving the use of research evidence in policy
For Wellington B and Melbourne, the audience is an academic-practitioner mix. We discuss ways in which we can encourage the greater use of research evidence in policy, perhaps via closer collaboration between suppliers and users.
Discussions with scientists: why do policymakers ignore my evidence?
Sydney UNSW focuses more on researchers in scientific fields (often not in social science). I frame the question in a way that often seems central to scientific researcher interest: why do policymakers seem to ignore my evidence, and what can I do about it?
Then, I tend to push back on the idea that the fault lies with politics and policymakers, to encourage researchers to think more about the policy process and how to engage effectively in it. If I’m trying to be annoying, I’ll suggest to a scientific audience that they see themselves as ‘rational’ and politicians as ‘irrational’. However, the more substantive discussion involves comparing (a) ‘how to make an impact’ advice drawn from the personal accounts of experienced individuals, giving advice to individuals, and (b) the sort of advice you might draw from policy theories which focus more on systems.
Background post: What can you do when policymakers ignore your evidence?
Early career researchers: the need to build ‘impact’ into career development
Canberra UNSW is more focused on early career researchers. I think this is the most difficult talk because I don’t rely on the same joke about my role: to turn up at the end of research projects to explain why they failed to have a non-academic impact. Instead, my aim is to encourage intelligent discussion about situating the ‘how to’ advice for individual researchers into a wider discussion of policymaking systems.
Similarly, Brisbane A and B are about how to engage with practitioners, and communicate well to non-academic audiences, when most of your work and training is about something else entirely (such as learning about research methods and how to engage with the technical language of research).
2. European Health Forum Gastein 2018 ‘Policy in Evidence’ (from 6 minutes)
Evidence-based policymaking and the new policy sciences
These are some opening remarks for my talk on EBPM at Open Society Foundations (New York), 24th October 2016. The OSF recorded the talk, so you can listen below, externally, or by right-clicking and saving. Please note that it was a lunchtime talk, so the background noises are plates and glasses.
‘Evidence based policy making’ is a good political slogan, but not a good description of the policy process. If you expect to see it, you will be disappointed. If you seek more thoughtful ways to understand and act within political systems, you need to understand five key points then decide how to respond.
EBPM looks like a valence issue in which most of us agree that policy and policymaking should be ‘evidence based’ (perhaps like ‘evidence based medicine’). Yet, valence issues only command broad agreement on vague proposals. By defining each term we highlight ambiguity and the need to make political choices to make sense of key terms:
‘Comprehensive rationality’ describes the absence of ambiguity and uncertainty when policymakers know what problem they want to solve and how to solve it, partly because they can gather and understand all information required to measure the problem and determine the effectiveness of solutions.
Instead, we talk of ‘bounded rationality’ and how policymakers deal with it. They employ two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, habits, and familiarity to make decisions quickly.
I say ‘irrational’ provocatively, to raise a key strategic question: do you criticise emotional policymaking (describing it as ‘policy based evidence’) and try somehow to minimise it, adapt pragmatically to it, or see ‘fast thinking’ more positively in terms of ‘fast and frugal heuristics’? Regardless, policymakers will think that their heuristics make sense to them, and it can be counterproductive to simply criticise their alleged irrationality.
‘Policy cycle’ describes the idea that there is a core group of policymakers at the ‘centre’, making policy from the ‘top down’, and pursuing their goals in a series of clearly defined and well-ordered stages, such as: agenda setting, policy formulation, legitimation, implementation, and evaluation. In this context, one might identify how to influence a single point of central government decision-making.
These factors suggest that an effective engagement strategy is not straightforward: our instinct may be to influence elected policymakers at the ‘centre’ making authoritative choices, but the ‘return on investment’ is not clear. So, you need to decide how and where to engage, but it takes time to know ‘where the action is’ and with whom to form coalitions.
There are several principles of ‘good’ policymaking and only one is EBPM. Others relate to the value of pragmatism and consensus building, combining science advice with public values, improving policy delivery by generating ‘ownership’ of policy among key stakeholders, and sharing responsibility with elected local policymakers.
Our choices about which principles and forms of evidence to privilege are inextricably linked. For example, some forms of evidence gathering seem to require uniform models and limited local or stakeholder discretion to modify policy delivery. The classic example is a programme whose value is established using randomised control trials (RCTs). Others begin with local discretion, seeking evidence from service user and local practitioner experience. This principle seems to rule out the use of RCTs. Of course, one can try to pursue both approaches and a compromise between them, but the outcome may not satisfy advocates of either approach or help produce the evidence that they favour.
These insights should prompt us to see how far we are willing, and should, go to promote the use of certain forms of evidence in policymaking. For example, if policymakers and the public are emotional decision-makers, should we seek to manipulate their thought processes by using simple stories with heroes, villains, and clear but rather simplistic morals? If policymaking systems are so complex, should we devote huge amounts of resources to make sure we’re effective? Kathryn Oliver and I also explore the implications for proponents of scientific evidence, and there is a live debate on science advice to government on the extent to which scientists should be more than ‘honest brokers’.
Where we go from there is up to you
The value of policy theory to this topic is to help us reject simplistic models of EBPM and think through the implications of more sophisticated and complicated processes. It does not provide a blueprint for action (how could it?), but instead a series of questions that you should answer when you seek to use evidence to get what you want. They are political choices based on value judgements, not issues that can be resolved by producing more evidence.
This discussion is based on my impressions so far of realist reviews and the potential for policy studies to play a role in their effectiveness. The objectives section formed one part of a recent team bid for external funding (so, I acknowledge the influence of colleagues on this discussion, but not enough to blame them personally). We didn’t get the funding, but at least I got a lengthy blog post and a dozen hits out of it.
I like the idea of a ‘realistic’ review of evidence to inform policy, alongside a promising uptake in the use of ‘realist review’. The latter doesn’t mean realistic: it refers to a specific method or approach – realist evaluation, realist synthesis.
The agenda of the realist review already takes us along a useful path towards policy relevance, driven partly by the idea that many policy and practice ‘interventions’ are too complex to be subject to meaningful ‘systematic review’.
The latter’s aim – which we should be careful not to caricature – may be to identify something as close as possible to a general law: if you do X, the result will generally be Y, and you can be reasonably sure because the studies (such as randomised control trials) meet the ‘gold standard’ of research.
The former’s aim is to focus extensively on the context in which interventions take place: if you do X, the result will be Y under these conditions. So, for example, you identify the outcome that you want, the mechanism that causes it, and the context in which the mechanism causes the outcome. Maybe you’ll even include a few more studies, not meeting the ‘gold standard’, if they meet other criteria of high quality research (I declare that I am a qualitative researcher, so you can tell who I’m rooting for).
Realist reviews come increasingly with guide books and discussions on how to do them systematically. However, my impression is that when people do them, they find that there is an art to applying discretion to identify what exactly is going on. It is often difficult to identify or describe the mechanism fully (often because source reports are not clear on that point), say for sure it caused the outcome even in particular circumstances, and separate the mechanism from the context.
I italicised the last point because it is super-important. I think that it is often difficult to separate mechanism from context because (a) the context is often associated with a particular country’s political system and governing arrangements, and (b) it might be better to treat governing context as another mechanism in a notional chain of causality.
In other words, my impression is that realist reviews focus on the mechanism at the point of delivery; the last link in the chain in which the delivery of an intervention causes an outcome. It may be wise to also identify the governance mechanism that causes the final mechanism to work.
Why would you complicate an already complicated review?
I aim to complicate things then simplify them heroically at the end.
Here are five objectives that I maybe think we should pursue in an evidence review for policymakers (I can’t say for sure until we all agree on the principles of science advice):
Objective 1: evidence into action by addressing the politics of evidence-based policymaking
There is no shortage of scientific evidence of policy problems. Yet, we lack a way to use evidence to produce politically feasible action. The ‘politics of evidence-based policymaking’ produces scientists frustrated with the gap between their evidence and a proportionate policy response, and politicians frustrated that evidence is not available in a usable form when they pay attention to a problem and need to solve it quickly. The most common responses in key fields, such as environmental and health studies, do not solve this problem. The literature on ‘barriers’ between evidence and policy recommends initiatives such as: clearer scientific messages, knowledge brokerage and academic-practitioner workshops, timely engagement in politics, scientific training for politicians, and participation to combine evidence and community engagement.
This literature makes limited reference to policy theory and has two limitations. First, studies focus on reducing empirical uncertainty, not ‘framing’ issues to reduce ambiguity. Too many scientific publications go unread in the absence of a process of persuasion to influence policymaker demand for that information (particularly when more politically relevant and paywall-free evidence is available elsewhere). Second, few studies appreciate the multi-level nature of political systems or understand the strategies actors use to influence policy. This involves experience and cultural awareness to help learn: where key decisions are made, including in networks between policymakers and influential actors; the ‘rules of the game’ of networks; how to form coalitions with key actors; and, that these processes unfold over years or decades.
The solution is to produce knowledge that will be used by policymakers, community leaders, and ‘street level’ actors. It requires a (23%) shift in focus from the quality of scientific evidence to (a) who is involved in policymaking and the extent to which there is a ‘delivery chain’ from national to local, and (b) how actors demand, interpret, and use evidence to make decisions. For example, simple qualitative stories with a clear moral may be more effective than highly sophisticated decision-making models or quantitative evidence presented without enough translation.
Objective 2: produce simple lessons and heuristics
We know that the world is too complex to fully comprehend, yet people need to act despite uncertainty. They rely on ‘rational’ methods to gather evidence from sources they trust, and ‘irrational’ means to draw on gut feeling, emotion, and beliefs as short cuts to action (or ‘system 2’ and ‘system 1’ thinking, respectively). Scientific evidence can help reduce some uncertainty, but not tell people how to behave. Scientific information strategies can be ineffective, by expecting audiences to appreciate the detail and scale of evidence, understand the methods used to gather it, and possess the skills to interpret and act on it. The unintended consequence is that key actors fall back on familiar heuristics and pay minimal attention to inaccessible scientific information. The solution is to tailor evidence reviews to audiences: examining their practices and ways of thinking; identifying the heuristics they use; and, describing simple lessons and new heuristics and practices.
Objective 3: produce a pragmatic review of the evidence
To review a wider range of evidence sources than in traditional systematic reviews is to recognise the trade-offs between measures of high quality (based on a hierarchy of methods and journal quality) and high impact (based on familiarity and availability). If scientists reject and refuse to analyse evidence that policymakers routinely take more seriously (such as the ‘grey’ literature), they have little influence on key parts of policy analysis. Instead, provide a framework that recognises complexity but produces research that is manageable at scale and translatable into key messages:
This narrow focus is crucial to the development of a research question, limiting analysis to the most relevant studies to produce a rigorous review in a challenging timeframe. Then, the idea from realist reviews is that you ‘test’ your hypotheses and clarify the theories that underpin this analysis. This should involve a test for political as well as technical feasibility: speak regularly with key actors to gauge the likelihood that the mechanisms you recommend will be acted upon, the extent to which the context of policy delivery is stable and predictable, and whether the mechanisms will work consistently under those conditions.
Objective 4: identify key links in the ‘causal chain’ via interdisciplinary study
We all talk about combining perspectives from multiple disciplines but I totally mean it, especially if it boosts the role of political scientists who can’t predict elections. For example, health or environmental scientists can identify the most effective interventions to produce good health or environmental outcomes, but not how to work with and influence key people. Policy scholars can identify how the policy process works and how to maximise the use of scientific evidence within it. Social science scholars can identify mechanisms to encourage community participation and the ownership of policies. Anthropologists can provide insights on the particular cultural practices and beliefs underpinning the ways in which people understand and act according to scientific evidence.
Perhaps more importantly, interdisciplinarity provides political cover: we got the best minds in many disciplines and locked them in a room until they produced an answer.
We need this cover for something I’ll call ‘informed extrapolation’ and justify with reference to pragmatism: if we do not provide well-informed analyses of the links between each mechanism, other less-informed actors will fill the gap without appreciating key aspects of causality. For example, if we identify a mechanism for the delivery of successful interventions – e.g. high levels of understanding and implementation of key procedures – there is still uncertainty: do these mechanisms develop organically through ‘bottom up’ collaboration or can they be introduced quickly from the ‘top’ to address an urgent issue? A simple heuristic for central governments could be to introduce training immediately or to resist the temptation for a quick fix.
Relatively-informed analysis, to recommend one of those choices, may only be used if we can back it up with interdisciplinary weight and produce recommendations that are unequivocal (although, again, other approaches are available).
Objective 5: focus intensively on one region, and one key issue, not ‘one size fits all’
We need to understand individual countries or regions – their political systems, communities, and cultural practices – and specific issues in depth, to know how abstract mechanisms work in concrete contexts, and how the same evidence will be interpreted and used differently by actors in those contexts. We need to avoid politically insensitive approaches based on the assumption that a policy that works in countries like (say) the UK will work in countries that are not (say) the UK, and/or that actors in each country will understand policy problems in the same way.
It all looks incredibly complicated, doesn’t it? There’s no time to do all that, is there? It will end up as a bit of a too-rushed jumble of high-and-low quality evidence and advice, won’t it?
My argument is that these problems are actually virtues because they provide more insight into how busy policymakers will gather and use evidence. Most policymakers will not know how to do a systematic review or understand why you are so attached to them. Maybe you’ll impress them enough to get them to trust your evidence, but have you put yourself into a position to know what they’ll do with it? Have you thought about the connection between the evidence you’ve gathered, what people need to do, who needs to do it, and who you need to speak to about getting them to do it? Maybe you don’t have to, if you want to be no more than a ‘neutral scientist’ or ‘honest broker’ – but you do if you want to give science advice to policymakers that policymakers can use.
Really, it’s three different ways to make the same argument in the number of words that suits you:
For even more words, see my EBPM page