Tag Archives: evidence

Policy in 500 words: uncertainty versus ambiguity

In policy studies, there is a profound difference between uncertainty and ambiguity:

  • Uncertainty describes a lack of knowledge or a worrying lack of confidence in one’s knowledge.
  • Ambiguity describes the ability to entertain more than one interpretation of a policy problem.

Both concepts relate to ‘bounded rationality’: policymakers do not have the ability to process all information relevant to policy problems. Instead, they employ two kinds of shortcut:

  • ‘Rational’. Pursuing clear goals and prioritizing certain sources of information.
  • ‘Irrational’. Drawing on emotions, gut feelings, deeply held beliefs, and habits.

I make an artificially binary distinction, uncertain versus ambiguous, and relate it to another binary, rational versus irrational, to point out the pitfalls of focusing too much on one aspect of the policy process:

  1. Policy actors seek to resolve uncertainty by generating more information or drawing greater attention to the available information.

Actors can try to solve uncertainty by: (a) improving the quality of evidence, and (b) making sure that there are no major gaps between the supply of and demand for evidence. Relevant debates include: ‘what counts as good evidence?’, focusing on the criteria to define scientific evidence and their relationship with other forms of knowledge (such as practitioner experience and service user feedback); and ‘what are the barriers between supply and demand?’, focusing on the need for better ways to communicate.

  2. Policy actors seek to resolve ambiguity by focusing on one interpretation of a policy problem at the expense of another.

Actors try to solve ambiguity by exercising power to increase attention to, and support for, their favoured interpretation of a policy problem. You will find many examples of such activity spread across the 500 and 1000 words series.

A focus on reducing uncertainty gives the impression that policymaking is a technical process in which people need to produce the best evidence and deliver it to the right people at the right time.

In contrast, a focus on reducing ambiguity gives the impression of a more complicated and political process in which actors are exercising power to compete for attention and dominance of the policy agenda. Uncertainty matters, but primarily to describe the role of a complex policymaking system in which no actor truly understands where they are or how they should exercise power to maximise their success.

Further reading:

Framing

The politics of evidence-based policymaking

To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty

How to communicate effectively with policymakers: combine insights from psychology and policy studies

Here is the relevant opening section in UPP:

[Image: extract from p234 of Understanding Public Policy (UPP) on ambiguity]


Filed under 500 words, agenda setting, Evidence Based Policymaking (EBPM), public policy, Storytelling

The Politics of Evidence revisited

This is a guest post by Dr Justin Parkhurst, responding to a review of our books by Dr Joshua Newman, and my reply to that review.

I really like that Joshua Newman has done this synthesis of 3 recent books covering aspects of evidence use in policy. Too many book reviews these days just describe the content, so some critical comments are welcome, as is the comparative perspective.

I’m also honoured that my book was included in the shortlist (it is available here, free as an ebook: bit.ly/2gGSn0n for interested readers) and I’d like to follow on from Paul to add some discussion points to the debate here – with replies to both Joshua and Paul (hoping first names are acceptable).

Have we heard all this before?

Firstly, I agree with Paul that saying ‘we’ve heard this all before’ risks speaking about a small community of active researchers who study these issues, and not the wider community. But I’d also add that what we’ve heard before is a starting point to many of these books, not where they end up.

In terms of where we start: I’m sure many of us who work in this field are somewhat frustrated at meetings when we hear people making statements that are well established in the literature. Some examples include:

  • “There can be many types of evidence, not just scientific research…”
  • “In the legal field, ‘evidence’ means something different…”
  • “We need evidence-based policy, not policy-based evidence…”
  • “We need to know ‘what works’ to get evidence into policy…”

Thus, I do think there is still a need to cement the foundations of the field more strongly – in essence, to establish a disciplinary baseline that people weighing in on a subject should be expected to know about before providing additional opinions. One way to help do this is for scholars to continue to lay out the basic starting points in our books – typically in the first chapter or two.

Of course, other specialist fields and disciplines have managed to establish their expertise to a point that individuals with opinions on a subject typically have some awareness that there is a field of study out there which they don’t necessarily know about. This is most obvious in the natural sciences (and perhaps in economics). E.g. most people (current presidents of some large North American countries aside?) are aware that they don’t know a lot about engineering, medicine, or quantum physics – so they won’t offer speculative or instinctive opinions about why airplanes stay in the air, how to do bypass surgery, or what was wrong with the ‘Ant-Man’ film. Or when individuals do offer views, they are typically expected to know the basics of the subject.

For the topic of evidence and policy, I often point people to Huw Davies, Isabel Walter, and Sandra Nutley’s book Using Evidence, which is a great introduction to much of this field, as well as Carol Weiss’ insights from the late 70s on the many meanings of research utilisation. I also routinely point people to read The Honest Broker by Roger Pielke Jr. (which I, myself, failed to read before writing my book and, as such, end up repeating many of his points – I’ve apologised to him personally).

So yes, I think there is space for work like ours to continue to establish a baseline, even if some of us know this, because the expertise of the field is not yet widely recognised or established. Yet I think it is not accurate for Joshua to argue we end up repeating what is known, considering our books diverge in key ways after laying out some of the core foundations.

Where do we go from there?

More interesting for this discussion, then, is to reflect on what our various books try to do beyond simply laying out the basics of what we know about evidence use and policy. It is here where I would disagree with Joshua’s point claiming we don’t give a clear picture about the ‘problem’ that ‘evidence-based policy’ (his term – one I reject) is meant to address. Speaking only for my own book, I lay out the problem of bias in evidence use as the key motivation driving both advocates of greater evidence use as well as policy scholars critical of (oversimplified) knowledge translation efforts. But I distinguish between two forms of bias: technical bias – whereby evidence is used in ways that do not adhere to scientific best practice and thus produce sub-optimal social outcomes; and issue bias – whereby pieces of evidence, or mechanisms of evidence use, can obscure the important political choices in decision making, skewing policy choices towards those things that have been measured, or are conducive to measurement. Both of these forms of bias are violations of widely held social values – values of scientific fidelity on the one hand, and of democratic representation on the other. As such, for me, these are the problems that I try to consider in my book, exploring the political and cognitive origins of both, in order to inform thinking on how to address them.

That said, I think Joshua is right in some of the distinctions he makes between our works in how we try to take this field forward, or move beyond current challenges in differing ways. Paul takes the position that researchers need to do something, and one thing they can do is better understand politics and policy making. I think Paul’s writings about policy studies for students are superb (see his book and blog posts about policy concepts). But in terms of applying these insights to evidence use, this is where we most often diverge. I feel that keeping the focus on researchers puts too much emphasis on achieving ‘uptake’ of researchers’ own findings. In my view, I would point to three potential (overlapping) problems with this.

  • First – I do not think this is the role or responsibility of researchers; rather, the problem reflects a failure to establish the right system of evidence provision;
  • Second – I feel it leaves unstated the important but oft ignored normative question of how ‘should’ evidence be used to inform policy;
  • Third – I believe these calls rest on often unstated assumptions about the answer to the second point which we may wish to challenge.

In terms of the first point: I’m more of an institutionalist (as Joshua points out). My view is that the problems around non-use or misuse of evidence can be seen as resulting from a failure to establish appropriate systems that govern the use of evidence in policy processes. As such, the solution would have to lie with institutional development and changes (my final chapter advocates for this) that establish systems which serve to achieve the good governance of evidence.

Paul’s response to Joshua says that researchers are demanding action, so he speaks to them. He wants researchers to develop “useful knowledge of the policy process in which they might want to engage” (as he says above).  Yet while some researchers may wish to engage with policy processes, I think it needs to be clear that doing so is inherently a political act – and can take on a role of issue advocacy by promoting those things you researched or measured over other possible policy considerations (points made well by Roger Pielke Jr. in The Honest Broker). The alternative I point towards is to consider what good systems of evidence use would look like. This is the difference between arguing for more uptake of research, vs. arguing for systems through which all policy relevant evidence can be seen and considered in appropriate ways – regardless of the political savvy, networking, or activism of any given researcher (in my book I have chapters reflecting on what appropriate evidence for policy might be, and what a good process for its use might be, based on particular widely shared values).

In terms of the second and third points – my book might be the most explicit in its discussion of the normative values guiding efforts to improve evidence, and I am more critical than some about the assumption that getting researchers’ work ‘used’ by policymakers is a de-facto good thing. This is why I disagree with Joshua’s conclusion that my work frames the problem as ‘bridging the gap’. Rather I’d say I frame the problem as asking the question of ‘what does a better system of evidence use look like from a political perspective?’ My ‘good governance of evidence’ discussion presents an explicitly normative framework based on the two sets of values mentioned above – those around democratic accountability and around fidelity to scientific good practice – both of which have been raised as important in discussions about evidence use in political processes.

Is the onus on researchers?

Finally, I also would argue against Joshua’s conclusion that my work places the burden of resolving the problems on researchers. Paul argues above that he does this but with good reason. I try not to do this. This is again because my book is not making an argument for more evidence to be ‘used’ per se (and I don’t expect policy makers to just want to use it either). Rather I focus on identifying principles by which we can judge systems of evidence use, calling for guided incremental changes within national systems.

While I think academics can play an important role in establishing ‘best practice’ ideas, I explicitly argue that the mandate to establish, build, or incrementally change evidence advisory systems lies with the representatives of the people. Indeed, I include ‘stewardship’ as a core principle of my good governance of evidence framework to show that it should be those individuals who are accountable to the public that build these systems in different countries. Thus, the burden lies not with academics, but rather with our representatives – and, indirectly with all of us through the demands we make on them – to improve systems of evidence use.



Filed under Evidence Based Policymaking (EBPM), Uncategorized

Debating the politics of evidence-based policy

Joshua Newman has provided an interesting review of three recent books on evidence/ policy (click here). One of those books is mine: The Politics of Evidence-Based Policy Making (which you can access here).

His review is very polite, for which I thank him. I hope my brief response can be seen in a similarly positive light (well, I had hoped to make it brief). Maybe we disagree on one or two things, but often these discussions are about the things we emphasize and the way we describe similar points.

There are 5 points to which I respond because I have 5 digits on my right hand. I’d like you to think of me counting them out on my fingers. In doing so, I’ll use ‘Newman’ throughout, because that’s the academic convention, but I’d also like you to imagine me reading my points aloud and whispering ‘Joshua’ before each ‘Newman’.

  1. Do we really need to ‘take the debate forward’ so often?

I use this phrase myself, knowingly, to keep a discussion catchy, but I think it’s often misleading. I suggest not getting your hopes up too high when Newman raises the possibility of taking the debate forward with his concluding questions. We won’t resolve the relationship between evidence, politics & policy by pretending to reframe the same collection of questions about the prospect of political reform that people have been asking for centuries. It is useful to envisage better political systems (the subject of Newman’s concluding remarks) but I don’t think we should pretend that this is a new concern or that it will get us very far.

Indeed, my usual argument is that researchers need to do something (such as improve how we engage in the policy process) while we wait for political system reforms to happen (while doubting if they will ever happen).

Further, Newman does not propose any political reforms to address the problems he raises. Rather, for example, he draws attention to Trump to describe modern democracies as ‘not pluralist utopias’ and to identify examples in which policymakers draw primarily on beliefs, not evidence. By restating these problems, he does not solve them. So, what are researchers supposed to do after they grow tired of complaining that the world does not meet their hopes or expectations?

In other words, for me, (a) promoting political change and (b) acting during its absence are two sides of the same coin. We go round and round more often than we take things forward.

  2. What debate are we renaming?

Newman’s ‘we’ve heard it before’ argument seems more useful, but there is a lot to hear and relatively few people have heard it. I’d warn against the assumption that ‘I’ve heard this before’ can ever equal ‘we’ve heard it before’ (unless ‘we’ refers to a tiny group of specialists talking only to each other).

Rather, one of the most important things we can do as academics is to tell the same story to each other (to check if we understand the same story, in the same way, and if it remains useful) and to wider audiences (in a way that they can pick up and use without dedicating their career to our discipline).

Some of our most important insights endure for decades and they sometimes improve in the retelling. We apply them to new eras, and often come to the same basic conclusions, but it seems unhelpful to criticise a lack of complete novelty in individual texts (particularly when they are often designed to be syntheses). Why not use them to occasionally take a step back to discuss and clarify what we know?

Perhaps more importantly, I don’t think Newman is correct when he says that each book retells the story of the ‘research utilization’ literature. I’m retelling the story of policy theory, which describes how policymakers deal with bounded rationality in a complex policymaking environment. Policy theory’s intellectual histories often provide very different perspectives – of the policymaker trying to make good enough decisions, rather than the researcher trying to improve the uptake of their research – than the agenda inspired by Weiss et al (see for example The New Policy Sciences).

  3. Don’t just ‘get political’; understand the policy process

I draw on policy theory because it helps people understand policymaking. It would be a mistake to conclude from my book that I simply want researchers to ‘get political’. Rather, I want them to develop useful knowledge of the policy process in which they might want to engage. This knowledge is not freely available; it takes time to understand the discipline and reflect on policy dynamics.

Yet, the payoff can be profound, if only because it helps people think about the difference between two analytically separate causes of a notional ‘evidence policy gap’: (a) individuals making choices based on their beliefs and limited information (which is relatively easy to understand but also to caricature), and (b) systemic or ‘environmental’ causes (which are far more difficult to conceptualise and explain, but often more useful to understand).

  4. Don’t throw out the ‘two communities’ phrase without explaining why

Newman criticises the phrase ‘two communities’ as a description of silos in policymaking versus research, partly because (a) many policymakers use research frequently, and (b) the real divide is often between users/ non-users of research within policymaking organisations. In short, there are more than two communities.

I’d back up his published research with my anecdotal experience of giving talks to government audiences: researchers and analysts within government are often very similar in outlook to academics and they often talk in the same way as academics about the disconnect between their (original or synthetic) research and its use by their ‘operational’ colleagues.

Still, I’m not sure why Newman concludes that the ‘two communities’ phrase is ‘deeply flawed and probably counter-productive’. Yes, the world is more nuanced and less binary than ‘two communities’ suggests. Yes, the real divide may be harder to spot. Still, as Newman et al suggest: ‘Policy makers and academics should focus on bridging instruments that can bring their worlds closer together’. This bullet point from their article seems, to me, to be the point of using the phrase ‘two communities’. Maybe Caplan used the phrase differently in 1979, but to assert its historic meaning then reject the phrase’s use in modern discussion seems less useful than simply clarifying the argument in ways such as:

  • There is no simple policymaker/ academic divide but, … note the major difference in requirements between (a) people who produce or distribute research without taking action, which allows them (for example) to be more comfortable with uncertainty, and (b) people who need to make choices despite having incomplete information to hand.
  • You might find a more receptive audience in one part of government (e.g. research/ analytical) than another (e.g. operational), so be careful about generalising from singular experiences.
  5. Should researchers engage in the policy process?

Newman says that each book, ‘unfairly places the burden of resolving the problem in the hands of an ill-equipped group of academics, operating outside the political system’.

I agree with Newman when he says that many researchers do not possess the skills to engage effectively in the policy process. Scientific training does not equip us with political skills. Indeed, I think you could read a few of my blog posts and conclude, reasonably, that you want nothing more to do with the policy process because you’d be more effective by focusing on research.

The reason I put the onus back on researchers is because I am engaging with arguments like the one expressed by Newman (in other words, part of the meaning comes from the audience). Many people conclude their evidence policy discussions by identifying (or ‘reframing’) the problem primarily as the need for political reform. For me, the focus on other people changing to suit your preferences seems unrealistic and misplaced. In that context, I present the counter-argument that it may be better to adapt effectively to the policy process that exists, not the one you’d like to see. Sometimes it’s more useful to wear a coat than complain about the weather.

See also:  The Politics of Evidence 

The Politics of Evidence revisited



Filed under Evidence Based Policymaking (EBPM), public policy

Evidence based medicine provides a template for evidence based policy, but not in the way you expect

Guest post by Dr Kathryn Oliver and Dr Warren Pearce to celebrate the publication of their new Open Access article ‘Three lessons from evidence-based medicine and policy’ in Palgrave Communications, part of the Open Access series ‘politics of evidence based policymaking’ (for which we still welcome submissions).

Evidence-based medicine (EBM) is often described as a ‘template’ for evidence-based policymaking (EBPM).

Critics of this idea would be 100% right if EBM lived up to its inaccurate caricature, in which there is an inflexible ‘hierarchy of evidence’ which dismisses too much useful knowledge and closes off the ability of practitioners to use their judgement.

In politics, this would be disastrous because there are many sources of legitimate knowledge and ‘the evidence’ cannot and should not become an alternative to political choice. And, of course, politicians must use their judgement, as – unlike medicine – there is no menu of possible answers to any problem.

Yet, modern forms of EBM – or, at least, sensible approaches to it – do not live up to this caricature. Instead, EBM began as a way to support individual decision-makers, and has evolved to reflect new ways of thinking about three main dilemmas. The answers to these dilemmas can help improve policymaking.

How to be more transparent

First, evidence-informed clinical practice guidelines lead the way in transparency. There’s a clear, transparent process to frame a problem, gather and assess evidence, and, through a deliberative discussion with relevant stakeholders, decide on clinical recommendations. Alongside other tools and processes, this demonstrates transparency which increases trust in the system.

How to balance research and practitioner knowledge

Second, dialogues in EBM help us understand how to balance research and practitioner knowledge. EBM has moved beyond the provision of research evidence, towards recognising and legitimising a negotiation between individual contexts, the expertise of decision-makers, and technical advice on interpreting research findings for different settings.

How to be more explicit about how you balance evidence, power, and values

Third, EBM helps us think about how to share power to co-produce policy and to think about how we combine evidence, values, and our ideas about who commands the most legitimate sources of power and accountability. We know that new structures for dialogue and decision-making can formalise and codify processes, but they do not necessarily lead to inclusion of a diverse set of voices. Power matters in dictating what knowledge is produced, for whom, and what is done with it. EBM has offered as many negative as positive lessons so far, particularly when sources of research expertise have been reluctant to let go enough to really co-produce knowledge or policy, but new studies and frameworks are at least keeping this debate alive.

Overall, our discussion of EBM challenges critics to identify its real-world application, not the old caricature. If they do, it can help show us how one of the most active research agendas, on the relationship between high quality evidence and effective action, provides lessons for politics. In the main, the lesson is that our aim is not simply to maximise the use of evidence in policy, but to maximise the credibility of evidence and legitimacy of evidence advocates when so many other people have a legitimate claim to knowledge and authoritative action.


Filed under Evidence Based Policymaking (EBPM)

What do we need to know about the politics of evidence-based policymaking?

Today, I’m helping to deliver a new course – Engaging Policymakers Training Programme – piloted by the Alliance for Useful Evidence and UCL. Right now, it’s for UCL staff (and mostly early career researchers). My bit is about how we can better understand the policy process so that we can engage in it more effectively. I have reproduced the brief guide below (for my two 2-hour sessions as part of a wider block). If anyone else is delivering something similar, please let me know. We could compare notes.

This module will be delivered in two parts to combine theory and practice

Part 1: What do we need to know about the politics of evidence-based policymaking?

Policy theories provide a wealth of knowledge about the role of evidence in policymaking systems. They prompt us to understand and respond to two key dynamics:

  1. Policymaker psychology. Policymakers combine rational and irrational shortcuts to gather information and make good enough decisions quickly. To appeal to rational shortcuts and minimise cognitive load, we reduce uncertainty by providing syntheses of the available evidence. To appeal to irrational shortcuts and engage emotional interest, we reduce ambiguity by telling stories or framing problems in specific ways.
  2. Complex policymaking environments. These processes take place in the context of a policy environment out of the control of individual policymakers. Environments consist of: many actors in many levels and types of government; engaging with institutions and networks, each with their own informal and formal rules; responding to socioeconomic conditions and events; and, learning how to engage with dominant ideas or beliefs about the nature of the policy problem. In other words, there is no policy cycle or obvious stage in which to get involved.

In this seminar, we discuss how to respond effectively to these dynamics. We focus on unresolved issues:

  1. Effective engagement with policymakers requires storytelling skills, but do we possess them?
  2. It requires a combination of evidence and emotional appeals, but is it ethical to do more than describe the evidence?
  3. The absence of a policy cycle, and presence of an ever-shifting context, requires us to engage for the long term, to form alliances, learn the rules, and build up trust in the messenger. However, do we have the time, and how should we invest it?

The format will be relatively informal. Cairney will begin by making some introductory points (not a powerpoint driven lecture) and encourage participants to relate the three questions to their research and engagement experience.

Gateway to further reading:

  • Paul Cairney and Richard Kwiatkowski (2017) ‘How to communicate effectively with policymakers: combine insights from psychology and policy studies’, Palgrave Communications
  • Paul Cairney and Kathryn Oliver (2017) ‘Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy?’ Health Research Policy and Systems (HARPS), DOI: 10.1186/s12961-017-0192-x
  • Paul Cairney, Kathryn Oliver, and Adam Wellstead (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review, Early View (forthcoming), DOI: 10.1111/puar.12555

Part 2: How can we respond pragmatically and effectively to the politics of EBPM?

In this seminar, we move from abstract theory and general advice to concrete examples and specific strategies. Each participant should come prepared to speak about their research and present a theoretically informed policy analysis in 3 minutes (without the aid of powerpoint). Their analysis should address:

  1. What policy problem does my research highlight?
  2. What are the most technically and politically feasible solutions?
  3. How should I engage in the policy process to highlight these problems and solutions?

After each presentation, each participant should be prepared to ask questions about the problem raised and the strategy to engage. Finally, to encourage learning, we will reflect on the memorability and impact of presentations.

Powerpoint: Paul Cairney A4UE UCL 2017


Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy

Speaking truth to power: a sometimes heroic but often counterproductive strategy

Our MPP class started talking about which Tom Cruise character policy analysts should be.

It started off as a point about who not to emulate: Tom Cruise in A Few Good Men. I used this character (inaccurately) to represent the archetype of someone ‘speaking truth to power’ (yes, I know TC actually said ‘I want the truth’ and JN said ‘you can’t handle the truth’).

The story of ‘speaking truth to power’ comes up frequently in discussions of the potentially heroic nature of researchers committed to (a) producing the best scientific evidence, (b) maximising the role of scientific evidence in policy, and (c) telling off policymakers if they don’t use evidence to inform their decisions. They can’t handle the truth.

Yet, as I argue in this article with Richard Kwiatkowski (for this series on evidence/policy), ‘without establishing legitimacy and building trust’ it can prove to be counterproductive. Relevant sections include:

This involves showing simple respect and seeking ways to secure their trust, rather than feeling egotistically pleased about ‘speaking truth to power’ without discernible progress. Effective engagement requires preparation, diplomacy, and good judgement as much as good evidence.

and

One solution [to obstacles associated with organizational psychology, discussed by Larrick] is ‘task conflict’ rather than ‘relationship conflict’, to encourage information sharing without major repercussions. It requires the trust and ‘psychological safety’ that comes with ‘team development’ … If successful, one can ‘speak truth to power’ … or be confident that your presentation of evidence, which challenges the status quo, is received positively.  Under such circumstances, a ‘battle of ideas’ can genuinely take place and new thinking can be possible. If these circumstances are not present, speaking truth to power may be disastrous.

The policy analyst would be better as the Tom Cruise character in Live, Die, Repeat. He exhibits a lot of relevant behaviour:

  • Engaging in trial and error to foster practical learning
  • Building up trust with, and learning from, key allies with more knowledge and skills
  • Forming part of, and putting faith in, a team of which he is a necessary but insufficient part

In The New Policy Sciences, Chris Weible and I put it this way:

focus on engagement for the long term to develop the resources necessary to maximize the impact of policy analysis and understand the context in which the information is used. Among the advantages of long-term engagement are learning the ‘rules of the game’ in organizations, forming networks built on trust and a track record of reliability, learning how to ‘soften’ policy solutions according to the beliefs of key policymakers and influencers, and spotting ‘windows of opportunity’ to bring together attention to a problem, a feasible solution, and the motive and opportunity of policymakers to select it …In short, the substance of your analysis only has meaning in relation to the context in which it is used. Further, generating trust in the messenger and knowing your audience may be more important to success than presenting the evidence.

I know TC was the hero, but he couldn’t have succeeded without training by Emily Blunt and help from that guy who used to be in Eastenders. To get that help, he had to stop being an arse when addressing thingy from Big Love.

In real world policymaking, individual scientists should not see themselves as heroes to be respected instantly and simply for their knowledge. They will only be effective in several venues – from the lab to public and political arenas – if they are humble enough to learn from others and respect the knowledge and skills of other people. ‘Speaking truth to power’ is catchy and exciting but it doesn’t capture the sense of pragmatism we often need to be effective.


Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy

#EU4Facts: 3 take-home points from the JRC annual conference

See EU4FACTS: Evidence for policy in a post-fact world

The JRC’s annual conference has become a key forum in which to discuss the use of evidence in policy. At this scale, in which many hundreds of people attend plenary discussions, it feels like an annual mass rally for science; a ‘call to arms’ to protect the role of science in the production of evidence, and the protection of evidence in policy deliberation. There is not much discussion of storytelling, but we tell each other a fairly similar story about our fears for the future unless we act now.

Last year, the main story was of fear for the future of heroic scientists: the rise of Trump and the Brexit vote prompted many discussions of post-truth politics and reduced trust in experts. An immediate response was to describe attempts to come together, and stick together, to support each other’s scientific endeavours during a period of crisis. There was little call for self-analysis and reflection on the contribution of scientists and experts to barriers between evidence and policy.

This year was a bit different. There was the same concern for reduced trust in science, evidence, and/ or expertise, and some references to post-truth politics and populism, but with some new voices describing the positive value of politics, often when discussing the need for citizen engagement, and of the need to understand the relationship between facts, values, and politics.

For example, a panel on psychology opened up the possibility that we might consider our own politics and cognitive biases while we identify them in others, and one panellist spoke eloquently about the importance of narrative and storytelling in communicating to audiences such as citizens and policymakers.

A focus on narrative is not new, but it provides a challenging agenda when interacting with a sticky story of scientific objectivity. For the unusually self-reflective, it also reminds us that our annual discussions are not particularly scientific; the usual rules to assess our statements do not apply.

As in studies of policymaking, we can say that there is high support for such stories when they remain vague and driven more by emotion than the pursuit of precision. When individual speakers try to make sense of the same story, they do it in different – and possibly contradictory – ways. As in policymaking, the need to deliver something concrete helps focus the mind, and prompts us to make choices between competing priorities and solutions.

I describe these discussions in two ways: tables, in which I try to boil down each speaker’s speech into a sentence or two (you can get their full details in the programme and the speaker bios); and a synthetic discussion of the top 3 concerns, paraphrasing and combining arguments from many speakers:

1. What are facts?

The key distinction began as one between politics, values, and facts, which is impossible to maintain in practice.

Yet, subsequent discussion revealed a more straightforward distinction between facts and opinion, ‘fake news’, and lies. The latter sums up an ever-present fear of the diminishing role of science in an alleged ‘post truth’ era.

2. What exactly is the problem, and what is its cause?

The tables below provide a range of concerns about the problem, from threats to democracy to the need to communicate science more effectively. A theme of growing importance is the need to deal with the cognitive biases and informational shortcuts of people receiving evidence: communicate with reference to values, beliefs, and emotions; build up trust in your evidence via transparency and reliability; and, be prepared to discuss science with citizens and to be accountable for your advice. There was less discussion of the cognitive biases of the suppliers of evidence.

3. What is the role of scientists in relation to this problem?

Not all speakers described scientists as the heroes of this story:

  • Some described scientists as the good people acting heroically to change minds with facts.
  • Some described their potential to co-produce important knowledge with citizens (although primarily with like-minded citizens who learn the value of scientific evidence?).
  • Some described the scientific ego as a key barrier to action.
  • Some identified their low confidence to engage, their uncertainty about what to do with their evidence, and/ or their scientist identity which involves defending science as a cause/profession and drawing the line between providing information and advocating for policy. This hope to be an ‘honest broker’ was pervasive in last year’s conference.
  • Some (rightly) rejected the idea of separating facts/ values and science/ politics, since evidence is never context free (and gathering evidence without thought to context is amoral).

Often in such discussions it is difficult to know if some scientists are naïve actors or sophisticated political strategists, because their public statements could be identical. For the former, an appeal to objective facts and the need to privilege science in EBPM may be sincere. Scientists are, and should be, separate from/ above politics. For the latter, the same appeal – made again and again – may be designed to energise scientists and maximise the role of science in politics.

Yet, energy is only the starting point, and it remains unclear how exactly scientists should communicate and how to ‘know your audience’: would many scientists know who to speak to, in governments or the Commission, if they had something profoundly important to say?

Keynotes and introductory statements from panel chairs
Vladimír Šucha: We need to understand the relationship between politics, values, and facts. Facts are not enough. To make policy effectively, we need to combine facts and values.
Tibor Navracsics: Politics is swayed more by emotions than carefully considered arguments. When making policy, we need to be open and inclusive of all stakeholders (including citizens), communicate facts clearly and at the right time, and be aware of our own biases (such as groupthink).
Sir Peter Gluckman: ‘Post-truth’ politics is not new, but it is pervasive and easier to achieve via new forms of communication. People rely on like-minded peers, religion, and anecdote as forms of evidence underpinning their own truth. When describing the value of science, to inform policy and political debate, note that it is more than facts; it is a mode of thinking about the world, and a system of verification to reduce the effect of personal and group biases on evidence production. Scientific methods help us define problems (e.g. in discussion of cause/ effect) and interpret data. Science advice involves expert interpretation, knowledge brokerage, a discussion of scientific consensus and uncertainty, and standing up for the scientific perspective.
Carlos Moedas: Safeguard trust in science by (1) explaining the process you use to come to your conclusions; (2) provide safe and reliable places for people to seek information (e.g. when they Google); (3) make sure that science is robust and scientific bodies have integrity (such as when dealing with a small number of rogue scientists).
Pascal Lamy: 1. ‘Deep change or slow death’: we need to involve more citizens in the design of publicly financed projects such as major investments in science. Many scientists complain that there is already too much political interference, drowning scientists in extra work. However, we will face a major backlash – akin to the backlash against ‘globalisation’ – if we do not subject key debates on the future of science and technology-driven change (e.g. on AI, vaccines, drone weaponry) to democratic processes involving citizens. 2. The world changes rapidly, and evidence gathering is context-dependent, so we need to monitor regularly the fitness of our scientific measures (of e.g. trade).
Jyrki Katainen: ‘Wicked problems’ have no perfect solution, so we need the courage to choose the best imperfect solution. Technocratic policymaking is not the solution; it does not meet the democratic test. We need the language of science to be understandable to citizens: ‘a new age of reason reconciling the head and heart’.

Panel: Why should we trust science?
Jonathan Kimmelman: Some experts make outrageous and catastrophic claims. We need a toolbox to decide which experts are most reliable, by comparing their predictions with actual outcomes. Prompt them to make precise probability statements and test them. Only those who are willing to be held accountable should be involved in science advice.
Johannes Vogel: We should devote 15% of science funding to public dialogue. Scientific discourse, and a science-literate population, is crucial for democracy. EU Open Society Policy is a good model for stakeholder inclusiveness.
Tracey Brown: Create a more direct link between society and evidence production, to ensure discussions involve more than the ‘usual suspects’. An ‘evidence transparency framework’ helps create a space in which people can discuss facts and values. ‘Be open, speak human’ describes showing people how you make decisions. How can you expect the public to trust you if you don’t trust them enough to tell them the truth?
Francesco Campolongo: Jean-Claude Juncker’s starting point is that Commission proposals and activities should be ‘based on sound scientific evidence’. Evidence comes in many forms. For example, economic models provide simplified versions of reality to make decisions. Economic calculations inform profoundly important policy choices, so we need to make the methodology transparent, communicate probability, and be self-critical and open to change.

Panel: the politician’s perspective
Janez Potočnik: The shift of the JRC’s remit allowed it to focus on advocating science for policy rather than policy for science. Still, such arguments need to be backed by an economic argument (this policy will create growth and jobs). A narrow focus on facts and data ignores the context in which we gather facts, such as a system which undervalues human capital and the environment.
Máire Geoghegan-Quinn: Policy should be ‘solidly based on evidence’ and we need well-communicated science to change the hearts and minds of people who would otherwise rely on their beliefs. Part of the solution is to get, for example, kids to explain what science means to them.

Panel: Redesigning policymaking using behavioural and decision science
Steven Sloman: The world is complex. People overestimate their understanding of it, and this illusion is burst when they try to explain its mechanisms. People who know the least feel the strongest about issues, but if you ask them to explain the mechanisms their strength of feeling falls. Why? People confuse their knowledge with that of their community. The knowledge is not in their heads, but communicated across groups. If people around you feel they understand something, you feel like you understand, and people feel protective of the knowledge of their community. Implications? 1. Don’t rely on ‘bubbles’; generate more diverse and better coordinated communities of knowledge. 2. Don’t focus on giving people full information; focus on the information they need at the point of decision.
Stephan Lewandowsky: 97% of scientists agree that human-caused climate change is a problem, but the public thinks it’s roughly 50-50. We have a false-balance problem. One solution is to ‘inoculate’ people against its cause (science denial). We tell people the real figures and facts, warn them of the rhetorical techniques employed by science denialists (e.g. use of false experts on smoking), and mock the false balance argument. This allows you to reframe the problem as an investment in the future, not cost now (and find other ways to present facts in a non-threatening way). In our lab, it usually ‘neutralises’ misinformation, although with the risk that a ‘corrective message’ to challenge beliefs can entrench them.
Françoise Waintrop: It is difficult to experiment when public policy is handed down from on high. Or, experimentation is alien to established ways of thinking. However, our 12 new public innovation labs across France allow us to immerse ourselves in the problem (to define it well) and nudge people to action, working with their cognitive biases.
Simon Kuper: Stories combine facts and values. To change minds: persuade the people who are listening, not the sceptics; find go-betweens to link suppliers and recipients of evidence; speak in stories, not jargon; don’t overpromise the role of scientific evidence; and, never suggest science will side-line human beings (e.g. when technology costs jobs).

Panel: The way forward
Jean-Eric Paquet: We describe ‘fact based evidence’ rather than ‘science based’. A key aim is to generate ‘ownership’ of policy by citizens. Politicians are more aware of their cognitive biases than we technocrats are.
Anne Bucher: In the European Commission we used evidence initially to make the EU more accountable to the public, via systematic impact assessment and quality control. It was a key motivation for better regulation. We now focus more on generating inclusive and interactive ways to consult stakeholders.
Ann Mettler: Evidence-based policymaking is at the heart of democracy. How else can you legitimise your actions? How else can you prepare for the future? How else can you make things work better? Yet, a lot of our evidence presentation is so technical; even difficult for specialists to follow. The onus is on us to bring it to life, to make it clearer to the citizen and, in the process, defend scientists (and journalists) during a period in which Western democracies seem to be at risk from anti-democratic forces.
Mariana Kotzeva: Our facts are now considered from an emotional and perception point of view. The process does not just involve our comfortable circle of experts; we are now challenged to explain our numbers. Attention to our numbers can be unpredictable (e.g. on migration). We need to build up trust in our facts, partly to anticipate or respond to the quick spread of poor facts.
Rush Holt: In society we can find the erosion of the feeling that science is relevant to ‘my life’, and few US policymakers ask ‘what does science say about this?’ partly because scientists set themselves above politics. Politicians have had too many bad experiences with scientists who might say ‘let me explain this to you in a way you can understand’. Policy is not about science based evidence; more about asking a question first, then asking what evidence you need. Then you collect evidence in an open way to be verified.

Phew!

That was 10 hours of discussion condensed into one post. If you can handle more discussion from me, see:

Psychology and policymaking: Three ways to communicate more effectively with policymakers

The role of evidence in policy: EBPM and How to be heard  

Practical Lessons from Policy Theories

The generation of many perspectives to help us understand the use of evidence

How to be an ‘entrepreneur’ when presenting evidence



Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling