Tag Archives: facts

Policy Analysis in 750 Words: Separating facts from values

This post begins by reproducing Can you separate the facts from your beliefs when making policy? (based on the 1st edition of Understanding Public Policy) …

A key argument in policy studies is that it is impossible to separate facts and values when making policy. We often treat our beliefs as facts, or describe certain facts as objective, but perhaps only to simplify our lives or support a political strategy (a ‘self-evident’ fact is very handy for an argument). People make empirical claims infused with their values and often fail to realise just how their values or assumptions underpin their claims.

This is not an easy argument to explain. One strategy is to use extreme examples to make the point. For example, Herbert Simon points to Hitler’s Mein Kampf as the ultimate example of value-based claims masquerading as facts. We can also identify historic academic research which asserts that men are more intelligent than women and some races are superior to others. In such cases, we would point out, for example, that the design of the research helped produce such conclusions: our values underpin our (a) assumptions about how to measure intelligence or other measures of superiority, and (b) interpretations of the results.

‘Wait a minute, though’ (you might say). “What about simple examples in which you can state facts with relative certainty – such as the statement ‘there are X number of words in this post’”. ‘Fair enough’, I’d say (you will have to speak with a philosopher to get a better debate about the meaning of your X words claim; I would simply say that it is trivially true). But this statement doesn’t take you far in policy terms. Instead, you’d want to say that there are too many or too few words, before you decided what to do about it.

In that sense, we have the most practical explanation of the unclear fact/ value distinction: the use of facts in policy is to underpin evaluations (assessments based on values). For example, we might point to the routine uses of data to argue that a public service is in ‘crisis’ or that there is a public health related epidemic (note: I wrote the post before COVID-19; it referred to crises of ‘non-communicable diseases’). We might argue that people only talk about ‘policy problems’ when they think we have a duty to solve them.

Or, facts and values often seem the hardest to separate when we evaluate the success and failure of policy solutions, since the measures used for evaluation are as political as any other part of the policy process. The gathering and presentation of facts is inherently a political exercise, and our use of facts to encourage a policy response is inseparable from our beliefs about how the world should work.

It continues with an edited excerpt from p59 of Understanding Public Policy, which explores the implications of bounded rationality for contemporary accounts of ‘evidence-based policymaking’:

‘Modern science remains value-laden … even when so many people employ so many systematic methods to increase the replicability of research and reduce the reliance of evidence on individual scientists. The role of values is fundamental. Anyone engaging in research uses professional and personal values and beliefs to decide which research methods are the best; generate research questions, concepts and measures; evaluate the impact and policy relevance of the results; decide which issues are important problems; and assess the relative weight of ‘the evidence’ on policy effectiveness. We cannot simply focus on ‘what works’ to solve a problem without considering how we used our values to identify a problem in the first place. It is also impossible in practice to separate two choices: (1) how to gather the best evidence and (2) whether to centralize or localize policymaking. Most importantly, the assertion that ‘my knowledge claim is superior to yours’ symbolizes one of the most worrying exercises of power. We may decide to favour some forms of evidence over others, but the choice is value-laden and political rather than objective and innocuous’.

Implications for policy analysis

Many highly-intelligent and otherwise-sensible people seem to get very bothered by this kind of argument. For example, it gets in the way of (a) simplistic stories of heroic-objective-fact-based-scientists speaking truth to villainous-stupid-corrupt-emotional-politicians, (b) the ill-considered political slogan that you can’t argue with facts (or ‘science’), (c) the notion that some people draw on facts while others only follow their feelings, and (d) the idea that you can divide populations into super-facty versus post-truthy people.

A more sensible approach is to (1) recognise that all people combine cognition and emotion when assessing information, (2) treat politics and political systems as valuable and essential processes (rather than obstacles to technocratic policymaking), and (3) find ways to communicate evidence-informed analyses in that context. This article and this 750 Words post explore how to reflect on this kind of communication.

Most relevant posts in the 750 series

Linda Tuhiwai Smith (2012) Decolonizing Methodologies 

Carol Bacchi (2009) Analysing Policy: What’s the problem represented to be? 

Deborah Stone (2012) Policy Paradox

Who should be involved in the process of policy analysis?

William Riker (1986) The Art of Political Manipulation

Using Statistics and Explaining Risk (David Spiegelhalter and Gerd Gigerenzer)

Barry Hindess (1977) Philosophy and Methodology in the Social Sciences

See also

To think further about the relevance of this discussion, see this post on policy evaluation, this page on the use of evidence in policymaking, this book by Douglas, and this short commentary on ‘honest brokers’ by Jasanoff.


Filed under 750 word policy analysis, Academic innovation or navel gazing, agenda setting, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy

Understanding Public Policy 2nd edition

All going well, it will be out in November 2019. We are now at the proofing stage.

I have included below the summaries of the chapters (and each chapter should also have its own entry (or multiple entries) in the 1000 Words and 500 Words series).

[Images: 2nd edition cover and summary pages for the title and chapters 1-13]


Filed under 1000 words, 500 words, agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, public policy

Taking lessons from policy theory into practice: 3 examples

Notes for ANZSOG/ ANU Crawford School/ UNSW Canberra workshop. Powerpoint here. The recording of the lecture (skip to 2m30) and Q&A is here (right click to download mp3 or dropbox link).

The context for this workshop is the idea that policy theories could be more helpful to policymakers/ practitioners if we could all communicate more effectively with each other. Academics draw general and relatively abstract conclusions from multiple cases. Practitioners draw very similar conclusions from rich descriptions of direct experience in a smaller number of cases. How can we bring together their insights and use a language that we all understand? Or, more ambitiously, how can we use policy theory-based insights to inform the early career development training that civil servants and researchers receive?

The first step is to translate policy theories into a non-technical language by trying to speak with an audience beyond our immediate peers (see for example Practical Lessons from Policy Theories).

However, translation is not enough. A second crucial step is to consider how policymakers and practitioners are likely to make sense of theoretical insights when they apply them to particular aims or responsibilities. For example:

  1. Central government policymakers may accept the descriptive accuracy of policy theories emphasising limited central control, but not the recommendation that they should let go, share power, and describe their limits to the public.
  2. Scientists may accept key limitations to ‘evidence based policymaking’ but reject the idea that they should respond by becoming better storytellers or more manipulative operators.
  3. Researchers and practitioners struggle to resolve hard choices when combining evidence and ‘coproduction’ while ‘scaling up’ policy interventions. Evidence choice is political choice. Can we do more than merely encourage people to accept this point?

I discuss these examples below because they are closest to my heart (especially example 1). Note throughout that I am presenting one interpretation about: (1) the most promising insights, and (2) their implications for practice. Other interpretations of the literature and its implications are available. They are just a bit harder to find.

Example 1: the policy cycle endures despite its descriptive inaccuracy

[Image: the policy cycle]

The policy cycle does not describe and explain the policy process well:

  • If we insist on keeping the cycle metaphor, it is more accurate to see the process as a huge set of policy cycles that connect with each other in messy and unpredictable ways.
  • The cycle approach also links strongly to the idea of ‘comprehensive rationality’ in which a small group of policymakers and analysts are in full possession of the facts and full control of the policy process. They carry out their aims through a series of stages.

Policy theories describe and explain the policy process more usefully. Their insights include:

  • Limited choice. Policymakers inherit organisations, rules, and choices. Most ‘new’ choice is a revision of the old.
  • Limited attention. Policymakers must ignore almost all of the policy problems for which they are formally responsible. They pay attention to some, and delegate most responsibility to civil servants. Bureaucrats rely on other actors for information and advice, and they build relationships on trust and information exchange.
  • Limited central control. Policy may appear to be made at the ‘top’ or in the ‘centre’, but in practice policymaking responsibility is spread across many levels and types of government (many ‘centres’). ‘Street level’ actors make policy as they deliver. Policy outcomes appear to ‘emerge’ locally despite central government attempts to control their fate.
  • Limited policy change. Most policy change is minor, made and influenced by actors who interpret new evidence through the lens of their beliefs. Well-established beliefs limit the opportunities for new solutions. Governments tend to rely on trial-and-error, based on previous agreements, rather than radical policy change based on a new agenda. New solutions succeed only during brief and infrequent windows of opportunity.

However, the cycle metaphor endures because:

  • It provides a simple model of policymaking with stages that map onto important policymaking functions.
  • It provides a way to project policymaking to the public: you know how we make policy, and that we are in charge, so you know who to hold to account.

In that context, we may want to be pragmatic about our advice:

  1. One option is via complexity theory, in which scholars generally encourage policymakers to accept and describe their limits:
  • Accept routine error, reduce short-term performance management, engage more in trial and error, and ‘let go’ to allow local actors the flexibility to adapt and respond to their context.
  • However, would a government in the Westminster tradition really embrace this advice? No. They need to balance (a) pragmatic policymaking, and (b) an image of governing competence.
  2. Another option is to try to help improve an existing approach.

Further reading (blog posts):

The language of complexity does not mix well with the language of Westminster-style accountability

Making Sense of Policymaking: why it’s always someone else’s fault and nothing ever changes

Two stories of British politics: the Westminster model versus Complex Government

Example 2: how to deal with a lack of ‘evidence based policymaking’

I used to read many papers on tobacco policy, with the same basic message: we have the evidence of tobacco harm, and evidence of which solutions work, but there is an evidence-policy gap caused by too-powerful tobacco companies, low political will, and pathological policymaking. These accounts are not informed by theories of policymaking.

I then read Oliver et al’s paper on the lack of policy theory in health/ environmental scholarship on the ‘barriers’ to the use of evidence in policy. Very few articles rely on policy concepts, and most of the few that do rely on the policy cycle. This lack of policy theory is clear in their description of possible solutions – better communication, networking, timing, and more science literacy in government – which does not capture the need to respond to policymaker psychology and a complex policymaking environment.

So, I wrote The Politics of Evidence-Based Policymaking and one zillion blog posts to help identify the ways in which policy theories could help explain the relationship between evidence and policy.

Since then, the highest demand to speak about the book has come from government/ public servant, NGO, and scientific audiences outside my discipline. The feedback is generally that: (a) the book’s description sums up their experience of engagement with the policy process, and (b) maybe it opens up discussion about how to engage more effectively.

But how exactly do we turn empirical descriptions of policymaking into practical advice?

For example, scientist/ researcher audiences want to know the answer to a question like ‘Why don’t policymakers listen to your evidence?’, so I focus on three conversation starters:

  1. they have a broader view on what counts as good evidence (see ANZSOG description)
  2. they have to ignore almost all information (a nice way into bounded rationality and policymaker psychology)
  3. they do not understand or control the process in which they seek to use evidence (a way into ‘the policy process’)

[Image: Cairney (2017) depiction of the policy process]

We can then consider many possible responses in the sequel What can you do when policymakers ignore your evidence?

Examples include:

  • ‘How to do it’ advice. I compare tips for individuals (from experienced practitioners) with tips based on policy concepts. They are quite similar-looking tips – e.g. find out where the action is, learn the rules, tell good stories, engage allies, seek windows of opportunity – but I describe mine as 5 impossible tasks!
  • Organisational reform. I describe work with the European Commission Joint Research Centre to identify 8 skills or functions of an organisation bringing together the supply/demand of knowledge.
  • Ethical dilemmas. I use key policy theories to ask people how far they want to go to privilege evidence in policy. It’s fun to talk about these things with the type of scientist who sees any form of storytelling as manipulation.

Further reading:

Is Evidence-Based Policymaking the same as good policymaking?

A 5-step strategy to make evidence count

Political science improves our understanding of evidence-based policymaking, but does it produce better advice?

Principles of science advice to government: key problems and feasible solutions

Example 3: how to encourage realistic evidence-informed policy transfer

This focus on EBPM is useful context for discussions of ‘policy learning’ and ‘policy transfer’, and it was the focus of my ANZSOG talk entitled (rather ambitiously) ‘teaching evidence-based policy to fly’.

I’ve taken a personal interest in this one because I’m part of a project – called IMAJINE – in which we have to combine academic theory and practical responses. We are trying to share policy solutions across Europe rather than explain why few people share them!

For me, the context is potentially overwhelming.

So, when we start to focus on sharing lessons, we will have three things to discover:

  1. What is the evidence for success, and from where does it come? Governments often project success without backing it up.
  2. What story do policymakers tell about the problem they are trying to solve, the solutions they produced, and why? Two different governments may be framing and trying to solve the same problem in very different ways.
  3. Was the policy introduced in a comparable policymaking system? People tend to focus on political system comparability (e.g. is it unitary or federal?), but I think the key is in policymaking system comparability (e.g. what are the rules and dominant ideas?).

To be honest, when one of our external assessors asked me how well I thought I would do, we both smiled because the answer may be ‘not very’. In other words, the most practical lesson may be the hardest to take, although I find it comforting: the literature suggests that policymakers might ignore you for 20 years then suddenly become very (but briefly) interested in your work.

 

The slides are a bit wonky because I combined my old ppt to the Scottish Government with a new one for UNSW: Paul Cairney ANU Policy practical 22 October 2018

I wanted to compare how I describe things to (1) civil servants, (2) practitioners/ researchers, and (3) me, but who has the time/ desire to listen to 3 powerpoints in one go? If the answer is you, let me know and we’ll set up a Zoom call.


Filed under agenda setting, Evidence Based Policymaking (EBPM), IMAJINE, Policy learning and transfer

#EU4Facts: 3 take-home points from the JRC annual conference

See EU4FACTS: Evidence for policy in a post-fact world

The JRC’s annual conference has become a key forum in which to discuss the use of evidence in policy. At this scale, in which many hundreds of people attend plenary discussions, it feels like an annual mass rally for science; a ‘call to arms’ to protect the role of science in the production of evidence, and the role of evidence in policy deliberation. There is not much discussion of storytelling, but we tell each other a fairly similar story about our fears for the future unless we act now.

Last year, the main story was of fear for the future of heroic scientists: the rise of Trump and the Brexit vote prompted many discussions of post-truth politics and reduced trust in experts. An immediate response was to describe attempts to come together, and stick together, to support each other’s scientific endeavours during a period of crisis. There was little call for self-analysis and reflection on the contribution of scientists and experts to barriers between evidence and policy.

This year was a bit different. There was the same concern about reduced trust in science, evidence, and/ or expertise, and some references to post-truth politics and populism, but with some new voices describing the positive value of politics, often when discussing the need for citizen engagement, and the need to understand the relationship between facts, values, and politics.

For example, a panel on psychology opened up the possibility that we might consider our own politics and cognitive biases while we identify them in others, and one panellist spoke eloquently about the importance of narrative and storytelling in communicating to audiences such as citizens and policymakers.

A focus on narrative is not new, but it provides a challenging agenda when interacting with a sticky story of scientific objectivity. For the unusually self-reflective, it also reminds us that our annual discussions are not particularly scientific; the usual rules to assess our statements do not apply.

As in studies of policymaking, we can say that there is high support for such stories when they remain vague and driven more by emotion than the pursuit of precision. When individual speakers try to make sense of the same story, they do it in different – and possibly contradictory – ways. As in policymaking, the need to deliver something concrete helps focus the mind, and prompts us to make choices between competing priorities and solutions.

I describe these discussions in two ways: tables, in which I try to boil down each speaker’s speech into a sentence or two (you can get their full details in the programme and the speaker bios); and a synthetic discussion of the top 3 concerns, paraphrasing and combining arguments from many speakers:

1. What are facts?

The key distinction began as one between politics, values, and facts, which is impossible to maintain in practice.

Yet, subsequent discussion revealed a more straightforward distinction between facts and opinion, ‘fake news’, and lies. The latter sums up an ever-present fear of the diminishing role of science in an alleged ‘post-truth’ era.

2. What exactly is the problem, and what is its cause?

The tables below provide a range of concerns about the problem, from threats to democracy to the need to communicate science more effectively. A theme of growing importance is the need to deal with the cognitive biases and informational shortcuts of people receiving evidence: communicate with reference to values, beliefs, and emotions; build up trust in your evidence via transparency and reliability; and, be prepared to discuss science with citizens and to be accountable for your advice. There was less discussion of the cognitive biases of the suppliers of evidence.

3. What is the role of scientists in relation to this problem?

Not all speakers described scientists as the heroes of this story:

  • Some described scientists as the good people acting heroically to change minds with facts.
  • Some described their potential to co-produce important knowledge with citizens (although primarily with like-minded citizens who learn the value of scientific evidence?).
  • Some described the scientific ego as a key barrier to action.
  • Some identified their low confidence to engage, their uncertainty about what to do with their evidence, and/ or their scientist identity which involves defending science as a cause/profession and drawing the line between providing information and advocating for policy. This hope to be an ‘honest broker’ was pervasive in last year’s conference.
  • Some (rightly) rejected the idea of separating facts/ values and science/ politics, since evidence is never context free (and gathering evidence without thought to context is amoral).

Often in such discussions it is difficult to know if some scientists are naïve actors or sophisticated political strategists, because their public statements could be identical. For the former, an appeal to objective facts and the need to privilege science in EBPM may be sincere. Scientists are, and should be, separate from/ above politics. For the latter, the same appeal – made again and again – may be designed to energise scientists and maximise the role of science in politics.

Yet, energy is only the starting point, and it remains unclear how exactly scientists should communicate and how to ‘know your audience’: would many scientists know who to speak to, in governments or the Commission, if they had something profoundly important to say?

Keynotes and introductory statements from panel chairs
Vladimír Šucha: We need to understand the relationship between politics, values, and facts. Facts are not enough. To make policy effectively, we need to combine facts and values.
Tibor Navracsics: Politics is swayed more by emotions than carefully considered arguments. When making policy, we need to be open and inclusive of all stakeholders (including citizens), communicate facts clearly and at the right time, and be aware of our own biases (such as groupthink).
Sir Peter Gluckman: ‘Post-truth’ politics is not new, but it is pervasive and easier to achieve via new forms of communication. People rely on like-minded peers, religion, and anecdote as forms of evidence underpinning their own truth. When describing the value of science, to inform policy and political debate, note that it is more than facts; it is a mode of thinking about the world, and a system of verification to reduce the effect of personal and group biases on evidence production. Scientific methods help us define problems (e.g. in discussion of cause/ effect) and interpret data. Science advice involves expert interpretation, knowledge brokerage, a discussion of scientific consensus and uncertainty, and standing up for the scientific perspective.
Carlos Moedas: Safeguard trust in science by (1) explaining the process you use to come to your conclusions; (2) providing safe and reliable places for people to seek information (e.g. when they Google); (3) making sure that science is robust and scientific bodies have integrity (such as when dealing with a small number of rogue scientists).
Pascal Lamy: 1. ‘Deep change or slow death’: we need to involve more citizens in the design of publicly financed projects such as major investments in science. Many scientists complain that there is already too much political interference, drowning scientists in extra work. However, we will face a major backlash – akin to the backlash against ‘globalisation’ – if we do not subject key debates on the future of science and technology-driven change (e.g. on AI, vaccines, drone weaponry) to democratic processes involving citizens. 2. The world changes rapidly, and evidence gathering is context-dependent, so we need to monitor regularly the fitness of our scientific measures (of e.g. trade).
Jyrki Katainen: ‘Wicked problems’ have no perfect solution, so we need the courage to choose the best imperfect solution. Technocratic policymaking is not the solution; it does not meet the democratic test. We need the language of science to be understandable to citizens: ‘a new age of reason reconciling the head and heart’.

Panel: Why should we trust science?
Jonathan Kimmelman: Some experts make outrageous and catastrophic claims. We need a toolbox to decide which experts are most reliable, by comparing their predictions with actual outcomes. Prompt them to make precise probability statements and test them. Only those who are willing to be held accountable should be involved in science advice.
Johannes Vogel: We should devote 15% of science funding to public dialogue. Scientific discourse, and a science-literate population, is crucial for democracy. EU Open Society Policy is a good model for stakeholder inclusiveness.
Tracey Brown: Create a more direct link between society and evidence production, to ensure discussions involve more than the ‘usual suspects’. An ‘evidence transparency framework’ helps create a space in which people can discuss facts and values. ‘Be open, speak human’ describes showing people how you make decisions. How can you expect the public to trust you if you don’t trust them enough to tell them the truth?
Francesco Campolongo: Jean-Claude Juncker’s starting point is that Commission proposals and activities should be ‘based on sound scientific evidence’. Evidence comes in many forms. For example, economic models provide simplified versions of reality to make decisions. Economic calculations inform profoundly important policy choices, so we need to make the methodology transparent, communicate probability, and be self-critical and open to change.

Panel: the politician’s perspective
Janez Potočnik: The shift of the JRC’s remit allowed it to focus on advocating science for policy rather than policy for science. Still, such arguments need to be backed by an economic argument (this policy will create growth and jobs). A narrow focus on facts and data ignores the context in which we gather facts, such as a system which undervalues human capital and the environment.
Máire Geoghegan-Quinn: Policy should be ‘solidly based on evidence’ and we need well-communicated science to change the hearts and minds of people who would otherwise rely on their beliefs. Part of the solution is to get, for example, kids to explain what science means to them.

https://twitter.com/MIWilliamauthor/status/912781964880510977

Panel: Redesigning policymaking using behavioural and decision science
Steven Sloman: The world is complex. People overestimate their understanding of it, and this illusion is burst when they try to explain its mechanisms. People who know the least feel the strongest about issues, but if you ask them to explain the mechanisms their strength of feeling falls. Why? People confuse their knowledge with that of their community. The knowledge is not in their heads, but communicated across groups. If people around you feel they understand something, you feel like you understand, and people feel protective of the knowledge of their community. Implications? 1. Don’t rely on ‘bubbles’; generate more diverse and better coordinated communities of knowledge. 2. Don’t focus on giving people full information; focus on the information they need at the point of decision.
Stephan Lewandowsky: 97% of scientists agree that human-caused climate change is a problem, but the public thinks it’s roughly 50-50. We have a false-balance problem. One solution is to ‘inoculate’ people against its cause (science denial). We tell people the real figures and facts, warn them of the rhetorical techniques employed by science denialists (e.g. use of false experts on smoking), and mock the false-balance argument. This allows you to reframe the problem as an investment in the future, not a cost now (and find other ways to present facts in a non-threatening way). In our lab, it usually ‘neutralises’ misinformation, although with the risk that a ‘corrective message’ to challenge beliefs can entrench them.
Françoise Waintrop: It is difficult to experiment when public policy is handed down from on high. Or, experimentation is alien to established ways of thinking. However, our 12 new public innovation labs across France allow us to immerse ourselves in the problem (to define it well) and nudge people to action, working with their cognitive biases.
Simon Kuper: Stories combine facts and values. To change minds: persuade the people who are listening, not the sceptics; find go-betweens to link suppliers and recipients of evidence; speak in stories, not jargon; don’t overpromise the role of scientific evidence; and, never suggest science will side-line human beings (e.g. when technology costs jobs).

Panel: The way forward
Jean-Eric Paquet: We describe ‘fact based evidence’ rather than ‘science based’. A key aim is to generate ‘ownership’ of policy by citizens. Politicians are more aware of their cognitive biases than we technocrats are.
Anne Bucher: In the European Commission we used evidence initially to make the EU more accountable to the public, via systematic impact assessment and quality control. It was a key motivation for better regulation. We now focus more on generating inclusive and interactive ways to consult stakeholders.
Ann Mettler: Evidence-based policymaking is at the heart of democracy. How else can you legitimise your actions? How else can you prepare for the future? How else can you make things work better? Yet, a lot of our evidence presentation is so technical that it is difficult even for specialists to follow. The onus is on us to bring it to life, to make it clearer to the citizen and, in the process, defend scientists (and journalists) during a period in which Western democracies seem to be at risk from anti-democratic forces.
Mariana Kotzeva: Our facts are now considered from an emotional and perception point of view. The process does not just involve our comfortable circle of experts; we are now challenged to explain our numbers. Attention to our numbers can be unpredictable (e.g. on migration). We need to build up trust in our facts, partly to anticipate or respond to the quick spread of poor facts.
Rush Holt: In society we can find an erosion of the feeling that science is relevant to ‘my life’, and few US policymakers ask ‘what does science say about this?’, partly because scientists set themselves above politics. Politicians have had too many bad experiences with scientists who might say ‘let me explain this to you in a way you can understand’. Policy is not about science-based evidence; it is more about asking a question first, then asking what evidence you need, then collecting evidence in an open way so that it can be verified.

Phew!

That was 10 hours of discussion condensed into one post. If you can handle more discussion from me, see:

Psychology and policymaking: Three ways to communicate more effectively with policymakers

The role of evidence in policy: EBPM and How to be heard  

Practical Lessons from Policy Theories

The generation of many perspectives to help us understand the use of evidence

How to be an ‘entrepreneur’ when presenting evidence



Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling

The role of evidence in UK policymaking after Brexit

We are launching a series of papers on evidence and policy in Palgrave Communications. Of course, we used Brexit as a hook, to tap into current attention to instability and major policy change. However, many of the issues we discuss are timeless and about surprising levels of stability and continuity in policy processes, despite periods of upheaval.

In my day, academics would build their careers on being annoying, and sometimes usefully annoying. This would involve developing counterintuitive insights, identifying gaps in analysis, and challenging a ‘common wisdom’ in political studies. Although not exactly common wisdom, the idea of ‘post truth’ politics, a reduction in respect for ‘experts’, and a belief that Brexit is a policymaking game-changer, are great candidates for some annoyingly contrary analysis.

In policy studies, many of us argue that things like elections, changes of government, and even constitutional changes are far less important than commonly portrayed. In media and social media accounts, we find hyperbole about the destabilising and changing impact of the latest events. In policy studies, we often stress stability and continuity. My favourite old example concerns the debates from the 1970s about electoral reform. While some were arguing that first-past-the-post was a disastrous electoral system since it produces swings of government, instability, and incoherent policy change, Richardson and Jordan would point out surprisingly high levels of stability and continuity.

[Image: excerpts from Finer and from Jordan, reproduced in Cairney]

In part, this is because the state is huge, policymakers can only pay attention to a tiny part of it, and therefore most of it is processed at a low level of government, out of the public spotlight.

[Image: excerpt from Understanding Public Policy, p. 106]

These insights still have profound relevance today, for two key reasons.

  1. The role of experts is more important than you think

This larger process provides far more opportunities for experts than we’d associate with ‘tip of the iceberg’ politics.

Some issues are salient. They command the interest of elected politicians, and those politicians often have firm beliefs that limit the ‘impact’ of any evidence that does not support their beliefs.

However, most issues are not salient. They command minimal interest, they are processed by other policymakers, and those policymakers are looking for information and advice from reliable experts.

Indeed, a lot of policy studies highlight the privileged status of certain experts, at the expense of most members of the public (which is a useful corrective to the story, associated with Brexit, that the public is too emotionally driven, too sceptical of experts, and too much in charge of the future of constitutional change).

So, Brexit will change the role of experts, but expect that change to relate to the venue in which they engage, and the networks of which they are a part, more than the practices of policymakers. Much policymaking is akin to an open door to government for people with useful information and a reputation for being reliable in their dealings with policymakers.

  2. Provide less evidence for more impact

If the problem is that policymakers can only pay attention to a tiny proportion of their responsibilities, the solution is not to bombard them with a huge amount of evidence. Instead, assume that they seek ways to ignore almost all information while still managing to make choices. The trick may be to provide just enough information to prompt demand for more, not oversupply evidence on the assumption that you have only one chance for influence.

With Richard Kwiatkowski, I draw on policy and psychology studies to help us understand how to supply evidence to anyone using ‘rational’ and ‘irrational’ ways to limit their attention, information processing, and thought before making decisions.

Our working assumption is that policymakers need to gather information quickly and effectively, so they develop heuristics to allow them to make what they believe to be good choices. Their solutions often seem to be driven more by their emotions than a ‘rational’ analysis of the evidence, partly because we hold them to a standard that no human can reach. If so, and if they have high confidence in their heuristics, they will dismiss our criticism as biased and naïve. Under those circumstances, restating the need for ‘evidence-based policymaking’ is futile, and naively ‘speaking truth to power’ counterproductive.

Instead, try out these strategies:

  1. Develop ways to respond positively to ‘irrational’ policymaking

Instead of automatically bemoaning the irrationality of policymakers, let’s marvel at the heuristics they develop to make quick decisions despite uncertainty. Then, let’s think about how to respond pragmatically, to pursue the kinds of evidence informed policymaking that is realistic in a complex and constantly changing policymaking environment.

  2. Tailor framing strategies to policymaker cognition

The usual advice is to minimise the cognitive burden of your presentation, and use strategies tailored to the ways in which people pay attention to, and remember information.

The less usual advice includes:

  • If policymakers are combining cognitive and emotive processes, combine facts with emotional appeals.
  • If policymakers are making quick choices based on their values and simple moral judgements, tell simple stories with a hero and a clear moral.
  • If policymakers are reflecting a ‘group emotion’, based on their membership of a coalition with firmly-held beliefs, frame new evidence to be consistent with the ‘lens’ through which actors in those coalitions understand the world.
  3. Identify the right time to influence individuals and processes

Understand what it means to find the right time to exploit ‘windows of opportunity’.

‘Timing’ can refer to the right time to influence an individual, which involves how open they are to, say, new arguments and evidence.

Or, timing refers to a ‘window of opportunity’ when political conditions are aligned. I discuss the latter in a separate paper on effective ‘policy entrepreneurs’.

  4. Adapt to real-world organisations rather than waiting for an orderly process to appear

Politicians may appear confident, with a grasp of facts and details, but they are (a) often vulnerable and therefore defensive or closed to challenging information, and/ or (b) inadequate in organisational politics, or unable to change the rules of their organisations.

So, develop pragmatic strategies: form relationships in networks, coalitions, or organisations first, then supply challenging information second. To challenge without establishing trust may be counterproductive.

  5. Recognise that the biases we ascribe to policymakers are present in ourselves and our own groups.

Identifying only the biases in our competitors may help mask academic/ scientific examples of group-think, and it may be counterproductive to use euphemistic terms like ‘low information’ to describe actors whose views we do not respect. This is a particular problem for scholars if they assume that most people do not live up to their own imagined standards of high-information-led action (often described as a ‘deficit model’ of engagement).

It may be more effective to recognise that: (a) people’s beliefs are honestly held, and policymakers believe that their role is to serve a cause greater than themselves; and, (b) a fundamental aspect of evolutionary psychology is that people need to get on with each other, so showing simple respect – or going further, to ‘mirror’ that person’s non-verbal signals – can be useful even if it looks facile.

This leaves open the ethical question of how far we should go to identify our biases, to accept the need to work with people whose ways of thinking we do not share, and to secure their trust without lying about our own beliefs.

At the very least, we do not suggest these 5 strategies as a way to manipulate people for personal gain. They are better seen as ways to use psychology to communicate well. They are also likely to be just as important to policy engagement regardless of Brexit. Venues may change quickly, but the ways in which people process information and make choices may not.

 


Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, UK politics and policy

Writing a policy paper and blog post #POLU9UK

It can be quite daunting to produce a policy analysis paper or blog post for the first time. You learn about the constraints of political communication by being obliged to explain your ideas in an unusually small number of words. The short word length seems good at first, but then you realise that it makes your life harder: how can you fit all your evidence and key points in? The answer is that you can’t. You have to choose what to say and what to leave out.

You also have to make this presentation ‘not about you’. In a long essay or research report you have time to show how great you are, to a captive audience. In a policy paper, imagine that you are trying to get attention and support from someone who may not know or care about the issue you raise. In a blog post, your audience might stop reading at any point, so every sentence counts.

There are many guides out there to help you with the practical side, including the broad guidance I give you in the module guide, and Bardach’s 8-steps. In each case, the basic advice is to (a) identify a policy problem and at least one feasible solution, and (b) tailor the analysis to your audience.

[Image: Bardach’s 8 steps]

Be concise, be smart

So, for example, I ask you to keep your analysis and presentations super-short on the assumption that you have to make your case quickly to people with 99 other things to do. What can you tell someone in a half-page (to get them to read all 2 pages)? Could you explain and solve a problem if you suddenly bumped into a government minister in a lift/ elevator?

It is tempting to try to tell someone everything you know, because everything is connected and to simplify is to describe a problem simplistically. Instead, be smart enough to know that such self-indulgence won’t impress your audience. They might smile politely, but their eyes are looking at the elevator lights.

Your aim is not to give a full account of a problem – it’s to get someone important to care about it.

Your aim is not to give a painstaking account of all possible solutions – it’s to give a sense that at least one solution is feasible and worth pursuing.

Your guiding statement should be: policymakers will only pay attention to your problem if they think they can solve it without the solution being too costly.

Be creative

I don’t like to give you too much advice because I want you to be creative about your presentation; to be confident enough to take chances and feel that I’ll reward you for making the leap. At the very least, you have three key choices to make about how far you’ll go to make a point:

  1. Who is your audience? Our discussion of the limits to centralised policymaking suggests that your most influential audience will not necessarily be a UK government minister – but who else would it be?
  2. How manipulative should you be? Our discussions of ‘bounded rationality’ and ‘evidence-based policymaking’ suggest that policymakers combine ‘rational’ and ‘irrational’ shortcuts to gather information and make choices. So, do you appeal to their desire to set goals and gather a lot of scientific information and/or make an emotional and manipulative appeal?
  3. Are you an advocate or an ‘honest broker’? Contemporary discussions of science advice to government highlight unresolved debates about the role of unelected advisors: should you simply lay out some possible solutions or advocate one solution strongly?

Be reflective

For our purposes, there are no wrong answers to these questions. Instead, I want you to make and defend your decisions. That is the aim of your policy paper ‘reflection’: to ‘show your work’.

You still have some room to be creative: tell me what you know about policy theory and British politics and how it informed your decisions. Here are some examples, but it is up to you to decide what to highlight:

  • Show how your understanding of policymaker psychology helped you decide how to present information on problems and solutions.
  • Extract insights from policy theories, such as from punctuated equilibrium theory on policymaker attention, multiple streams analysis on timing and feasibility, or the NPF on how to tell persuasive stories.
  • Explore the implications of the lack of ‘comprehensive rationality’ and absence of a ‘policy cycle’: feasibility is partly about identifying the extent to which a solution is ‘doable’ when central governments have limited powers. What ‘policy style’ or policy instruments would be appropriate for the solution you favour?

Be a blogger

With a blog post, your audience is wider. You are trying to make an argument that will capture the attention of a more general audience (interested in politics and policy, but not a specialist) that might access your post from Twitter/ Facebook or via a search engine. This produces new requirements: present a ‘punchy’ title which sums up the whole argument in under 140 characters (a statement is often better than a vague question); summarise the whole argument in (say) 100 words in the first paragraph (what is the problem and solution?); and provide more information up to a maximum of 500 words. The reader can then be invited to read the whole policy analysis.

The style of blog posts varies markedly, so you should consult many examples before attempting your own (compare the LSE with The Conversation and newspaper columns to get a sense of variations in style). When you read other posts, take note of their strengths and weaknesses. For example, many posts associated with newspapers introduce a personal or case study element to ground the discussion in an emotional appeal. Sometimes this works, but sometimes it causes the reader to scroll down quickly to find the main argument. Consider if it is as, or more, effective to make your argument more direct and easy to find as soon as someone clicks the link on their phone. Many academic posts are too long (well beyond your 500 limit), take too long to get to the point, and do not make explicit recommendations, so you should not merely emulate them. You should also not just chop down your policy paper – this is about a new kind of communication.

Be reflective once again

Hopefully, by the end, you will appreciate the transferable life skills. I have generated some uncertainty about your task to reflect the sense among many actors that they don’t really know how to make a persuasive case and who to make it to. We can follow some basic Bardach-style guidance, but a lot of this kind of work relies on trial-and-error. I maintain a short word count to encourage you to get to the point, and I bang on about ‘stories’ in our module to encourage you to make a short and persuasive story to policymakers.

This process seems weird at first, but isn’t it also intuitive? For example, next time you’re in my seminar, measure how long it takes you to get bored and look forward to the weekend. Then imagine that policymakers have the same attention span as you. That’s how long you have to make your case!

See also: Professionalism online with social media

Here is the advice that my former lecturer, Professor Brian Hogwood, gave in 1992. Has the advice changed much since then?

[Images: photographs of Professor Brian Hogwood’s 1992 advice]


Filed under Evidence Based Policymaking (EBPM), Folksy wisdom, POLU9UK

We are in danger of repeating the same mistakes if we bemoan low attention to ‘facts’

A key theme of some of the early analysis of Brexit is that many voters followed their feelings rather than paying attention to facts*.

For some people, this is just a part of life: to describe decision-making as ‘rational’ is to deny the inevitable use of heuristics, gut feelings, emotions, and deeply held beliefs.

For others, it is indicative of a worrying ‘post-truth politics’, or a new world in which campaigners play fast and loose with evidence and say anything to win, while experts are mistrusted and ignored or excluded from debates, and voters don’t get the facts they need to make informed decisions.

One solution, proposed largely by academics (many of whom are highly critical of the campaigns), is institutional: let’s investigate the abuse of facts during the referendum to help us produce new rules of engagement.

Another is more pragmatic: let’s work out how to maximise the effectiveness of experts and evidence in political debate. So far, we know more about what doesn’t work. For example:

  • Don’t simply supply people with more information when you think they are not paying enough attention to it. Instead, try to work out how they think, to examine how they are likely to demand and interpret information.
  • Don’t just bemoan the tendency of people to accept simple stories that reinforce their biases. Instead, try to work out how to produce evidence-based stories that can compete for attention with those of campaigners.
  • Don’t stop at providing simpler and more accessible information. People might be more likely to read a blog post than a book or lengthy report, but most people are likely to remain blissfully unaware of most academic blogs.

I’m honestly not sure how to tell good stories to capture the public imagination (beyond that time I put the word ‘shite’ in a title) but, for example, we have a lot to learn from traditional media (and from some of the most effective academics who write for them) and from scholars who study story-telling and discourse (although, ironically, discourse analysis is often one of the most jargon-filled areas in the Academy).

We have been here before (in policy studies)

This issue of agenda setting is a key feature in current discussions of (the alleged lack of) evidence-based policymaking. Many academics, in areas such as health and environmental policy, bemoan the inevitability of ‘policy based evidence’. Some express the naïve view that policymakers should think like scientists and/ or that evidence-based policymaking should be more like the idea of evidence-based medicine in which there is a hierarchy of evidence. Others try to work out how they can improve the supply of evidence or set up new institutions to get policymakers to pay more attention to facts.

Yet, a more pragmatic solution is to work out how and why policymakers demand information, and the policymaking context in which they operate. Only then can we produce evidence-based strategies based on how the world works rather than how we would like it to work.

See also:

The Politics of Evidence Based Policymaking: 3 messages

Evidence-based policymaking: lecture and Q&A

‘Evidence-based Policymaking’ and the Study of Public Policy

Paul Cairney (2016) The Politics of Evidence-based Policymaking (London: Palgrave Pivot) PDF

Paul Cairney, Kathryn Oliver, and Adam Wellstead (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review, Early View (forthcoming) DOI:10.1111/puar.12555 PDF

* Then, many people on twitter vented their negative feelings about other people expressing their feelings.


Filed under agenda setting, Evidence Based Policymaking (EBPM), public policy, UK politics and policy

Can you separate the facts from your beliefs when making policy?

A key argument in policy studies is that it is impossible to separate facts and values when making policy. We often treat our beliefs as facts, or describe certain facts as objective, but perhaps only to simplify our lives or support a political strategy (a ‘self-evident’ fact is very handy for an argument). People make empirical claims infused with their values and often fail to realise just how their values or assumptions underpin their claims.

This is not an easy argument to explain. One strategy is to use extreme examples to make the point. For example, Herbert Simon points to Hitler’s Mein Kampf as the ultimate example of value-based claims masquerading as facts. We can also draw on some embarrassing historic academic research which claims that the evidence shows men to be more intelligent than women and some races to be demonstrably superior to others. In such cases, we would point out, for example, that the design of the research helped produce such conclusions: our values underpin our assumptions about how to measure intelligence or other measures of superiority.

‘Wait a minute, though’ (you might say). “What about simple examples in which you can state facts with relative certainty – such as the statement ‘there are 449 words in this post’”. ‘Fair enough’, I’d say (you will have to speak with a philosopher to get a better debate about the meaning of your 449 words claim). But this statement doesn’t take you far in policy terms. Instead, you’d want to say that there are too many or too few words, before you decided what to do about it.

In that sense, we have the most practical explanation of the unclear fact/ value distinction: the use of facts in policy is to underpin evaluations based on values. For example, we might point to the routine uses of data to argue that a public service is in ‘crisis’ or that there is a public health related epidemic. We might argue that people only talk about ‘policy problems’ when they think we have a duty to solve them.

Or, facts and values often seem the hardest to separate when we evaluate the success and failure of policy solutions, since the measures used for evaluation are as political as any other part of the policy process. The gathering and presentation of facts is inherently a political exercise, and our use of facts to encourage a policy response is inseparable from our beliefs about how the world should work.

To think further about the relevance of this discussion, see this post on policy evaluation, this page on the use of evidence in policymaking, this book by Douglas, and this short commentary on ‘honest brokers’ by Jasanoff.


Filed under Evidence Based Policymaking (EBPM), UK politics and policy

Fact: Social Construction is Brilliant

[Image: Richard Dawkins tweet]

 

A very short note on Richard Dawkins’ tweet because I am easily distracted. I’m going to call all these facts, which means that they are not facts (although we might call them self-evident):

1. Fact: no one is omniscient.

2. Fact: we can’t perceive or know everything at once (that’s point 1 said in a different way).

3. Fact: we pay attention to some facts and ignore others.

4. Fact: no two people have the same fingerprints (unless they do – it’s just an analogy).

5. Fact: no two people pay the same attention to the same facts.

6. Fact: so, each person perceives the world in a different way, based on the things to which they pay attention and ignore.

7. Fact: this prior knowledge informs their belief system which influences the ways in which they gather facts.

8. Fact: people generally gain this knowledge in particular settings, including schools. They are given facts and often asked to accept them on trust. You don’t go to school and the teacher says ‘what do you think about gravity?’. They say ‘this is a fact and you’ll damn well believe it, sonny/ missy’.

9. Fact: different groups present facts in different ways, so people learn in different ways and different groups end up perceiving the real world in all sorts of different ways. Then they go on twitter and often make an arse of themselves during aimless arguments.

10. Fact: you can make all these points without concluding that all opinions are equally valid. My belief is what counts, mate – socially constructed or not (fact: the assertion that I am right is an exercise of power to put the idiots in their place).

11. Fact: people who talk about social construction aren’t all arses using big words to look clever. Some of them are, but you would expect that with the law of averages (fact).


Filed under Uncategorized