Policy Analysis in 750 Words: Separating facts from values

This post begins by reproducing ‘Can you separate the facts from your beliefs when making policy?’ (based on the 1st edition of Understanding Public Policy) …

A key argument in policy studies is that it is impossible to separate facts and values when making policy. We often treat our beliefs as facts, or describe certain facts as objective, but perhaps only to simplify our lives or support a political strategy (a ‘self-evident’ fact is very handy for an argument). People make empirical claims infused with their values and often fail to realise just how their values or assumptions underpin their claims.

This is not an easy argument to explain. One strategy is to use extreme examples to make the point. For example, Herbert Simon points to Hitler’s Mein Kampf as the ultimate example of value-based claims masquerading as facts. We can also identify historic academic research which asserts that men are more intelligent than women and some races are superior to others. In such cases, we would point out, for example, that the design of the research helped produce such conclusions: our values underpin our (a) assumptions about how to measure intelligence or other measures of superiority, and (b) interpretations of the results.

‘Wait a minute, though,’ you might say. ‘What about simple examples in which you can state facts with relative certainty, such as the statement “there are X words in this post”?’ ‘Fair enough,’ I’d say (you will have to speak with a philosopher to get a better debate about the meaning of your X-words claim; I would simply say that it is trivially true). But this statement doesn’t take you far in policy terms. Instead, you’d want to say that there are too many or too few words before you decided what to do about it.
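The distinction above can be made concrete with a toy snippet (the sample text and the 750-word threshold are my own invented examples): counting the words is an empirical, checkable claim, while calling the text ‘too long’ requires a threshold that someone has to choose.

```python
# Toy illustration of the fact/value distinction discussed above.
# The text and the 750-word threshold are invented examples.

text = ("A key argument in policy studies is that it is "
        "impossible to separate facts and values.")

word_count = len(text.split())   # an empirical, checkable fact
TOO_LONG_THRESHOLD = 750         # a value judgement: someone chose this number

# The evaluation ("too long") only exists relative to the chosen threshold.
evaluation = "too long" if word_count > TOO_LONG_THRESHOLD else "acceptable"
print(word_count, evaluation)
```

The count is trivially true; the evaluation changes entirely if a different person picks a different threshold, which is the policy-relevant part.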

In that sense, we have the most practical explanation of the unclear fact/value distinction: the use of facts in policy is to underpin evaluations (assessments based on values). For example, we might point to the routine uses of data to argue that a public service is in ‘crisis’ or that there is a public health epidemic (note: I wrote the post before COVID-19; it referred to crises of ‘non-communicable diseases’). We might argue that people only describe issues as ‘policy problems’ when they believe someone has a duty to solve them.

Facts and values often seem hardest to separate when we evaluate the success and failure of policy solutions, since the measures used for evaluation are as political as any other part of the policy process. The gathering and presentation of facts is inherently a political exercise, and our use of facts to encourage a policy response is inseparable from our beliefs about how the world should work.

It continues with an edited excerpt from p59 of Understanding Public Policy, which explores the implications of bounded rationality for contemporary accounts of ‘evidence-based policymaking’:

‘Modern science remains value-laden … even when so many people employ so many systematic methods to increase the replicability of research and reduce the reliance of evidence on individual scientists. The role of values is fundamental. Anyone engaging in research uses professional and personal values and beliefs to decide which research methods are the best; generate research questions, concepts and measures; evaluate the impact and policy relevance of the results; decide which issues are important problems; and assess the relative weight of ‘the evidence’ on policy effectiveness. We cannot simply focus on ‘what works’ to solve a problem without considering how we used our values to identify a problem in the first place. It is also impossible in practice to separate two choices: (1) how to gather the best evidence and (2) whether to centralize or localize policymaking. Most importantly, the assertion that ‘my knowledge claim is superior to yours’ symbolizes one of the most worrying exercises of power. We may decide to favour some forms of evidence over others, but the choice is value-laden and political rather than objective and innocuous’.

Implications for policy analysis

Many highly intelligent and otherwise sensible people seem to get very bothered by this kind of argument. For example, it gets in the way of (a) simplistic stories of heroic-objective-fact-based-scientists speaking truth to villainous-stupid-corrupt-emotional-politicians, (b) the ill-considered political slogan that you can’t argue with facts (or ‘science’), (c) the notion that some people draw on facts while others only follow their feelings, and (d) the idea that you can divide populations into super-facty versus post-truthy people.

A more sensible approach is to (1) recognise that all people combine cognition and emotion when assessing information, (2) treat politics and political systems as valuable and essential processes (rather than obstacles to technocratic policymaking), and (3) find ways to communicate evidence-informed analyses in that context. This article and this ‘750 words’ post explore how to reflect on this kind of communication.

Most relevant posts in the 750 series

Linda Tuhiwai Smith (2012) Decolonizing Methodologies 

Carol Bacchi (2009) Analysing Policy: What’s the problem represented to be? 

Deborah Stone (2012) Policy Paradox

Who should be involved in the process of policy analysis?

William Riker (1986) The Art of Political Manipulation

Using Statistics and Explaining Risk (David Spiegelhalter and Gerd Gigerenzer)

Barry Hindess (1977) Philosophy and Methodology in the Social Sciences

See also

To think further about the relevance of this discussion, see this post on policy evaluation, this page on the use of evidence in policymaking, this book by Douglas, and this short commentary on ‘honest brokers’ by Jasanoff.


The coronavirus and evidence-informed policy analysis (short version)

  • Paul Cairney (2020) ‘The UK Government’s COVID-19 policy: assessing evidence-informed policy analysis in real time’, British Politics https://rdcu.be/b9zAk (PDF)

The coronavirus feels like a new policy problem that requires new policy analysis. The analysis should be informed by (a) good evidence, translated into (b) good policy. However, don’t be fooled into thinking that either of those things is straightforward. There are simple-looking steps from defining a problem to making a recommendation, but this simplicity masks the profoundly political process that must take place. Each step in analysis involves political choices to prioritise some problems and solutions over others, and therefore to prioritise some people’s lives at the expense of others.

My article in British Politics takes us through those steps in the UK, and situates them in a wider political and policymaking context. This post is shorter, and only scratches the surface of analysis.

5 steps to policy analysis

  1. Define the problem.

Perhaps we can sum up the initial UK government approach as: (a) the impact of this virus and illness will be a level of death and illness that could overwhelm the population and exceed the capacity of public services, so (b) we need to contain the virus enough to make sure it spreads in the right way at the right time, so (c) we need to encourage and make people change their behaviour (primarily via hygiene and social distancing). However, there are many ways to frame this problem to emphasise the importance of some populations over others, and some impacts over others.

  2. Identify technically and politically feasible solutions.

Solutions are not really solutions: they are policy instruments that address one aspect of the problem, including taxation and spending, delivering public services, funding research, giving advice to the population, and regulating or encouraging changes to social behaviour. Each new instrument contributes to an existing mix, with unpredictable and unintended consequences. Some instruments seem technically feasible (they will work as intended if implemented) but will not be adopted unless politically feasible (enough people support their introduction), or vice versa. From the UK government’s perspective, this dual requirement rules out a lot of responses.

  3. Use values and goals to compare solutions.

Typical judgements combine: (a) broad descriptions of values such as efficiency, fairness, freedom, security, and human dignity, (b) instrumental goals, such as sustainable policymaking (can we do it, and for how long?) and political feasibility (will people agree to it, and will it make me more or less popular or trusted?), and (c) the process to make choices, such as the extent to which a policy process involves citizens or stakeholders (alongside experts) in deliberation. They combine to help policymakers come to high-profile choices (such as the balance between individual freedom and state coercion) and low-profile but profound choices (to influence the level of public service capacity and state intervention, and therefore who and how many people will die).

  4. Predict the outcome of each feasible solution.

It is difficult to envisage a way for the UK Government to publicise all of the thinking behind its choices (Step 3) and predictions (Step 4) in a way that would encourage effective public deliberation. People often call for the UK Government to publicise its expert advice and operational logic, but I am not sure how it would separate that advice from its normative logic about who should live or die, or provide a frank account without unintended consequences for public trust or anxiety. One aspect of government policy, then, is to keep some choices implicit and avoid a lot of debate on trade-offs. Another is to make choices continuously without knowing what their impact will be (the most likely scenario right now).

  5. Make a choice, or recommendation to your client.

Your recommendation or choice would build on these four steps. Define the problem with one framing at the expense of the others. Romanticise some people and not others. Decide how to support some people, and coerce or punish others. Prioritise the lives of some people in the knowledge that others will suffer or die. Do it despite your lack of expertise and profoundly limited knowledge and information. Learn from experts, but don’t assume that only scientific experts have relevant knowledge (decolonise; coproduce). Recommend choices that, if damaging, could take decades to fix after you’ve gone. Consider if a policymaker is willing and able to act on your advice, and if your proposed action will work as intended. Consider if a government is willing and able to bear the economic and political costs. Protect your client’s popularity, and trust in your client, at the same time as protecting lives. Consider if your advice would change if the problem seemed to change. If you are writing your analysis, maybe keep it down to one sheet of paper (in other words, fewer words than in this post up to this point).

Policy analysis is not as simple as these steps suggest, and further analysis of the wider policymaking environment helps describe two profound limitations to simple analytical thought and action.

  1. Policymakers must ignore almost all evidence

The amount of policy relevant information is infinite, and capacity is finite. So, individuals and governments need ways to filter out almost all of it. Individuals combine cognition and emotion to help them make choices efficiently, and governments have equivalent rules to prioritise only some information. They include: define a problem and a feasible response, seek information that is available, understandable, and actionable, and identify credible sources of information and advice. In that context, the vague idea of trusting or not trusting experts is nonsense, and the larger post highlights the many flawed ways in which all people decide whose expertise counts.

  2. They do not control the policy process.

Policymakers engage in a messy and unpredictable world in which no single ‘centre’ has the power to turn a policy recommendation into an outcome.

  • There are many policymakers and influencers spread across a political system. For example, consider the extent to which each government department, devolved governments, and public and private organisations are making their own choices that help or hinder the UK government approach.
  • Most choices in government are made in ‘subsystems’, with their own rules and networks, over which ministers have limited knowledge and influence.
  • The social and economic context, and events, are largely out of their control.

The take home messages (if you accept this line of thinking)

  1. The coronavirus is an extreme example of a general situation: policymakers will always have very limited knowledge of policy problems and limited control over their policymaking environment. They make choices to frame problems narrowly enough to seem solvable, rule out most solutions as not feasible, make value judgements to try to help some more than others, try to predict the results, and respond when the results do not match their hopes or expectations.
  2. This is not a message of doom and despair. Rather, it encourages us to think about how to influence government, and hold policymakers to account, in a thoughtful and systematic way that does not mislead the public or exacerbate the problem we are seeing. No one is helping their government solve the problem by saying stupid shit on the internet (OK, that last bit was a message of despair).

Further reading:

The article (PDF) sets out these arguments in much more detail, with some links to further thoughts and developments.

This series of ‘750 words’ posts summarises key texts in policy analysis and tries to situate policy analysis in a wider political and policymaking context. Note the focus on whose knowledge counts, which is not yet a big feature of this crisis.

These series of 500 words and 1000 words posts (with podcasts) summarise concepts and theories in policy studies.

This page on evidence-based policymaking (EBPM) uses those insights to demonstrate why EBPM is a political slogan rather than a realistic expectation.

These recorded talks relate those insights to common questions asked by researchers: why do policymakers seem to ignore my evidence, and what can I do about it? I’m happy to record more (such as on the topic you just read about) but not entirely sure who would want to hear what.


The coronavirus and evidence-informed policy analysis (long version)

Final update 2.11.20. Don’t read this post. It became too long and unwieldy. I turned it into:

A published article https://rdcu.be/b9zAk (PDF)

A 25000 word version with more discussion and links Cairney UK coronavirus policy 25000 14.7.20 

This is the long version. It is long. Too long to call a blog post. Let’s call it a ‘living document’ that I update and amend as new developments arise (then start turning into a more organised paper). In most cases, I am adding tweets, so the date of the update is embedded. If I add a new section, I will add a date. If you seek specific topics (like ‘herd immunity’), it might be worth doing a search. The short version is shorter.

The coronavirus feels like a new policy problem. Governments already have policies for public health crises, but the level of uncertainty about the spread and impact of this virus seems to be taking it to a new level of policy, media, and public attention. The UK Government’s Prime Minister calls it ‘the worst public health crisis for a generation’.

As such, there is no shortage of opinions on what to do, but there is a shortage of well-considered opinions, producing little consensus. Many people are rushing to judgement and expressing remarkably firm opinions about the best solutions, but their contributions add up to contradictory evaluations, in which:

  • the government is doing precisely the right thing or the completely wrong thing,
  • we should listen to this expert saying one thing or another expert saying the opposite.

Lots of otherwise-sensible people are doing what they bemoan in politicians: rushing to judgement, largely accepting or sharing evidence only if it reinforces that judgement, and/or using their interpretation of any new development to settle scores with their opponents.

Yet, anyone who feels, without uncertainty, that they have the best definition of, and solution to, this problem is a fool. If people are also sharing bad information and advice, they are dangerous fools. Further, as Professor Madley puts it (in the video below), ‘anyone who tells you they know what’s going to happen over the next six months is lying’.

In that context, how can we make sense of public policy to address the coronavirus in a more systematic way?

Studies of policy analysis and policymaking do not solve a policy problem, but they at least give us a language to think it through.

  1. Let’s focus on the UK as an example, and use common steps in policy analysis, to help us think through the problem and how to try to manage it.
  • In each step, note how quickly it is possible to be overwhelmed by uncertainty and ambiguity, even when the issue seems so simple at first.
  • Note how difficult it is to move from Step 1, and to separate Step 1 from the others. It is difficult to define the problem without relating it to the solution (or to the ways in which we will evaluate each solution).
  2. Let’s relate that analysis to research on policymaking, to understand the wider context in which people pay attention to, and try to address, important problems that are largely out of their control.

Throughout, note that I am describing a thought process as simply as I can, not providing a full examination of the relevant evidence. I am highlighting the problems that people face when ‘diagnosing’ policy problems, not trying to diagnose them myself. To do so, I draw initially on common advice from the key policy analysis texts (summaries of the texts that policy analysis students are most likely to read), which simplify the process a little too much. Still, the thought process they encourage took me hours alone (spread over three days) to produce no real conclusion. Policymakers and advisers, in the thick of this problem, do not have that luxury of time or uncertainty.

See also: Boris Johnson’s address to the nation in full (23.3.20) and press conference transcripts

Step 1 Define the problem

Common advice in policy analysis texts:

  • Provide a diagnosis of a policy problem, using rhetoric and eye-catching data to generate attention.
  • Identify its severity, urgency, cause, and our ability to solve it. Don’t define the wrong problem, such as by oversimplifying.
  • Problem definition is a political act of framing, as part of a narrative to evaluate the nature, cause, size, and urgency of an issue.
  • Define the nature of a policy problem, and the role of government in solving it, while engaging with many stakeholders.
  • ‘Diagnose the undesirable condition’ and frame it as ‘a market or government failure (or maybe both)’.

Coronavirus as a physical problem is not the same as a coronavirus policy problem. To define the physical problem is to identify the nature, spread, and impact of a virus and illness on individuals and populations. To define a policy problem, we identify the physical problem and relate it (implicitly or explicitly) to what we think a government can, and should, do about it. Put more provocatively, it is only a policy problem if policymakers are willing and able to offer some kind of solution.

This point may seem semantic, but it raises a profound question about the capacity of any government to solve a problem like an epidemic, or for governments to cooperate to solve a pandemic. It is easy for an outsider to exhort a government to ‘do something!’ (or ‘ACT NOW!’) and express certainty about what would happen. However, policymakers inside government:

  1. Do not enjoy the same confidence that they know what is happening, or that their actions will have their intended consequences, and
  2. Will think twice about trying to regulate social behaviour under those circumstances, especially when they
  3. Know that any action or inaction will benefit some and punish others.

For example, can a government make people wash their hands? Or, if it restricts gatherings at large events, can it stop people gathering somewhere else, with worse impact? If it closes a school, can it stop children from going to their grandparents to be looked after until it reopens? There are 101 similar questions and, in each case, I reckon the answer is no. Maybe government action has some of the desired impact; maybe not. If you agree, then the question might be: what would it really take to force people to change their behaviour?

See also: Coronavirus has not suspended politics – it has revealed the nature of power (David Runciman)

The answer is: often too much for a government to consider (in a liberal democracy), particularly if policymakers are informed that it will not have the desired impact.

If so, the UK government’s definition of the policy problem will incorporate this implicit question: what can we do if we can influence, but not determine (or even predict well) how people behave?

Uncertainty about the coronavirus plus uncertainty about policy impact

Now, add that general uncertainty about the impact of government to this specific uncertainty about the likely nature and spread of the coronavirus:

A summary of this video suggests:

  • There will be an epidemic (a profound spread to many people in a short space of time), then the problem will be endemic (a long-term, regular feature of life) (see also UK policy on coronavirus COVID-19 assumes that the virus is here to stay).
  • In the absence of a vaccine, the only way to produce ‘herd immunity’ is for most people to be infected and recover.

[Note: there is much debate on whether ‘herd immunity’ is or is not government policy. Much of it relates to interpretation, based on levels of trust/distrust in the UK Government, its Prime Minister, and the Prime Minister’s special adviser. I discuss this point below under ‘trial and error policymaking’. See also Who can you trust during the coronavirus crisis? ]

  • The ideal spread involves all well people sharing the virus first, while all vulnerable people (e.g. older people, and/or those with existing health problems that affect their immune systems) are protected in one isolated space. It won’t happen like that, so we are trying to minimise damage in the real world.
  • We mainly track the spread via deaths, with data showing a major spike appearing one month later, so the problem may only seem real to most people when it is too late to change behaviour

See also: Coronavirus: Government expert defends not closing UK schools (BBC, Sir Patrick Vallance 13th March 2020)

https://twitter.com/DrSamSims/status/1247445729439895555

  • The choice in theory is between a rapid epidemic with a high peak, or a slowed-down epidemic over a longer period, but ‘anyone who tells you they know what’s going to happen over the next six months is lying’.
  • Maybe this epidemic will be so memorable as to shift social behaviour, but so much depends on trying to predict (badly) if individuals will actually change (see also Spiegelhalter on communicating risk).
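The choice between a rapid epidemic with a high peak and a slowed-down epidemic over a longer period can be sketched with a toy SIR model. This is my own illustration, not the government’s model, and all parameter values are invented rather than estimates for COVID-19:

```python
# Toy SIR epidemic model (simple Euler steps) to illustrate "flattening the
# curve". All parameter values are invented for illustration; they are not
# estimates for COVID-19.

def sir_peak(beta, gamma=0.1, days=300, steps_per_day=10):
    """Return (peak infected fraction, day of peak) for transmission rate beta."""
    s, i, r = 0.999, 0.001, 0.0          # susceptible, infected, recovered
    dt = 1.0 / steps_per_day
    peak_i, peak_day = i, 0.0
    for step in range(days * steps_per_day):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        if i > peak_i:
            peak_i, peak_day = i, step * dt
    return peak_i, peak_day

fast_peak, fast_day = sir_peak(beta=0.40)  # unmitigated spread
slow_peak, slow_day = sir_peak(beta=0.25)  # distancing reduces transmission

# The slowed epidemic has a lower, later peak: the "flattened curve".
print(f"unmitigated: peak {fast_peak:.2f} around day {fast_day:.0f}")
print(f"mitigated:   peak {slow_peak:.2f} around day {slow_day:.0f}")
```

Reducing the transmission rate lowers and delays the peak, which is the whole logic of buying time for health service capacity; the hard part, as the quote above stresses, is that no one knows the real-world parameters in advance.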

None of this account tells policymakers what to do, but at least it helps them clarify three key aspects of their policy problem:

  1. The impact of this virus and illness could overwhelm the population, to the extent that it causes mass deaths, causes a level of illness that exceeds the capacity of health services to treat, and contributes to an unpredictable amount of social and economic damage.
  2. We need to contain the virus enough to make sure it (a) spreads at the right speed and/or (b) peaks at the right time. The right speed seems to be: a level that allows most people to recover alone, while the most vulnerable are treated well in healthcare settings that have enough capacity. The right time seems to be the part of the year with the lowest demand on health services (e.g. summer is better than winter). In other words, (a) reduce the size of the peak by ‘flattening the curve’, and/or (b) find the right time of year to address the peak, while (c) anticipating more than one peak.

My impression is that the most frequently-expressed aim is (a) …

… while the UK Government’s Deputy Chief Medical Officer also seems to be describing (b):

  3. We need to encourage or coerce people to change their behaviour, to look after themselves (e.g. by handwashing) and forsake their individual preferences for the sake of public health (e.g. by self-isolating or avoiding vulnerable people). Perhaps we can foster social trust and empathy to encourage responsible individual action. Perhaps people will only protect others if obliged to do so (compare Stone; Ostrom; game theory).
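The point about obligation versus voluntary cooperation can be sketched as a toy two-player game, in the spirit of the game-theory reference above. The payoff numbers are my own invented illustration:

```python
# Toy two-player "social distancing" game (a prisoner's dilemma), sketching the
# game-theory point above. Payoff numbers are invented for illustration only.

# payoff[my_action][their_action] = my payoff
payoff = {
    "distance": {"distance": 3, "not": 0},  # I bear a cost; benefit depends on them
    "not":      {"distance": 4, "not": 1},  # free-riding tempts me either way
}

def best_response(their_action):
    """The action that maximises my payoff, given the other player's action."""
    return max(payoff, key=lambda mine: payoff[mine][their_action])

# Whatever the other player does, not distancing is the selfish best reply...
print(best_response("distance"), best_response("not"))   # "not" in both cases

# ...yet mutual distancing is better for both than mutual defection (3 > 1),
# which is why voluntary cooperation may need trust, norms, or obligation.
```

Under these assumed payoffs, each individual’s best reply is to free-ride even though everyone is better off if all cooperate, which is one way to express why governments consider coercion rather than relying on exhortation alone.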

See also: From across the Ditch: How Australia has to decide on the least worst option for COVID-19 (Prof Tony Blakely on three bad options: (1) the likelihood of ‘elimination’ of the virus before vaccination is low; (2) an 18-month lock-down will help ‘flatten the curve’; (3) ‘to prepare meticulously for allowing the pandemic to wash through society over a period of six or so months. To tool up the production of masks and medical supplies. To learn as quickly as possible which treatments of people sick with COVID-19 saves lives. To work out our strategies for protection of the elderly and those with a chronic condition (for whom the mortality from COVID-19 is much higher’).

From uncertainty to ambiguity

If you are still with me, I reckon you would have worded those aims slightly differently, right? There is some ambiguity about these broad intentions, partly because there is some uncertainty, and partly because policymakers need to set rather vague intentions to generate the highest possible support for them. However, vagueness is not our friend during a crisis involving such high anxiety. Further, vague aims only delay the inevitable choices that people need to make to turn a complex, multi-faceted problem into something simple enough to describe and manage. The problem may be complex, but our attention focuses on only a small number of its aspects, at the expense of the rest. Examples that have arisen so far include whether to accentuate:

  1. The health of the whole population or people who would be affected disproportionately by the illness.
  • For example, the difference in emphasis affects the health advice for the relatively vulnerable (and the balance between exhortation and reassurance)
  2. Inequalities in relation to health, socio-economic status (e.g. income, gender, race, ethnicity), or the wider economy.
  • For example, restrictive measures may reduce the risk of harm to some, but increase the burden on people with no savings or reliable sources of income.
  • For example, some people are hoarding large quantities of home and medical supplies that (a) other people cannot afford, and (b) some people cannot access, despite having higher need.
  • For example, social distancing will limit the spread of the virus (see the nascent evidence), but also produce highly unequal forms of social isolation that increase the risk of domestic abuse (possibly exacerbated by school closures) and undermine wellbeing. Or, there will be major policy changes, such as to the rules to detain people under mental health legislation, regarding abortion, or in relation to asylum (note: some of these tweets are from the US, partly because I’m seeing more attention to race – and the consequence of systematic racism on the socioeconomic inequalities so important to COVID-19 mortality – than in the UK).

See also: COVID-19: how the UK’s economic model contributes towards a mismanagement of the crisis (Carolina Alves and Farwa Sial 30.3.20),

Economic downturn and wider NHS disruption likely to hit health hard – especially health of most vulnerable (Institute for Fiscal Studies 9.4.20),

Don’t be fooled: Britain’s coronavirus bailout will make the rich richer still (Christine Berry 13.4.20)

https://twitter.com/TimothyNoah1/status/1240375741809938433

 

https://twitter.com/povertyscholar/status/1246487621230092294

https://twitter.com/GKBhambra/status/1248874500764073989

https://twitter.com/boodleoops/status/1246717497308577792

https://twitter.com/MarioLuisSmall/status/1239879542094925825

https://twitter.com/heytherehurley/status/1242113416103432195

  • For example, governments cannot ignore the impact of their actions on the economy, however much they emphasise mortality, health, and wellbeing. Most high-profile emphasis was initially on the fate of large and small businesses, and people with mortgages, but a long period of crisis will tip the balance from low income to unsustainable poverty (even prompting Iain Duncan Smith to propose policy change). And why favour people who can afford a mortgage over people scraping together money for rent?
  3. A need for more communication and exhortation, or for direct action to change behaviour.
  4. The short term (do everything possible now) or long term (manage behaviour over many months).
  5. How to maintain trust in the UK government when (a) people are more or less inclined to trust the current party of government and general trust may be quite low, and (b) so many other governments are acting differently from the UK.
  • For example, note the visible presence of the Prime Minister, but also his unusually high deference to unelected experts such as (a) UK Government senior scientists providing direct advice to ministers and the public, and (b) scientists drawing on limited information to model behaviour and produce realistic scenarios (we can return to the idea of ‘evidence-based policymaking’ later). This approach is not uncommon with epidemics/ pandemics (LD was then the UK Government’s Chief Medical Officer):
  • For example, note how often people are second guessing and criticising the UK Government position (and questioning the motives of Conservative ministers).

See also: Coronavirus: meet the scientists who are now household names

  6. How policy in relation to the coronavirus relates to other priorities (e.g. Brexit, Scottish independence, trade, education, culture).

  7. Who caused, or who is exacerbating, the problem? The answers to such questions help determine which populations are most subject to policy intervention.

  • For example, people often try to lay blame for viruses on certain populations, based on their nationality, race, ethnicity, sexuality, or behaviour (e.g. with HIV).
  • For example, the (a) association between the coronavirus and China and Chinese people (e.g. restrict travel to/ from China; e.g. exacerbate racism), initially overshadowed (b) the general role of international travellers (e.g. place more general restrictions on behaviour), and (c) other ways to describe who might be responsible for exacerbating a crisis.

See also: ‘Othering the Virus‘ by Marius Meinhof

Under ‘normal’ policymaking circumstances, we would expect policymakers to resolve this ambiguity by exercising power to set the agenda and make choices that close off debate. Attention rises at first, a choice is made, and attention tends to move on to something else. With the coronavirus, attention to many different aspects of the problem has been lurching remarkably quickly. The definition of the policy problem often seems to be changing daily or hourly, and more quickly than the physical problem. It will also change many more times, particularly when attention to each personal story of illness or death prompts people to question government policy every hour. If the policy problem keeps changing in these ways, how could a government solve it?

Step 2 Identify technically and politically feasible solutions

Common advice in policy analysis texts:

  • Identify the relevant and feasible policy solutions that your audience/ client might consider.
  • Explain potential solutions in sufficient detail to predict the costs and benefits of each ‘alternative’ (including current policy).
  • Provide ‘plausible’ predictions about the future effects of current/ alternative policies.
  • Identify many possible solutions, then select the ‘most promising’ for further analysis.
  • Identify how governments have addressed comparable problems, and a previous policy’s impact.

Policy ‘solutions’ are better described as ‘tools’ or ‘instruments’, largely because (a) it is rare to expect them to solve a problem, and (b) governments use many instruments (in different ways, at different times) to make policy, including:

  1. Public expenditure (e.g. to boost spending for emergency care, crisis services, medical equipment)
  2. Economic incentives and disincentives (e.g. to reduce the cost of business or borrowing, or tax unhealthy products)
  3. Linking spending to entitlement or behaviour (e.g. social security benefits conditional on working or seeking work, perhaps with the rules modified during crises)
  4. Formal regulations versus voluntary agreements (e.g. making organisations close, or encouraging them to close)
  5. Public services: universal or targeted, free or with charges, delivered directly or via non-governmental organisations
  6. Legal sanctions (e.g. criminalising reckless behaviour)
  7. Public education or advertising (e.g. as paid adverts or via media and social media)
  8. Funding scientific research, and organisations to advise on policy
  9. Establishing or reforming policymaking units or departments
  10. Behavioural instruments, to ‘nudge’ behaviour (seemingly a big feature in the UK, such as on how to encourage handwashing).

As a result, what we call ‘policy’ is really a complex mix of instruments adopted by one or more governments. A truism in policy studies is that it is difficult to define or identify exactly what policy is because (a) each new instrument adds to a pile of existing measures (with often-unpredictable consequences), and (b) many instruments designed for individual sectors tend, in practice, to intersect in ways that we cannot always anticipate. When you think through any government response to the coronavirus, note how every measure is connected to many others.

Further, it is a truism in public policy that there is a gap between technical and political feasibility: the things that we think will be most likely to work as intended if implemented are often the things that would receive the least support or most opposition. For example:

  1. Redistributing income and wealth to reduce socio-economic inequalities (e.g. to allay fears about the impact of current events on low-income households and people in poverty) seems to be less politically feasible than distributing public services to deal with the consequences of health inequalities.
  2. Providing information and exhortation seems more politically feasible than the direct regulation of behaviour. Indeed, compared to many other countries, the UK Government seems reluctant to introduce ‘quarantine’ style measures to restrict behaviour.

Under ‘normal’ circumstances, governments may be using these distinctions as simple heuristics to help them make modest policy changes while remaining sufficiently popular (or at least looking competent). If so, they are adding or modifying policy instruments during individual ‘windows of opportunity’ for specific action, or perhaps contributing to the sense of incremental change towards an ambitious goal.

Right now, we may be pushing the boundaries of what seems possible, since crises – and the need to address public anxiety – tend to change what seems politically feasible. However, many options that seem politically feasible may not be possible (e.g. buying a lot of extra medical/ technology capacity quickly), or may not work as intended (e.g. restricting the movement of people). Think of technical and political feasibility as each necessary but insufficient on its own – a requirement that rules out a lot of responses.

Step 3 Use value-based criteria and political goals to compare solutions

Common advice in policy analysis texts:

  • Typical value judgements relate to efficiency, equity and fairness, the trade-off between individual freedom and collective action, and the extent to which a policy process involves citizens in deliberation.
  • Normative assessments are based on values such as ‘equality, efficiency, security, democracy, enlightenment’ and beliefs about the preferable balance between state, communal, and market/ individual solutions
  • ‘Specify the objectives to be attained in addressing the problem and the criteria to evaluate the attainment of these objectives as well as the satisfaction of other key considerations (e.g., equity, cost, equity, feasibility)’.
  • ‘Effectiveness, efficiency, fairness, and administrative efficiency’ are common.
  • Identify (a) the values to prioritise, such as ‘efficiency’, ‘equity’, and ‘human dignity’, and (b) ‘instrumental goals’, such as ‘sustainable public finance or political feasibility’, to generate support for solutions.
  • Instrumental questions may include: Will this intervention produce the intended outcomes? Is it easy to get agreement and maintain support? Will it make me popular, or diminish trust in me even further?

Step 3 is the simplest-looking but most difficult task. Remember that it is a political, not technical, process. It is also a political process that most people would like to avoid (at least publicly), because it involves making explicit the ways in which we prioritise some people over others. Public policy is the choice to help some people and punish or refuse to help others (and includes the choice to do nothing).

Policy analysis texts describe a relatively simple procedure of identifying criteria and producing a table (with a solution in each row, and criteria in each column) to compare the trade-offs between each solution. However, these criteria are notoriously difficult to define, and people resolve that problem by exercising power to decide what each term means, and whose interests should be served when they resolve trade-offs. For example, see Stone on whose needs come first, who benefits from each definition of fairness, and how technical-looking processes such as ‘cost benefit analysis’ mask political choices.
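To see how this table procedure works – and why the politics lies in the criteria rather than the arithmetic – here is a minimal, hypothetical sketch. All of the options, criteria, scores, and weights below are invented for illustration, not drawn from any government analysis:

```python
# Hypothetical policy options scored (0-10) against value-based criteria.
# Every number here is an invented illustration, not real analysis.
options = {
    "Voluntary guidance": {"effectiveness": 3, "equity": 5, "feasibility": 9},
    "Targeted restrictions": {"effectiveness": 6, "equity": 6, "feasibility": 6},
    "Full lockdown": {"effectiveness": 9, "equity": 4, "feasibility": 3},
}

def rank(weights):
    """Return the option with the highest weighted score.

    The weights encode value judgements: deciding how much 'equity'
    matters relative to 'feasibility' is a political, not technical, choice.
    """
    totals = {
        name: sum(weights[criterion] * score for criterion, score in scores.items())
        for name, scores in options.items()
    }
    return max(totals, key=totals.get)

# Prioritising feasibility favours the lightest-touch option...
print(rank({"effectiveness": 1, "equity": 1, "feasibility": 3}))
# ...while prioritising effectiveness favours the most coercive one.
print(rank({"effectiveness": 3, "equity": 1, "feasibility": 1}))
```

The point, following Stone, is that the table looks like a neutral calculation, but the choice of criteria and weights is exactly where power is exercised: change the weights and the ‘best’ solution changes.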

Right now, the most obvious and visible trade-off, accentuated in the UK, is between individual freedom and collective action, or the balance between state, communal, and market/ individual solutions. In comparison with many countries (and China and Italy in particular), the UK Government seems to be favouring individual action over state quarantine measures. However, most trade-offs are difficult to categorise:

  1. What should be the balance between efforts to minimise the deaths of some (generally in older populations) and maximise the wellbeing of others? This is partly about human dignity during crisis, how we treat different people fairly, and the balance of freedom and coercion.
  2. How much should a government spend to keep people alive using intensive care or expensive medicines, when the money could be spent improving the lives of far more people? This is partly about human dignity, the relative efficiency of policy measures, and fairness.

If you are like me, you don’t really want to answer such questions (indeed, even writing them looks callous). If so, one way to resolve them is to elect policymakers to make such choices on our behalf (perhaps aided by experts in moral philosophy, or with access to deliberative forums). To endure, this unusually high level of deference to elected ministers requires some kind of reciprocal act:

https://twitter.com/devisridhar/status/1240648925998178304

See also: We must all do everything in our power to protect lives (UK Secretary of State for Health and Social Care)

Still, I doubt that governments are making reportable daily choices with reference to a clear and explicit view of what the trade-offs and priorities should be, because their choices are about who will die, and their ability to predict outcomes is limited.

See also: Media experts despair at Boris Johnson’s coronavirus campaign (Sonia Sodha)

Step 4 Predict the outcome of each feasible solution

Common advice in policy analysis texts:

  • Focus on the outcomes that key actors care about (such as value for money), and quantify and visualise your predictions if possible. Compare the pros and cons of each solution, such as how far policymakers will accept a worse service to cut costs.
  • ‘Assess the outcomes of the policy options in light of the criteria and weigh trade-offs between the advantages and disadvantages of the options’.
  • Estimate the cost of a new policy, in comparison with current policy, and in relation to factors such as savings to society or benefits to certain populations. Use your criteria and projections to compare each alternative in relation to their likely costs and benefits.
  • Explain potential solutions in sufficient detail to predict the costs and benefits of each ‘alternative’ (including current policy).
  • Short deadlines dictate that you use ‘logic and theory, rather than systematic empirical evidence’ to make predictions efficiently.
  • Monitoring is crucial because it is difficult to predict policy success, and unintended consequences are inevitable. Try to measure the outcomes of your solution, while noting that evaluations are contested.

It is difficult to envisage a way for the UK Government to publicise the thinking behind its choices (Step 3) and predictions (Step 4) in a way that would encourage effective public deliberation, rather than a highly technical debate between a small number of academics:

Further, people often call for the UK Government to publicise its expert advice and operational logic, but I am not sure how it would separate them from its normative logic, or provide a frank account without unintended consequences for public trust or anxiety. If so, government policy involves (a) keeping some choices implicit to avoid a lot of debate on trade-offs, and (b) making general statements about choices when it does not know what their impact will be.

Step 5 Make a recommendation to your client

Common advice in policy analysis texts:

  • Examine your case through the eyes of a policymaker. Keep it simple and concise.
  • Make a preliminary recommendation to inform an iterative process, drawing feedback from clients and stakeholder groups
  • Client-oriented advisors identify the beliefs of policymakers and tailor accordingly.
  • ‘Unless your client asks you not to do so, you should explicitly recommend one policy’

I now invite you to make a recommendation (step 5) based on our discussion so far (steps 1-4). Define the problem with one framing at the expense of the others. Romanticise some people and not others. Decide how to support some people, and coerce or punish others. Prioritise the lives of some people in the knowledge that others will suffer or die. Do it despite your lack of expertise and profoundly limited knowledge and information. Learn from experts, but don’t assume that only scientific experts have relevant knowledge (decolonise; coproduce). Recommend choices that, if damaging, could take decades to fix after you’ve gone. Consider if a policymaker is willing and able to act on your advice, and if your proposed action will work as intended. Consider if a government is willing and able to bear the economic and political costs. Protect your client’s popularity, and trust in your client, at the same time as protecting lives. Consider if your advice would change if the problem seemed to change. If you are writing your analysis, maybe keep it down to one sheet of paper (and certainly far fewer words than in this post). Better you than me.

Please now watch this video before I suggest that things are not so simple.

Would that policy analysis were so simple

Imagine writing policy analysis in an imaginary world, in which there is a single powerful ‘rational’ policymaker at the heart of government, making policy via an orderly series of stages.

[Image: cycle and cycle spirograph 18.2.20]

Your audience would be easy to identify at each stage, your analysis would be relatively simple, and you would not need to worry about what happens after you make a recommendation for policy change (since the selection of a solution would lead to implementation).  You could adopt a simple 5 step policy analysis method, use widely-used tools such as cost-benefit analysis to compare solutions, and know where the results would feed into the policy process.

Studies of policy analysts describe how unrealistic this expectation tends to be (Radin, Brans, Thissen).

[Image: Table for coronavirus 750]

For example, there are many policymakers, analysts, influencers, and experts spread across political systems, and engaging with 101 policy problems simultaneously, which suggests that it is not even clear how everyone fits together and interacts in what we call (for the sake of simplicity) ‘the policy process’.

Instead, we can describe real world policymaking with reference to two factors.

The wider policymaking environment: 1. Limiting the use of evidence

First, policymakers face ‘bounded rationality’, in which they only have the ability to pay attention to a tiny proportion of available facts, are unable to separate those facts from their values (since we use our beliefs to evaluate the meaning of facts), struggle to make clear and consistent choices, and do not know what impact they will have. The consequences can include:

  • Limited attention, and lurches of attention. Policymakers can only pay attention to a tiny proportion of their responsibilities, and policymaking organizations struggle to process all policy-relevant information. They prioritize some issues and information and ignore the rest.
  • Power and ideas. Some ways of understanding and describing the world dominate policy debate, helping some actors and marginalizing others.
  • Beliefs and coalitions. Policymakers see the world through the lens of their beliefs. They engage in politics to turn their beliefs into policy, form coalitions with people who share them, and compete with coalitions who don’t.
  • Dealing with complexity. They engage in ‘trial-and-error strategies’ to deal with uncertain and dynamic environments (see the new section on trial-and-error at the end).
  • Framing and narratives. Policy audiences are vulnerable to manipulation when they rely on other actors to help them understand the world. People tell simple stories to persuade their audience to see a policy problem and its solution in a particular way.
  • The social construction of populations. Policymakers draw on quick emotional judgements, and social stereotypes, to propose benefits to some target populations and punishments for others.
  • Rules and norms. Institutions are the formal rules and informal understandings that represent a way to narrow information searches efficiently to make choices quickly.
  • Learning. Policy learning is a political process in which actors engage selectively with information, not a rational search for truth.

Evidence-based or expert-informed policymaking

Put simply, policymakers cannot oversee a simple process of ‘evidence-based policymaking’. Rather, to all intents and purposes:

  1. They need to find ways to ignore most evidence so that they can focus disproportionately on some. Otherwise, they will be unable to focus well enough to make choices. The cognitive and organisational shortcuts, described above, help them do it almost instantly.
  2. They also use their experience to help them decide – often very quickly – what evidence is policy-relevant under the circumstances. Relevance can include:
  • How it relates to the policy problem as they define it (Step 1).
  • If it relates to a feasible solution (Step 2).
  • If it is timely, available, understandable, and actionable.
  • If it seems credible, such as from groups representing wider populations, or from people they trust.
  3. They use a specific shortcut: relying on expertise.

However, the vague idea of trusting or not trusting experts is a nonsense, largely because it is virtually impossible to set a clear boundary between relevant/irrelevant experts and find a huge consensus on (exactly) what is happening and what to do. Instead, in political systems, we define the policy problem or find other ways to identify the most relevant expertise and exclude other sources of knowledge.

In the UK Government’s case, it appears to be relying primarily on expertise from its own general scientific advisers, medical and public health advisers, and – perhaps more controversially – advisers on behavioural public policy.

[Image: box 7.1]

Right now, it is difficult to tell exactly how and why it relies on each expert (at least when the expert is not in a clearly defined role, in which case it would be irresponsible not to consider their advice). Further, there are regular calls on Twitter for ministers to be more open about their decisions.

See also: Coronavirus: do governments ever truly listen to ‘the science’?

However, don’t underestimate the problems of identifying why we make choices, then justifying one expert or another (while avoiding pointless arguments), or prioritising one form of advice over another. Look, for example, at the kind of short-cuts that intelligent people use, which seem sensible enough, but would receive much more intense scrutiny if presented in this way by governments:

  • Sophisticated speculation by experts in a particular field, shared widely (look at the RTs), but questioned by other experts in another field:
  • Experts in one field trusting certain experts in another field based on personal or professional interaction:
  • Experts in one field not trusting a government’s approach based on its use of one (of many) sources of advice:
  • Experts representing a community of experts, criticising another expert (Prof John Ashton), for misrepresenting the amount of expert scepticism of government experts (yes, I am trying to confuse you):
  • Expert debate on how well policymakers are making policy based on expert advice
  • Finding quite-sensible ways to trust certain experts over others, such as because they can be held to account in some way (and may be relatively worried about saying any old shit on the internet):

There are many more examples in which the shortcut to expertise is fine, but not particularly better than another shortcut (and likely to include a disproportionately high number of white men with STEM backgrounds).

Update: of course, they are better than the ‘volume trumps expertise’ approach:


Further, in each case, we may be receiving this expert advice via many other people, and by the time it gets to us the meaning is lost or reversed (or there is some really sophisticated expert analysis of something rumoured – not demonstrated – to be true):

For what it’s worth, I tend to favour experts who:

(a) establish the boundaries of their knowledge, (b) admit to high uncertainty about the overall problem:

(c) (in this case) make it clear that they are working on scenarios, not simple prediction

(d) examine critically the too-simple ideas that float around, such as the idea that the UK Government should emulate ‘what works’ somewhere else

(e) situate their own position (in Prof Sridhar’s case, for mass testing) within a broader debate


See also: Prof Sir John Bell (4.3.20) on why an accurate antibody test is at least one month away and these exchanges on the problems with test ‘accuracy’:

(f) use their expertise on governance to highlight problems with thoughtless criticism

However, note that most of these experts are from a very narrow social background, and from very narrow scientific fields (first in modelling, then likely in testing), despite the policy problem being largely about (a) who, and how many people, a government should try to save, and (b) how far a government should go to change behaviour to do it (Update 2.4.20: I wrote that paragraph before adding so many people to the list). It is understandable to defer in this way during a crisis, but it also contributes to a form of ‘depoliticisation’ that masks profound choices that benefit some people and leave others vulnerable to harm.

See also: COVID-19: a living systematic map of the evidence

See also: To what extent does evidence support decision making during infectious disease outbreaks? A scoping literature review

See also: Covid-19: why is the UK government ignoring WHO’s advice? (British Medical Journal editorial)

See also: Coronavirus: just 2,000 NHS frontline workers tested so far

See also: ‘What’s important is social distancing’ coronavirus testing ‘is a side issue’, says Deputy Chief Medical Officer [Professor Jonathan Van-Tam talks about the important distinction between a currently available test to see if someone has contracted the virus (an antigen test) and a forthcoming test to see if someone has had and recovered from COVID-19 (an antibody test)]. The full interview is here (please feel free to ignore the editorialising of the uploader):

See also: Why is Germany able to test for coronavirus so much more than the UK? (mostly a focus on Germany’s innovation, and partly on the UK (Public Health England) focus on making sure its test is reliable, in the context of ‘coronavirus tests produced at great speed which have later proven to be inaccurate’, such as one with a below-30% accuracy rate, which is worse than not testing at all). Compare with The Coronavirus Hit Germany And The UK Just Days Apart But The Countries Have Responded Differently. Here’s How and the opinion piece ‘A public inquiry into the UK’s coronavirus response would find a litany of failures’.

See also: Rights and responsibilities in the Coronavirus pandemic

See also: UK police warned against ‘overreach’ in use of virus lockdown powers (although note that there is no UK police force and that Scotland has its own legal system) and Coronavirus: extra police powers risk undermining public trust (Alex Oaten and Chris Allen)

See also (Calderwood resigned as CMO that night):

See also: Social Licensing of Privacy-Encroaching Policies to Address the COVID-19 Pandemic (U.K.) (research on public opinion)

The wider policymaking environment: 2. Limited control

Second, policymakers engage in a messy and unpredictable world in which no single ‘centre’ has the power to turn a policy recommendation into an outcome. I normally use the following figure to think through the nature of a complex and unwieldy policymaking environment of which no ‘centre’ of government has full knowledge or control.

image policy process round 2 25.10.18

It helps us identify (further) the ways in which we can reject the idea that the UK Prime Minister and colleagues can fully understand and solve policy problems:

Actors. The environment contains many policymakers and influencers spread across many levels and types of government (‘venues’).

For example, consider how many key decisions (a) have been made by organisations outside UK central government, and (b) are more or less consistent with its advice, including:

  • Devolved governments announcing their own healthcare and public health responses (although the level of UK coordination seems more significant than the level of autonomy).
  • Public sector employers initiating or encouraging at-home working (and many Universities moving quickly from in-person to online teaching)
  • Private organisations cancelling cultural and sporting events.

Context and events. Policy solutions relate to socioeconomic context and events, which can be impossible to ignore and out of the control of policymakers. The coronavirus, and its impact on so many aspects of population health and wellbeing, is an extreme example of this problem.

Networks. Policymakers and influencers operate in subsystems (specialist parts of political systems). They form networks or coalitions built on the exchange of resources, or facilitated by trust underpinned by shared beliefs or previous cooperation.

Institutions. Many different parts of government have practices driven by their own formal and informal rules. Formal rules are often written down or known widely. Informal rules are the unwritten rules, norms, and practices that are difficult to understand, and may not even be understood in the same way by participants.

Ideas. Political actors relate their analysis to shared understandings of the world – how it is, and how it should be – which are often so established as to be taken for granted. These dominant frames of reference establish the boundaries of the political feasibility of policy solutions.

These kinds of insights suggest that most policy decisions are considered, made, and delivered in the name of – but not in the full knowledge of – government ministers.

Trial and error policymaking in complex policymaking systems (17.3.20)

There are many ways to conceptualise this policymaking environment, but few theories provide specific advice on what to do, or how to engage effectively in it. One notable exception is the general advice that comes from complexity theory, including:

  • Law-like behaviour is difficult to identify – so a policy that was successful in one context may not have the same effect in another.
  • Policymaking systems are difficult to control; policymakers should not be surprised when their policy interventions do not have the desired effect.
  • Policymakers in the UK have been too driven by the idea of order, maintaining rigid hierarchies and producing top-down, centrally driven policy strategies. An attachment to performance indicators, to monitor and control local actors, may simply result in policy failure and demoralised policymakers.
  • Policymaking systems or their environments change quickly. Therefore, organisations must adapt quickly and not rely on a single policy strategy.

On this basis, there is a tendency in the literature to encourage the delegation of decision-making to local actors:

  1. Rely less on central government driven targets, in favour of giving local organisations more freedom to learn from their experience and adapt to their rapidly-changing environment.
  2. To deal with uncertainty and change, encourage trial-and-error projects, or pilots, that can provide lessons, or be adopted or rejected, relatively quickly.
  3. Encourage better ways to deal with alleged failure by treating ‘errors’ as sources of learning (rather than a means to punish organisations) or setting more realistic parameters for success/ failure (although see this example and this comment).
  4. Encourage a greater understanding, within the public sector, of the implications of complex systems and terms such as ‘emergence’ or ‘feedback loops’.

In other words, this literature, when applied to policymaking, tends to encourage a movement from centrally driven targets and performance indicators towards a more flexible understanding of rules and targets by local actors who are more able to understand and adapt to rapidly-changing local circumstances.

[See also: Complex systems and systems thinking]

Now, just imagine the UK Government taking that advice right now. I think it is fair to say that it would be condemned continuously (even more so than right now). Maybe that is because it is the wrong way to make policy in times of crisis. Maybe it is because too few people are willing and able to accept that the role of a small group of people at the centre of government is necessarily limited, and that effective policymaking requires trial-and-error rather than a single, fixed, grand strategy to be communicated to the public. The former highlights policy that changes with new information and perspective. The latter highlights errors of judgement, incompetence, and U-turns. In either case, the advice is changing as estimates of the coronavirus’ impact change:

I think this tension, in the way that we understand UK government, helps explain some of the criticism that it faces when changing its advice to reflect changes in its data or advice. This criticism becomes intense when people also question the competence or motives of ministers (and even people reporting the news) more generally, leading to criticism that ranges from mild to outrageous:

For me, this casual reference to a government policy to ‘cull the herd of the weak’ is outrageous, but you can find much worse on Twitter. It reflects wider debate on whether ‘herd immunity’ is or is not government policy. Much of it relates to interpretation of government statements, based on levels of trust/distrust in the UK Government, its Prime Minister and Secretaries of State, and the Prime Minister’s special adviser.

However, I think that some of it is also about:

1. Wilful misinterpretation (particularly on Twitter). For example, in the early development and communication of policy, Boris Johnson was accused (in an irresponsibly misleading way) of advocating for herd immunity rather than restrictive measures.

See: Here is the transcript of what Boris Johnson said on This Morning about the new coronavirus (Full Fact)

[Image: full fact coronavirus]

Below is one of the most misleading videos of its type. Look at how it cuts each segment into a narrative not provided by ministers or their advisors (see also this stinker):


2. The accentuation of a message not being emphasised by government spokespeople.

See for example this interview, described by Sky News (13.3.20) as: The government’s chief scientific adviser Sir Patrick Vallance has told Sky News that about 60% of people will need to become infected with coronavirus in order for the UK to enjoy “herd immunity”. You might be forgiven for thinking that he was on Sky extolling the virtues of a strategy to that end (and expressing sincere concerns on that basis). This was certainly the write-up in respected papers like the FT (UK’s chief scientific adviser defends ‘herd immunity’ strategy for coronavirus). Yet, he was saying nothing of the sort. Rather, when prompted, he discussed herd immunity in relation to the belief that COVID-19 will endure long enough to become as common as seasonal flu.

The same goes for Vallance’s interview on the same day (13.3.20) during Radio 4’s Today programme (transcribed by the Spectator, which calls Vallance the author, and gives it the headline How ‘herd immunity’ can help fight coronavirus – as if it is his main message). The Today Programme also tweeted only 30 seconds to single out that brief exchange:

Yet, clearly his overall message – in this and other interviews – was that some interventions (e.g. staying at home; self-isolating with symptoms) would have bigger effects than others (e.g. school closures; prohibiting mass gatherings) during the ‘flattening of the peak’ strategy (‘What we don’t want is everybody to end up getting it in a short period of time so that we swamp and overwhelm NHS services’). Rather than describing ‘herd immunity’ as a strategy, he is really describing how to deal with its inevitability (‘Well, I think that we will end up with a number of people getting it’).

See also: British government wants UK to acquire coronavirus ‘herd immunity’, writes Robert Peston (12.3.20) and live debates (and reports grasping at straws) on whether or not ‘herd immunity’ was the goal of the UK government:

See also: Why weren’t we ready? (Harry Lambert) which is a good exemplar of the ‘U turn’ argument, and compare with the evidence to the Health and Social Care Committee (CMO Whitty, DCMO Harries) that it describes.

A more careful forensic analysis (such as this one) will try to relate each government choice to the ways in which key advisory bodies (such as the New and Emerging Respiratory Virus Threats Advisory Group, NERVTAG) received and described evidence on the current nature of the problem:

See also: Special Report: Johnson listened to his scientists about coronavirus – but they were slow to sound the alarm (Reuters)

Some aspects may also be clearer when there is systematic qualitative interview data on which to draw. Right now, there are bits and pieces of interviews sandwiched between whopping great editorial discussions (e.g. FT Alphaville Imperial’s Neil Ferguson: “We don’t have a clear exit strategy”; compare with the more useful Let’s flatten the coronavirus confusion curve) or confused accounts by people speaking to someone who has spoken to someone else (e.g. Buzzfeed Even The US Is Doing More Coronavirus Tests Than The UK. Here Are The Reasons Why).

See also: other rabbit holes are available

[OK, that proved to be a big departure from the trial-and-error discussion. Here we are, back again]

In some cases, maybe people are making the argument that trial-and-error is the best way to respond quickly, and adapt quickly, in a crisis, but that the UK Government version is not what, say, the WHO thinks of as a good kind of adaptive response. It is not possible to tell, at least from the general ways in which they justify acting quickly.

See also the BBC’s provocative question (which I expect to be replaced soon):

Compare with:

The take-home messages

  1. The coronavirus is an extreme example of a general situation: policymakers will always have very limited knowledge of policy problems and control over their policymaking environment. They make choices to frame problems narrowly enough to seem solvable, rule out most solutions as not feasible, make value judgements to try to help some more than others, try to predict the results, and respond when the results do not match their hopes or expectations.
  2. This is not a message of doom and despair. Rather, it encourages us to think about how to influence government, and hold policymakers to account, in a thoughtful and systematic way that does not mislead the public or exacerbate the problem we are seeing.

Further reading, until I can think of a better conclusion:

This series of ‘750 words’ posts summarises key texts in policy analysis and tries to situate policy analysis in a wider political and policymaking context. Note the focus on whose knowledge counts, which is not yet a big feature of this crisis.

These series of 500 words and 1000 words posts (with podcasts) summarise concepts and theories in policy studies.

This page on evidence-based policymaking (EBPM) uses those insights to demonstrate why EBPM is a political slogan rather than a realistic expectation.

These recorded talks relate those insights to common questions asked by researchers: why do policymakers seem to ignore my evidence, and what can I do about it? I’m happy to record more (such as on the topic you just read about) but not entirely sure who would want to hear what.

See also: Advisers, Governments and why blunders happen? (Colin Talbot)

See also: Why we might disagree about … Covid-19 (Ruth Dixon and Christopher Hood)

See also: Pandemic Science and Politics (Daniel Sarewitz)

See also: We knew this would happen. So why weren’t we ready? (Steve Bloomfield)

See also: Europe’s coronavirus lockdown measures compared (Politico)



Filed under 750 word policy analysis, agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, POLU9UK, Prevention policy, Psychology Based Policy Studies, Public health, public policy, Social change, UK politics and policy

Policy Analysis in 750 words: Deborah Stone (2012) Policy Paradox

Please see the Policy Analysis in 750 words series overview before reading the summary. This post is 750 words plus a bonus 750 words plus some further reading that doesn’t count in the word count even though it does.


Deborah Stone (2012) Policy Paradox: The Art of Political Decision Making 3rd edition (Norton)

‘Whether you are a policy analyst, a policy researcher, a policy advocate, a policy maker, or an engaged citizen, my hope for Policy Paradox is that it helps you to go beyond your job description and the tasks you are given – to think hard about your own core values, to deliberate with others, and to make the world a better place’ (Stone, 2012: 15)

Stone (2012: 379-85) rejects the image of policy analysis as a ‘rationalist’ project, driven by scientific and technical rules, and separable from politics. Rather, every policy analyst’s choice is a political choice – to define a problem and solution, and in doing so choosing how to categorise people and behaviour – backed by strategic persuasion and storytelling.

The Policy Paradox: people entertain multiple, contradictory, beliefs and aims

Stone (2012: 2-3) describes the ways in which policy actors compete to define policy problems and public policy responses. The ‘paradox’ is that it is possible to define the same policies in contradictory ways.

‘Paradoxes are nothing but trouble. They violate the most elementary principle of logic: something can’t be two different things at once. Two contradictory interpretations can’t both be true. A paradox is just such an impossible situation, and political life is full of them’ (Stone, 2012: 2).

This paradox does not refer simply to a competition between different actors to define policy problems and the success or failure of solutions. Rather:

  • The same actor can entertain very different ways to understand problems, and can juggle many criteria to decide that a policy outcome was a success and a failure (2012: 3).
  • Surveys of the same population can report contradictory views – encouraging a specific policy response and its complete opposite – when asked different questions in the same poll (2012: 4; compare with Riker).

Policy analysts: you don’t solve the Policy Paradox with a ‘rationality project’

Like many posts in this series (Smith, Bacchi, Hindess), Stone (2012: 9-11) rejects the misguided notion of objective scientists using scientific methods to produce one correct answer (compare with Spiegelhalter and Weimer & Vining). A policy paradox cannot be solved by ‘rational, analytical, and scientific methods’ because contradictory interpretations are a persistent feature of political life, not an error that better analysis can correct.

Further, Stone (2012: 10-11) rejects the over-reliance, in policy analysis, on the misleading claims that:

  • policymakers are engaging primarily with markets rather than communities (see 2012: 35 on the comparison between a ‘market model’ and ‘polis model’),
  • economic models can sum up political life, and
  • cost-benefit-analysis can reduce a complex problem into the sum of individual preferences using a single unambiguous measure.

Rather, many factors undermine such simplicity:

  1. People do not simply act in their own individual interest. Nor can they rank-order their preferences in a straightforward manner according to their values and self-interest.
  • Instead, they maintain a contradictory mix of objectives, which can change according to context and their way of thinking – combining cognition and emotion – when processing information (2012: 12; 30-4).
  2. People are social actors. Politics is characterised by ‘a model of community where individuals live in a dense web of relationships, dependencies, and loyalties’ and exercise power with reference to ideas as much as material interests (2012: 10; 20-36; compare with Ostrom, more Ostrom, and Lubell).
  3. Morals and emotions matter. If people juggle contradictory aims and measures of success, then a story infused with ‘metaphor and analogy’, and appealing to values and emotions, prompts people ‘to see a situation as one thing rather than another’ and therefore draw attention to one aim at the expense of the others (2012: 11; compare with Gigerenzer).

Policy analysis reconsidered: the ambiguity of values and policy goals

Stone (2012: 14) identifies the ambiguity of the criteria for success used in 5-step policy analyses. They do not form part of a solely technical or apolitical process to identify trade-offs between well-defined goals (compare Bardach, Weimer and Vining, and Mintrom). Rather, ‘behind every policy issue lurks a contest over conflicting, though equally plausible, conceptions of the same abstract goal or value’ (2012: 14). Examples of competing interpretations of valence issues include definitions of:

  1. Equity, according to: (a) which groups should be included, how to assess merit, how to identify key social groups, if we should rank populations within social groups, how to define need and account for different people placing different values on a good or service, (b) which method of distribution to use (competition, lottery, election), and (c) how to balance individual, communal, and state-based interventions (2012: 39-62).
  2. Efficiency, to use the least resources to produce the same objective, according to: (a) who determines the main goal and how to balance multiple objectives, (b) who benefits from such actions, and (c) how to define resources while balancing equity and efficiency – for example, do a public sector job and a social security payment represent a sunk cost to the state or a social investment in people? (2012: 63-84).
  3. Welfare or Need, according to factors including (a) the material and symbolic value of goods, (b) short term support versus a long term investment in people, (c) measures of absolute poverty or relative inequality, and (d) debates on ‘moral hazard’ or the effect of social security on individual motivation (2012: 85-106).
  4. Liberty, according to (a) a general balancing of freedom from coercion and freedom from the harm caused by others, (b) debates on individual and state responsibilities, and (c) decisions on whose behaviour to change to reduce harm to what populations (2012: 107-28)
  5. Security, according to (a) our ability to measure risk scientifically (see Spiegelhalter and Gigerenzer), (b) perceptions of threat and experiences of harm, (c) debates on how much risk to safety to tolerate before intervening, (d) who to target and imprison, and (e) the effect of surveillance on perceptions of democracy (2012: 129-53).

Policy analysis as storytelling for collective action

Actors use policy-relevant stories to influence the ways in which their audience understands (a) the nature of policy problems and feasibility of solutions, within (b) a wider context of policymaking in which people contest the proper balance between state, community, and market action. Stories can influence key aspects of collective action, including:

  1. Defining interests and mobilising actors, by drawing attention to – and framing – issues with reference to an imagined social group and its competition (e.g. the people versus the elite; the strivers versus the skivers) (2012: 229-47)
  2. Making decisions, by framing problems and solutions (2012: 248-68). Stone (2012: 260) contrasts the ‘rational-analytic model’ with real-world processes in which actors deliberately frame issues ambiguously, shift goals, keep feasible solutions off the agenda, and manipulate analyses to make their preferred solution seem the most efficient and popular.
  3. Defining the role and intended impact of policies, such as when balancing punishments versus incentives to change behaviour, or individual versus collective behaviour (2012: 271-88).
  4. Setting and enforcing rules (see institutions), in a complex policymaking system where a multiplicity of rules interact to produce uncertain outcomes, and a powerful narrative can draw attention to the need to enforce some rules at the expense of others (2012: 289-310).
  5. Persuasion, drawing on reason, facts, and indoctrination. Stone (2012: 311-30) highlights the context in which actors construct stories to persuade: people engage emotionally with information, people take certain situations for granted even though they produce unequal outcomes, facts are socially constructed, and there is unequal access to resources – held in particular by government and business – to gather and disseminate evidence.
  6. Defining human and legal rights, when (a) there are multiple, ambiguous, and intersecting rights (in relation to their source, enforcement, and the populations they serve) (b) actors compete to make sure that theirs are enforced, (c) inevitably at the expense of others, because the enforcement of rights requires a disproportionate share of limited resources (such as policymaker attention and court time) (2012: 331-53)
  7. Influencing debate on the powers of each potential policymaking venue – in relation to factors including (a) the legitimate role of the state in market, community, family, and individual life, (b) how to select leaders, (c) the distribution of power between levels and types of government – and who to hold to account for policy outcomes (2012: 354-77).

Key elements of storytelling include:

  1. Symbols, which sum up an issue or an action in a single picture or word (2012:157-8)
  2. Characters, such as heroes or villains, who symbolise the cause of a problem or source of solution (2012:159)
  3. Narrative arcs, such as a battle by your hero to overcome adversity (2012:160-8)
  4. Synecdoche, to highlight one example of an alleged problem to sum up its whole (2012: 168-71; compare the ‘welfare queen’ example with SCPD)
  5. Metaphor, to create an association between a problem and something relatable, such as a virus or disease, a natural occurrence (e.g. earthquake), something broken, something about to burst if overburdened, or war (2012: 171-78; e.g. is crime a virus or a beast?)
  6. Ambiguity, to give people different reasons to support the same thing (2012: 178-82)
  7. Using numbers to tell a story, based on political choices about how to: categorise people and practices, select the measures to use, interpret the figures to evaluate or predict the results, project the sense that complex problems can be reduced to numbers, and assign authority to the counters (2012:183-205; compare with Spiegelhalter)
  8. Assigning Causation, in relation to categories including accidental or natural, ‘mechanical’ or automatic (or in relation to institutions or systems), and human-guided causes that have intended or unintended consequences (such as malicious intent versus recklessness)
  • ‘Causal strategies’ include to: emphasise a natural versus human cause, relate it to ‘bad apples’ rather than systemic failure, and suggest that the problem was too complex to anticipate or influence
  • Actors use these arguments to influence rules, assign blame, identify ‘fixers’, and generate alliances among victims or potential supporters of change (2012: 206-28).

Wider Context and Further Reading: 1. Policy analysis

This post connects to several other 750 Words posts, which suggest that facts don’t speak for themselves. Rather, effective analysis requires you to ‘tell your story’, in a concise way, tailored to your audience.

For example, consider two ways to establish cause and effect in policy analysis:

One is to conduct and review multiple randomised control trials.

Another is to use a story of a hero or a villain (perhaps to mobilise actors in an advocacy coalition).

  2. Evidence-based policymaking

Stone (2012: 10) argues that analysts who try to impose one worldview on policymaking will find that ‘politics looks messy, foolish, erratic, and inexplicable’. For analysts, who are more open-minded, politics opens up possibilities for creativity and cooperation (2012: 10).

This point is directly applicable to the ‘politics of evidence based policymaking’. A common question to arise from this worldview is ‘why don’t policymakers listen to my evidence?’ and one answer is ‘you are asking the wrong question’.

  3. Policy theories highlight the value of stories (to policy analysts and academics)

Policy problems and solutions necessarily involve ambiguity:

  1. There are many ways to interpret problems, and we resolve such ambiguity by exercising power to attract attention to one way to frame a policy problem at the expense of others (in other words, not with reference to one superior way to establish knowledge).
  2. Policy is actually a collection of – often contradictory – policy instruments and institutions, interacting in complex systems or environments, to produce unclear messages and outcomes. As such, what we call ‘public policy’ (for the sake of simplicity) is subject to interpretation and manipulation as it is made and delivered, and we struggle to conceptualise and measure policy change. Indeed, it makes more sense to describe competing narratives of policy change.

[Image: Box 13.1, Understanding Public Policy, 2nd edition]

  4. Policy theories and storytelling

People communicate meaning via stories. Stories help us turn (a) a complex world, which provides a potentially overwhelming amount of information, into (b) something manageable, by identifying its most relevant elements and guiding action (compare with Gigerenzer on heuristics).

The Narrative Policy Framework identifies the storytelling strategies of actors seeking to exploit other actors’ cognitive shortcuts, using a particular format – containing the setting, characters, plot, and moral – to focus on some beliefs over others, and reinforce someone’s beliefs enough to encourage them to act.

Compare with Tuckett and Nicolic on the stories that people tell to themselves.

 


Filed under 750 word policy analysis, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling

Evidence-informed policymaking: context is everything

I thank James Georgalakis for inviting me to speak at the inaugural event of IDS’ new Evidence into Policy and Practice Series, and the audience for giving extra meaning to my story about the politics of ‘evidence-based policymaking’. The talk (using powerpoint) and Q&A are here:

 

James invited me to respond to some of the challenges raised to my talk – in his summary of the event – so here it is.

I’m working on a ‘show, don’t tell’ approach, leaving some of the story open to interpretation. As a result, much of the meaning of this story – and, in particular, the focus on limiting participation – depends on the audience.

For example, consider the impact of the same story on audiences primarily focused on (a) scientific evidence and policy, or (b) participation and power.

Normally, when I talk about evidence and policy, my audience is mostly people with scientific or public health backgrounds asking ‘why do policymakers ignore scientific evidence?’ I am usually invited to ruffle feathers, mostly by challenging a – remarkably prevalent – narrative that goes like this:

  • We know what the best evidence is, since we have produced it with the best research methods (the ‘hierarchy of evidence’ argument).
  • We have evidence on the nature of the problem and the most effective solutions (the ‘what works’ argument).
  • Policymakers seem to be ignoring our evidence or failing to act proportionately (the ‘evidence-policy barriers’ argument).
  • Or, they cherry-pick evidence to suit their agenda (the ‘policy based evidence’ argument).

In that context, I suggest that there are many claims to policy-relevant knowledge, policymakers have to ignore most information before making choices, and they are not in control of the policy process of which they are ostensibly in charge.

Limiting participation as a strategic aim

Then, I say to my audience that – if they are truly committed to maximising the use of scientific evidence in policy – they will need to consider how far they will go to get what they want. I use the metaphor of an ethical ladder in which each rung offers more influence in exchange for dirtier hands: tell stories and wait for opportunities, or demonise your opponents, limit participation, and humour politicians when they cherry-pick to reinforce emotional choices.

It’s ‘show don’t tell’ but I hope that the take-home point for most of the audience is that they shouldn’t focus so much on one aim – maximising the use of scientific evidence – to the detriment of other important aims, such as wider participation in politics beyond a reliance on a small number of experts. I say ‘keep your eyes on the prize’ but invite the audience to reflect on which prizes they should seek, and the trade-offs between them.

Limited participation – and ‘windows of opportunity’ – as an empirical finding


I did suggest that most policymaking happens away from the sphere of ‘exciting’ and ‘unruly’ politics. Put simply, people have to ignore almost every issue almost all of the time. Each time they focus their attention on one major issue, they must – by necessity – ignore almost all of the others.

For me, the political science story is largely about the pervasiveness of policy communities and policymaking out of the public spotlight.

The logic is as follows. Elected policymakers can only pay attention to a tiny proportion of their responsibilities. They delegate the rest to bureaucrats at lower levels of government. Bureaucrats lack specialist knowledge, and rely on other actors for information and advice. Those actors trade information for access. In many cases, they develop effective relationships based on trust and a shared understanding of the policy problem.

Trust often comes from a sense that everyone has proven to be reliable. For example, they follow norms or the ‘rules of the game’. One classic rule is to contain disputes within the policy community when actors don’t get what they want: if you complain in public, you draw external attention and internal disapproval; if not, you are more likely to get what you want next time.

For me, this is key context in which to describe common strategic concerns:

  • Should you wait for a ‘window of opportunity’ for policy change? Maybe. Or, maybe it will never come because policymaking is largely insulated from view and very few issues reach the top of the policy agenda.
  • Should you juggle insider and outsider strategies? Yes, some groups seem to do it well and it is possible for governments and groups to be in a major standoff in one field but close contact in another. However, each group must consider why they would do so, and the trade-offs between each strategy. For example, groups excluded from one venue may engage (perhaps successfully) in ‘venue shopping’ to get attention from another. Or, they become discredited within many venues if seen as too zealous and unwilling to compromise. Insider/outsider may seem like a false dichotomy to experienced and well-resourced groups, who engage continuously, and are able to experiment with many approaches and use trial-and-error learning. It is a more pressing choice for actors who may have only one chance to get it right and do not know what to expect.

Where is the power analysis in all of this?


I rarely use the word power directly, partly because – like ‘politics’ or ‘democracy’ – it is an ambiguous term with many interpretations (see Box 3.1). People often use it without agreeing its meaning and, if it means everything, maybe it means nothing.

However, you can find many aspects of power within our discussion. For example, insider and outsider strategies relate closely to Schattschneider’s classic discussion in which powerful groups try to ‘privatise’ issues and less powerful groups try to ‘socialise’ them. Agenda setting is about using resources to make sure issues do, or do not, reach the top of the policy agenda, and most do not.

These aspects of power sometimes play out in public, when:

  • Actors engage in politics to turn their beliefs into policy. They form coalitions with actors who share their beliefs, and often romanticise their own cause and demonise their opponents.
  • Actors mobilise their resources to encourage policymakers to prioritise some forms of knowledge or evidence over others (such as by valuing scientific evidence over experiential knowledge).
  • They compete to identify the issues most worthy of our attention, telling stories to frame or define policy problems in ways that generate demand for their evidence.

However, they are no less important when they play out routinely:

  • Governments have standard operating procedures – or institutions – to prioritise some forms of evidence and some issues routinely.
  • Many policy networks operate routinely with few active members.
  • Certain ideas, or ways of understanding the world and the nature of policy problems within it, become so dominant that they are unspoken and taken for granted as deeply held beliefs. Still, they constrain or facilitate the success of new ‘evidence based’ policy solutions.

In other words, the word ‘power’ is often hidden because the most profound forms of power often seem to be hidden.

In the context of our discussion, power comes from the ability to define some evidence as essential and other evidence as low quality or irrelevant, and therefore define some people as essential or irrelevant. It comes from defining some issues as exciting and worthy of our attention, or humdrum, specialist and only relevant to experts. It is about the subtle, unseen, and sometimes thoughtless ways in which we exercise power to harness people’s existing beliefs and dominate their attention as much as the transparent ways in which we mobilise resources to publicise issues. Therefore, to ‘maximise the use of evidence’ sounds like an innocuous collective endeavour, but it is a highly political and often hidden use of power.


I discussed these issues at a storytelling workshop organised by the OSF.

See also:

Policy in 500 Words: Power and Knowledge

The politics of evidence-based policymaking

Palgrave Communications: The politics of evidence-based policymaking

Using evidence to influence policy: Oxfam’s experience

The UK government’s imaginative use of evidence to make policy

 


Filed under agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, Psychology Based Policy Studies, public policy, Storytelling

Evidence-based policymaking and the ‘new policy sciences’


[I wasn’t happy with the first version, so this is version 2, to be enjoyed with the ppt and MP3]

In the ‘new policy sciences’, Chris Weible and I advocate:

  • a return to Lasswell’s vision of combining policy analysis (to recommend policy change) and policy theory (to explain policy change), but
  • focusing on a far larger collection of actors (beyond a small group at the centre),
  • recognising new developments in studies of the psychology of policymaker choice, and
  • building into policy analysis the recognition that any policy solution is introduced in a complex policymaking environment over which no-one has control.

However, there is a lot of policy theory out there, and we can’t put policy theory together like Lego to produce consistent insights to inform policy analysis.

Rather, each concept in my image of the policy process represents its own literature: see these short explainers on the psychology of policymaking, actors spread across multi-level governance, institutions, networks, ideas, and socioeconomic factors/ events.

What the explainers don’t really project is the sense of debate within the literature about how best to conceptualise each concept. You can pick up their meaning in a few minutes but would need a few years to appreciate the detail and often-fundamental debate.

Ideally, we would put all of the concepts together to help explain policymaker choice within a complex policymaking environment (how else could I put up the image and present it as one source of accumulated wisdom from policy studies?). Peter John describes such accounts as ‘synthetic’. I have also co-authored work with Tanya Heikkila – in 2014 and 2017 – to compare the different ways in which ‘synthetic’ theories conceptualise the policy process.

However, note the difficulty of putting together a large collection of separate and diverse literatures into one simple model (e.g. while doing a PhD).

On that basis, I’d encourage you to think of these attempts to synthesise as stories. I tell these stories a lot, but someone else could describe theory very differently (perhaps by relying on fewer male authors, or fewer US-derived theories in which there are very specific reference points and positivism is represented well).

The example of EBPM

I have given a series of talks to explain why we should think of ‘evidence-based policymaking’ as a myth or political slogan, not an ideal scenario or something to expect from policymaking in the real world. They usually involve encouraging framing and storytelling rather than expecting evidence to speak for itself, and rejecting the value of simple models like the policy cycle. I then put up an image of my own and encourage people to think about the implications of each concept:

[Slide: simple advice from the policy process image, 24.10.18]

I describe the advice as simple-sounding and feasible at first glance, but actually a series of Herculean* tasks:

  • There are many policymakers and influencers spread across government, so find out where the action is, or the key venues in which people are making authoritative decisions.
  • Each venue has its own ‘institutions’ – the formal and written, or informal and unwritten rules of policymaking – so learn the rules of each venue in which you engage.
  • Each venue is guided by a fundamental set of ideas – as paradigms, core beliefs, monopolies of understanding – so learn that language.
  • Each venue has its own networks – the relationships between policy makers and influencers – so build trust and form alliances within networks.
  • Policymaking attention is often driven by changes in socioeconomic factors, or routine/ non-routine events, so be prepared to exploit the ‘windows of opportunity’ to present your solution during heightened attention to a policy problem.

Further, policy theories/ studies help us understand the context in which people make such choices. For example, consider the story that Kathryn Oliver and I tell about the role of evidence in policymaking environments:

If there are so many potential authoritative venues, devote considerable energy to finding where the ‘action’ is (and someone specific to talk to). Even if you find the right venue, you will not know the unwritten rules unless you study them intensely. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour some sources of evidence. Research advocates can be privileged insiders in some venues and excluded completely in others. If your evidence challenges an existing paradigm, you need a persuasion strategy good enough to prompt a shift of attention to a policy problem and a willingness to understand that problem in a new way. You can try to find the right time to use evidence to exploit a crisis leading to major policy change, but the opportunities are few and chances of success low.  In that context, policy studies recommend investing your time over the long term – to build up alliances, trust in the messenger, knowledge of the system, and to seek ‘windows of opportunity’ for policy change – but offer no assurances that any of this investment will ever pay off

As described, this focus on the new policy sciences and synthesising insights helps explain why ‘the politics of evidence-based policymaking’ is equally important to civil servants (my occasional audience) as researchers (my usual audience).

To engage in skilled policy analysis, and give good advice, is to recognise the ways in which policymakers combine cognition/emotion to engage with evidence, and must navigate a complex policymaking environment when designing or selecting technically and politically feasible solutions. To give good advice is to recognise what you want policymakers to do, but also that they are not in control of the consequences.

From one story to many?

However, I tell these stories without my audience having the time to look further into each theory and its individual insights. If they have a little more time, I go into the possible contribution of individual insights to the debate.

For example, they adapt insights from psychology in different ways …

  • Punctuated equilibrium theory (PET) shows the overall effect of policymaker psychology on policy change: they combine cognition and emotion to pay disproportionate attention to a small number of issues (contributing to major change) and ignore the rest (contributing to ‘hyperincremental’ change).
  • The institutional analysis and development (IAD) framework focuses partly on the rules and practices that actors develop to build up trust in each other.
  • The advocacy coalition framework (ACF) describes actors going into politics to turn their beliefs into policy, forming coalitions with people who share their beliefs, then often romanticising their own cause and demonising their opponents.
  • The narrative policy framework (NPF) describes the relative impact of stories on audiences who use cognitive shortcuts to (for example) identify with a hero and draw a simple moral.
  • Social construction and policy design (SCPD) describes policymakers drawing on gut feeling to identify good and bad target populations.
  • Policy learning involves using cognition and emotion to acquire new knowledge and skills.

… even though the pace of change in psychological research often seems faster than the ways in which policy studies can incorporate new and reliable insights.

They also present different conceptions of the policymaking environment in which actors make choices. See this post for more on this discussion in relation to EBPM.

My not-brilliant conclusion is that:

  1. Policy theory/ policy studies has a lot to offer other disciplines and professions, particularly in fields like EBPM in which we need to account for politics and, more importantly, policymaking systems, but
  2. Beware any policy theory story that presents the source literature as coherent and consistent.
  3. Rather, any story of the field involves a series of choices about what counts as a good theory and good insight.
  4. In other words, the exhortation to think more about what counts as ‘good evidence’ applies just as much to political science as to any other discipline.

Postscript: well, that is the last of the posts for my ANZOG talks. If I’ve done this properly, there should now be a loop of talks. It should be possible to go back to the first one and see it as a sequel to this one!

Or, for more on theory-informed policy analysis – in other words, where the ‘new policy sciences’ article is taking us – here is how I describe it to students doing a policy analysis paper (often for the first time).

Or, have a look at the earlier discussion of images of the policy process. You may have noticed that there is a different image in this post (knocked up in my shed at the weekend). It’s because I am experimenting with shapes. Does the image with circles look more relaxing? Does the hexagonal structure look complicated even though it is designed to simplify? Does it matter? I think so. People engage emotionally with images. They share them. They remember them. So, I need an image more memorable than the policy cycle.

Paul Cairney Brisbane EBPM New Policy Sciences 25.10.18

*I welcome suggestions on another word to describe almost-impossibly-hard

5 Comments

Filed under agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, Psychology Based Policy Studies, public policy

Evidence-based policymaking and the ‘new policy sciences’

Circle image policy process 24.10.18

I have given a series of talks to explain why we should think of ‘evidence-based policymaking’ as a myth or political slogan, not an ideal scenario or something to expect from policymaking in the real world. They usually involve encouraging framing and storytelling rather than expecting evidence to speak for itself, and rejecting the value of simple models like the policy cycle. I then put up an image of my own and encourage people to think about the implications of each concept:

SLIDE simple advice from hexagon image policy process 24.10.18

I describe the advice as simple-sounding and feasible at first glance, but actually a series of Herculean* tasks:

  • There are many policymakers and influencers spread across government, so find out where the action is, or the key venues in which people are making authoritative decisions.
  • Each venue has its own ‘institutions’ – the formal and written, or informal and unwritten rules of policymaking – so learn the rules of each venue in which you engage.
  • Each venue is guided by a fundamental set of ideas – as paradigms, core beliefs, monopolies of understanding – so learn that language.
  • Each venue has its own networks – the relationships between policy makers and influencers – so build trust and form alliances within networks.
  • Policymaking attention is often driven by changes in socioeconomic factors, or routine/ non-routine events, so be prepared to exploit the ‘windows of opportunity’ to present your solution during heightened attention to a policy problem.

In most cases, we don’t have time to discuss a more fundamental issue (at least for researchers using policy theory and political science concepts):

From where did these concepts come, and how well do we know them?

To cut a long story short, each concept represents its own literature: see these short explainers on the psychology of policymaking, actors spread across multi-level governance, institutions, networks, ideas, and socioeconomic factors/ events. What the explainers don’t really convey is the sense of debate within the literature about how best to define each concept. You can pick up their meaning in a few minutes but would need a few years to appreciate the detail and often-fundamental debate.

Ideally, we would put all of the concepts together to help explain policymaker choice within a complex policymaking environment (how else could I put up the image and present it as one source of accumulated wisdom from policy studies?). Peter John describes such accounts as ‘synthetic’. I have also co-authored work with Tanya Heikkila – in 2014 and 2017 – to compare the different ways in which ‘synthetic’ theories conceptualise the policy process. However, note the difficulty of putting together a large collection of separate and diverse literatures into one simple model (e.g. while doing a PhD).

The new policy sciences

More recently, in the ‘new policy sciences’, Chris Weible and I present a more provocative story of these efforts, in which we advocate:

  • a return to Lasswell’s vision of combining policy analysis (to recommend policy change) and policy theory (to explain policy change), but
  • focusing on a far larger collection of actors (beyond a small group at the centre),
  • recognising new developments in studies of the psychology of policymaker choice, and
  • building into policy analysis the recognition that any policy solution is introduced in a complex policymaking environment over which no-one has control.

This focus on psychology is not new …

  • PET shows the overall effect of policymaker psychology on policy change: they combine cognition and emotion to pay disproportionate attention to a small number of issues (contributing to major change) and ignore the rest (contributing to ‘hyperincremental’ change).
  • The IAD focuses partly on the rules and practices that actors develop to build up trust in each other.
  • The ACF describes actors going into politics to turn their beliefs into policy, forming coalitions with people who share their beliefs, then often romanticising their own cause and demonising their opponents.
  • The NPF describes the relative impact of stories on audiences who use cognitive shortcuts to (for example) identify with a hero and draw a simple moral.
  • SCPD describes policymakers drawing on gut feeling to identify good and bad target populations.
  • Policy learning involves using cognition and emotion to acquire new knowledge and skills.

… but the pace of change in psychological research often seems faster than the ways in which policy studies can incorporate new and reliable insights.

Perhaps more importantly, policy studies help us understand the context in which people make such choices. For example, consider the story that Kathryn Oliver and I tell about the role of evidence in policymaking environments:

If there are so many potential authoritative venues, devote considerable energy to finding where the ‘action’ is (and someone specific to talk to). Even if you find the right venue, you will not know the unwritten rules unless you study them intensely. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour some sources of evidence. Research advocates can be privileged insiders in some venues and excluded completely in others. If your evidence challenges an existing paradigm, you need a persuasion strategy good enough to prompt a shift of attention to a policy problem and a willingness to understand that problem in a new way. You can try to find the right time to use evidence to exploit a crisis leading to major policy change, but the opportunities are few and the chances of success low. In that context, policy studies recommend investing your time over the long term – to build up alliances, trust in the messenger, knowledge of the system, and to seek ‘windows of opportunity’ for policy change – but offer no assurances that any of this investment will ever pay off.

Then, have a look at this discussion of ‘synthetic’ policy theories, designed to prompt people to consider how far they would go to get their evidence into policy.

Theory-driven policy analysis

As described, this focus on the new policy sciences helps explain why ‘the politics of evidence-based policymaking’ is as important to civil servants (my occasional audience) as to researchers (my usual audience).

To engage in skilled policy analysis, and give good advice, is to recognise that policymakers combine cognition and emotion to engage with evidence, and that they must navigate a complex policymaking environment when designing or selecting technically and politically feasible solutions. To give good advice is to recognise what you want policymakers to do, but also that they are not in control of the consequences.

Epilogue

Well, that is the last of the posts for my ANZOG talks. If I’ve done this properly, there should now be a loop of talks. It should be possible to go back to the first one in Auckland and see it as a sequel to this one in Brisbane!

Or, for more on theory-informed policy analysis – in other words, where the ‘new policy sciences’ article is taking us – here is how I describe it to students doing a policy analysis paper (often for the first time).

Or, have a look at the earlier discussion of images of the policy process. You may have noticed that there is a different image in this post (knocked up in my shed at the weekend). It’s because I am experimenting with shapes. Does the image with circles look more relaxing? Does the hexagonal structure look complicated even though it is designed to simplify? Does it matter? I think so. People engage emotionally with images. They share them. They remember them. So, I need an image more memorable than the policy cycle.

Paul Cairney Brisbane EBPM New Policy Sciences 25.10.18

*I welcome suggestions on another word to describe almost-impossibly-hard

2 Comments

Filed under agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, Psychology Based Policy Studies, public policy, Storytelling

Emotion and reason in politics: the rational/ irrational distinction

In ‘How to communicate effectively with policymakers’, Richard Kwiatkowski and I use the distinction between ‘rational’ and ‘irrational’ cognitive shortcuts ‘provocatively’. I sort of wish we had been more direct, because I have come to realise that:

  1. My attempts to communicate with sarcasm and facial gestures may only ever appeal to a niche audience, and
  2. even if I use scare quotes – around a word like ‘irrational’ – to denote the word’s questionable use, it’s not always clear what I’m questioning, because
  3. you need to know the story behind someone’s discussion to know what they are questioning.*

So, here are some of the reference points I’m using when I tell a story about ‘irrationality’:

1. I’m often invited to be the type of guest speaker who challenges the audience; the audience is usually scientific, and the topic is usually evidence based policymaking.

So, when I say ‘irrational’, I’m speaking to (some) scientists who think of themselves as rational and policymakers as irrational, and use this problematic distinction to complain about policy-based evidence, post-truth politics, and perhaps even the irrationality of voters for Brexit. Action based on this way of thinking would be counterproductive. In that context, I use the word ‘irrational’ as a way into some more nuanced discussions including:

  • all humans combine cognition and emotion to make choices; and,
  • emotions are one of many sources of ‘fast and frugal heuristics’ that help us make some decisions very quickly and often very well.

In other words, it is silly to complain that some people are irrational, when we are all making choices this way, and such decision-making is often a good thing.

2. This focus on scientific rationality is part of a wider discussion of what counts as good evidence or valuable knowledge. Examples include:

  • Policy debates on the value of bringing together many people with different knowledge claims – such as through user and practitioner experience – to ‘co-produce’ evidence.
  • Wider debates on the ‘decolonization of knowledge’ in which narrow ‘Western’ scientific principles help exclude the voices of many populations by undermining their claims to knowledge.

3. A focus on rationality versus irrationality is still used to maintain sexist and racist caricatures or stereotypes, and therefore dismiss people based on a misrepresentation of their behaviour.

I thought that, by now, we’d be done with dismissing women as emotional or hysterical, but apparently not. Indeed, as some recent racist and sexist coverage of Serena Williams demonstrates, the idea that black women are not rational is still tolerated in mainstream discussion.

4. Part of the reason that we can only conclude that people combine cognition and emotion, without being able to separate their effects in a satisfying way, is that the distinction is problematic.

It is difficult to demonstrate empirically. It is also difficult to assign some behaviours to one camp or the other, such as when we consider moral reasoning based on values and logic.

To sum up, I’ve been using the rational/irrational distinction explicitly to make a simple point that is relevant to the study of politics and policymaking:

  • All people use cognitive shortcuts to help them ignore almost all information about the world, to help them make decisions efficiently.
  • If you don’t understand and act on this simple insight, you’ll waste your time by trying to argue someone into submission or giving them a 500-page irrelevant report when they are looking for one page written in a way that makes sense to them.

Most of the rest has been mostly implicit, and communicated non-verbally, which is great when you want to keep a presentation brief and light, but not if you want to acknowledge nuance and more serious issues.

*which is why I’m increasingly interested in Riker’s idea of heresthetics, in which the starting point of a story is crucial. We can come to very different conclusions about a problem and its solution by choosing different starting points, to accentuate one aspect of a problem and downplay another, even when our beliefs and preferences remain basically the same.

5 Comments

Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling

What can you do when policymakers ignore your evidence?

The first post in this series asks: Why don’t policymakers listen to your evidence? It is based on talks that I have been giving since 2016, mostly to tap into a common story told by people in my audience (and the ‘science community’ more generally) about a new era in politics: policymakers do not pay sufficient respect to expertise or attention to good quality evidence.

It’s not my story, but I think it’s important to respect my audience members enough to (a) try to engage with their question, before (b) inviting them to think differently about how to ask it, and (c) provide different types of solutions according to the changing nature of the question.

Instead of a really long post for (b) and (c), I’ve made it a bit like Ceefax in which you can choose which question to ask or answer:

  • Why don’t policymakers listen to your evidence? (go to page 154)
  • What can you do when policymakers ignore your evidence? Tips from the ‘how to’ literature from the science community (go to page 650)
  • What can you do when policymakers ignore your evidence? Encourage ‘knowledge management for policy’ (go to page 568)
  • How else can we describe and seek to fill the evidence-policy gap? (go to page 400)
  • How far should you go to privilege evidence? 1. Evidence and governance principles (go to page 101)
  • How far should you go to privilege evidence? 2. Policy theories, scenarios, and ethical dilemmas (go to page 526)
  • How far should you go to privilege evidence? 3. Use psychological insights to manipulate policymakers (go to page 300 then scroll down to point 3)

Some of this material will appear in work with Dr Kathryn Oliver and with my co-authors on a forthcoming report for Enlightenment 2.0 

See also:

Evidence-based policymaking: political strategies for scientists living in the real world

The Science of Evidence-based Policymaking: How to Be Heard

Evidence based policymaking: 7 key themes

I also do slides, such as:

Paul Cairney FUSE May 2018 

Paul Cairney Victoria May 2018

The Politics of Evidence-Based Policymaking: ANZSOG talks 

Talks and blogs: ANZSOG and beyond

This is me presenting those slides in Cambridge while being very Scottish, enjoying a too-heavy cold, and sucking a lozenge. Please note that I tend to smile a lot and make many sarcastic jokes while presenting, partly to apologise indirectly for all the self-publicity.

12 Comments

Filed under agenda setting, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy

Why don’t policymakers listen to your evidence?

Since 2016, my most common academic presentation to interdisciplinary scientist/ researcher audiences is a variant of the question, ‘why don’t policymakers listen to your evidence?’

I tend to provide three main answers.

1. Many policymakers have many different ideas about what counts as good evidence

Few policymakers know or care about the criteria developed by some scientists to describe a hierarchy of scientific evidence. For some scientists, at the top of this hierarchy is the randomised control trial (RCT) and the systematic review of RCTs, with expertise much further down the list, followed by practitioner experience and service user feedback near the bottom.

Yet, most policymakers – and many academics – prefer a wider range of sources of information, combining their own experience with information ranging from peer reviewed scientific evidence and the ‘grey’ literature, to public opinion and feedback from consultation.

While it may be possible to persuade some central government departments or agencies to privilege scientific evidence, they also pursue other key principles, such as to foster consensus driven policymaking or a shift from centralist to localist practices.

Consequently, they often only recommend interventions rather than impose one uniform evidence-based position. If local actors favour a different policy solution, we may find that the same type of evidence has more or less effect in different parts of government.

2. Policymakers have to ignore almost all evidence and almost every decision taken in their name

Many scientists articulate the idea that policymakers and scientists should cooperate to use the best evidence to determine ‘what works’ in policy (in forums such as INGSA, European Commission, OECD). Their language is often reminiscent of 1950s discussions of the pursuit of ‘comprehensive rationality’ in policymaking.

The key difference is that EBPM is often described as an ideal by scientists, to be compared with the more disappointing processes they find when they engage in politics. In contrast, ‘comprehensive rationality’ is an ideal-type, used to describe what cannot happen, and the practical implications of that impossibility.

The ideal-type involves a core group of elected policymakers at the ‘top’, identifying their values or the problems they seek to solve, and translating their policies into action to maximise benefits to society, aided by neutral organisations gathering all the facts necessary to produce policy solutions. Yet, in practice, they are unable to: separate values from facts in any meaningful way; rank policy aims in a logical and consistent manner; gather information comprehensively; or possess the cognitive ability to process it.

Instead, Simon famously described policymakers addressing ‘bounded rationality’ by using ‘rules of thumb’ to limit their analysis and produce ‘good enough’ decisions. More recently, punctuated equilibrium theory uses bounded rationality to show that policymakers can only pay attention to a tiny proportion of their responsibilities, which limits their control of the many decisions made in their name.

Other discussions focus on the ‘rational’ shortcuts that policymakers use to identify good enough sources of information, combined with the ‘irrational’ ways in which they use their beliefs, emotions, habits, and familiarity with issues to identify policy problems and solutions (see this post on the meaning of ‘irrational’). Or, they explore how individuals communicate their narrow expertise within a system of which they have almost no knowledge. In each case, ‘most members of the system are not paying attention to most issues most of the time’.

This scarcity of attention helps explain, for example, why policymakers ignore most issues in the absence of a focusing event, why policymaking organisations routinely miss key elements when they search for information, and why organisations fail to respond proportionately to events or changing circumstances.

In that context, attempts to describe a policy agenda focusing merely on ‘what works’ are based on misleading expectations. Rather, we can describe key parts of the policymaking environment – such as institutions, policy communities/ networks, or paradigms – as a reflection of the ways in which policymakers deal with their bounded rationality and lack of control of the policy process.

3. Policymakers do not control the policy process (in the way that a policy cycle suggests)

Scientists often appear to be drawn to the idea of a linear and orderly policy cycle with discrete stages – such as agenda setting, policy formulation, legitimation, implementation, evaluation, policy maintenance/ succession/ termination – because it offers a simple and appealing model which gives clear advice on how to engage.

Indeed, the stages approach began partly as a proposal to make the policy process more scientific and based on systematic policy analysis. It offers an idea of how policy should be made: elected policymakers in central government, aided by expert policy analysts, make and legitimise choices; skilful public servants carry them out; and, policy analysts assess the results with the aid of scientific evidence.

Yet, few policy theories describe this cycle as useful, while most – including the advocacy coalition framework and the multiple streams approach – are based on a rejection of the explanatory value of orderly stages.

Policy theories also suggest that the cycle provides misleading practical advice: you will generally not find an orderly process with a clearly defined debate on problem definition, a single moment of authoritative choice, and a clear chance to use scientific evidence to evaluate policy before deciding whether or not to continue. Instead, the cycle exists as a story for policymakers to tell about their work, partly because it is consistent with the idea of elected policymakers being in charge and accountable.

Some scholars also question the appropriateness of a stages ideal, since it suggests that there should be a core group of policymakers making policy from the ‘top down’ and obliging others to carry out their aims, which does not leave room for, for example, the diffusion of power in multi-level systems, or the use of ‘localism’ to tailor policy to local needs and desires.

Now go to:

What can you do when policymakers ignore your evidence?

Further Reading

The politics of evidence-based policymaking

The politics of evidence-based policymaking: maximising the use of evidence in policy

Images of the policy process

How to communicate effectively with policymakers

Special issue in Policy and Politics called ‘Practical lessons from policy theories’, which includes how to be a ‘policy entrepreneur’.

See also the 750 Words series to explore the implications for policy analysis

19 Comments

Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, Public health, public policy

What do we need to know about the politics of evidence-based policymaking?

Today, I’m helping to deliver a new course – Engaging Policymakers Training Programme – piloted by the Alliance for Useful Evidence and UCL. Right now, it’s for UCL staff (and mostly early career researchers). My bit is about how we can better understand the policy process so that we can engage in it more effectively. I have reproduced the brief guide below (for my two 2-hour sessions as part of a wider block). If anyone else is delivering something similar, please let me know. We could compare notes.

This module will be delivered in two parts to combine theory and practice

Part 1: What do we need to know about the politics of evidence-based policymaking?

Policy theories provide a wealth of knowledge about the role of evidence in policymaking systems. They prompt us to understand and respond to two key dynamics:

  1. Policymaker psychology. Policymakers combine rational and irrational shortcuts to gather information and make good enough decisions quickly. To appeal to rational shortcuts and minimise cognitive load, we reduce uncertainty by providing syntheses of the available evidence. To appeal to irrational shortcuts and engage emotional interest, we reduce ambiguity by telling stories or framing problems in specific ways.
  2. Complex policymaking environments. These processes take place in the context of a policy environment out of the control of individual policymakers. Environments consist of: many actors in many levels and types of government; engaging with institutions and networks, each with their own informal and formal rules; responding to socioeconomic conditions and events; and, learning how to engage with dominant ideas or beliefs about the nature of the policy problem. In other words, there is no policy cycle or obvious stage in which to get involved.

In this seminar, we discuss how to respond effectively to these dynamics. We focus on unresolved issues:

  1. Effective engagement with policymakers requires storytelling skills, but do we possess them?
  2. It requires a combination of evidence and emotional appeals, but is it ethical to do more than describe the evidence?
  3. The absence of a policy cycle, and the presence of an ever-shifting context, requires us to engage for the long term, to form alliances, learn the rules, and build up trust in the messenger. However, do we have the time, and how should we invest it?

The format will be relatively informal. Cairney will begin by making some introductory points (not a powerpoint-driven lecture) and encourage participants to relate the three questions to their research and engagement experience.

Gateway to further reading:

  • Paul Cairney and Richard Kwiatkowski (2017) ‘How to communicate effectively with policymakers: combine insights from psychology and policy studies’, Palgrave Communications
  • Paul Cairney and Kathryn Oliver (2017) ‘Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy?’ Health Research Policy and Systems (HARPS), DOI: 10.1186/s12961-017-0192-x
  • Paul Cairney, Kathryn Oliver, and Adam Wellstead (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review, Early View (forthcoming) DOI:10.1111/puar.12555 PDF

Part 2: How can we respond pragmatically and effectively to the politics of EBPM?

In this seminar, we move from abstract theory and general advice to concrete examples and specific strategies. Each participant should come prepared to speak about their research and present a theoretically informed policy analysis in 3 minutes (without the aid of powerpoint). Their analysis should address:

  1. What policy problem does my research highlight?
  2. What are the most technically and politically feasible solutions?
  3. How should I engage in the policy process to highlight these problems and solutions?

After each presentation, each participant should be prepared to ask questions about the problem raised and the strategy to engage. Finally, to encourage learning, we will reflect on the memorability and impact of presentations.

Powerpoint: Paul Cairney A4UE UCL 2017

1 Comment

Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy

Speaking truth to power: a sometimes heroic but often counterproductive strategy

Our MPP class started talking about which Tom Cruise character policy analysts should be.

It started off as a point about who not to emulate: Tom Cruise in A Few Good Men. I used this character (inaccurately) to represent the archetype of someone ‘speaking truth to power’ (yes, I know TC actually said ‘I want the truth’ and JN said ‘you can’t handle the truth’).

The story of ‘speaking truth to power’ comes up frequently in discussions of the potentially heroic nature of researchers committed to (a) producing the best scientific evidence, (b) maximising the role of scientific evidence in policy, and (c) telling off policymakers if they don’t use evidence to inform their decisions. They can’t handle the truth.

Yet, as I argue in this article with Richard Kwiatkowski (for this series on evidence/policy) ‘without establishing legitimacy and building trust’ it can prove to be counterproductive. Relevant sections include:

This involves showing simple respect and seeking ways to secure their trust, rather than feeling egotistically pleased about ‘speaking truth to power’ without discernible progress. Effective engagement requires preparation, diplomacy, and good judgement as much as good evidence.

and

One solution [to obstacles associated with organizational psychology, discussed by Larrick] is ‘task conflict’ rather than ‘relationship conflict’, to encourage information sharing without major repercussions. It requires the trust and ‘psychological safety’ that comes with ‘team development’ … If successful, one can ‘speak truth to power’ … or be confident that your presentation of evidence, which challenges the status quo, is received positively.  Under such circumstances, a ‘battle of ideas’ can genuinely take place and new thinking can be possible. If these circumstances are not present, speaking truth to power may be disastrous.

The policy analyst would be better as the Tom Cruise character in Live, Die, Repeat. He exhibits a lot of relevant behaviour:

  • Engaging in trial and error to foster practical learning
  • Building up trust with, and learning from, key allies with more knowledge and skills
  • Forming part of, and putting faith in, a team of which he is a necessary but insufficient part

In The New Policy Sciences, Chris Weible and I put it this way:

focus on engagement for the long term to develop the resources necessary to maximize the impact of policy analysis and understand the context in which the information is used. Among the advantages of long-term engagement are learning the ‘rules of the game’ in organizations, forming networks built on trust and a track record of reliability, learning how to ‘soften’ policy solutions according to the beliefs of key policymakers and influencers, and spotting ‘windows of opportunity’ to bring together attention to a problem, a feasible solution, and the motive and opportunity of policymakers to select it …In short, the substance of your analysis only has meaning in relation to the context in which it is used. Further, generating trust in the messenger and knowing your audience may be more important to success than presenting the evidence.

I know TC was the hero, but he couldn’t have succeeded without training by Emily Blunt and help from that guy who used to be in Eastenders. To get that help, he had to stop being an arse when addressing thingy from Big Love.

In real world policymaking, individual scientists should not see themselves as heroes to be respected instantly and simply for their knowledge. They will only be effective in several venues – from the lab to public and political arenas – if they are humble enough to learn from others and respect the knowledge and skills of other people. ‘Speaking truth to power’ is catchy and exciting, but it doesn’t capture the sense of pragmatism we often need to be effective.

Leave a comment

Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy

#EU4Facts: 3 take-home points from the JRC annual conference

See EU4FACTS: Evidence for policy in a post-fact world

The JRC’s annual conference has become a key forum in which to discuss the use of evidence in policy. At this scale, in which many hundreds of people attend plenary discussions, it feels like an annual mass rally for science; a ‘call to arms’ to protect the role of science in the production of evidence, and the protection of evidence in policy deliberation. There is not much discussion of storytelling, but we tell each other a fairly similar story about our fears for the future unless we act now.

Last year, the main story was of fear for the future of heroic scientists: the rise of Trump and the Brexit vote prompted many discussions of post-truth politics and reduced trust in experts. An immediate response was to describe attempts to come together, and stick together, to support each other’s scientific endeavours during a period of crisis. There was little call for self-analysis and reflection on the contribution of scientists and experts to barriers between evidence and policy.

This year was a bit different. There was the same concern for reduced trust in science, evidence, and/ or expertise, and some references to post-truth politics and populism, but with some new voices describing the positive value of politics, often when discussing the need for citizen engagement, and of the need to understand the relationship between facts, values, and politics.

For example, a panel on psychology opened up the possibility that we might consider our own politics and cognitive biases while we identify them in others, and one panellist spoke eloquently about the importance of narrative and storytelling in communicating to audiences such as citizens and policymakers.

A focus on narrative is not new, but it provides a challenging agenda when interacting with a sticky story of scientific objectivity. For the unusually self-reflective, it also reminds us that our annual discussions are not particularly scientific; the usual rules to assess our statements do not apply.

As in studies of policymaking, we can say that there is high support for such stories when they remain vague and driven more by emotion than the pursuit of precision. When individual speakers try to make sense of the same story, they do it in different – and possibly contradictory – ways. As in policymaking, the need to deliver something concrete helps focus the mind, and prompts us to make choices between competing priorities and solutions.

I describe these discussions in two ways: tables, in which I try to boil down each speaker’s speech into a sentence or two (you can get their full details in the programme and the speaker bios); and a synthetic discussion of the top 3 concerns, paraphrasing and combining arguments from many speakers:

1. What are facts?

The discussion began with a three-way distinction between politics, values, and facts, a distinction which is impossible to maintain in practice.

Yet, subsequent discussion revealed a more straightforward distinction between facts on one hand and opinion, ‘fake news’, and lies on the other. The latter sums up an ever-present fear of the diminishing role of science in an alleged ‘post-truth’ era.

2. What exactly is the problem, and what is its cause?

The tables below provide a range of concerns about the problem, from threats to democracy to the need to communicate science more effectively. A theme of growing importance is the need to deal with the cognitive biases and informational shortcuts of people receiving evidence: communicate with reference to values, beliefs, and emotions; build up trust in your evidence via transparency and reliability; and, be prepared to discuss science with citizens and to be accountable for your advice. There was less discussion of the cognitive biases of the suppliers of evidence.

3. What is the role of scientists in relation to this problem?

Not all speakers described scientists as the heroes of this story:

  • Some described scientists as the good people acting heroically to change minds with facts.
  • Some described their potential to co-produce important knowledge with citizens (although primarily with like-minded citizens who learn the value of scientific evidence?).
  • Some described the scientific ego as a key barrier to action.
  • Some identified their low confidence to engage, their uncertainty about what to do with their evidence, and/ or their scientist identity which involves defending science as a cause/profession and drawing the line between providing information and advocating for policy. This hope to be an ‘honest broker’ was pervasive in last year’s conference.
  • Some (rightly) rejected the idea of separating facts/ values and science/ politics, since evidence is never context free (and gathering evidence without thought to context is amoral).

Often in such discussions it is difficult to know if some scientists are naïve actors or sophisticated political strategists, because their public statements could be identical. For the former, an appeal to objective facts and the need to privilege science in EBPM may be sincere. Scientists are, and should be, separate from/ above politics. For the latter, the same appeal – made again and again – may be designed to energise scientists and maximise the role of science in politics.

Yet, energy is only the starting point, and it remains unclear how exactly scientists should communicate and how to ‘know your audience’: would many scientists know who to speak to, in governments or the Commission, if they had something profoundly important to say?

Keynotes and introductory statements from panel chairs
Vladimír Šucha: We need to understand the relationship between politics, values, and facts. Facts are not enough. To make policy effectively, we need to combine facts and values.
Tibor Navracsics: Politics is swayed more by emotions than carefully considered arguments. When making policy, we need to be open and inclusive of all stakeholders (including citizens), communicate facts clearly and at the right time, and be aware of our own biases (such as groupthink).
Sir Peter Gluckman: ‘Post-truth’ politics is not new, but it is pervasive and easier to achieve via new forms of communication. People rely on like-minded peers, religion, and anecdote as forms of evidence underpinning their own truth. When describing the value of science, to inform policy and political debate, note that it is more than facts; it is a mode of thinking about the world, and a system of verification to reduce the effect of personal and group biases on evidence production. Scientific methods help us define problems (e.g. in discussion of cause/ effect) and interpret data. Science advice involves expert interpretation, knowledge brokerage, a discussion of scientific consensus and uncertainty, and standing up for the scientific perspective.
Carlos Moedas: Safeguard trust in science by (1) explaining the process you use to come to your conclusions; (2) providing safe and reliable places for people to seek information (e.g. when they Google); and (3) making sure that science is robust and scientific bodies have integrity (such as when dealing with a small number of rogue scientists).
Pascal Lamy: 1. ‘Deep change or slow death’ We need to involve more citizens in the design of publicly financed projects such as major investments in science. Many scientists complain that there is already too much political interference, drowning scientists in extra work. However, we will face a major backlash – akin to the backlash against ‘globalisation’ – if we do not subject key debates on the future of science and technology-driven change (e.g. on AI, vaccines, drone weaponry) to democratic processes involving citizens. 2. The world changes rapidly, and evidence gathering is context-dependent, so we need to monitor regularly the fitness of our scientific measures (of e.g. trade).
Jyrki Katainen: ‘Wicked problems’ have no perfect solution, so we need the courage to choose the best imperfect solution. Technocratic policymaking is not the solution; it does not meet the democratic test. We need the language of science to be understandable to citizens: ‘a new age of reason reconciling the head and heart’.

Panel: Why should we trust science?
Jonathan Kimmelman: Some experts make outrageous and catastrophic claims. We need a toolbox to decide which experts are most reliable, by comparing their predictions with actual outcomes. Prompt them to make precise probability statements and test them. Only those who are willing to be held accountable should be involved in science advice.
Johannes Vogel: We should devote 15% of science funding to public dialogue. Scientific discourse, and a science-literate population, is crucial for democracy. EU Open Society Policy is a good model for stakeholder inclusiveness.
Tracey Brown: Create a more direct link between society and evidence production, to ensure discussions involve more than the ‘usual suspects’. An ‘evidence transparency framework’ helps create a space in which people can discuss facts and values. ‘Be open, speak human’ describes showing people how you make decisions. How can you expect the public to trust you if you don’t trust them enough to tell them the truth?
Francesco Campolongo: Jean-Claude Juncker’s starting point is that Commission proposals and activities should be ‘based on sound scientific evidence’. Evidence comes in many forms. For example, economic models provide simplified versions of reality to make decisions. Economic calculations inform profoundly important policy choices, so we need to make the methodology transparent, communicate probability, and be self-critical and open to change.

Panel: the politician’s perspective
Janez Potočnik: The shift of the JRC’s remit allowed it to focus on advocating science for policy rather than policy for science. Still, such arguments need to be backed by an economic argument (this policy will create growth and jobs). A narrow focus on facts and data ignores the context in which we gather facts, such as a system which undervalues human capital and the environment.
Máire Geoghegan-Quinn: Policy should be ‘solidly based on evidence’ and we need well-communicated science to change the hearts and minds of people who would otherwise rely on their beliefs. Part of the solution is to get, for example, kids to explain what science means to them.

https://twitter.com/MIWilliamauthor/status/912781964880510977

Panel: Redesigning policymaking using behavioural and decision science
Steven Sloman: The world is complex. People overestimate their understanding of it, and this illusion is burst when they try to explain its mechanisms. People who know the least feel the strongest about issues, but if you ask them to explain the mechanisms their strength of feeling falls. Why? People confuse their knowledge with that of their community. The knowledge is not in their heads, but communicated across groups. If people around you feel they understand something, you feel like you understand, and people feel protective of the knowledge of their community. Implications? 1. Don’t rely on ‘bubbles’; generate more diverse and better coordinated communities of knowledge. 2. Don’t focus on giving people full information; focus on the information they need at the point of decision.
Stephan Lewandowsky: 97% of scientists agree that human-caused climate change is a problem, but the public thinks it’s roughly 50-50. We have a false-balance problem. One solution is to ‘inoculate’ people against its cause (science denial). We tell people the real figures and facts, warn them of the rhetorical techniques employed by science denialists (e.g. use of false experts on smoking), and mock the false balance argument. This allows you to reframe the problem as an investment in the future, not cost now (and find other ways to present facts in a non-threatening way). In our lab, it usually ‘neutralises’ misinformation, although with the risk that a ‘corrective message’ to challenge beliefs can entrench them.
Françoise Waintrop: It is difficult to experiment when public policy is handed down from on high. Or, experimentation is alien to established ways of thinking. However, our 12 new public innovation labs across France allow us to immerse ourselves in the problem (to define it well) and nudge people to action, working with their cognitive biases.
Simon Kuper: Stories combine facts and values. To change minds: persuade the people who are listening, not the sceptics; find go-betweens to link suppliers and recipients of evidence; speak in stories, not jargon; don’t overpromise the role of scientific evidence; and, never suggest science will side-line human beings (e.g. when technology costs jobs).

Panel: The way forward
Jean-Eric Paquet: We describe ‘fact based evidence’ rather than ‘science based’. A key aim is to generate ‘ownership’ of policy by citizens. Politicians are more aware of their cognitive biases than we technocrats are.
Anne Bucher: In the European Commission we used evidence initially to make the EU more accountable to the public, via systematic impact assessment and quality control. It was a key motivation for better regulation. We now focus more on generating inclusive and interactive ways to consult stakeholders.
Ann Mettler: Evidence-based policymaking is at the heart of democracy. How else can you legitimise your actions? How else can you prepare for the future? How else can you make things work better? Yet, a lot of our evidence presentation is so technical; even difficult for specialists to follow. The onus is on us to bring it to life, to make it clearer to the citizen and, in the process, defend scientists (and journalists) during a period in which Western democracies seem to be at risk from anti-democratic forces.
Mariana Kotzeva: Our facts are now considered from an emotional and perception point of view. The process does not just involve our comfortable circle of experts; we are now challenged to explain our numbers. Attention to our numbers can be unpredictable (e.g. on migration). We need to build up trust in our facts, partly to anticipate or respond to the quick spread of poor facts.
Rush Holt: In society we can find an erosion of the feeling that science is relevant to ‘my life’, and few US policymakers ask ‘what does science say about this?’, partly because scientists set themselves above politics. Politicians have had too many bad experiences with scientists who might say ‘let me explain this to you in a way you can understand’. Policy is not about science based evidence; it is more about asking a question first, then asking what evidence you need, then collecting evidence in an open way that can be verified.

Phew!

That was 10 hours of discussion condensed into one post. If you can handle more discussion from me, see:

Psychology and policymaking: Three ways to communicate more effectively with policymakers

The role of evidence in policy: EBPM and How to be heard  

Practical Lessons from Policy Theories

The generation of many perspectives to help us understand the use of evidence

How to be an ‘entrepreneur’ when presenting evidence

3 Comments

Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling

Policy Concepts in 500 Words: Social Construction and Policy Design

Why would a democratic political system produce ‘degenerative’ policy that undermines democracy? Social Construction and Policy Design (SCPD) describes two main ways in which policymaking alienates many citizens:

1. The Social Construction of Target Populations

High profile politics and electoral competition can cause alienation:

  1. Political actors compete to tell ‘stories’ to assign praise or blame to groups of people. For example, politicians describe value judgements about who should be rewarded or punished by government. They base them on stereotypes of ‘target populations’, by (a) exploiting the ways in which many people think about groups, or (b) making emotional and superficial judgements, backed up with selective use of facts.
  2. These judgements have a ‘feed-forward’ effect: they are reproduced in policies, practices, and institutions. Such ‘policy designs’ can endure for years or decades. The distribution of rewards and sanctions is cumulative and difficult to overcome.
  3. Policy design has an impact on citizens, who participate in politics according to how they are characterised by government. Many know they will be treated badly; their engagement will be dispiriting.

Some groups have the power to challenge the way they are described by policymakers (and the media and public), and receive benefits behind the scenes despite their poor image. However, many people feel powerless, become disenchanted with politics, and do not engage in the democratic process.

SCTP depicts this dynamic with a 2-by-2 table in which target populations are described positively/ negatively and more or less able to respond:

[Figure: 2-by-2 table of target populations, categorised by positive/negative social construction and stronger/weaker ability to respond]

2. Bureaucratic and expert politics

Most policy issues are not salient and politicised in this way. Yet, low salience can exacerbate problems of citizen exclusion. Policies dominated by bureaucratic interests often alienate citizens receiving services. Or a small elite dominates policymaking when there is high acceptance that (a) the best policy is ‘evidence based’, and (b) the evidence should come from experts.

Overall, SCPD describes a political system with major potential to diminish democracy, containing key actors (a) politicising issues to reward or punish populations or (b) depoliticising issues with reference to science and objectivity. In both cases, policy design is not informed by routine citizen participation.

Take home message for students: SCPD began as Schneider and Ingram’s description of the US political system’s failure to solve major problems including inequality, poverty, crime, racism, sexism, and effective universal healthcare and education. Think about how its key drivers apply elsewhere: (1) some people make and exploit quick and emotional judgements for political gain, and others refer to expertise to limit debate; (2) these judgements inform policy design; and, (3) policy design sends signals to citizens which can diminish or boost their incentive to engage in politics.

For more, see the 1000-word and 5000-word versions. The latter has a detailed guide to further reading.

22 Comments

Filed under 500 words, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling

Stop Treating Cognitive Science Like a Disease

ssloman-lg

At the beginning is a guest post by Professor Steven Sloman, responding to Professor Daniel Sarewitz’s post in the Guardian called Stop treating science denial like a disease.  At the end is Dan Sarewitz’s reply. If you are wondering why this debate is now playing out on my website, there is a connection of sorts, in: (a) the work of the European Commission’s JRC, with Sloman speaking at its annual conference EU4Facts, and (b) the work of INGSA on government-science advice, in which Sarewitz plays a key role.  

Modern science has its problems. As reviewed in a recent editorial by Daniel Sarewitz, many branches of science have been suffering from a replication crisis. Scientists are under tremendous pressure to publish, and cutting scientific corners has, in some fields, become normal. This, he thinks, justifies a kind of science denialism, one that recognizes that not every word expressed by a scientist should be taken on faith.

Sarewitz is right on a couple of counts: Not every branch of science has equal authority. And in many areas, too much of too little value is being published. Some of it does not pass even weak tests of scientific care and rigor. But his wild claim in favor of denialism is bluster: Science is making faster progress today than at any time in history.

Sarewitz’s intended victim in his piece is cognitive science. He argues that cognitive science appeals to a deficit model (my term) to explain science denialism. People are ignorant, in Sarewitz’s parody of cognitive science, and therefore they fail to understand science. If only they were smarter, or taught the truth about science, they wouldn’t deny it, but rather use it as a guide to truth, justice, and all things good.

This is a position in cognitive science, especially the cognitive science of the 1970s and 80s. But even cognitive science makes progress, and today it is a minority view. What does modern cognitive science actually suggest about our understanding of science denial? The answer is detailed in our book, The Knowledge Illusion, which Sarewitz takes issue with. He would have done well to read it before reviewing it, because what we say is diametrically opposed to his report, and largely consistent with his view, though a whole lot more nuanced.

The deficit model applies to one form of reasoning, what we call intuition. The human brain generates beliefs based on naïve causal models about how the world works. These are often sketchy and flawed (consider racists’ understanding of people of other races). Individuals are quite ignorant about how the world works, not because people are stupid, but because the world is so complex. The chaotic, uncertain nature of the universe means that everything we encounter is a tangle of enormous numbers of elements and interactions, far more than any individual could comprehend, never mind retain. As we show in our book, even the lowly ballpoint pen represents untold complexity. The source of ignorance is not so much about the biology of the individual; it’s about the complexity of the world that the individual lives in.

Despite their ignorance, humans have accomplished amazing things, from creating symphonies to laptops. How? In large part by relying on a second form of human reasoning, deliberation. Deliberation is not constrained wholly by biology because it extends beyond the individual. Deliberative thought uses the body to remember for us and even to compute. That’s why emotions are critical for good decision making and why children use their fingers to count. Thinking also uses the world. We compute whether it’s safe to cross the street by looking to see if a car is coming, and we use the presence of dirty dishes on the counter to tell us whether the dishes need doing.

But more than anything, deliberation uses other people. Whether we’re getting our dishwasher fixed, our spiritual lives developed, or our political opinions formed, we are guided by those we deem experts and those we respect in our communities. To a large extent, people are not the rational processors of information that some enlightenment philosophers dreamed about; we are shills for our communities.

The positive side of this is that people are built to collaborate; we are social entities in the most fundamental way, as thinkers. The negative side is that we can subscribe to ideologies that are perpetuated to pursue the self-interest of community leaders, ideologies that have no rational basis. Indeed, the most fervent adherents of a view tend to know the least about it. Fortunately, we have found (not just assumed as Sarewitz says) that when people are asked to explain the consequences of the policies they adhere to, they become less extremist as they discover they don’t really understand.

Scientists live in communities too, and science is certainly vulnerable to these same social forces. That’s why the scientific method was developed: to put ideas to the test and let the cream rise to the top. This takes time, but because science reports eventually to the truth inherent in nature, human foible and peer review can only steer it off course temporarily.

Cognitive science has historically bought into the deficit model, treating failures of science literacy as a kind of disease. But Sarewitz should practice the care and rigor that he preaches by reporting correctly: Cognitive science, like many forms of science, is slowly getting it right.

Reply by Dan Sarewitz

Normally I don’t respond to this kind of thing, but a couple of points demand rebuttal.

First: I actually did read their book, cover-to-cover. Neither the Guardian piece nor the longer talk from which it draws is a book review; they are critiques of the larger intellectual program that The Knowledge Illusion positions itself within.

Second, the idea that deliberative and collaborative activities are a powerful source of human creativity that overcomes the cognitive limits of the individual is an entirely familiar one that has been well-recognized for centuries. As Professor Sloman indicates, it occupies much of his book, and much of his comment above. But it was not relevant to my concerns, which were how Sloman and co-author Fernbach position human cognitive limits as a source of so much difficulty in today’s world. They write: “Because we confuse the knowledge in our heads with the knowledge we have access to, we are largely unaware of how little we understand. We live with the belief that we understand more than we do. As we will explore in the rest of the book, many of society’s most pressing problems stem from this illusion.” [p. 129, my italics] They wrote this, I didn’t.

Third, having read their book carefully, I am indeed well aware that Sloman and Fernbach understand the limits of the deficit model. But as they make clear in a subsection of chapter 8, entitled “Filling the Deficit,” they still believe that IF ONLY people understood more about science, then “many of society’s most pressing problems” could be more effectively addressed: “And the knowledge illusion means that we don’t check our understanding often or deeply enough. This is a recipe for antiscientific thinking.” [p. 169] A page later they write: “[P]erhaps it is too soon to give up on the deficit model.”

This is where my posting sought to engage with Professor Sloman’s book.  I don’t think that people’s understanding of scientific knowledge has much at all to do with “many of society’s most pressing problems,” for reasons that I point toward in the Guardian piece, and have written extensively about in many other forums.  Professor Sloman may not agree with this position, but his comments above fail to indicate that he actually recognizes or understands it.

Finally, Professor Sloman writes, both tendentiously and with an apparently tone-deaf ear, that my “wild claim in favor of denialism is bluster: Science is making faster progress today than at any time in history.”  The progress of science is irrelevant to my argument, which addresses the intersection of politics and a scientific enterprise that is always pushing into the realm of the uncertain, the unknown, the unknowable, the contestable, the contingent—even as it also, sometimes and in some directions, makes magnificent progress.

Perhaps there is a valuable discussion to be had about whether poor understanding of science by the public is relevant to “many of society’s most pressing problems.”  My view is that this is an overblown, distracting, and to some extent dangerous belief on the part of some scientists, as I indicate in the Guardian post and in many other writings.  Professor Sloman may disagree, but his complaints here are about something else entirely, something that I didn’t write.

1 Comment

Filed under Psychology Based Policy Studies, Uncategorized

Three ways to communicate more effectively with policymakers

By Paul Cairney and Richard Kwiatkowski

Use psychological insights to inform communication strategies

Policymakers cannot pay attention to all of the things for which they are responsible, or understand all of the information they use to make decisions. Like all people, there are limits on what information they can process (Baddeley, 2003; Cowan, 2001, 2010; Miller, 1956; Rock, 2008).

They must use short cuts to gather enough information to make decisions quickly: the ‘rational’, by pursuing clear goals and prioritizing certain kinds of information, and the ‘irrational’, by drawing on emotions, gut feelings, values, beliefs, habits, schemata, scripts, and what is familiar, to make decisions quickly. Unlike most people, they face unusually strong pressures on their cognition and emotion.

Policymakers need to gather information quickly and effectively, often in highly charged political atmospheres, so they develop heuristics to allow them to make what they believe to be good choices. Perhaps their solutions seem to be driven more by their values and emotions than by a ‘rational’ analysis of the evidence, often because we hold them to a standard that no human can reach.

If so, and if they have high confidence in their heuristics, they will dismiss criticism from researchers as biased and naïve. Under those circumstances, we suggest that restating the need for ‘rational’ and ‘evidence-based policymaking’ is futile, naively ‘speaking truth to power’ counterproductive, and declaring ‘policy based evidence’ defeatist.

We use psychological insights to recommend a shift in strategy for advocates of the greater use of evidence in policy. The simple recommendation, to adapt to policymakers’ ‘fast thinking’ (Kahneman, 2011) rather than bombard them with evidence in the hope that they will get round to ‘slow thinking’, is already becoming established in evidence-policy studies. However, we provide a more sophisticated understanding of policymaker psychology, to help understand how people think and make decisions as individuals and as part of collective processes. It allows us to (a) combine many relevant psychological principles with policy studies to (b) provide several recommendations for actors seeking to maximise the impact of their evidence.

To ‘show our work’, we first summarise insights from policy studies already drawing on psychology to explain policy process dynamics, and identify key aspects of the psychology literature which show promising areas for future development.

Then, we emphasise the benefit of pragmatic strategies, to develop ways to respond positively to ‘irrational’ policymaking while recognising that the biases we ascribe to policymakers are present in ourselves and our own groups. Instead of bemoaning the irrationality of policymakers, let’s marvel at the heuristics they develop to make quick decisions despite uncertainty. Then, let’s think about how to respond effectively. Instead of identifying only the biases in our competitors, and masking academic examples of group-think, let’s reject our own imagined standards of high-information-led action. This more self-aware and humble approach will help us work more successfully with other actors.

On that basis, we provide three recommendations for actors trying to engage skilfully in the policy process:

  1. Tailor framing strategies to policymaker bias. If people are cognitive misers, minimise the cognitive burden of your presentation. If policymakers combine cognitive and emotive processes, combine facts with emotional appeals. If policymakers make quick choices based on their values and simple moral judgements, tell simple stories with a hero and moral. If policymakers reflect a ‘group emotion’, based on their membership of a coalition with firmly-held beliefs, frame new evidence to be consistent with those beliefs.
  2. Identify ‘windows of opportunity’ to influence individuals and processes. ‘Timing’ can refer to the right time to influence an individual, depending on their current way of thinking, or to act while political conditions are aligned.
  3. Adapt to real-world ‘dysfunctional’ organisations rather than waiting for an orderly process to appear. Form relationships in networks, coalitions, or organisations first, then supply challenging information second. To challenge without establishing trust may be counterproductive.

These tips are designed to produce effective, not manipulative, communicators. They help foster the clearer communication of important policy-relevant evidence, rather than imply that we should bend evidence to manipulate or trick politicians. We argue that it is pragmatic to work on the assumption that people’s beliefs are honestly held, and policymakers believe that their role is to serve a cause greater than themselves. To persuade them to change course requires showing simple respect and seeking ways to secure their trust, rather than simply ‘speaking truth to power’. Effective engagement requires skilful communication and good judgement as much as good evidence.


This is the introduction to our revised and resubmitted paper for the Palgrave Communications special issue, ‘The politics of evidence-based policymaking: how can we maximise the use of evidence in policy?’. Please get in touch if you are interested in submitting a paper to the series.

Full paper: Cairney Kwiatkowski Palgrave Comms resubmission CLEAN 14.7.17


Filed under agenda setting, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy

The role of evidence in UK policymaking after Brexit

We are launching a series of papers on evidence and policy in Palgrave Communications. Of course, we used Brexit as a hook, to tap into current attention to instability and major policy change. However, many of the issues we discuss are timeless and about surprising levels of stability and continuity in policy processes, despite periods of upheaval.

In my day, academics would build their careers on being annoying, and sometimes usefully annoying. This would involve developing counterintuitive insights, identifying gaps in analysis, and challenging a ‘common wisdom’ in political studies. Although not exactly common wisdom, the idea of ‘post-truth’ politics, a reduction in respect for ‘experts’, and a belief that Brexit is a policymaking game-changer, are great candidates for some annoyingly contrary analysis.

In policy studies, many of us argue that things like elections, changes of government, and even constitutional changes are far less important than commonly portrayed. In media and social media accounts, we find hyperbole about the destabilising and transformative impact of the latest events. In policy studies, we often stress stability and continuity. My favourite old example concerns the debates from the 1970s about electoral reform. While some were arguing that first-past-the-post was a disastrous electoral system, since it produced swings of government, instability, and incoherent policy change, Richardson and Jordan would point out surprisingly high levels of stability and continuity.


In part, this is because the state is huge, policymakers can only pay attention to a tiny part of it, and therefore most of it is processed at a low level of government, out of the public spotlight.

[Image: Understanding Public Policy, p. 106]

These insights still have profound relevance today, for two key reasons.

  1. The role of experts is more important than you think

This larger process provides far more opportunities for experts than we’d associate with ‘tip of the iceberg’ politics.

Some issues are salient. They command the interest of elected politicians, and those politicians often have firm beliefs that limit the ‘impact’ of any evidence that does not support their beliefs.

However, most issues are not salient. They command minimal interest, they are processed by other policymakers, and those policymakers are looking for information and advice from reliable experts.

Indeed, a lot of policy studies highlight the privileged status of certain experts, at the expense of most members of the public (which is a useful corrective to the story, associated with Brexit, that the public is too emotionally driven, too sceptical of experts, and too much in charge of the future of constitutional change).

So, Brexit will change the role of experts, but expect that change to relate to the venue in which they engage, and the networks of which they are a part, more than the practices of policymakers. Much policymaking is akin to an open door to government for people with useful information and a reputation for being reliable in their dealings with policymakers.

  2. Provide less evidence for more impact

If the problem is that policymakers can only pay attention to a tiny proportion of their responsibilities, the solution is not to bombard them with a huge amount of evidence. Instead, assume that they seek ways to ignore almost all information while still managing to make choices. The trick may be to provide just enough information to prompt demand for more, not oversupply evidence on the assumption that you have only one chance for influence.

With Richard Kwiatkowski, I draw on policy and psychology studies to help us understand how to supply evidence to anyone using ‘rational’ and ‘irrational’ ways to limit their attention, information processing, and thought before making decisions.

Our working assumption is that policymakers need to gather information quickly and effectively, so they develop heuristics to allow them to make what they believe to be good choices. Their solutions often seem to be driven more by their emotions than a ‘rational’ analysis of the evidence, partly because we hold them to a standard that no human can reach. If so, and if they have high confidence in their heuristics, they will dismiss our criticism as biased and naïve. Under those circumstances, restating the need for ‘evidence-based policymaking’ is futile, and naively ‘speaking truth to power’ counterproductive.

Instead, try out these strategies:

  1. Develop ways to respond positively to ‘irrational’ policymaking

Instead of automatically bemoaning the irrationality of policymakers, let’s marvel at the heuristics they develop to make quick decisions despite uncertainty. Then, let’s think about how to respond pragmatically, to pursue the kind of evidence-informed policymaking that is realistic in a complex and constantly changing policymaking environment.

  2. Tailor framing strategies to policymaker cognition

The usual advice is to minimise the cognitive burden of your presentation, and use strategies tailored to the ways in which people pay attention to, and remember, information.

The less usual advice includes:

  • If policymakers are combining cognitive and emotive processes, combine facts with emotional appeals.
  • If policymakers are making quick choices based on their values and simple moral judgements, tell simple stories with a hero and a clear moral.
  • If policymakers are reflecting a ‘group emotion’, based on their membership of a coalition with firmly-held beliefs, frame new evidence to be consistent with the ‘lens’ through which actors in those coalitions understand the world.
  3. Identify the right time to influence individuals and processes

Understand what it means to find the right time to exploit ‘windows of opportunity’.

‘Timing’ can refer to the right time to influence an individual, which involves how open they are to, say, new arguments and evidence.

Or, timing refers to a ‘window of opportunity’ when political conditions are aligned. I discuss the latter in a separate paper on effective ‘policy entrepreneurs’.

  4. Adapt to real-world organisations rather than waiting for an orderly process to appear

Politicians may appear confident about policy, with a grasp of facts and details, but they are (a) often vulnerable and therefore defensive or closed to challenging information, and/ or (b) inadequate in organisational politics, or unable to change the rules of their organisations.

So, develop pragmatic strategies: form relationships in networks, coalitions, or organisations first, then supply challenging information second. To challenge without establishing trust may be counterproductive.

  5. Recognise that the biases we ascribe to policymakers are present in ourselves and our own groups.

Identifying only the biases in our competitors may help mask academic/ scientific examples of group-think, and it may be counterproductive to use euphemistic terms like ‘low information’ to describe actors whose views we do not respect. This is a particular problem for scholars if they assume that most people do not live up to their own imagined standards of high-information-led action (often described as a ‘deficit model’ of engagement).

It may be more effective to recognise that: (a) people’s beliefs are honestly held, and policymakers believe that their role is to serve a cause greater than themselves; and (b) a fundamental aspect of evolutionary psychology is that people need to get on with each other, so showing simple respect – or going further, to ‘mirror’ that person’s non-verbal signals – can be useful even if it looks facile.

This leaves open the ethical question of how far we should go to identify our biases, to accept the need to work with people whose ways of thinking we do not share, and to secure their trust without lying about our own beliefs.

At the very least, we do not suggest these 5 strategies as a way to manipulate people for personal gain. They are better seen as ways to use psychology to communicate well. They are also likely to be just as important to policy engagement regardless of Brexit. Venues may change quickly, but the ways in which people process information and make choices may not.



Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, UK politics and policy

I know my audience, but does my other audience know I know my audience?

‘Know your audience’ is a key phrase for anyone trying to convey a message successfully. To ‘know your audience’ is to understand the rules they use to make sense of your message, and therefore the adjustments you have to make to produce an effective message. Simple examples include:

  • The sarcasm rules. The first rule is fairly explicit. If you want to insult someone’s shirt, you (a) say ‘nice shirt, pal’, but also (b) use facial expressions or unusual speech patterns to signal that you mean the opposite of what you are saying. Otherwise, you’ve inadvertently paid someone a compliment, which is just not on. The second rule is implicit. Sarcasm is sometimes OK – as a joke or as some nice passive aggression – while a direct insult (‘that shirt is shite, pal’) as a joke is harder to pull off.
  • The joke rule. If you say that you went to the doctor because a strawberry was growing out of your arse and the doctor gave you some cream for it, you’d expect your audience to know you were joking because it’s such a ridiculous scenario and there’s a pun. Still, there’s a chance that, if you say it quickly, with a straight face, your audience is not expecting a joke, and/ or your audience’s first language is not English, your audience will take you seriously, if only for a second. It’s hilarious if your audience goes along with you, and a bit awkward if your audience asks kindly about your welfare.
  • Keep it simple stupid. If someone says KISS, or some modern equivalent – ‘it’s the economy, stupid’, the rule is that, generally, they are not calling you stupid (even though the insertion of the comma, in modern phrases, makes it look like they are). They are referring to the value of a simple design or explanation that as many people as possible can understand. If your audience doesn’t know the phrase, they may think you’re calling them stupid, stupid.

These rules can be analysed from various perspectives: linguistics, focusing on how and why rules of language develop; and philosophy, to help articulate how and why rules matter in sense making.

There is also a key role for psychological insights, since – for example – a lot of these rules relate to the routine ways in which people engage emotionally with the ‘signals’ or information they receive.

Think of the simple example of Twitter engagement, in which people with emotional attachments to one position over another (say, pro- or anti-Brexit) respond instantly to a message (say, pro- or anti-Brexit). While some really let themselves down when they reply with their own tweet, and others don’t say a word, neither audience is immune from that emotional engagement with information. So, to ‘know your audience’ is to anticipate and adapt to the ways in which they will inevitably engage ‘rationally’ and ‘irrationally’ with your message.

I say this partly because I’ve been messing around with some simple ‘heuristics’ built on insights from psychology, including Psychology Based Policy Studies: 5 heuristics to maximise the use of evidence in policymaking.

Two audiences in the study of ‘evidence based policymaking’

I also say it because I’ve started to notice a big unintended consequence of knowing my audience: my one audience doesn’t like the message I’m giving the other. It’s a bit like gossip: maybe you only get away with it if only one audience is listening. If they are both listening, one audience seems to appreciate some new insights, while the other wonders if I’ve ever read a political science book.

The problem here is that two audiences have different rules to understand the messages that I help send. Let’s call them ‘science’ and ‘political science’ (please humour me – you’ve come this far). Then, let’s make some heroic binary distinctions in the rules each audience would use to interpret similar issues in a very different way.

I could go on with these provocative distinctions, but you get the idea. A belief taken for granted in one field will be treated as controversial in another. In one day, you can go to one workshop and hear the story of objective evidence, post-truth politics, and irrational politicians with low political will to select evidence-based policies, then go to another workshop and hear the story of subjective knowledge claims.

Or, I can give the same presentation and get two very different reactions. If these are the expectations of each audience, they will interpret and respond to my messages in very different ways.

So, imagine I use some psychology insights to appeal to the ‘science’ audience. I know that, to keep it on side and receptive to my ideas, I should begin by being sympathetic to its aims. So, my implicit story is along the lines of, ‘if you believe in the primacy of science and seek evidence-based policy, here is what you need to do: adapt to irrational policymaking and find out where the action is in a complex policymaking system’. Then, if I’m feeling energetic and provocative, I’ll slip in some discussion about knowledge claims by saying something like, ‘politicians (and, by the way, some other scholars) don’t share your views on the hierarchy of evidence’, or inviting my audience to reflect on how far they’d go to override the beliefs of other people (such as the local communities or service users most affected by the evidence-based policies that seem most effective).

The problem with this story is that key parts are implicit and, by appearing to go along with my audience, I provoke a reaction in another audience: don’t you know that many people have valid knowledge claims? Politics is about values and power, don’t you know?

So, that’s where I am right now. I feel like I ‘know my audience’ but I am struggling to explain to my original political science audience that I need to describe its insights in a very particular way to have any traction in my other science audience. ‘Know your audience’ can only take you so far unless your other audience knows that you are engaged in knowing your audience.

If you want to know more, see:

Kathryn Oliver and I have just published an article on the relationship between evidence and policy

How far should you go to secure academic ‘impact’ in policymaking? From ‘honest brokers’ to ‘research purists’ and Machiavellian manipulators

Why doesn’t evidence win the day in policy and policymaking?

The Science of Evidence-based Policymaking: How to Be Heard

When presenting evidence to policymakers, engage with the policy process that exists, not the process you wish existed



Filed under Academic innovation or navel gazing, agenda setting, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling

Psychology Based Policy Studies: 5 heuristics to maximise the use of evidence in policymaking

Richard Kwiatkowski and I combine policy studies and psychology to (a) take forward ‘Psychology Based Policy Studies’, and (b) produce practical advice for actors engaged in the policy process.

[Image: abstract of the Cairney and Kwiatkowski paper]

Most policy studies, built on policy theory, explain policy processes without identifying practical lessons. They identify how and why people make decisions, and situate this process of choice within complex systems or environments in which there are many actors at multiple levels of government, subject to rules, norms, and group influences, forming networks, and responding to socioeconomic dynamics. This approach helps generate demand for more evidence of the role of psychology in these areas:

  1. To do more than ‘psychoanalyse’ a small number of key actors at the ‘centre’ of government.
  2. To consider how and why actors identify, understand, follow, reproduce, or seek to shape or challenge, rules within their organisations or networks.
  3. To identify the role of network formation and maintenance, and the extent to which it is built on heuristics to establish trust and the regular flow of information and advice.
  4. To examine the extent to which persuasion can be used to prompt actors to rethink their beliefs – such as when new evidence or a proposed new solution challenges the way that a problem is framed, how much attention it receives, and how it is solved.
  5. To consider (a) the effect of events such as elections on the ways in which policymakers process evidence (e.g. does it encourage short-term and vote-driven calculations?), and (b) what prompts them to pay attention to some contextual factors and not others.

This literature highlights the use of evidence by actors who anticipate or respond to lurches of attention, moral choices, and coalition formation built on bolstering one’s own position, demonising competitors, and discrediting (some) evidence. Although this aspect of choice should not be caricatured – it is not useful simply to bemoan ‘post-truth’ politics and policymaking ‘irrationality’ – it provides a useful corrective to the fantasy of a linear policy process in which evidence can be directed to a single moment of authoritative and ‘comprehensively rational’ choice based only on cognition. Political systems and human psychology combine to create a policy process characterised by many actors competing to influence continuous policy choice built on cognition and emotion.

What are the practical implications?

Few studies consider how those seeking to influence policy should act in such environments, or give advice about how they can engage effectively in the policy process. Of course, context is important, and advice needs to be tailored and nuanced, but that is not necessarily a reason to side-step the issue of moving beyond description. Further, policymakers and influencers do not have this luxury. They need to gather information quickly and effectively to make good choices. They have to take the risk of action.

To influence this process we need to understand it, and to understand it better we need to study how scientists try to influence it. Psychology-based policy studies can provide important insights to help actors begin to measure and improve the effectiveness of their engagement in policy by: taking into account cognitive and emotional factors and the effect of identity on possible thought; and considering how political actors are ‘embodied’ and situated in time, place, and social systems.

5 tentative suggestions

However, few psychological insights have been developed from direct studies of policymaking, and there is a limited evidence base. So, we provide preliminary advice by identifying the most relevant avenues of conceptual research and deriving some helpful ‘tools’ for those seeking to influence policy.

Our working assumption is that policymakers need to gather information quickly and effectively, so they develop heuristics to allow them to make what they believe to be good choices. Their solutions often seem to be driven more by their emotions than a ‘rational’ analysis of the evidence, partly because we hold them to a standard that no human can reach. If so, and if they have high confidence in their heuristics, they will dismiss our criticism as biased and naïve. Under those circumstances, restating the need for ‘evidence-based policymaking’ is futile, and naively ‘speaking truth to power’ counterproductive.

For us, heuristics represent simple alternative strategies, built on psychological insights, for putting those insights to use in policy practice. They are broad prompts towards certain ways of thinking and acting, not specific blueprints for action in all circumstances:

  1. Develop ways to respond positively to ‘irrational’ policymaking

Instead of automatically bemoaning the irrationality of policymakers, let’s marvel at the heuristics they develop to make quick decisions despite uncertainty. Then, let’s think about how to respond in a ‘fast and frugal’ way, to pursue the kind of evidence-informed policymaking that is realistic in a complex and constantly changing policymaking environment.

  2. Tailor framing strategies to policymaker bias

The usual advice is to minimise the cognitive burden of your presentation, and use strategies tailored to the ways in which people pay attention to, and remember, information (at the beginning and end of statements, with repetition, and using concrete and immediate reference points).

What is the less usual advice? If policymakers are combining cognitive and emotive processes, combine facts with emotional appeals. If policymakers are making quick choices based on their values and simple moral judgements, tell simple stories with a hero and a clear moral. If policymakers are reflecting a group emotion, based on their membership of a coalition with firmly-held beliefs, frame new evidence to be consistent with the ‘lens’ through which actors in those coalitions understand the world.


  3. Identify the right time to influence individuals and processes

Understand what it means to find the right time to exploit ‘windows of opportunity’. ‘Timing’ can refer to the right time to influence an individual, which is relatively difficult to identify but offers the possibility of direct influence, or to the moment when several political conditions are aligned, which presents less chance for you to make a direct impact.

  4. Adapt to real-world dysfunctional organisations rather than waiting for an orderly process to appear

Politicians may appear confident about policy, with a grasp of facts and details, but they are (a) often vulnerable and defensive, and closed to challenging information, and/ or (b) inadequate in organisational politics, or unable to change the rules of their organisations. In the absence of institutional reforms, and the presence of ‘dysfunctional’ processes, develop pragmatic strategies: form relationships in networks, coalitions, or organisations first, then supply challenging information second. To challenge without establishing trust may be counterproductive.

  5. Recognise that the biases we ascribe to policymakers are present in ourselves and our own groups.

Identifying only the biases in our competitors may help mask academic/ scientific examples of group-think, and it may be counterproductive to use euphemistic terms like ‘low information’ to describe actors whose views we do not respect. This is a particular problem for scholars if they assume that most people do not live up to their own imagined standards of high-information-led action.

It may be more effective to recognise that: (a) people’s beliefs are honestly held, and policymakers believe that their role is to serve a cause greater than themselves; and (b) a fundamental aspect of evolutionary psychology is that people need to get on with each other, so showing simple respect – or going further, to ‘mirror’ that person’s non-verbal signals – can be useful even if it looks facile.

This leaves open the ethical question of how far we should go to identify our biases, to accept the need to work with people whose ways of thinking we do not share, and to secure their trust without lying about our own beliefs.


Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy