Category Archives: Evidence Based Policymaking (EBPM)

Teaching policy analysis with a blog

Below is the draft introduction to a paper that I am writing for a Special Issue on Teaching Policy Analysis in Gestión y Análisis de Políticas Públicas (GAPP) (here is the version with the references if you want to sing along).

When we teach policy analysis, we can focus on how to be a policy analyst or how to situate the act of policy analysis within a much wider policymaking context. Ideally, we would teach and learn about both. Indeed, this aim is central to Lasswell’s vision for the policy sciences, in which the analysis of policy (and policymaking) informs analysis for policy, and both are essential to the pursuit of equality and dignity.

There is the potential to achieve this vision for the policy sciences. Policy analysis texts focus on the individual and professional skills required to act efficiently and effectively in a time-pressured political environment. Further, they are supported by studies of policy analysts, which reflect on how analysis takes place, and policy is made, in the real world.

The next step would be to harness the wealth of policy concept- and theory-informed studies to help understand how real-world contexts inform policy analysis insights. First, for example, almost all mainstream studies assume or demonstrate that there is no such thing as a policy cycle with clearly defined and well-ordered stages of policymaking, from defining problems and generating solutions to evaluating their effect. If so, how can policy analysts understand their far more complex policymaking environment, and what skills and strategies do they need to develop to engage effectively? Indeed, these discussions may be essential to preventing the demoralisation of analysts: if they do not learn in advance about the processes and factors that can minimise their influence, how can they generate realistic expectations? Second, if the wider aim is human equality and dignity, insights from critical policy analysis are essential. They help analysts think about what those concepts mean, how to identify and support marginalised populations, and how policy analysis skills and techniques relate to those aims. In particular, they warn against treating policy analysis as a technocratic profession devoid of politics, which may contribute to exclusive research-gathering practices, too-narrow definitions of problems, insufficient consideration of feasible solutions, and recommendations made about target populations without engaging the people they claim to serve.

However, this aim is much easier described than achieved. Policy analysis texts, focusing on how to do it, often draw on insights from policy studies, but without fully explaining key concepts and theories or exploring their implications. There is simply not enough time and space to do justice to every element, from the technical tools of policy analysis (including cost-benefit analysis) to the empirical findings from policy theories and normative insights from critical policy analysis approaches (e.g. Weimer and Vining, 2017 is already 500 pages long). Policy process research, focusing on what actually happens, may have practical implications for analysts, but those implications are often hidden behind layers of concepts and jargon, and – with notable exceptions – their authors seem uninterested in describing the normative importance of, or practical lessons from, theory-informed empirical studies. Further, the cumulative size of this research is overwhelming, beyond the full understanding even of experienced specialist scholars. Indeed, it is difficult to recommend a small number of texts to sum up each approach, which makes it difficult to predict how much time and energy it would take to understand this field, and therefore to demonstrate the payoff from that investment. Finally, critical policy analysis is essential, but often ignored in policy analysis texts, and the potential for meaningful conversations with mainstream policy scholars remains largely untapped or resisted.

In that context, policy analysis students embody the problem of ‘bounded rationality’ described famously by Simon. Indeed, Simon’s phrase ‘to satisfice’ sums up a goal-oriented response to bounded rationality: faced with the inability to identify, process, or understand all relevant information, people must seek ways to gather enough information to inform ‘good enough’ choices. Further, since then, policy studies have sought to incorporate insights from individual human, social, and organisational psychology to understand (1) the other cognitive shortcuts that humans use, including gut-level instinct, habit, familiarity with an issue, deeply-held beliefs, and emotions, and (2) their organisational equivalents (since organisations also use rules and standard operating procedures to close off information searches and limit analysis). Human cognitive shortcuts can be described negatively as cognitive biases or more positively as ‘fast and frugal heuristics’. However, the basic point remains: if people draw on allegedly ‘rational’ and ‘irrational’ shortcuts to information, we need to find ways to adapt to their ways of thinking, rather than holding onto an idealised version of humans (and policymaking organisations) that does not exist in the real world.

While these insights focus generally on policymakers, they are also essential to engaging with students. Gone – I hope – are the days of lecturers giving students an overwhelmingly huge reading list and expecting them to devour every source before each class, which may help some students but demoralise many others (especially since it seems inevitable that most students’ first engagement with specialist texts and technical jargon will already induce fears about their own ignorance). In their place should be a thoughtful exploration of how much students can actually learn about the wider policy analysis context, focusing on (1) the knowledge and skills they already possess, (2) the time they have to learn, and (3) how new knowledge or skills would relate to their ambitions. If students are seeking fast and frugal heuristics to learn about policy analysis, how should we help them, and what should we teach?

To help answer such questions, first I describe the rationale for the blog that I developed in tandem with teaching public policy, initially at an undergraduate level as part of a wider politics programme, before developing a Master of Public Policy and contributing to shorter executive courses or one-off workshops. This range matters, since the answer to the question ‘what can students learn?’ will vary according to their existing knowledge and time. Second, I describe some examples of the valuable intersection between policy analysis, policy process research, and critical policy analysis to demonstrate the potential payoffs to wider insights. Third, I summarise the rationale for the coursework that I use to foster public policy knowledge and policy analysis skills, including skills in critical thinking and reflection to accompany more specialist analytical skills.

See also: the 500 Words, 750 Words, and 1000 Words series.

The finished paper will be translated into Spanish


Filed under 1000 words, 500 words, 750 word policy analysis, Evidence Based Policymaking (EBPM), MPP, PhD, public policy

The politics of policy design

This post summarizes the conclusion of ‘The politics of policy design’ for a Design and Policy Network workshop (15th June). My contribution to this interdisciplinary academic-practitioner discussion is to present insights from political science and policy process research, which required me to define some terms (background) before identifying three cautionary messages.

Background

A broad definition of policy design as an activity is to (1) define policy aims, and (2) identify the tools to deliver those aims (compare with policy analysis).

However, note the verb/noun distinction, and the common architectural metaphor, which distinguish between (a) the act of design and (b) the output (e.g. the blueprints).

In terms of the outputs, tools can be defined narrowly as policy instruments – including tax/spending, regulations, staff and other resources for delivery, information sharing, ‘nudging’, etc. – or more widely to include the processes involved in their formulation (such as participatory and deliberative processes). Therefore, we could be describing:

  • A highly centralized process, involving very few people, to produce the equivalent of a blueprint.
  • A decentralized, and perhaps uncoordinated, process involving many people, built on the principle that to seek a blueprint would be to miss the point of participation and deliberation.

Policymaking research tends to focus on

(1) measuring policy change with reference to the ‘policy mix’ of these tools/ instruments, and generally showing that most policy change is minor (and some is major) (link1, link2, link3, link4), and/ or

(2) how to understand the complex policymaking systems or environments in which policy design processes take place.

These studies are the source of my messages of doom.

Three cautionary messages about new policy design

There is a major gap between the act of policy design and actual policies and policy processes. This issue led to the decline of old policy design studies in the 1980s.

While ‘new policy design’ scholars seek to reinvigorate the field, the old issues serve as a cautionary tale, reminding us that (1) policy design is not new, and (2) its decline did not relate to the lack of sophisticated skills or insights among policy designers.

In other words, these old problems will not simply be solved by modern scientific, methodological, or policy design advances. Rather, I encourage policy designers to pay particular attention to:

1. The gap between functional requirements and real world policymaking.

Policy analysts and designers often focus on what they need, or require, to get their job done or produce the outcomes they seek.

Policy process researchers identify the major, inevitable, gaps between those requirements and actual policy processes (to the extent that the link between design and policy is often difficult to identify).

2. The strong rationale for the policy processes that undermine policy design.

Policy processes – and their contribution to policy mixes – may seem incoherent from a design perspective. However, they make sense to the participants involved.

Some gaps result from choice, such as the decision to share responsibility for instruments across many levels or types of government (without focusing on how those responsibilities will connect or be coordinated).

Others result from necessity, such as the need to delegate responsibility to many policy communities spread across government, each with their own ways to define and address problems (without the ability to know how those responsibilities will be connected).

3. The policy analysis and design dilemmas that cannot be solved by design methods alone.

When seen from the ‘top down’, design problems often relate to the perceived lack of delivery or follow-through in relation to agreed high level design outputs (great design, poor delivery).

When seen from the ‘bottom up’, they represent legitimate ways to incorporate local stakeholder and citizen perspectives. This process will inevitably produce a gap between different sources and outputs of design, making it difficult to separate poor delivery (bad?) from deviation (good?).

Such dynamics are solved via political choice rather than design processes and techniques.

Notes on the workshop discussion

The workshop discussion prompted us initially to consider how differently people would define policy design. The range of responses included seeing policy design as:

  • a specific process with specific tools to produce a well-defined output (applied to specific areas conducive to design methods)
  • a more general philosophy or way of thinking about things like policy issues (compare with systems thinking)
  • a means to encourage experimentation (such as to produce a prototype policy instrument, use it, and reflect or learn about its impact) or change completely how people think about an issue
  • the production of a policy solution, or one part of a large policy mix
  • a niche activity in one unit of government, or something mainstreamed across governments
  • something done in government, or inside and outside of government
  • producing something new (like writing on a blank sheet of paper), adding to a pile of solutions, or redesigning what exists
  • primarily a means to empower people to tell their story, or as a means to improve policy advocacy (as in discussions of narrative/ storytelling)
  • something done with authoritative policymakers like government ministers (in other words, people with the power to make policy changes after they participate in design processes) or given to them (in other words, the same people but as the audience for the outcomes of design)

These definitions matter since they have very different implications for policy and practice. Take, for example, the link – made by Professor Liz Richardson – between policy design and the idea of evidence-based policymaking, to consider two very different scenarios:

  1. A minister is directly involved in policy design processes. They use design thinking to revisit how they think about a policy problem (and target populations), seek to foster participation and deliberation, and use that process – perhaps continuously – to consider how to reconcile very different sources of evidence (including, say, new data from randomized control trials and powerful stories from citizens, stakeholders, service users). I reckon that this kind of scenario would be in the minds of people who describe policy design optimistically.
  2. A minister is the intended audience of a report on the outcomes of policy design. You assume that their thoughts on a policy problem are well established. There is no obvious way for them to reconcile different sources of policy-relevant evidence. Crucially, the fruits of your efforts have made a profound impact on the people involved but, for the minister, the outcome is just one of too many sources of information (likely produced too soon before, or too late after, they want to consider the issue).

The second scenario is closer to the process that I describe in the main post, although policy studies would warn against seeing someone like a government minister as authoritative in the sense that they reside in the centre of government. Rather, studies of multi-centric policymaking remind us that there are many possible centres spread across political systems. If so, policy design – according to approaches like the IAD – is about ways to envisage a much bigger context in which design success depends on the participation and agreement of a large number of influential actors (who have limited or no ability to oblige others to cooperate).

Further Reading

Paul Cairney (2022) ‘The politics of policy design’, EURO Journal on Decision Processes  https://doi.org/10.1016/j.ejdp.2021.100002

Paul Cairney, Tanya Heikkila, and Matthew Wood (2019) Making Policy in a Complex World (Cambridge Elements) PDF Blog

Complex systems and systems thinking (part of a series of thematic posts on policy analysis)


Filed under agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, public policy

Why is there high support for, but low likelihood of, drug consumption rooms in Scotland?

This is my interpretation of this new article:

James Nicholls, Wulf Livingston, Andy Perkins, Beth Cairns, Rebecca Foster, Kirsten M. A. Trayner, Harry R. Sumnall, Tracey Price, Paul Cairney, Josh Dumbrell, and Tessa Parkes (2022) ‘Drug Consumption Rooms and Public Health Policy: Perspectives of Scottish Strategic Decision-Makers’, International Journal of Environmental Research and Public Health, 19(11), 6575; https://doi.org/10.3390/ijerph19116575

Q: if stakeholders in Scotland express high support for drug consumption rooms, and many policymakers in Scotland seem sympathetic, why is there so little prospect of policy change?

My summary of the article’s answer is as follows:

1. Although stakeholders support DCRs almost unanimously, they do not support them energetically.

They see this solution as one part of a much larger package rather than a magic bullet. They are not sure of the cost-effectiveness in relation to other solutions, and can envisage some potential users not using them.

The existing evidence on their effectiveness is not persuasive for people who (1) adhere to a hierarchy of evidence which prioritizes evidence from randomized control trials or (2) advocate alternative ways to use evidence.

There are also competing ways to frame this policy solution, which suggests that some unresolved issues among stakeholders have not yet come to the fore (since the lack of need to implement something specific reduces the need to engage with a more concrete problem definition).

2. A common way to deal with such uncertainty in Scotland is to use ‘improvement science’ or the ‘improvement method’.

This method invites local policymakers and practitioners to try out new solutions, work with stakeholders and service users during delivery, reflect on the results, and use this learning to design the next iteration. This is a pragmatic, small-scale, approach that appeals to the (small-c conservative) Scottish Government, which uses pilots to delay major policy changes, and is keen on its image as not too centralist and quite collaboration minded.

3. This approach is not politically feasible in this case.

Some factors suggest that the general argument has almost been won, including positive informal feedback from policymakers, and increasingly sympathetic media coverage (albeit using problematic ways to describe drug use).

However, this level of support is not enough to justify experimentation. Drug consumption rooms would need a far stronger steer from the Scottish Government.

In this case, the Scottish Government cannot experiment now and decide later. It needs to make a strong choice (with inevitable negative blowback) and stay the course, knowing that one failed political experiment could set back progress for years.

4. The multi-level policymaking system is not conducive to overcoming these obstacles.

The issue of drugs policy is often described as a public health – and therefore devolved – issue politically (and in policy circles).

However, the legal/ formal division of responsibilities suggests that UK government consent is necessary and not forthcoming.

It is possible that the Scottish Government could take a chance and act alone. Indeed, the example of smoking in public places showed that it shifted its position after a slow start (it initially described the issue as reserved, then took charge of its own legislation, albeit with UK support).

However, the Scottish Government seems unwilling to take that chance, partly because it has been stung by legal challenges in other areas, and is reluctant to engage in more of the same (see minimum unit pricing for alcohol).

Local policymakers could experiment on their own, but they won’t do it without proper authority from a central government.

This experience is part of a more general issue: people may describe multi-level policymaking as a source of venues for experimentation (‘laboratories of democracy’) to encourage policy learning and collaboration. However, this case, and cases like fracking, show that they can actually be sites of multiple veto points and multi-level reluctance.

If so, the remaining question for reflection is: what would it take to overcome these obstacles? The election of a Labour UK government? Scottish independence? Or, is there some other way to make it happen in the current context?

See also:

What does it take to turn scientific evidence into policy? Lessons for illegal drugs from tobacco


Filed under agenda setting, Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, Scottish independence, Scottish politics, tobacco policy, UK politics and policy

Policy analysis in 750 words: WORDLE and trial and error policymaking

I apologise for every word in this post, and the capitalised 5-letter words in particular.

WORDLE is a SIMPLE word game (in US English). The aim is to identify a 5-letter word correctly in 6 guesses or fewer. Each guess has to be a real word, and you receive informative feedback each time: GREEN means you have the letter RIGHT and in the right position; yellow means the right letter in the wrong position; grey MEANS the letter does not appear in the word.

One strategy involves trial-and-error learning via 3 or 4 simple steps:

1. Use your initial knowledge of the English language to inform initial guesses, such as guessing a word with common vowels (I go for E and A) and consonants (e.g. S, T).

2. Learn from feedback on your correct and incorrect estimates.

3. Use your new information and deduction (e.g. about which combinations work when you exclude many options) to make informed guesses.

4. Do so while avoiding unhelpful heuristics, such as assuming that each letter will only appear once (or that the spelling is in UK English).
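For the programmatically inclined, the feedback rule lends itself to a short sketch. Here is a minimal Python version of the scoring step (an illustration of the rule as described above, not the game’s actual code); note the two-pass structure needed to handle repeated letters, the trap mentioned in step 4.

```python
def score_guess(guess: str, answer: str) -> list[str]:
    """Score a 5-letter guess: 'green' = right letter, right position;
    'yellow' = right letter, wrong position; 'grey' = letter not in word.

    Each answer letter can only 'pay for' one green or yellow mark,
    so repeated letters in a guess are not over-credited.
    """
    result = ["grey"] * len(guess)
    remaining = list(answer)  # answer letters not yet matched

    # First pass: mark greens and consume the matched answer letters.
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            result[i] = "green"
            remaining.remove(g)

    # Second pass: mark yellows, consuming unmatched answer letters
    # so duplicates in the guess only count while letters remain.
    for i, g in enumerate(guess):
        if result[i] == "grey" and g in remaining:
            result[i] = "yellow"
            remaining.remove(g)

    return result
```

In this scheme, guessing SLATE against STALE returns green/yellow/green/yellow/green, and the repeated E in EERIE scored against THERE is only credited for as many Es as the answer contains.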

At least, that is how I play it. I get it in 3 just over half the time, and 4 or 5 in the rest. I make 2-4 ‘errors’ then succeed. In the context of the game’s rules, that is consistent success, RIGHT?

[insert crowbar GIF to try to get away with the segue]

That is the spirit of the idea of trial-and-error learning.

It is informed by previous knowledge, but also a recognition of the benefits of trying things out to generate new information, update your knowledge and skills (the definition of learning), and try again.

A positive normative account of this approach can be found in classic discussions of incrementalism and modern discussions of policymaking informed by complex systems insights:

‘To deal with uncertainty and change, encourage trial-and-error projects, or pilots, that can provide lessons, or be adopted or rejected, relatively quickly’.

Advocates of such approaches also suggest that we change how we describe them, replacing the language of policy failure with ERROR, at least when part of a process of continuous policy learning in the face of uncertainty.

At the heart of such advice are two guiding principles:

1. Recognise the limits to centralism when giving policy advice. There is no powerful centre of government, able to carry out all of its aims successfully, so do not build policy advice on that assumption.

2. Recognise the limits to our knowledge. Policymakers must make and learn from choices in the face of uncertainty, so do not kid yourself that one piece of analysis and action will do.

Much like the first two WORDLE guesses, your existing knowledge alone does not tell you how to proceed (regardless of the number of times that people repeat the slogan of ‘evidence-based policymaking’).

Political problems with trial and error

The main political problem with this approach is that many political systems – including adversarial and/or Westminster systems – are not conducive to learning from error. You may think that adapting continuously to uncertainty is crucial, but also be wary of recommending it to:

1. Politicians who will be held to account for failure. A government’s apparent failure to deliver on promises represents a resource for its opposition.

2. Organisations subject to government targets. Failure to meet strict statutory requirements is not seen as a learning experience.  

More generally, your audience may face criticism whenever errors are associated with negative policy consequences (with COVID-19 policy representing a vivid, extreme example).

These limitations produce a major dilemma in policy analysis, in which you believe that you will not learn how to make good policy without trial-and-error but recognise that this approach will not be politically feasible. In many political systems, policymakers need to pretend to their audience that they know what the problem is and that they have the knowledge and power to solve it. You may not be too popular if you encourage open-minded experimentation. This limitation should not warn you against trial-and-error recommendations completely, but rather remind you to relate good-looking ideas to your policymaking context.

Please note that I missed my train stop while writing this post, despite many opportunities to learn from the other times it happened.


Filed under 750 word policy analysis, Evidence Based Policymaking (EBPM), public policy

Policy Analysis in 750 Words: power and knowledge

This post adapts Policy in 500 Words: Power and Knowledge (the body of this post) to inform the Policy Analysis in 750 words series (the top and tails).

One take home message from the 750 Words series is to avoid seeing policy analysis simply as a technical (and ‘evidence-based’) exercise. Mainstream policy analysis texts break down the process into technical-looking steps, but also show how each step relates to a wider political context. Critical policy analysis texts focus more intensely on the role of politics in the everyday choices that we might otherwise take for granted or consider to be innocuous. The latter connect strongly to wider studies of the links between power and knowledge.

Power and ideas

Classic studies suggest that the most profound and worrying kinds of power are the hardest to observe. We often witness highly visible political battles and can use pluralist methods to identify who has material resources, how they use them, and who wins. However, key forms of power ensure that many such battles do not take place. Actors often use their resources to reinforce social attitudes and policymakers’ beliefs, to establish which issues are policy problems worthy of attention and which populations deserve government support or punishment. Key battles may not arise because not enough people think they are worthy of debate. Attention and support for debate may rise, only to be crowded out of a political agenda in which policymakers can only debate a small number of issues.

Studies of power relate these processes to the manipulation of ideas or shared beliefs under conditions of bounded rationality (see for example the NPF). Manipulation might describe some people getting other people to do things they would not otherwise do. They exploit the beliefs of people who do not know enough about the world, or themselves, to know how to identify and pursue their best interests. Or, they encourage social norms – in which we describe some behaviour as acceptable and some as deviant – which are enforced by (1) the state (for example, via criminal justice and mental health policy), (2) social groups, and (3) individuals who govern their own behaviour with reference to what they feel is expected of them (and the consequences of not living up to expectations).

Such beliefs, norms, and rules are profoundly important because they often remain unspoken and taken for granted. Indeed, some studies equate them with the social structures that appear to close off some action. If so, we may not need to identify manipulation to find unequal power relationships: strong and enduring social practices help some people win at the expense of others, by luck or design.

Relating power to policy analysis: whose knowledge matters?

The concept of ‘epistemic violence’ is one way to describe the act of dismissing an individual, social group, or population by undermining the value of their knowledge or claim to knowledge. Specific discussions include: (a) the colonial West’s subjugation of colonized populations, diminishing the voice of the subaltern; (b) privileging scientific knowledge and dismissing knowledge claims via personal or shared experience; and (c) erasing the voices of women of colour from the history of women’s activism and intellectual history.

It is in this context that we can understand ‘critical’ research designed to ‘produce social change that will empower, enlighten, and emancipate’ (p51). Powerlessness can relate to the visible lack of economic material resources and factors such as the lack of opportunity to mobilise and be heard.

750 Words posts examining this link between power and knowledge

Some posts focus on the role of power in research and/ or policy analysis:

These posts ask questions such as: who decides what evidence will be policy-relevant, whose knowledge matters, and who benefits from this selective use of evidence? They help to (1) identify the exercise of power to maintain evidential hierarchies (or prioritise scientific methods over other forms of knowledge gathering and sharing), and (2) situate this action within a wider context (such as when focusing on colonisation and minoritization). They reflect on how (and why) analysts should respect a wider range of knowledge sources, and how to produce more ethical research with an explicit emancipatory role. As such, they challenge the – naïve or cynical – argument that science and scientists are objective and that science-informed analysis is simply a technical exercise (see also Separating facts from values).

Many other posts incorporate these discussions into a wide range of policy analysis themes.

See also

Policy Concepts in 1000 Words: Power and Ideas

Education equity policy: ‘equity for all’ as a distraction from race, minoritization, and marginalization. It discusses studies of education policy (many draw on critical policy analysis)

There are also many EBPM posts that slip this discussion of power and politics into discussions of evidence and policy. They don’t always use the word ‘power’ though (see Evidence-informed policymaking: context is everything)


Filed under 750 word policy analysis, agenda setting, Evidence Based Policymaking (EBPM), public policy

Policy Analysis in 750 Words: Separating facts from values

This post begins by reproducing Can you separate the facts from your beliefs when making policy? (based on the 1st edition of Understanding Public Policy) …

A key argument in policy studies is that it is impossible to separate facts and values when making policy. We often treat our beliefs as facts, or describe certain facts as objective, but perhaps only to simplify our lives or support a political strategy (a ‘self-evident’ fact is very handy for an argument). People make empirical claims infused with their values and often fail to realise just how their values or assumptions underpin their claims.

This is not an easy argument to explain. One strategy is to use extreme examples to make the point. For example, Herbert Simon points to Hitler’s Mein Kampf as the ultimate example of value-based claims masquerading as facts. We can also identify historic academic research which asserts that men are more intelligent than women and some races are superior to others. In such cases, we would point out, for example, that the design of the research helped produce such conclusions: our values underpin our (a) assumptions about how to measure intelligence or other measures of superiority, and (b) interpretations of the results.

‘Wait a minute, though’ (you might say). “What about simple examples in which you can state facts with relative certainty – such as the statement ‘there are X number of words in this post’”. ‘Fair enough’, I’d say (you will have to speak with a philosopher to get a better debate about the meaning of your X words claim; I would simply say that it is trivially true). But this statement doesn’t take you far in policy terms. Instead, you’d want to say that there are too many or too few words, before you decided what to do about it.

In that sense, we have the most practical explanation of the unclear fact/ value distinction: the use of facts in policy is to underpin evaluations (assessments based on values). For example, we might point to the routine uses of data to argue that a public service is in ‘crisis’ or that there is a public health related epidemic (note: I wrote the post before COVID-19; it referred to crises of ‘non-communicable diseases’). We might argue that people only talk about ‘policy problems’ when they think we have a duty to solve them.

Or, facts and values often seem the hardest to separate when we evaluate the success and failure of policy solutions, since the measures used for evaluation are as political as any other part of the policy process. The gathering and presentation of facts is inherently a political exercise, and our use of facts to encourage a policy response is inseparable from our beliefs about how the world should work.

The post continues with an edited excerpt from p59 of Understanding Public Policy, which explores the implications of bounded rationality for contemporary accounts of ‘evidence-based policymaking’:

‘Modern science remains value-laden … even when so many people employ so many systematic methods to increase the replicability of research and reduce the reliance of evidence on individual scientists. The role of values is fundamental. Anyone engaging in research uses professional and personal values and beliefs to decide which research methods are the best; generate research questions, concepts and measures; evaluate the impact and policy relevance of the results; decide which issues are important problems; and assess the relative weight of ‘the evidence’ on policy effectiveness. We cannot simply focus on ‘what works’ to solve a problem without considering how we used our values to identify a problem in the first place. It is also impossible in practice to separate two choices: (1) how to gather the best evidence and (2) whether to centralize or localize policymaking. Most importantly, the assertion that ‘my knowledge claim is superior to yours’ symbolizes one of the most worrying exercises of power. We may decide to favour some forms of evidence over others, but the choice is value-laden and political rather than objective and innocuous’.

Implications for policy analysis

Many highly-intelligent and otherwise-sensible people seem to get very bothered with this kind of argument. For example, it gets in the way of (a) simplistic stories of heroic-objective-fact-based-scientists speaking truth to villainous-stupid-corrupt-emotional-politicians, (b) the ill-considered political slogan that you can’t argue with facts (or ‘science’), (c) the notion that some people draw on facts while others only follow their feelings, and (d) the idea that you can divide populations into super-facty versus post-truthy people.

A more sensible approach is to (1) recognise that all people combine cognition and emotion when assessing information, (2) treat politics and political systems as valuable and essential processes (rather than obstacles to technocratic policymaking), and (3) find ways to communicate evidence-informed analyses in that context. This article and the 750 Words series explore how to reflect on this kind of communication.

Most relevant posts in the 750 series

Linda Tuhiwai Smith (2012) Decolonizing Methodologies 

Carol Bacchi (2009) Analysing Policy: What’s the problem represented to be? 

Deborah Stone (2012) Policy Paradox

Who should be involved in the process of policy analysis?

William Riker (1986) The Art of Political Manipulation

Using Statistics and Explaining Risk (David Spiegelhalter and Gerd Gigerenzer)

Barry Hindess (1977) Philosophy and Methodology in the Social Sciences

See also

To think further about the relevance of this discussion, see this post on policy evaluation, this page on the use of evidence in policymaking, this book by Douglas, and this short commentary on ‘honest brokers’ by Jasanoff.

Filed under 750 word policy analysis, Academic innovation or navel gazing, agenda setting, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy

Policy Analysis in 750 Words: How to communicate effectively with policymakers

This post forms one part of the Policy Analysis in 750 words series overview. The title comes from this article by Cairney and Kwiatkowski on ‘psychology based policy studies’.

One aim of this series is to combine insights from policy research (1000, 500) and policy analysis texts. How might we combine insights to think about effective communication?

1. Insights from policy analysis texts

Most texts in this series relate communication to understanding your audience (or client) and the political context. Your audience has limited attention or time to consider problems. They may have good antennae for the political feasibility of any solution, but less knowledge of (or interest in) the technical details. In that context, your aim is to help them treat the problem as worthy of their energy (e.g. as urgent and important) and the solution as doable. Examples include:

  • Bardach: communicating with a client requires coherence, clarity, brevity, and minimal jargon.
  • Dunn: argumentation involves defining the size and urgency of a problem, assessing the claims made for each solution, synthesising information from many sources into a concise and coherent summary, and tailoring reports to your audience.
  • Smith: your audience makes a quick judgement on whether or not to read your analysis. Ask yourself questions including: how do I frame the problem to make it relevant, what should my audience learn, and how does each solution relate to what has been done before? Maximise interest by keeping communication concise, polite, and tailored to a policymaker’s values and interests.

2. Insights from studies of policymaker psychology

These insights emerged from the study of bounded rationality: policymakers do not have the time, resources, or cognitive ability to consider all information, possibilities, solutions, or consequences of their actions. They use two types of informational shortcut associated with concepts such as cognition and emotion, thinking ‘fast and slow’, ‘fast and frugal heuristics’, or, if you like more provocative terms:

  • ‘Rational’ shortcuts. Goal-oriented reasoning based on prioritizing trusted sources of information.
  • ‘Irrational’ shortcuts. Emotional thinking, or thought fuelled by gut feelings, deeply held beliefs, or habits.

We can use such distinctions to examine the role of evidence-informed communication, to reduce:

  • Uncertainty, or a lack of policy-relevant knowledge. Focus on generating ‘good’ evidence and concise communication as you collate and synthesise information.
  • Ambiguity, or the ability to entertain more than one interpretation of a policy problem. Focus on argumentation and framing as you try to maximise attention to (a) one way of defining a problem, and (b) your preferred solution.

Many policy theories describe the latter, in which actors: combine facts with emotional appeals, appeal to people who share their beliefs, tell stories to appeal to the biases of their audience, and exploit dominant ways of thinking or social stereotypes to generate attention and support. These possibilities produce ethical dilemmas for policy analysts.

3. Insights from studies of complex policymaking environments

None of this advice matters if it is untethered from reality.

Policy analysis texts focus on political reality to note that even a perfectly communicated solution is worthless if it is technically feasible but politically infeasible.

Policy process texts focus on policymaking reality: showing that ideal-types such as the policy cycle do not guide real-world action, and describing more accurate ways to guide policy analysts.

For example, they help us rethink the ‘know your audience’ mantra by:

  • identifying a tendency for most policy to be processed in policy communities or subsystems
  • showing that many policymaking ‘centres’ create the instruments that produce policy change.

Gone are the mythical days of a small number of analysts communicating to a single core executive (and of the heroic researcher changing the world by speaking truth to power). Instead, we have many analysts engaging with many centres, creating a need to not only (a) tailor arguments to different audiences, but also (b) develop wider analytical skills (such as to foster collaboration and the use of ‘design principles’).

How to communicate effectively with policymakers

In that context, we argue that effective communication requires analysts to:

1. Understand your audience and tailor your response (using insights from psychology)

2. Identify ‘windows of opportunity’ for influence (while noting that these windows are outside of anyone’s control)

3. Engage with real world policymaking rather than waiting for a ‘rational’ and orderly process to appear (using insights from policy studies).

See also:

Why don’t policymakers listen to your evidence?

3. How to combine principles on ‘good evidence’, ‘good governance’, and ‘good practice’

Entrepreneurial policy analysis

Filed under 750 word policy analysis, agenda setting, Evidence Based Policymaking (EBPM), public policy, Storytelling

The future of equity policy in education and health: will intersectoral action be the solution?

This post was first published by NORRAG. It summarises key points from two qualitative systematic reviews of peer-reviewed research on health equity policy (Cairney, St Denny, Mitchell) and education equity policy (Cairney, Kippin) for the European Research Council funded IMAJINE project. Our focus on comparing strategies within sectors supplements a wider focus on spatial justice (and cross-sectoral gender equity) strategies. It is published in conjunction with a GHC and NORRAG joint event “The Future of Equity Policy in Education and Health: Will Intersectoral Action be the Solution?” scheduled for 02 November at 17:00-18:30 CET/Geneva, which will discuss the opportunities and challenges to intersectoral research, practice and policy in education and health. Register for the event here.

Many governments, international organisations, practitioners, and researchers express high rhetorical support for more equitable policy outcomes. However, the meaning of equity is vague, the choice of policy solutions is highly contested, and approaches to equity policy vary markedly in different policy sectors. 

In that context, it is common for policymakers to back up this equity policy rhetoric with a commitment to intersectoral action and collaboration inside and outside of government, described with terms such as holistic, joined-up, collaborative, or systems approaches to governance. At the same time, it is common for research on policymaking to highlight the ever-present and systemic obstacles to the achievement of such admirable but vague aims.

Our reviews of equity policy and policymaking in two different sectors – health and education – highlight these obstacles in different ways.

In health, the global equity strategy Health in All Policies (HiAP) describes a coherent and convincing rationale for intersectoral action and collaboration inside and outside of government:

  1. Health is a human right to be fostered and protected by all governments.
  2. Most determinants of health inequalities are social – relating to income, wealth, education, housing, social, and physical environments – and we should focus less on individual choices and healthcare.
  3. Policies to address social determinants are not in the gift of health sectors, so we need intersectoral action to foster policy changes, such as in relation to tax and spending, education, and housing. 
  4. Effective collaborative strategies foster win-win solutions and the co-production of policy, and avoid the perception of ‘health imperialism’ or interference in the work of other professions. 

Yet, our review of HiAP articles suggests that very few projects deliver on these aims. In some cases, authors express frustration that people in other sectors do not take their health aims seriously enough. Or, those actors make sense of HiAP aims in different ways, turning a social determinants approach into projects focusing more on individual lifestyles. These experiences highlight governance dilemmas, in which the need to avoid ‘health imperialism’ leads to minimal challenges to the status quo, or HiAP advocates seek contradictory approaches such as to formalize HiAP strategies from the top-down (to ensure high-level commitment to reform) and encourage collaborative ‘bottom-up’ approaches (to let go of those reforms to foster creative and locally tailored solutions). 

In education, it is more difficult to identify a single coherent rationale for wider intersectoral action. Within ‘social justice’ approaches, there is some focus on the ‘out of school’ factors crucial to learning and attainment processes and outcomes, particularly when describing the marginalization and minoritization of social groups. There are also some studies of systems-based approaches to education. However, there is a more general tendency to focus on sector-specific activities and solutions, including reforms to education systems and school governance. Further, agenda setting organizations such as the OECD foster the sense that investment in early years education, well governed schools and education systems, and reallocations of resources to boost capacity in schools in deprived areas, can address problems of unequal attainment. 

In other words, in both sectors we can often find a convincing rationale for practitioners in one sector to seek cooperation with other sectors. However, no study describes an effective way to do it, or even progress towards new ways of thinking. Indeed, perhaps the most striking proxy indicator of meaningful intersectoral action comes from the bibliographies of these articles. It is clear from the reading lists of each sector that they are not reading each other’s work. The literature on intersectoral action comes with a narrow sectoral lens. 

In sum, intersectoral action and collaboration remains a functional requirement – and a nice idea – rather than a routine activity.

Filed under education policy, Evidence Based Policymaking (EBPM), Policy learning and transfer, Prevention policy, Public health

Education equity policy: ‘equity for all’ as a distraction from race, minoritization, and marginalization

By Paul Cairney and Sean Kippin

This post summarizes a key section of our review of education equity policymaking [see the full article for references to the studies summarized here].

One of the main themes is that many governments present a misleading image of their education policies. There are many variations on this theme, in which policymakers:

  1. Describe the energetic pursuit of equity, and use the right language, as a way to hide limited progress.
  2. Pursue ‘equity for all’ initiatives that ignore or downplay the specific importance of marginalization and minoritization, such as in relation to race and racism, immigration, ethnic minorities, and indigenous populations.
  3. Pursue narrow definitions of equity in terms of access to schools, at the expense of definitions that pay attention to ‘out of school’ factors and social justice.

Minoritization is a strong theme in US studies in particular. US experiences help us categorise multiple modes of marginalisation in relation to race and migration, driven by witting and unwitting action and explicit and implicit bias:

  • The social construction of students and parents. Examples include: framing white students as ‘gifted’ and more deserving of merit-based education (or victims of equity initiatives); framing non-white students as less intelligent, more in need of special needs or remedial classes, and having cultural or other learning ‘deficits’ that undermine them and disrupt white students; and, describing migrant parents as unable to participate until they learn English.
  • Maintaining or failing to challenge inequitable policies. Examples include higher funding for schools and colleges with higher white populations, and tracking (segregating students according to perceived ability), which benefit white students disproportionately.
  • Ignoring social determinants or ‘out of school’ factors.
  • Creating the illusion of equity with measures that exacerbate inequalities. For example, promoting school choice policies while knowing that the rules restrict access to sought-after schools.
  • Promoting initiatives to ignore race, including so-called ‘color blind’ or ‘equity for all’ initiatives.
  • Prioritizing initiatives at the expense of racial or socio-economic equity, such as measures to boost overall national performance at the expense of targeted measures.
  • Game playing and policy subversion, including school and college selection rules to restrict access and improve metrics.

The wider international – primarily Global North – experience suggests that minoritization and marginalization in relation to race, ethnicity, and migration are a routine impediment to equity strategies, albeit with some uncertainty about which policies would have the most impact.

Other country studies describe the poor treatment of citizens in relation to immigration status or ethnicity, often while presenting the image of a more equitable system. Until recently, Finland’s global reputation for education equity built on universalism and comprehensive schools has contrasted with its historic ‘othering’ of immigrant populations. Japan’s reputation for containing a homogeneous population, allowing its governments to present an image of classless egalitarianism and harmonious society, contrasts with its discrimination against foreign students. Multiple studies of Canadian provinces provide the strongest accounts of the symbolic and cynical use of multiculturalism for political gains and economic ends.

As in the US, many countries use ‘special needs’ categories to segregate immigrant and ethnic minority populations. Mainstreaming versus special needs debates have a clear racial and ethnic dimension when (1) some groups are more likely to be categorised as having learning disabilities or behavioural disorders, and (2) language and cultural barriers are listed as disabilities in many countries. Further, ‘commonwealth’ country studies identify the marginalisation of indigenous populations in ways comparable to the US marginalisation of students of colour.

Overall, these studies generate the sense that the frequently used language of education equity policy can signal a range of possibilities, from (1) high energy and sincere commitment to social justice, to (2) the cynical use of rhetoric and symbolism to protect historic inequalities.

Examples:

  • Turner, E.O., and Spain, A.K., (2020) ‘The Multiple Meanings of (In)Equity: Remaking School District Tracking Policy in an Era of Budget Cuts and Accountability’, Urban Education, 55, 5, 783-812 https://doi.org/10.1177%2F0042085916674060
  • Thorius, K.A. and Maxcy, B.D. (2015) ‘Critical Practice Analysis of Special Education Policy: An RTI Example’, Remedial and Special Education, 36, 2, 116-124 https://doi.org/10.1177%2F0741932514550812
  • Felix, E.R. and Trinidad, A. (2020) ‘The decentralization of race: tracing the dilution of racial equity in educational policy’, International Journal of Qualitative Studies in Education, 33, 4, 465-490 https://doi.org/10.1080/09518398.2019.1681538
  • Alexiadou, N. (2019) ‘Framing education policies and transitions of Roma students in Europe’, Comparative Education, 55, 3, https://doi.org/10.1080/03050068.2019.1619334

See also: https://paulcairney.wordpress.com/2017/09/09/policy-concepts-in-500-words-social-construction-and-policy-design/

Filed under education policy, Evidence Based Policymaking (EBPM), Policy learning and transfer, Prevention policy, public policy

Perspectives on academic impact and expert advice to policymakers

A blog post prompted by this fascinating post by Dr Christiane Gerblinger: Are experts complicit in making their advice easy for politicians to ignore?

There is a lot of advice out there for people seeking to make an ‘impact’ on policy with their research, but some kinds of advice must seem like they are a million miles apart.

For the sake of brevity, here are some exemplars of the kinds of discussion that you might find:

Advice from former policymakers

Here is what you could have done to influence my choices when I was in office. Almost none of you did it.

(for a nicer punchline see How can we demonstrate the public value of evidence-based policy making when government ministers declare that the people ‘have had enough of experts’?)

Advice from former civil servants

If you don’t know and follow the rules here, people will ignore your research. We despair when you just email your articles.

(for nicer advice see Creating and communicating social research for policymakers in government)

Advice from training courses on communication

Be concise and engaging.

Advice from training courses on policy impact

Find out where the action is, learn the rules, build up relationships and networks, become a trusted guide, be in the right place at the right time to exploit opportunities, give advice rather than sitting on the fence.

(see for example Knowledge management for policy impact: the case of the European Commission’s Joint Research Centre)

Advice from researchers with some experience of engagement

Do great research, make it relevant and readable, understand your policymaking context, decide how far you want to go to have an impact, be accessible, build relationships, be entrepreneurial.

(see Beware the well-intentioned advice of unusually successful academics)

Advice from academic-practitioner exchanges

Note the different practices and incentives that undermine routine and fruitful exchanges between academics, practitioners, and policymakers.

(see Theory and Practice: How to Communicate Policy Research beyond the Academy and ANZOG Wellington).

Advice extrapolated from policy studies

Your audience decides if your research will have impact; policymakers will necessarily ignore almost all of it; a window of opportunity may never arise; and, your best shot may be to tailor your research findings to policymakers whose beliefs you may think are abhorrent.

(discussed in how much impact can you expect from your analysis? and book The Politics of Policy Analysis)

Inference from my study of UK COVID-19 policy

Very few expert advisers had a continuous impact on policy, some had decent access, but almost all were peripheral players or outsiders by choice.

(see The UK government’s COVID-19 policy: what does ‘guided by the science’ mean in practice? and COVID-19 page)

Inference from Dr Gerblinger

Experts ensure that they are ignored when: ‘focussing extensively on one strand of enquiry while sidestepping the wider context; expunging complexity; and routinely raising the presence of inconclusiveness’.

What can we make of all of this advice?

One way to navigate all of this material is to make some basic distinctions between:

Sensible basic advice to early career researchers

Know your audience, and tailor your communication accordingly; see academic-practitioner exchange as two-way conversation rather than one-way knowledge transfer.

Take home message: here are some sensible ways to share experiences with people who might find your research useful.

Reflections from people with experience

Their experience will likely not reflect your own position or circumstances (but might be useful sometimes).

Take home message: I think this stuff worked for me, but I am not really sure, and I doubt you will have the same resources.

Reflections from studies of academic-practitioner exchange

It tends to find minimal evidence that people are (a) evaluating research engagement projects, and (b) finding tangible evidence of success (see Research engagement with government: insights from research on policy analysis and policymaking)

Take home message: there is a lot of ‘impact’ work going on, but no one is sure what it all adds up to.

Policy initiatives such as the UK Research Excellence Framework, which requires case studies of policy (or other) impact to arise directly from published research.

Take home message: I have my own thoughts, but see Rethinking policy ‘impact’: four models of research-policy relations

Reflections from people like me

Policy studies can be quite dispiriting. It often looks like I am saying that none of these activities will make much of a difference to policy or policymaking. Rather, I am saying to beware the temptation to turn studies that describe policymaking complexity (e.g. 500 Words) into an agent-centred story of heroically impactful researchers (see for example the Discussion section of this article on health equity policy).

Take home message: don’t confuse studies of policymaking with advice for policy participants.

In other words, identify what you are after before you start to process all of this advice. If you want to engage more with policymakers, you will find some sensible practical advice. If you want to be responsible for a fundamental change of public policy in your field, I doubt any of the available advice will help (unless you seek an explanation for failure).

Filed under Academic innovation or navel gazing, Evidence Based Policymaking (EBPM), public policy

The future of public health policymaking after COVID-19: lessons from Health in All Policies

Paul Cairney, Emily St Denny, Heather Mitchell 

This post summarises new research on the health equity strategy Health in All Policies. As our previous post suggests, it is common to hope that a major event will create a ‘window of opportunity’ for such strategies to flourish, but the current COVID-19 experience suggests otherwise. If so, what do HIAP studies tell us about how to respond, and do they offer any hope for future strategies? The full report is on Open Research Europe, accompanied by a brief interview on its contribution to the Horizon 2020 project – IMAJINE – on spatial justice.

COVID-19 should have prompted governments to treat health improvement as fundamental to public policy

Many had made strong rhetorical commitments to public health strategies focused on preventing a pandemic of non-communicable diseases (NCDs). To do so, they would address the ‘social determinants’ of health and health inequalities, defined by the WHO as ‘the unfair and avoidable differences in health status’ that are ‘shaped by the distribution of money, power and resources’ and ‘the conditions in which people are born, grow, live, work and age’.

COVID-19 reinforces the impact of the social determinants of health. Health inequalities result from factors such as income and social and environmental conditions, which influence people’s ability to protect and improve their health. COVID-19 had a visibly disproportionate impact on people with (a) underlying health conditions associated with NCDs, and (b) less ability to live and work safely.

Yet, the opposite happened. The COVID-19 response side-lined health improvement

Health departments postponed health improvement strategies and moved resources to health protection.

This experience shows that the evidence does not speak for itself

The evidence on social determinants is clear to public health specialists, but the idea of social determinants is less well known or convincing to policymakers.

It also challenges the idea that the logic of health improvement is irresistible

Health in All Policies (HIAP) is the main vehicle for health improvement policymaking, underpinned by: a commitment to health equity by addressing the social determinants of health; the recognition that the most useful health policies are not controlled by health departments; the need for collaboration across (and outside) government; and, the search for high level political commitment to health improvement.

Its logic is undeniable to HIAP advocates, but not policymakers. A government’s public commitment to HIAP does not lead inevitably to the roll-out of a fully-formed HIAP model. There is a major gap between the idea of HIAP and its implementation. It is difficult to generate HIAP momentum, and it can be lost at any time.

Instead, we need to generate more realistic lessons from health improvement and promotion policy

However, most HIAP research does not provide these lessons. Most HIAP research combines:

  1. functional logic (here is what we need)
  2. programme logic (here is what we think we need to do to achieve it), and
  3. hope.

Policy theory-informed empirical studies of policymaking could help produce a more realistic agenda, but very few HIAP studies seem to exploit their insights.

To that end, this review identifies lessons from studies of HIAP and policymaking

It summarises a systematic qualitative review of HIAP research. It includes 113 articles (2011-2020) that refer to policymaking theories or concepts while discussing HIAP.

We produced these conclusions from pre-COVID-19 studies of HIAP and policymaking, but our new policymaking context – and its ironic impact on HIAP – is impossible to ignore.

It suggests that HIAP advocates produced a 7-point playbook for the wrong game

The seven most common pieces of advice add up to a plausible but incomplete strategy:

  1. adopt a HIAP model and toolkit
  2. raise HIAP awareness and support in government
  3. seek win-win solutions with partners
  4. avoid the perception of ‘health imperialism’ when fostering intersectoral action
  5. find HIAP policy champions and entrepreneurs
  6. use HIAP to support the use of health impact assessments (HIAs)
  7. challenge the traditional cost-benefit analysis approach to valuing HIAP.

Yet, two emerging pieces of advice highlight the limits to the current playbook and the search for its replacement:

  1. treat HIAP as a continuous commitment to collaboration and health equity, not a uniform model; and,
  2. address the contradictions between HIAP aims.

As a result, most country studies report a major, unexpected, and disappointing gap between HIAP commitment and actual outcomes

These general findings are apparent in almost all relevant studies. They stand out in the ‘best case’ examples where: (a) there is high political commitment and strategic action (such as South Australia), or (b) political and economic conditions are conducive to HIAP (such as Nordic countries).

These studies show that the HIAP playbook has unanticipated results, such as when the win-win strategy leads to HIAP advocates giving ground but receiving little in return.

HIAP strategies to challenge the status quo are also overshadowed by more important factors, including (a) a far higher commitment to existing healthcare policies and the core business of government, and (b) state retrenchment. Additional studies of decentralised HIAP models find major gaps between (a) national strategic commitment (backed by national legislation) and (b) municipal government progress.

Some studies acknowledge the need to use policymaking research to produce new ways to encourage and evaluate HIAP success

Studies of South Australia situate HIAP in a complex policymaking system in which the link between policy activity and outcomes is not linear.  

Studies of Nordic HIAP show that a commitment to municipal responsibility and stakeholder collaboration rules out the adoption of a national uniform HIAP model.

However, most studies do not use policymaking research effectively or appropriately

Almost all HIAP studies only scratch the surface of policymaking research (while some try to synthesise its insights, but at the cost of clarity).

Most HIAP studies use policy theories to:

  1. produce practical advice (such as to learn from ‘policy entrepreneurs’), or
  2. supplement their programme logic (to describe what they think causes policy change and better health outcomes).

Most policy theories were not designed for this purpose.

Policymaking research helps primarily to explain the HIAP ‘implementation gap’

Its main lesson is that policy outcomes are beyond the control of policymakers and HIAP advocates. This explanation does not show how to close implementation gaps.

Its practical lessons come from critical reflection on dilemmas and politics, not the reinvention of a playbook

It prompts advocates to:

  • Treat HIAP as a political project, not a technical exercise or puzzle to be solved.
  • Re-examine the likely impact of a focus on intersectoral action and collaboration, to recognise the impact of imbalances of power and the logic of policy specialisation.
  • Revisit the meaning-in-practice of the vague aims that they take for granted without explaining, such as co-production, policy learning, and organisational learning.
  • Engage with key trade-offs, such as between a desire for uniform outcomes (to produce health equity) but acceptance of major variations in HIAP policy and policymaking.
  • Avoid reinventing phrases or strategies when facing obstacles to health improvement.

We describe these points in more detail here:

Our Open Research Europe article (peer reviewed) The future of public health policymaking… (europa.eu)

Paul summarises the key points as part of a HIAP panel: Health in All Policies in times of COVID-19

ORE blog on the wider context of this work: forthcoming

Filed under agenda setting, COVID-19, Evidence Based Policymaking (EBPM), Public health, public policy

What have we learned so far from the UK government’s COVID-19 policy?

This post first appeared on LSE British Politics and Policy (27.11.20) and is based on this article in British Politics.

Paul Cairney assesses government policy in the first half of 2020. He examines the intense criticism of its response so far and encourages more systematic assessments grounded in policy research.

In March 2020, COVID-19 prompted policy change in the UK at a speed and scale only seen during wartime. According to the UK government, policy was informed heavily by science advice. Prime Minister Boris Johnson argued that, ‘At all stages, we have been guided by the science, and we will do the right thing at the right time’. Further, key scientific advisers such as Sir Patrick Vallance emphasised the need to gather evidence continuously to model the epidemic and identify key points at which to intervene, to reduce the size of the peak of population illness initially, then manage the spread of the virus over the longer term.

Both ministers and advisors emphasised the need for individual behavioural change, supplemented by government action, in a liberal democracy in which direct imposition is unusual and unsustainable. However, for its critics, the government experience has quickly become an exemplar of policy failure.

Initial criticisms include that ministers did not take COVID-19 seriously enough in relation to existing evidence, when its devastating effect was apparent in China in January and Italy from February; act as quickly as other countries to test for infection to limit its spread; or introduce swift-enough measures to close schools, businesses, and major social events. Subsequent criticisms highlight problems in securing personal protective equipment (PPE), testing capacity, and an effective test-trace-and-isolate system. Some suggest that the UK government was responding to the ‘wrong pandemic’, assuming that COVID-19 could be treated like influenza. Others blame ministers for not pursuing an elimination strategy to minimise its spread until a vaccine could be developed. Some criticise their over-reliance on models which underestimated the R (rate of transmission) and ‘doubling time’ of cases and contributed to a 2-week delay of lockdown. Many describe these problems and delays as the contributors to the UK’s internationally high number of excess deaths.

How can we hold ministers to account in a meaningful way?

I argue that these debates are often fruitless and too narrow because they do not involve systematic policy analysis, take into account what policymakers can actually do, or widen debate to consider whose lives matter to policymakers. Drawing on three policy analysis perspectives, I explore the questions that we should ask to hold ministers to account in a way that encourages meaningful learning from early experience.

These questions include:

Was the government’s definition of the problem appropriate?

Much analysis of UK government competence relates to specific deficiencies in preparation (such as shortages in PPE), immediate action (such as to discharge people from hospitals to care homes without testing them for COVID-19), and implementation (such as an imperfect test-trace-and-isolate system). The broader issue relates to its focus on intervening in late March to protect healthcare capacity during a peak of infection, rather than taking a quicker and more precautionary approach. This judgment relates largely to its definition of the policy problem, which underpins every subsequent policy intervention.

Did the government select the right policy mix at the right time? Who benefits most from its choices?

Most debates focus on the ‘lock down or not?’ question without exploring fully the unequal impact of any action. The government initially relied on exhortation, based on voluntarism and an appeal to social responsibility. Initial policy inaction had unequal consequences on social groups, including people with underlying health conditions, black and ethnic minority populations more susceptible to mortality at work or discrimination by public services, care home residents, disabled people unable to receive services, non-UK citizens obliged to pay more to live and work while less able to access public funds, and populations (such as prisoners and drug users) that receive minimal public sympathy. Then, in March, its ‘stay at home’ requirement initiated a major new policy and different unequal impacts in relation to the income, employment, and wellbeing of different groups. These inequalities are lost in more general discussions of impacts on the whole population.

Did the UK government make the right choices on the trade-offs between values, and what impacts could the government have reasonably predicted?

Initially, the most high-profile value judgment related to freedom from state coercion to reduce infection versus freedom from the harm of infection caused by others. Then, values underpinned choices on the equitable distribution of measures to mitigate the economic and wellbeing consequences of lockdown. A tendency for the UK government to project centralised and ‘guided by the science’ policymaking has undermined public deliberation on these trade-offs between policies. The latter will be crucial to ongoing debates on the trade-offs associated with national and regional lockdowns.

Did the UK government combine good policy with good policymaking?

A problem like COVID-19 requires trial-and-error policymaking on a scale that seems incomparable to previous experiences. It requires further reflection on how to foster transparent and adaptive policymaking and widespread public ownership for unprecedented policy measures, in a political system characterised by (a) accountability focused incorrectly on strong central government control and (b) adversarial politics that is not conducive to consensus seeking and cooperation.

These additional perspectives and questions show that too-narrow questions – such as whether the UK government was ‘following the science’ – do not help us understand the longer term development and wider consequences of UK COVID-19 policy. Indeed, such a narrow focus on science marginalises wider discussions of values and the populations that are most disadvantaged by government policy.

_____________________

Filed under COVID-19, Evidence Based Policymaking (EBPM), POLU9UK, Public health, public policy, UK politics and policy

Policy learning to reduce inequalities: a practical framework

This post first appeared on LSE BPP on 16.11.2020 and it describes the authors’ published work in Territory, Politics, Governance (for IMAJINE)

While policymakers often want to learn how other governments have responded to similar problems, policy learning is characterized by contestation. Policymakers compete to define the problem, set the parameters for learning, and determine which governments should take the lead. Emily St.Denny, Paul Cairney, and Sean Kippin discuss a framework that would encourage policy learning in multilevel systems.

Governments face similar policy problems and there is great potential for mutual learning and policy transfer. Yet, most policy research highlights the political obstacles to learning and the weak link between research and transfer. One solution may be to combine academic insights from policy research with practical insights from people with experience of learning in political environments. In that context, our role is to work with policy actors to produce pragmatic strategies to encourage realistic research-informed learning.

Pragmatic policy learning

Producing concepts, research questions, and methods that are interesting to both academics and practitioners is challenging. It requires balancing different approaches to gathering and considering ‘evidence’ when seeking to solve a policy problem. Practitioners need to gather evidence quickly, focusing on ‘what works’ or positive experiences from a small number of relevant countries. Policy scholars may seek more comprehensive research and warn against simple solutions. Further, they may do so without offering a feasible alternative to their audience.

To bridge these differences and facilitate policy learning, we encourage a pragmatic approach to policy learning that requires:

  • Seeing policy learning through the eyes of participants, to understand how they define and seek to solve this problem;
  • Incorporating insights from policy research to construct a feasible approach;
  • Reflecting on this experience to inform research.

Our aim is not ‘evidence-based policymaking’. Rather, it is to incorporate the fact that researchers and evidence form only one small component of a policymaking system characterized by complexity. Additionally, policy actors enjoy less control over these systems than we might like to admit. Learning is therefore best understood as a contested process in which actors combine evidence and beliefs to define policy problems, identify technically and politically feasible solutions, and negotiate who should be responsible for their adoption and delivery in multilevel policymaking systems. Taking seriously the contested, context-specific, and political nature of policymaking is crucial for producing effective advice from which to learn.

Policy learning to reduce inequalities

We apply these insights as part of the EU Horizon 2020 project Integrative Mechanisms for Addressing Spatial Justice and Territorial Inequalities in Europe (IMAJINE). Its overall aim is to research how national and territorial governments across the European Union pursue ‘spatial justice’ and try to reduce inequalities.

Our role is to facilitate policy learning and consider the transfer of policy solutions from successful experiences. Yet, we are confronted by the usual challenges. They include the need to: identify appropriate exemplars from where to draw lessons; help policy practitioners control for differences in context; and translate between academic and practitioner communities.

Additionally, we work on an issue – inequality – which is notoriously ambiguous and contested. It involves not only scientific information about the lives and experiences of people, but also political disagreement about the legitimate role of the state in intervening in people’s lives or redistributing resources. Developing a policy learning framework that is able to generate practically useful insights for policy actors is difficult but key to ensuring policy effectiveness and coherence.

Drawing on work we carried out for the Scottish Government’s National Advisory Council on Women and Girls on approaches to reducing inequalities in relation to gender mainstreaming, we apply the IMAJINE framework to support policy learning. The IMAJINE framework guides such academic–practitioner analysis in four steps:

Step 1: Define the nature of policy learning in political systems.

Preparing for learning requires taking into account the interaction between:

  • Politics, in which actors contest the nature of problems and the feasibility of solutions;
  • Bounded rationality, which requires actors to use organizational and cognitive shortcuts to gather and use evidence;
  • ‘Multi-centric’ policymaking systems, which limit a single central government’s control over choices and outcomes.

These dynamics play out in different ways in each territory, which means that the importers and exporters of lessons are operating in different contexts and addressing inequalities in different ways. Therefore, we must ask how the importers and exporters of lessons: define the problem, decide what policies are feasible, establish which level of government should be responsible for policy, and identify criteria to evaluate policy success.

Step 2: Map policymaking responsibilities for the selection of policy instruments.

The Council of Europe defines gender mainstreaming as ‘the (re)organisation, improvement, development and evaluation of policy processes, so that a gender equality perspective is incorporated in all policies at all levels and at all stages’.

Such definitions help explain why mainstreaming approaches often appear to be incoherent. To map the sheer weight of possible measures, and the spread of responsibility across many levels of government (such as local, Scottish, UK and EU), is to identify a potentially overwhelming scale of policymaking ambition. Further, governments tend to address this potential by breaking policymaking into manageable sectors. Each sector has its own rules and logics, producing coherent policymaking in each ‘silo’ but a sense of incoherence overall, particularly if the overarching aim is a low priority in government. Mapping these dynamics and responsibilities is necessary to ensure lessons learned can be effectively applied in similarly complex domestic systems.

Step 3: Learn from experience.

Policy actors want to draw lessons from the most relevant exemplars. Often, they will have implicit or explicit ideas concerning which countries they would like to learn more from. Negotiating which cases to explore, so that the selection takes into account both policy actors’ interests and the need to generate appropriate and useful lessons, is vital.

In the case of mainstreaming, we focused on three exemplar approaches, selected by members of our audience according to perceived levels of ambition: maximal (Sweden), medial (Canada) and minimal (the UK, which controls aspects of Scottish policy). These cases were also justified with reference to the academic literature which often uses these countries as exemplars of different approaches to policy design and implementation.

Step 4: Deliberate and reflect.

Work directly with policy participants to reflect on the implications for policy in their context. Research has many important insights on the challenges to and limitations of policy learning in complex systems. In particular, it suggests that learning cannot be comprehensive and does not lead to the importation of a well-defined package of measures. Bringing these sorts of insights to bear on policy actors’ practical discussions of how lessons can be drawn and applied from elsewhere is necessary, though ultimately insufficient. In our experience so far, step 4 is the biggest obstacle to our impact.

___________________

Filed under agenda setting, Evidence Based Policymaking (EBPM), feminism, IMAJINE, Policy learning and transfer, public policy

The UK government’s lack of control of public policy

This post first appeared as Who controls public policy? on the UK in a Changing Europe website. There is also a 1-minute video, but you would need to be a completist to want to watch it.

Most coverage of British politics focuses on the powers of a small group of people at the heart of government. In contrast, my research on public policy highlights two major limits to those powers, related to the enormous number of problems that policymakers face, and to the sheer size of the government machine.

First, elected policymakers simply do not have the ability to properly understand, let alone solve, the many complex policy problems they face. They deal with this limitation by paying unusually high attention to a small number of problems and effectively ignoring the rest.

Second, policymakers rely on a huge government machine and network of organisations (containing over 5 million public employees) essential to policy delivery, and oversee a statute book which they could not possibly understand.

In other words, they have limited knowledge and even less control of the state, and have to make choices without knowing how they relate to existing policies (or even what happens next).

These limits to ministerial powers should prompt us to think differently about how to hold them to account. If they only have the ability to influence a small proportion of government business, should we blame them for everything that happens in their name?

My approach is to apply these general insights to specific problems in British politics. Three examples help to illustrate their ability to inform British politics in new ways.

First, policymaking can never be ‘evidence based’. Some scientists cling to the idea that the ‘best’ evidence should always catch the attention of policymakers, and assume that ‘speaking truth to power’ helps evidence win the day.

As such, researchers in fields like public health and climate change wonder why policymakers seem to ignore their evidence.

The truth is that policymakers only have the capacity to consider a tiny proportion of all available information. Therefore, they must find efficient ways to ignore almost all evidence to make timely choices.

They do so by setting goals and identifying trusted sources of evidence, but also using their gut instinct and beliefs to rule out most evidence as irrelevant to their aims.

Second, the UK government cannot ‘take back control’ of policy following Brexit simply because it was not in control of policy before the UK joined. The idea of control is built on the false image of a powerful centre of government led by a small number of elected policymakers.

This way of thinking assumes that sharing power is simply a choice. However, sharing power and responsibility is borne of necessity because the British state is too large to be manageable.

Governments manage this complexity by breaking down their responsibilities into many government departments. Still, ministers can only pay attention to a tiny proportion of issues managed by each department. They delegate most of their responsibilities to civil servants, agencies, and other parts of the public sector.

In turn, those organisations rely on interest groups and experts to provide information and advice.

As a result, most public policy is conducted through small and specialist ‘policy communities’ that operate out of the public spotlight and with minimal elected policymaker involvement.

The logical conclusion is that senior elected politicians are less important than people think. While we like to think of ministers sitting in Whitehall and taking crucial decisions, most of these decisions are taken in their name but without their intervention.

Third, the current pandemic underlines all too clearly the limits of government power. Of course people are pondering the degree to which we can blame UK government ministers for poor choices in relation to Covid-19, or learn from their mistakes to inform better policy.

Many focus on the extent to which ministers were ‘guided by the science’. However, at the onset of a new crisis, government scientists face the same uncertainty about the nature of the policy problem, and ministers are not really able to tell if a Covid-19 policy would work as intended or receive enough public support.

Some examples from the UK experience expose the limited extent to which policymakers can understand, far less control, an emerging crisis.

Prior to the lockdown, neither scientists nor ministers knew how many people were infected, nor when levels of infection would peak.

They had limited capacity to test. They did not know how often (and how well) people wash their hands. They did not expect people to accept and follow strict lockdown rules so readily, and did not know which combination of measures would have the biggest impact.

When supporting businesses and workers during ‘furlough’, they did not know who would be affected and therefore how much the scheme would cost.

In short, while Covid-19 has prompted policy change and state intervention on a scale not witnessed outside of wartime, the government has never really known what impact its measures would have.

Overall, the take-home message is that the UK narrative of strong central government control is damaging to political debate and undermines policy learning. It suggests that every poor outcome is simply the consequence of bad choices by powerful leaders. If so, we are unable to distinguish between the limited competence of some leaders and the limited powers of them all.

Filed under COVID-19, Evidence Based Policymaking (EBPM), POLU9UK, public policy, UK politics and policy

The UK Government’s COVID-19 policy: assessing evidence-informed policy analysis in real time

Abstract (from a 25,000-word paper)

On the 23rd March 2020, the UK Government’s Prime Minister Boris Johnson declared: ‘From this evening I must give the British people a very simple instruction – you must stay at home’. He announced measures to help limit the impact of COVID-19, including new regulations on behaviour, police powers to support public health, budgetary measures to support businesses and workers during their economic inactivity, the almost-complete closure of schools, and the major expansion of healthcare capacity via investment in technology, discharge to care homes, and a consolidation of national, private, and new health service capacity (note that many of these measures relate only to England, with devolved governments responsible for public health in Northern Ireland, Scotland, and Wales). Overall, the coronavirus prompted almost-unprecedented policy change, towards state intervention, at a speed and magnitude that seemed unimaginable before 2020.

Yet, many have criticised the UK government’s response as slow and insufficient. Criticisms include that UK ministers and their advisors did not:

  • take the coronavirus seriously enough in relation to existing evidence (when its devastating effect was increasingly apparent in China in January and Italy from February)
  • act as quickly as some countries to test for infection to limit its spread, and/ or introduce swift measures to close schools, businesses, and major social events, and regulate social behaviour (such as in Taiwan, South Korea, or New Zealand)
  • introduce strict-enough measures to stop people coming into contact with each other at events and in public transport.

They blame UK ministers for pursuing a ‘mitigation’ strategy, allegedly based on reducing the rate of infection and impact of COVID-19 until the population developed ‘herd immunity’, rather than an elimination strategy to minimise its spread until a vaccine or antiviral could be developed. Or, they criticise the over-reliance on specific models, which underestimated the R (rate of transmission) and ‘doubling time’ of cases and contributed to a 2-week delay of lockdown.

Many cite this delay, compounded by insufficient personal protective equipment (PPE) in hospitals and fatal errors in the treatment of care homes, as the biggest contributor to the UK’s unusually high number of excess deaths (Campbell et al, 2020; Burn-Murdoch and Giles, 2020; Scally et al, 2020; Mason, 2020; Ball, 2020; compare with Freedman, 2020a; 2020b and Snowden, 2020).

In contrast, scientific advisers to UK ministers have emphasised the need to gather evidence continuously to model the epidemic and identify key points at which to intervene, to reduce the size of the peak of population illness initially, then manage the spread of the virus over the longer term (e.g. Vallance). Throughout, they emphasised the need for individual behavioural change (hand washing and social distancing), supplemented by government action, in a liberal democracy in which direct imposition is unusual and, according to UK ministers, unsustainable in the long term.

We can relate these debates to the general limits to policymaking identified in policy studies (summarised in Cairney, 2016; 2020a; Cairney et al, 2019) and underpinning the ‘governance thesis’ that dominates the study of British policymaking (Kerr and Kettell, 2006: 11; Jordan and Cairney, 2013: 234).

First, policymakers must ignore almost all evidence. Individuals combine cognition and emotion to help them make choices efficiently, and governments have equivalent rules to prioritise only some information.

Second, policymakers have a limited understanding, and even less control, of their policymaking environments. No single centre of government has the power to control policy outcomes. Rather, there are many policymakers and influencers spread across a political system, and most choices in government are made in subsystems, with their own rules and networks, over which ministers have limited knowledge and influence. Further, the social and economic context, and events such as a pandemic, often appear to be largely out of their control.

Third, even though they lack full knowledge and control, governments must still make choices. Therefore, their choices are necessarily flawed.

Fourth, their choices produce unequal impacts on different social groups.

Overall, the idea that policy is controlled by a small number of UK government ministers, with the power to solve major policy problems, is still popular in media and public debate, but dismissed in policy research.

Hold the UK government to account via systematic analysis, not trials by social media

To make more sense of current developments in the UK, we need to understand how UK policymakers address these limitations in practice, and widen the scope of debate to consider the impact of policy on inequalities.

A policy theory-informed and real-time account helps us avoid after-the-fact wisdom and bad-faith trials by social media.

UK government action has been deficient in important ways, but we need careful and systematic analysis to help us separate (a) well-informed criticism to foster policy learning and hold ministers to account, from (b) a naïve and partisan rush to judgement that undermines learning and helps let ministers off the hook.

To that end, I combine insights from policy analysis guides, policy theories, and critical policy analysis to analyse the UK government’s initial coronavirus policy. I use the lens of 5-step policy analysis models to identify what analysts and policymakers need to do, the limits to their ability to do it, and the distributional consequences of their choices.

I focus on sources in the public record, including oral evidence to the House of Commons Health and Social Care committee, and the minutes and meeting papers of the UK Government’s Scientific Advisory Group for Emergencies (SAGE) (and NERVTAG), transcripts of TV press conferences and radio interviews, and reports by professional bodies and think tanks.

The short version is here. The long version – containing a huge list of sources and ongoing debates – is here. Both are on the COVID-19 page.

Filed under COVID-19, Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy

COVID-19 policy in the UK: Table 2: Summary of SAGE minutes, January-June 2020

This post is part 8 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

The table is too big to reproduce here, so you have the following options:

Table 2 in PDF

Table 2 as a word document

Or, if you prefer not to read the posts individually:

The whole thing in PDF

The whole thing as a Word document

The full list of SAGE posts:

COVID-19 policy in the UK: yes, the UK Government did ‘follow the science’

Did the UK Government ‘follow the science’? Reflections on SAGE meetings

The role of SAGE and science advice to government

The overall narrative underpinning SAGE advice and UK government policy

SAGE meetings from January-June 2020

SAGE Theme 1. The language of intervention

SAGE Theme 2. Limited capacity for testing, forecasting, and challenging assumptions

SAGE Theme 3. Communicating to the public

COVID-19 policy in the UK: Table 2: Summary of SAGE minutes, January-June 2020

Filed under COVID-19, Evidence Based Policymaking (EBPM), Prevention policy, Public health, UK politics and policy

COVID-19 policy in the UK: SAGE Theme 3. Communicating to the public

This post is part 7 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

SAGE’s emphasis on uncertainty and limited knowledge extended to the evidence on how to influence behaviour via communication:

‘there is limited evidence on the best phrasing of messages, the barriers and stressors that people will encounter when trying to follow guidance, the attitudes of the public to the interventions, or the best strategies to promote adherence in the long-term’ (SPI-B Meeting paper 3.3.20: 2)

Early on, SAGE minutes continuously described the potential problems of communicating risk and of relying on communication to encourage behavioural change (in other words, based on low expectations for the types of quarantine measures associated with China and South Korea).

  • It sought ‘behavioural science input on public communication’ and ‘agreed on the importance of behavioural science informing policy – and on the importance of public trust in HMG’s approach’ (28.1.20: 2).
  • It worried about how the public might interpret ‘case fatality rate’, given the different ways to describe and interpret frequencies and risks (4.2.20: 3).
  • It stated that ‘Epidemiological terms need to be made clearer in the planning documents to avoid ambiguity’ (11.2.20: 3).
  • Its extensive discussion of behavioural science (13.2.20: 2-3) includes: there will be public scepticism and inaction until first deaths are confirmed; the main aim is to motivate people by relating behavioural change to their lives; messaging should stress ‘personal responsibility and responsibility to others’ and be clear on which measures are effective; and ‘National messaging should be clear and definitive: if such messaging is presented as both precautionary and sufficient, it will reduce the likelihood of the public adopting further unnecessary or contradictory behaviours’ (13.2.20: 2-3)
  • Banning large public events could signal the need to change behaviour more generally, but evidence for its likely impact is unavailable (SPI-M-O, 11.2.20: 1).

Generally speaking, the assumption underpinning communication is that behavioural change will come largely from communication (encouragement and exhortation) rather than imposition. Hence, for example, the SPI-B (25.2.20: 2) recommendation on limiting the ‘risk of public disorder’:

  • ‘Provide clear and transparent reasons for different strategies: The public need to understand the purpose of the Government’s policy, why the UK approach differs to other countries and how resources are being allocated. SPI-B agreed that government should prioritise messaging that explains clearly why certain actions are being taken, ahead of messaging designed solely for reassuring the public.
  • This should also set clear expectations on how the response will develop, e.g. ensuring the public understands what they can expect as the outbreak evolves and what will happen when large numbers of people present at hospitals. The use of early messaging will help, as a) individuals are likely to be more receptive to messages before an issue becomes controversial and b) it will promote a sense that the Government is following a plan.
  • Promote a sense of collectivism: All messaging should reinforce a sense of community, that “we are all in this together.” This will avoid increasing tensions between different groups (including between responding agencies and the public); promote social norms around behaviours; and lead to self-policing within communities around important behaviours’.

The underpinning assumption is that the government should treat people as ‘rational actors’: explain risk and how to reduce it, support existing measures by the public to socially distance, be transparent, explain if the UK is doing things differently to other countries, and recognise that these measures are easier for some people than for others (13.3.20: 3).

In that context, SPI-B Meeting paper 22.3.20 describes how to enable social distancing with reference to the ‘behaviour change wheel’ (Michie et al, 2011): ‘There are nine broad ways of achieving behaviour change: Education, Persuasion, Incentivisation, Coercion, Enablement, Training, Restriction, Environmental restructuring, and Modelling’ and many could reinforce each other (22.3.20: 1). The paper comments on current policy in relation to 5 elements:

  1. Education – clarify guidance (generally, and for shielding), e.g. through interactive website, tailored to many audiences
  2. Persuasion – increase perceived threat among ‘those who are complacent, using hard-hitting emotional messaging’ while providing clarity and positive messaging (tailored to your audience’s motivation) on what action to take (22.3.20: 1-2).
  3. Incentivisation – emphasise social approval as a reward for behaviour change
  4. Coercion – ‘Consideration should be given to enacting legislation, with community involvement, to compel key social distancing measures’, combined with encouraging ‘social disapproval but with a strong caveat around unwanted negative consequences’ (22.3.20: 2).
  5. Enablement – make sure that people feeling the unequal impact of lockdown have alternative access to social contact, food, and other resources (particularly vulnerable people shielding, aided by community support).

Apparently, section 3 of SPI-B’s meeting paper (1.4.20b: 2) had been redacted because it was critical of a UK Government ‘Framework’ with 4 new proposals for greater compliance: ‘17) increasing the financial penalties imposed; 18) introducing self-validation for movements; 19) reducing exercise and/or shopping; 20) reducing non-home working’. On 17, it suggests that measures with a weak evidence base (e.g. fining someone for exercising more than 1km from their home) could contribute to lower support for policy overall. On 17-19, it suggests that most people are already complying, so there is no evidence to support more targeted measures. It is more positive about 20, since non-home working could be reduced further (especially if financially supported). Generally, it suggests that ministers should ‘also consider the role of rewards and facilitations in improving adherence’ and use organisational changes, such as staggered work hours and new use of space, rather than simply focusing on individuals.

Communication after the lockdown

SAGE suggests that communication problems are more complicated during the release of lockdown measures (in other words, without the ability to present the relatively-low-ambiguity message ‘stay at home’). Examples (mostly from SPI-B and its contributors) include:

  • Address potential confusion, causing false concern or reassurance, regarding antigen and antibody tests (meeting papers 1.4.20c: 3; 13.4.20b: 1-4; 22.4.20b: 1-5; 29.4.20a: 1-4)
  • When notifying people about the need to self-isolate, address the trade-offs between symptom versus positive test based notifications (meeting paper 29.4.20a: 1-4; 5.5.20: 1-8)
  • If you are worried about public ‘disorder’, focus on clear, effective, tailored communication, using local influencers, appealing to sympathetic groups (like NHS staff), and co-producing messages between the police and public (in other words, police by consent, and do not exacerbate grievances) (meeting papers 19.4.20: 1-4; 21.4.20: 1-3; 4.5.20: 1-11)
  • Be wary of lockdowns specific to very small areas, which undermine the ‘all in it together’ message (REDACTED and Clifford Stott, no date: 1). If you must do it, clarify precisely who is affected and what they should do, support the people most vulnerable and impacted (e.g. financially), and redesign physical spaces (meeting paper SPI-B 22.4.20a)
  • When reopening schools (fully or partly), communication is key to the inevitably complex and unpredictable behavioural consequences (so, for example, work with parents, teachers, and other stakeholders to co-produce clear guidance) (29.4.20d: 1-10)
  • On the introduction of Alert Levels, as part of the Joint Biosecurity Centre work on local outbreaks (described in meeting paper 20.5.20a: 1-9): build public trust and understanding regarding JBC alert levels, and relate them very clearly to expected behaviour (SAGE 28.5.20). Each Alert Level should relate clearly to a required response in that area, and ‘public communications on Alert Levels needs many trusted messengers giving the same advice, many times’ (meeting paper 27.5.20b: 3).
  • On transmission between social networks, ‘Communicate two key principles: 1. People whose work involves large numbers of contacts with different people should avoid close, prolonged, indoor contact with anyone as far as possible … 2. People with different workplace networks should avoid meeting or sharing the same spaces’ (meeting paper 27.5.20b: 1).
  • On outbreaks in ‘forgotten institutional settings’ (including Prisons, Homeless Hostels, Migrant dormitories, and Long stay mental health): address the unusually low levels of trust in (or awareness of) government messaging among so-called ‘hard to reach groups’ (meeting paper 28.5.20a: 1).

See also:

SPI-M’s list of how to describe probabilities (Meeting paper 17.3.20b: 4). This is more important than it looks, since there is a potentially major gap between the public’s and the advisory groups’ understanding of words like ‘probably’ (compare with the CIA’s Words of Estimative Probability).

SAGE language of probability 17.3.20b p4

The full list of SAGE posts:

COVID-19 policy in the UK: yes, the UK Government did ‘follow the science’

Did the UK Government ‘follow the science’? Reflections on SAGE meetings

The role of SAGE and science advice to government

The overall narrative underpinning SAGE advice and UK government policy

SAGE meetings from January-June 2020

SAGE Theme 1. The language of intervention

SAGE Theme 2. Limited capacity for testing, forecasting, and challenging assumptions

SAGE Theme 3. Communicating to the public

COVID-19 policy in the UK: Table 2: Summary of SAGE minutes, January-June 2020

Filed under COVID-19, Evidence Based Policymaking (EBPM), Prevention policy, Public health, UK politics and policy

COVID-19 policy in the UK: SAGE Theme 2. Limited capacity for testing, forecasting, and challenging assumptions

This post is part 6 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

Limited testing

Oral evidence to the Health and Social Care committee highlights the now-well-documented limits to UK testing capacity and PPE stocks (see also NERVTAG on PPE). SAGE does not discuss testing capacity much in the beginning, although on 10.3.20 it lists as an action point: ‘Plans for how PHE can move from 1,000 serology tests to 10,000 tests per week’, and by 16.3.20 it describes the urgent need to scale up testing – perhaps with commercial involvement and testing at home (if accuracy can be ensured) – and to secure sufficient data to track the epidemic well enough to inform operational decisions. From April, it highlights the need for a ‘national testing strategy’ to cover NHS patients, staff, an epidemiological survey, and the community (2.4.20), and the need for far more testing is a feature of almost every meeting from then on.

Limited contact tracing

Initially, SAGE describes a quite-low contact tracing capacity: ‘Currently, PHE can cope with five new cases a week (requiring isolation of 800 contacts). Modelling suggests this capacity could be increased to 50 new cases a week (8,000 contact isolations)’ (18.2.20: 1).

Previously, it had noted that the point would come when transmission was too high to make contact tracing worthwhile, particularly since many (e.g. asymptomatic) cases may already have been missed (20.2.20: 2) and the necessary testing capacity was not in place (16.4.20): ‘PHE to work with SPI-M to develop criteria for when contact tracing is no longer worthwhile. This should include consideration of any limiting factors on testing and alternative methods of identifying epidemic evolution and characteristics’ (11.2.20: 3; see also Testing and contact tracing).

It returned to the feasibility question after the lockdown, with:

  • SPI-M (meeting paper 4.20d: 1-3) estimating that effective contact tracing (80% of non-household cases, in 2 days) could reduce the R by 30-60% if you could quarantine many people, multiple times;
  • SPI-B (meeting paper 4.20a: 1-3) advising on the need to clarify to people how it would work and what they should do, redesign physical spaces, and conduct new qualitative research and stakeholder engagement to ‘help us to understand more clearly the specific drivers, enablers and barriers for new behavioural recommendations’ to address an unprecedented problem in the UK (22.4.20a: 2). SPI-B also describes the trade-offs between app-informed systems (notification based on symptoms would suit people seeking to be precautionary, but could reduce compliance among people who believe the risk to be low) (see meeting papers 29.4.20: 3 and 5.5.20: 1-8)
  • SAGE noting ongoing work on clusters and super-spreading events, which necessitate cluster-based contact tracing (11.6.20: 3)
  • A more general message that contact tracing will be overwhelmed if lockdown measures are released too soon, raising R well above 1 and causing incidence to rise too quickly (e.g. 14.5.20)
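
SPI-M’s 30-60% range can be turned into a rough feasibility bound. If tracing cuts R by a fraction x, and we assume (my illustrative simplification, not SPI-M’s actual model) that this scales transmission uniformly, then tracing alone holds the effective R at or below 1 only when the pre-tracing R is at most 1/(1 − x):

```python
def max_controllable_R(reduction):
    # Highest pre-tracing R for which R_eff = R * (1 - reduction) <= 1,
    # assuming tracing scales transmission uniformly (an illustrative
    # simplification, not SPI-M's model).
    return 1 / (1 - reduction)

# SPI-M's 30-60% range implies tracing alone can only hold the line
# if R is already at roughly 1.4-2.5 or below -- one way to read the
# warning that releasing lockdown too soon would overwhelm it.
r_low = max_controllable_R(0.30)    # ~1.43
r_high = max_controllable_R(0.60)   # 2.5
```

On this crude reading, contact tracing is a tool for keeping a partly-suppressed epidemic down, not for reversing rapid growth, which is consistent with SAGE’s repeated warnings above.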

Low capacity to generate the information necessary for forecasting

This type of discussion exemplifies a general and continuous focus on the lack of data to inform advice:

‘24. Real-time forecasting models rely on deriving information on the epidemic from surveillance. If transmission is established in the UK there will necessarily be a delay before sufficiently accurate forecasts in the UK are available. 25. Decisions being made on whether to modify or lift non-pharmaceutical interventions require accurate understanding of the state of the epidemic. Large-scale serological data would be ideal, especially combined with direct monitoring of contact behaviour. 26. Preliminary forecasts and accurate estimates of epidemiological parameters will likely be available in the order of weeks and not days following widespread outbreaks in the UK (or a similar country). While some estimates may be available before this time their accuracy will be much more limited. 27. The UK hospitalisation rate and CFR will be very important for operational planning and will be estimated over a similar timeframe. They may take longer depending on the availability of data’ (Meeting paper 2.3.20: 3-4).

A limited capacity to reach a relatively cautious consensus?

These limitations to information contributed to the difference between SAGE’s estimate of UK transmission (such as in comparison with Italy) and the UK’s much faster actual rate of transmission:

‘the UK likely has thousands of cases – as many as 5,000 to 10,000 – which are geographically spread nationally … The UK is considered to be 4-5 weeks behind Italy but on a similar curve (6-8 weeks behind if interventions are applied)’ (10.3.20: 1)

‘Based on limited available evidence, SAGE considers that the UK is 2 to 4 weeks behind Italy in terms of the epidemic curve’ (18.3.20: 1)

In fact, the UK was under 2 weeks behind Italy on 10th March, suggesting that its lockdown measures were put in place too late.

At the heart of this estimate was the under-estimated doubling time of infection (‘the time it takes for the number of cases to double in size’, Meeting paper 3.2.20a):

  • although described as 3-4 days (28.1.20: 1) then 4-6 days (Meeting paper 2.3.20) based on Wuhan, and 3-5 days based on Hubei (Meeting paper 3.2.20a),
  • SAGE estimates ‘every 5-6 days’ (16.3.20: 1) and states that ‘Assuming a doubling time of around 5-7 days continues to be reasonable’ (18.3.20: 1).
  • Only by meeting 18 does SAGE estimate the doubling time (ICU patients) at 3-4 days (23.3.20). By meeting 19, it describes the doubling time in hospitals as 3.3 days (26.3.20: 1).

Kit Yates suggests that (a) the UK exhibited a 3-day doubling time during this period (Huffington Post), and (b) many members of SAGE and SPI-M would have preferred to model on the assumption of 3 days:

Having spoken to some of the modellers on SPI-M, not all of them were missing this. Many of the groups had fitted models to data and come up with shorter and more realistic doubling times, maybe around the 3-day mark, but their estimates never found consensus within the group, so some members of SPI-M have communicated their concerns to me that some of the modelling groups had more influence over the consensus decision than others, which meant that some opinions or estimates which might have been valid, didn’t get heard, and consequently weren’t passed on up the line to SAGE, and then further towards the government, so an over-reliance on certain models or modelling groups might have been costly in this situation (interview, Kit Yates, More or Less, 10.6.20: 4m47s-5m27s)

Yates then suggests that the most listened-to model – led by Neil Ferguson, published 16.3.20 – estimates a doubling time of 5 days, based on early data from Wuhan, using an estimate of R = 2.4 (and a generation time of 6.5 days), ‘which we now know to be way too low’ when we look at the UK data:

‘If they had just plotted the early trajectory of the epidemics against the current UK data at that point, they would have seen [by 14.3.20] that their model was starting to underestimate the number of cases and then the number of deaths which were occurring in the UK’ (interview, Kit Yates, More or Less, 10.6.20: 7m2s-7m15s)
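
The arithmetic behind these doubling-time claims is worth making explicit. Under textbook exponential growth (a simplification I am assuming here; SIR-type models give somewhat different numbers, and none of this code appears in the minutes), R and the generation time imply a doubling time of roughly generation_time × ln 2 / ln R, and the relation can be inverted:

```python
import math

def doubling_time(R, generation_time):
    # Doubling time implied by R under simple exponential growth, in
    # which each case produces R new cases one generation time later.
    return generation_time * math.log(2) / math.log(R)

def implied_R(td, generation_time):
    # Invert the relation: the R implied by an observed doubling time.
    return 2 ** (generation_time / td)

# Imperial's early assumptions (R = 2.4, generation time 6.5 days)
# imply a doubling time of roughly 5 days...
td_assumed = doubling_time(2.4, 6.5)   # ~5.1 days

# ...whereas the ~3-day doubling time Yates describes for the UK in
# March 2020 implies a much higher R under the same assumption.
R_observed = implied_R(3, 6.5)         # ~4.5

# The practical difference over three weeks of unchecked growth:
growth_5day = 2 ** (21 / 5)            # ~18-fold
growth_3day = 2 ** (21 / 3)            # 128-fold
```

On these crude assumptions, a 5-day versus 3-day doubling time is the difference between roughly 18-fold and 128-fold growth over three weeks, which is one way to see why the under-estimate mattered so much to the timing of intervention.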

Yates’ account highlights not only

  1. the effect of uncertainty and limited capacity to generate more information, but also
  2. the wider effect of path dependence, in which the (a) written and unwritten rules and norms of organisations, and (b) enduring ways of thinking (in individuals and groups, and political systems) place limits on new action. These limits are often necessary and beneficial, and often unnecessary and harmful.

Compare with Vallance’s oral evidence to the Health and Social Care committee (17.3.20: q96):

‘If you thought SAGE and the way SAGE works was a cosy consensus of agreeing scientists, you would be very mistaken. It is a lively, robust discussion, with multiple inputs. We do not try to get everybody saying exactly the same thing’.


Filed under COVID-19, Evidence Based Policymaking (EBPM), Prevention policy, Public health, UK politics and policy

COVID-19 policy in the UK: SAGE Theme 1. The language of intervention

This post is part 5 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

There is often a clear distinction between a strategy designed to (a) eliminate a virus/ the spread of disease quickly, and (b) manage the spread of infection over the long term (see The overall narrative).

However, generally, the language of virus management is confusing. We need to be careful when interpreting the language used in these minutes, and in other sources such as oral evidence to House of Commons committees, particularly when comparing the language at the beginning (when people were also unsure what to call SARS-CoV-2 and COVID-19) with present-day debates.

For example, in January, it is tempting to contrast ‘slow down the spread of the outbreak domestically’ (28.1.20: 2) with a strategy towards ‘extinction’, but the proposed actions may be the same even if the expectations of impact are different. Some people interpret these differences as indicative of a profoundly different approach (delay versus eradicate); others dismiss them as merely semantic.

By February, SAGE’s expectation is of an inevitable epidemic and an inability to contain COVID-19, prompting it to describe a series of stages:

‘Priorities will shift during a potential outbreak from containment and isolation on to delay and, finally, to case management … When there is sustained transmission in the UK, contact tracing will no longer be useful’ (18.2.20: 1; its discussion on 20.2.20: 2 also concludes that ‘individual cases could already have been missed – including individuals advised that they are not infectious’).

Mitigation versus suppression

On the face of it, it looks like there is a major difference in the ways in which (a) the Imperial College COVID-19 Response Team and (b) SAGE describe possible policy responses. The Imperial paper makes a distinction between mitigation and suppression:

  1. Its ‘mitigation strategy scenarios’ highlight the relative effects of partly-voluntary measures on mortality and demand for ‘critical care beds’ in hospitals: (voluntary) ‘case isolation in the home’ (people with symptoms stay at home for 7 days), ‘voluntary home quarantine’ (all members of the household stay at home for 14 days if one member has symptoms), (government enforced) ‘social distancing of those over 70’ or ‘social distancing of entire population’ (while still going to work, school or University), and closure of most schools and universities. It omits ‘stopping mass gatherings’ because ‘the contact-time at such events is relatively small compared to the time spent at home, in schools or workplaces and in other community locations such as bars and restaurants’ (2020a: 8). Assuming 70-75% compliance, it describes the combination of ‘case isolation, home quarantine and social distancing of those aged over 70’ as the most impactful, but predicts that ‘mitigation is unlikely to be a viable option without overwhelming healthcare systems’ (2020a: 8-10). These measures would only ‘reduce peak critical care demand by two-thirds and halve the number of deaths’ (to approximately 250,000).
  2. Its ‘suppression strategy scenarios’ describe what it would take to reduce the reproduction number (R) from the estimated 2.0-2.6 to 1 or below (in other words, the game-changing point at which one person would infect no more than one other person) and reduce ‘critical care requirements’ to manageable levels. It predicts that a combination of four options – ‘case isolation’, ‘social distancing of the entire population’ (the measure with the largest impact), ‘household quarantine’ and ‘school and university closure’ – would reduce critical care demand from its peak ‘approximately 3 weeks after the interventions are introduced’, and contribute to a range of 5,600-48,000 deaths over two years (depending on the current R and the ‘trigger’ for action in relation to the number of occupied critical care beds) (2020a: 13-14).
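
The suppression threshold in these scenarios reduces to simple arithmetic. Assuming (my simplification, not the Imperial team’s actual model) that combined measures scale transmission uniformly, so that the effective R is R0 × (1 − x), bringing R to 1 or below requires a cut of at least 1 − 1/R0:

```python
def required_reduction(R0):
    # Minimum fraction by which transmission must fall so that
    # R_eff = R0 * (1 - x) <= 1, under uniform scaling of transmission
    # (an illustrative simplification of the Imperial scenarios).
    return 1 - 1 / R0

# The estimated range of R (2.0 to 2.6) implies that suppression
# needs a sustained cut in transmission of roughly 50-62%.
low = required_reduction(2.0)    # 0.50
high = required_reduction(2.6)   # ~0.62
```

This helps explain why suppression requires the combination of several stringent measures, and why they must be sustained: any relaxation that restores more than the threshold fraction of transmission pushes R back above 1.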

In comparison, the SAGE meeting paper (26.2.20b: 1-3), produced 2-3 weeks earlier, pretty much assumes away the possible distinction between mitigation versus suppression measures (which Vallance has described as semantic rather than substantive – scroll down to The distinction between mitigation and suppression measures). In other words, it assumes ‘high levels of compliance over long periods of time’ (26.2.20b: 1). As such, we can interpret SAGE’s discussion as (a) requiring high levels of compliance for these measures to work (the equivalent of Imperial’s description of suppression), while (b) not describing how to use (more or less voluntary versus impositional) government policy to secure compliance. In comparison, Imperial equates suppression with the relatively-short-term measures associated with China and South Korea (while noting uncertainty about how to maintain such measures until a vaccine is produced).

One reason for SAGE to assume compliance in its scenario building is to focus on the contribution of each measure, generally taking place over 13 weeks, to delaying the peak of infection (while stating that ‘It will likely not be feasible to provide estimates of the effectiveness of individual control measures, just the overall effectiveness of them all’, 26.2.20b: 1), while taking into account their behavioural implications (26.2.20b: 2-3).

  • School closures could contribute to a 3-week delay, especially if combined with FE/ HE closures (but with an unequal impact on ‘Those in lower socio-economic groups … more reliant on free school meals or unable to rearrange work to provide childcare’).
  • Home isolation (65% of symptomatic cases stay at home for 7 days) could contribute to a 2-3 week delay (and is the ‘Easiest measure to explain and justify to the public’).
  • ‘Voluntary household quarantine’ (all members of the household isolate for 14 days) would have a similar effect – assuming 50% compliance – but with far more implications for behavioural public policy:

‘Resistance & non-compliance will be greater if impacts of this policy are inequitable. For those on low incomes, loss of income means inability to pay for food, heating, lighting, internet. This can be addressed by guaranteeing supplies during quarantine periods.

Variable compliance, due to variable capacity to comply, may lead to dissatisfaction.

Ensuring supplies flow to households is essential. A desire to help among the wider community (e.g. taking on chores, delivering supplies) could be encouraged and scaffolded to support quarantined households.

There is a risk of stigma, so ‘voluntary quarantine’ should be portrayed as an act of altruistic civic duty’.

  • ‘Social distancing’ (‘enacted early’), in which people restrict themselves to essential activity (work and school) could produce a 3-5 week delay (and is likely to be supported in relation to mass leisure events, albeit less so when work activities involve a lot of contact).

[Note that it is not until May that it addresses this issue of feasibility directly (and, even then, it does not distinguish between technical and political feasibility: ‘It was noted that a useful addition to control measures SAGE considers (in addition to scientific uncertainty) would be the feasibility of monitoring/ enforcement’ (7.5.20: 3)]

As theme 2 suggests, there is a growing recognition that these measures should have been introduced by early March (such as via the Coronavirus Act 2020, not passed until 25.3.20), and likely would have been if the UK government and SAGE had had more information (or interpreted their information in a different way). However, by mid-March, SAGE expresses a mixture of (a) growing urgency and (b) the need to stick to the plan, to reduce the peak and avoid a second peak of infection. On 13th March, it states:

‘There are no strong scientific grounds to hasten or delay implementation of either household isolation or social distancing of the elderly or the vulnerable in order to manage the epidemiological curve compared to previous advice. However, there will be some minor gains from going early and potentially useful reinforcement of the importance of taking personal action if symptomatic. Household isolation is modelled to have the biggest effect of the three interventions currently planned, but with some risks. SAGE therefore thinks there is scientific evidence to support household isolation being implemented as soon as practically possible’ (13.3.20: 1)

‘SAGE further agreed that one purpose of behavioural and social interventions is to enable the NHS to meet demand and therefore reduce indirect mortality and morbidity. There is a risk that current proposed measures (individual and household isolation and social distancing) will not reduce demand enough: they may need to be coupled with more intensive actions to enable the NHS to cope, whether regionally or nationally’ (13.3.20: 2)

On 16th March, it states:

‘On the basis of accumulating data, including on NHS critical care capacity, the advice from SAGE has changed regarding the speed of implementation of additional interventions. SAGE advises that there is clear evidence to support additional social distancing measures be introduced as soon as possible’ (16.3.20: 1)

Overall, we can conclude two things about the language of intervention:

  1. There is now a clear difference between the ways in which SAGE and its critics describe policy: to manage an inevitably long-term epidemic, versus to try to eliminate it within national borders.
  2. There is a less clear difference between terms such as suppress and mitigate, largely because SAGE focused primarily on a comparison of different measures (and their combination) rather than the question of compliance.

See also: There is no ‘herd immunity strategy’, which argues that this focus on each intervention was lost in radio and TV interviews with Vallance.


Filed under COVID-19, Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, UK politics and policy

COVID-19 policy in the UK: SAGE meetings from January-June 2020

This post is part 4 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

SAGE began a series of extraordinary meetings from 22nd January 2020. The first was described as ‘precautionary’ (22.1.20: 1) and includes updates from NERVTAG which met from 13th January. Its minutes state that ‘SAGE is unable to say at this stage whether it might be required to reconvene’ (22.1.20: 2). The second meeting notes that SAGE will meet regularly (e.g. 2-3 times per week in February) and coordinate all relevant science advice to inform domestic policy, including from NERVTAG and SPI-M (Scientific Pandemic Influenza Group on Modelling) which became a ‘formal sub-group of SAGE for the duration of this outbreak’ (SPI-M-O) (28.1.20: 1). It also convened an additional Scientific Pandemic Influenza subgroup (SPI-B) in February. I summarise these developments by month, but you can see that, by March, it is worth summarising each meeting. The main theme is uncertainty.

January 2020

The first meeting highlights immense uncertainty. Its description of WN-CoV (Wuhan Coronavirus), and statements such as ‘There is evidence of person-to-person transmission. It is unknown whether transmission is sustainable’, sum up the profound lack of information on what is to come (22.1.20: 1-2). It notes high uncertainty on how to identify cases, rates of infection, infectiousness in the absence of symptoms, and which previous experience (such as MERS) offers the most useful guidance. Only 6 days later, it estimates an R of 2-3, a doubling time of 3-4 days, an incubation period of around 5 days, a 14-day window of infectivity, varied symptoms such as coughing and fever, and a respiratory transmission route (different from SARS and MERS) (28.1.20: 1). These estimates are fairly constant from then on, albeit qualified with reference to uncertainty (e.g. about asymptomatic transmission), some key outliers (e.g. the duration of illness in one case was 41 days – 4.2.20: 1), and some new estimates (e.g. of a 6-day ‘serial interval’, or ‘time between successive cases in a chain of transmission’, 11.2.20: 1). By now, it is preparing a response: modelling a ‘reasonable worst case scenario’ (RWC) based on the assumption of an R of 2.5 and no known treatment or vaccine, considering how to slow the spread, and considering how behavioural insights can be used to encourage self-isolation.

February 2020

SAGE began to focus on what measures might delay or reduce the impact of the epidemic. It described travel restrictions from China as low value, since a 95% reduction in travel would be draconian to achieve and would only secure a one-month delay, which might be better achieved with other measures (3.2.20: 1-2). It, and supporting papers, suggested that the evidence was so limited that they could draw ‘no meaningful conclusions … as to whether it is possible to achieve a delay of a month’ by using one or a combination of these measures: international travel restrictions, domestic travel restrictions, quarantining people coming from infected areas, school closures, FE/ HE closures, cancelling large public events, contact tracing, voluntary home isolation, facemasks, and hand washing. Further, some could undermine each other (e.g. the impact of school closures on older people or people in self-isolation) and have major societal or opportunity costs (SPI-M-O, 3.2.20b: 1-4). For example, the ‘SPI-M-O: Consensus view on public gatherings’ (11.2.20: 1) notes the aim to reduce the duration and closeness of (particularly indoor) contact. Large outdoor gatherings are no worse than small ones, and stopping large events could prompt people to go to pubs instead (which would be worse).

Throughout February, the minutes emphasise high uncertainty about:

  • whether there will be an epidemic outside of China (4.2.20: 2)
  • whether it spreads through ‘air conditioning systems’ (4.2.20: 3)
  • the spread from, and impact on, children, and therefore the impact of closing schools (4.2.20: 3; discussed in a separate paper by SPI-M-O, 10.2.20c: 1-2)
  • facemasks: ‘SAGE heard that NERVTAG advises that there is limited to no evidence of the benefits of the general public wearing facemasks as a preventative measure’ (while ‘symptomatic people should be encouraged to wear a surgical face mask, providing that it can be tolerated’) (4.2.20: 3)

At the same time, its meeting papers emphasised a delay in accurate figures during an initial outbreak: ‘Preliminary forecasts and accurate estimates of epidemiological parameters will likely be available in the order of weeks and not days following widespread outbreaks in the UK’ (SPI-M-O, 3.2.20a: 3).

This problem proved to be crucial to the timing of government intervention. A key learning point will be the disconnect between the following statement and the subsequent realisation (3-4 weeks later) that the lockdown measures from mid-to-late March came too late to prevent an unanticipated number of excess deaths:

‘SAGE advises that surveillance measures, which commenced this week, will provide actionable data to inform HMG efforts to contain and mitigate spread of Covid-19’ … ‘PHE’s surveillance approach provides sufficient sensitivity to detect an outbreak in its early stages. This should provide evidence of an epidemic around 9-11 weeks before its peak … increasing surveillance coverage beyond the current approach would not significantly improve our understanding of incidence’ (25.2.20: 1)

It also seems clear from the minutes and papers that SAGE highlighted a reasonable worst case scenario (RWC) on 26.2.20. It was as worrying as the Imperial College COVID-19 Response Team report dated 16.3.20 that allegedly changed the UK Government’s mind on the 16th March. Meeting paper 26.2.20a described the assumption of an 80% infection attack rate and a 50% clinical attack rate (i.e. 50% of the UK population would experience symptoms), which underpins the assumption of 3.6 million people requiring hospital care for at least 8 days (11% of symptomatic), and 541,200 requiring ventilation (1.65% of symptomatic) for 16 days. While it lists excess deaths as unknown, its 1% infection mortality rate suggests 524,800 deaths. This RWC replaces a previous projection (in Meeting paper 10.2.20a: 1-3, based on pandemic flu assumptions) of 820,000 excess deaths (27.2.20: 1).
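To make the RWC arithmetic explicit, here is a minimal sketch. The UK population figure (65.6 million) is my back-calculation from the stated totals, not a number given in the minutes; the rates are those stated in Meeting paper 26.2.20a.

```python
# Back-of-envelope reconstruction of the 26.2.20a reasonable worst case.
# The population of 65.6 million is an assumption, back-calculated from the
# stated totals; the percentage rates come from the meeting paper itself.
population = 65_600_000

infected = population * 0.80          # 80% infection attack rate
symptomatic = population * 0.50       # 50% clinical attack rate
hospitalised = symptomatic * 0.11     # 11% of symptomatic: hospital care (8+ days)
ventilated = symptomatic * 0.0165     # 1.65% of symptomatic: ventilation (16 days)
deaths = infected * 0.01              # 1% infection mortality rate

print(f"{hospitalised:,.0f}")  # 3,608,000 (the paper's ~3.6 million)
print(f"{ventilated:,.0f}")    # 541,200
print(f"{deaths:,.0f}")        # 524,800
```

The fact that all three stated totals fall out of one population figure suggests the paper applied these rates to the whole UK population.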

As such, the more important difference could come from SAGE’s discussion of ‘non-pharmaceutical interventions (NPIs)’, if SAGE recommended ‘mitigation’ while the Imperial team recommended ‘suppression’. However, the language used to describe each approach is too unclear to tell (see Theme 1. The language of intervention; note also that NPIs were often described from March as ‘behavioural and social interventions’, following an SPI-B recommendation, Meeting paper 3.2.20: 1, but the language of NPIs seems to have stuck).

March 2020

In March, SAGE focused initially (Meetings 12-14) on preparing for the peak of infection, on the assumption that it had time to transition towards a series of isolation and social distancing measures that would be sustainable (and therefore unlikely to contribute to a second peak if lifted too soon). Early meetings and meeting papers express caution about the limited evidence for interventions and the potential for their unintended consequences. This approach began to change somewhat from mid-March (Meeting 15), and accelerated from Meetings 16-18, when it became clear that incidence and virus transmission were much higher than expected, before a new phase began from Meeting 19 (after the UK lockdown was announced on the 23rd).

Meeting 12 (3.3.20) describes preparations to gather and consolidate information on the epidemic and the likely relative effect of each intervention, while its meeting papers emphasise:

  • ‘It is highly likely that there is sustained transmission of COVID-19 in the UK at present’, and a peak of infection ‘might be expected approximately 3-5 months after the establishment of widespread sustained transmission’ (SPI-M Meeting paper 2.3.20: 1)
  • the need to prepare the public while giving ‘clear and transparent reasons for different strategies’ and reducing ambiguity whenever giving guidance (SPI-B Meeting paper 3.2.20: 1-2)
  • the need to combine different measures (e.g. school closure, self-isolation, household isolation, isolating over-65s) at the right time; ‘implementing a subset of measures would be ideal. Whilst this would have a more moderate impact it would be much less likely to result in a second wave’ (Meeting paper 4.3.20a: 3).

Meeting 13 (5.3.20) describes staying in the ‘containment’ phase (which, I think, means isolating people with positive tests at home or in hospital), and introducing: a 12-week period of individual and household isolation measures in 1-2 weeks, on the assumption of 50% compliance; and a longer period of shielding over-65s 2 weeks later. It describes ‘no evidence to suggest that banning very large gatherings would reduce transmission’, while closing bars and restaurants ‘would have an effect, but would be very difficult to implement’, and ‘school closures would have smaller effects on the epidemic curve than other options’ (5.3.20: 1). Its SPI-B Meeting paper (4.3.20b) expresses caution about limited evidence and reliance on expert opinion, while identifying:

  • potential displacement problems (e.g. school closures prompt people to congregate elsewhere, or children to be looked after by vulnerable older people, while parents lose the chance to work)
  • the visibility of groups not complying
  • the unequal impact on poorer and single parent families of school closure and loss of school meals, lost income, lower internet access, and isolation
  • how to reduce discontent about only isolating at-risk groups (the view that ‘explaining that members of the community are building some immunity will make this acceptable’ is not unanimous) (4.3.20b: 2).

Meeting 14 (10.3.20) states that the UK may have 5,000-10,000 cases and be ‘10-14 weeks from the epidemic peak if no mitigations are introduced’ (10.3.20: 2). It restates the focus on isolation first, followed by additional measures in April, and emphasizes the need to transition to measures that are acceptable and sustainable for the long term:

‘SAGE agreed that a balance needs to be struck between interventions that theoretically have significant impacts and interventions which the public can feasibly and safely adopt in sufficient numbers over long periods’ … ‘the public will face considerable challenges in seeking to comply with these measures (e.g. poorer households, those relying on grandparents for childcare)’ (10.3.20: 2)

Meeting 15 (13.3.20: 1) describes an update to its data, suggesting ‘more cases in the UK than SAGE previously expected at this point, and we may therefore be further ahead on the epidemic curve, but the UK remains on broadly the same epidemic trajectory and time to peak’. It states that ‘household isolation and social distancing of the elderly and vulnerable should be implemented soon, provided they can be done well and equitably’, noting that there are ‘no strong scientific grounds’ to accelerate key measures but ‘there will be some minor gains from going early and potentially useful reinforcement of the importance of taking personal action if symptomatic’ (13.3.20: 1) and ‘more intensive actions’ will be required to maintain NHS capacity (13.3.20: 2).

*******

On the 16th March, the UK Prime Minister Boris Johnson described an ‘emergency’ (one week before declaring a ‘national emergency’ and UK-wide lockdown).

*******

Meeting 16 (16.3.20) describes the possibility that there are 5,000-10,000 new cases in the UK (‘there is great uncertainty on the estimate’), doubling every 5-6 days. Therefore, to stay within NHS capacity, ‘the advice from SAGE has changed regarding the speed of implementation of additional interventions. SAGE advises that there is clear evidence to support additional social distancing measures be introduced as soon as possible’ (16.3.20: 1). The SPI-M Meeting paper (16.3.20: 1) describes:

‘a combination of case isolation, household isolation and social distancing of vulnerable groups is very unlikely to prevent critical care facilities being overwhelmed … it is unclear whether or not the addition of general social distancing measures to case isolation, household isolation and social distancing of vulnerable groups would curtail the epidemic by reducing the reproduction number to less than 1 … the addition of both general social distancing and school closures to case isolation, household isolation and social distancing of vulnerable groups would be likely to control the epidemic when kept in place for a long period. SPI-M-O agreed that this strategy should be followed as soon as practical’

Meeting 17 (18.3.20) marks a major acceleration of plans, and a de-emphasis of the low-certainty/beware-the-unintended-consequences approach of previous meetings (on the assumption that the UK was now 2-4 weeks behind Italy). It recommends school closures as soon as possible (and it, and SPI-M Meeting paper 17.3.20b, now downplays the likely displacement effect). It focuses particularly on London, as the place with the largest initial numbers:

‘Measures with the strongest support, in terms of effect, were closure of a) schools, b) places of leisure (restaurants, bars, entertainment and indoor public spaces) and c) indoor workplaces. … Transport measures such as restricting public transport, taxis and private hire facilities would have minimal impact on reducing transmission’ (18.3.20: 2)

Meeting 18 (23.3.20) states that the R is higher than expected (2.6-2.8), requiring ‘high rates of compliance for social distancing’ to get it below 1 and stay under NHS capacity (23.3.20: 1). There is an urgent need for more community testing/surveillance (and to address the global shortage of test supplies). In the meantime, it needs a ‘clear rationale for prioritising testing for patients and health workers’ (the latter ‘should take priority’) (23.3.20: 3). Closing UK borders ‘would have a negligible effect on spread’ (23.3.20: 2).
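As an illustrative aside (my arithmetic, not in the minutes): an R of 2.6-2.8 implies that interventions would need to prevent at least 1 − 1/R of transmission, roughly 62-64%, to bring the effective R below 1.

```python
# Illustrative arithmetic (an assumption of this summary, not from the minutes):
# with reproduction number R, transmission must fall by at least 1 - 1/R
# for the effective reproduction number to drop below 1.
def required_reduction(r: float) -> float:
    return 1 - 1 / r

for r in (2.6, 2.8):
    print(f"R = {r}: more than {required_reduction(r):.0%} reduction needed")
```

This is why the minutes stress ‘high rates of compliance’: at these values of R, partial adherence leaves the epidemic growing.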

*******

The lockdown. On the 23rd March 2020, the UK Prime Minister Boris Johnson declared: ‘From this evening I must give the British people a very simple instruction – you must stay at home’. He announced measures to help limit the impact of coronavirus, including police powers to support public health, such as to disperse gatherings of more than two people (unless they live together), close events and shops, and limit outdoor exercise to once per day (at a distance of two metres from others).

*******

Meeting 19 (26.3.20) follows the lockdown. SAGE describes its priorities if the R goes below 1 and NHS capacity remains under 100%: ‘monitoring, maintenance and release’ (based on higher testing); public messaging on mass testing and varying interventions; understanding nosocomial transmission and immunology; clinical trials (‘avoiding hasty decisions’ on new drug treatments in the absence of good data); and ‘how to minimise potential harms from the interventions, including those arising from postponement of normal services, mental ill health and reduced ability to exercise. It needs to consider in particular health impacts on poorer people’ (26.3.20: 1-2). The optimistic scenario is 10,000 deaths from the first wave (SPI-M-O Meeting paper 25.3.20: 4).

Meeting 20 (29.3.20) confirms the RWC and optimistic scenarios (Meeting paper 25.3.20), but notes the need for a ‘clearer narrative, clarifying areas subject to uncertainty and sensitivities’, and to clarify that scenarios (with different assumptions on, for example, the R, which should be explained more) are not predictions.

Meeting 21 (31.3.20) seeks to establish SAGE ‘scientific priorities’ (e.g. long term health impacts of COVID-19, including the socioeconomic impact on health and mental health; community testing; international work on ‘comorbidities such as malaria and malnutrition’) (31.3.20: 1-2). The NHS is to set up an interdisciplinary group (including science and engineering) to ‘understand and tackle nosocomial transmission’, in the context of its growth and the urgent need to define and track it (31.3.20: 1-2). SAGE is to focus on testing requirements, not operational issues. It notes the need to identify a single source of information on deaths.

April 2020

The meetings in April highlight five recurring themes.

First, SAGE stresses that it will not know the impact of lockdown measures for some time, that it is too soon to understand the impact of releasing them, and that there is a high risk of failure: ‘There is a danger that lifting measures too early could cause a second wave of exponential epidemic growth – requiring measures to be re-imposed’ (2.4.20: 1; see also 14.4.20: 1-2). This problem remains even if a reliable testing and contact tracing system is in place, and if there are environmental improvements to reduce transmission (by keeping people apart).

Second, it notes signals from multiple sources (including CO-CIN and the RCGP) on the higher risk of major illness and death among black people, the ongoing investigation of higher risk to ‘BAME’ health workers (16.4.20), and further (high priority) work on ‘ethnicity, deprivation, and mortality’ (21.4.20: 1) (see also: Race, ethnicity, and the social determinants of health).

Third, it highlights the need for a ‘national testing strategy’ to cover NHS patients, staff, an epidemiological survey, and the community (2.4.20). The need for far more testing is a feature of almost every meeting (see also The need to ramp up testing).

Fourth, SAGE describes the need for more short and long-term research, identifying nosocomial infection as a short term priority, and long term priorities in areas such as the long term health impacts of COVID-19 (including socioeconomic impacts on physical and mental health), community testing, and international work (31.3.20: 1-2).

Finally, the meetings reflect shifting advice on the precautionary use of face masks. Previously, advisory bodies emphasized the limited evidence of a clear benefit to the wearer, and worried that public mask use would reduce the supply available to healthcare professionals and generate a false sense of security (compare with this Greenhalgh et al article on the precautionary principle, the subsequent debate, and work by the Royal Society). Even by April: ‘NERVTAG concluded that the increased use of masks would have minimal effect’ on general population infection (7.4.20: 1), while the WHO described limited evidence that facemasks are beneficial for community use (9.4.20). Still, general face mask use could have a small positive effect, particularly in ‘enclosed environments with poor ventilation, and around vulnerable people’ (14.4.20: 2), and ‘on balance, there is enough evidence to support recommendation of community use of cloth face masks, for short periods in enclosed spaces where social distancing is not possible’ (partly because people can be infectious with no symptoms), as long as people know that it is no substitute for social distancing and handwashing (21.4.20).

May 2020

In May, SAGE continues to discuss high uncertainty on relaxing lockdown measures, the details of testing systems, and the need for research.

Generally, it advises that relaxations should not happen before there is more understanding of transmission in hospitals and care homes, and ‘until effective outbreak surveillance and test and trace systems are up and running’ (14.5.20). It advises specifically ‘against reopening personal care services, as they typically rely on highly connected workers who may accelerate transmission’ (5.5.20: 3), and warns against the too-quick introduction of social bubbles. Relaxation runs the risk of diminishing public adherence to social distancing, and of overwhelming any contact tracing system put in place:

‘SAGE participants reaffirmed their recent advice that numbers of Covid-19 cases remain high (around 10,000 cases per day with wide confidence intervals); that R is 0.7-0.9 and could be very close to 1 in places across the UK; and that there is very little room for manoeuvre especially before a test, trace and isolate system is up and running effectively. It is not yet possible to assess the effect of the first set of changes which were made on easing restrictions to lockdown’ (28.5.20: 3).

It recommends extensive testing in hospitals and care homes (12.5.20: 3) and ‘remains of the view that a monitoring and test, trace & isolate system needs to be put in place’ (12.5.20: 1).

June 2020

In June, SAGE identifies the importance of clusters of infection (super-spreading events) and of a contact tracing system that focuses on clusters (rather than simply individuals) (11.6.20: 3). It reaffirms the value of the 2-metre distancing rule. It also notes that the research on immunology remains unclear, which makes immunity passports a bad idea (4.6.20).

It describes the result of multiple meeting papers on the unequal impact of COVID-19:

‘There is an increased risk from Covid-19 to BAME groups, which should be urgently investigated through social science research and biomedical research, and mitigated by policy makers’ … ‘SAGE also noted the importance of involving BAME groups in framing research questions, participating in research projects, sharing findings and implementing recommendations’ (4.6.20: 1-3)

See also: Race, ethnicity, and the social determinants of health

The full list of SAGE posts:

COVID-19 policy in the UK: yes, the UK Government did ‘follow the science’

Did the UK Government ‘follow the science’? Reflections on SAGE meetings

The role of SAGE and science advice to government

The overall narrative underpinning SAGE advice and UK government policy

SAGE meetings from January-June 2020

SAGE Theme 1. The language of intervention

SAGE Theme 2. Limited capacity for testing, forecasting, and challenging assumptions

SAGE Theme 3. Communicating to the public

COVID-19 policy in the UK: Table 2: Summary of SAGE minutes, January-June 2020
