Category Archives: public policy

Policy Analysis in 750 Words: Two approaches to policy learning and transfer

This post forms one part of the Policy Analysis in 750 words series. It draws on work for an in-progress book on learning to reduce inequalities. Some of the text will seem familiar if you have read other posts. Think of it as an adventure game in which the beginning is the same but you don’t know the end.

Policy learning is the use of new information to update policy-relevant knowledge. Policy transfer involves the use of knowledge about policy and policymaking in one government to inform policy and policymaking in another.

These processes may seem to relate primarily to research and expertise, but they require many kinds of political choices (explored in this series). They take place in complex policymaking systems over which no single government has full knowledge or control.

Therefore, while the agency of policy analysts and policymakers still matters, they engage with a policymaking context that constrains or facilitates their action.

Two approaches to policy learning: agency and context-driven stories

Policy analysis textbooks focus on learning and transfer as an agent-driven process with well-established guidance (often with five main steps). They form part of a functionalist analysis where analysts identify the steps required to turn comparative analysis into policy solutions, or part of a toolkit to manage stages of the policy process.

Agency is less central to policy process research, which describes learning and transfer as contingent on context. Key factors include:

Analysts compete to define problems and determine the manner and sources of learning, in a multi-centric environment where different contexts will constrain and facilitate action in different ways. For example, varying structural factors – such as socioeconomic conditions – influence the feasibility of proposed policy change, and each centre’s institutions provide different rules for gathering, interpreting, and using evidence.

The result is a mixture of processes in which:

  1.  Learning from experts is one of many possibilities. For example, Dunlop and Radaelli also describe ‘reflexive learning’, ‘learning through bargaining’, and ‘learning in the shadow of hierarchy’.
  2.  Transfer takes many forms.

How should analysts respond?

Think of two different ways to respond to this description of the policy process, summarised in this lovely blue image. One is your agency-centred strategic response. The other is me telling you why it won’t be straightforward.

An image of the policy process

There are many policy makers and influencers spread across many policymaking ‘centres’

  1. Find out where the action is and tailor your analysis to different audiences.
  2. There is no straightforward way to influence policymaking if multiple venues contribute to policy change and you don’t know who does what.

Each centre has its own ‘institutions’

  1. Learn the rules of evidence gathering in each centre: who takes the lead, how do they understand the problem, and how do they use evidence?
  2. There is no straightforward way to foster policy learning between political systems if each is unaware of the others’ unwritten rules. Researchers could try to learn those rules to facilitate mutual learning, but with no guarantee of success.

Each centre has its own networks

  1. Form alliances with policymakers and influencers in each relevant venue.
  2. The pervasiveness of policy communities complicates policy learning because the boundary between formal power and informal influence is not clear.

Well-established ‘ideas’ tend to dominate discussion

  1. Learn which ideas are in good currency. Tailor your advice to your audience’s beliefs.
  2. The dominance of different ideas precludes many forms of policy learning or transfer. A popular solution in one context may be unthinkable in another.

Many policy conditions (historic-geographic, technological, social and economic factors) command the attention of policymakers and are out of their control. Routine events and non-routine crises prompt policymaker attention to lurch unpredictably.

  1. Learn from studies of leadership in complex systems or the policy entrepreneurs who find the right time to exploit events and windows of opportunity to propose solutions.
  2. The policy conditions may be so different in each system that policy learning is limited and transfer would be inappropriate. Events can prompt policymakers to pay disproportionately low or high attention to lessons from elsewhere, and this attention relates weakly to evidence from analysts.

Feel free to choose one or both forms of advice. One is useful for people who see analysts and researchers as essential to major policy change. The other is useful if it serves as a source of cautionary tales rather than fatalistic responses.

See also:

Policy Concepts in 1000 Words: Policy Transfer and Learning

Teaching evidence based policy to fly: how to deal with the politics of policy learning and transfer

Policy Concepts in 1000 Words: the intersection between evidence and policy transfer

Policy learning to reduce inequalities: a practical framework

Three ways to encourage policy learning

Epistemic versus bargaining-driven policy learning

The ‘evidence-based policymaking’ page explores these issues in more depth


Filed under 750 word policy analysis, IMAJINE, Policy learning and transfer, public policy

Policy in 500 Words: Trust

This post summarises ‘COVID-19: effective policymaking depends on trust in experts, politicians, and the public’ by Adam Wellstead and me.

The meaning of trust

We define trust as ‘a belief in the reliability of other people, organizations, or processes’, but it is one of those terms – like ‘policy’ – that defies a single comprehensive definition. The term ‘distrust’ complicates things further, since it does not simply mean the absence of trust.

Its treatment in social science also varies, which makes our statement – ‘Trust is necessary for cooperation, coordination, social order, and to reduce the need for coercive state imposition’ – one of many ways to understand its role.

A summary of key concepts

Social science accounts of trust relate it to:

1. Individual choice

I may trust someone to do something if I value their integrity (if they say they will do it, I believe them), credibility (I believe their claim is accurate and feasible), and competence (I believe they have the ability).

This perception of reliability depends on:

  • The psychology of the truster. The truster assesses the risk of relying on others, combining cognition and emotion to weigh the risk of making themselves vulnerable against the benefit of collective action, while drawing on an expectation of reciprocity.
  • The behaviour of the trustee. They demonstrate their trustworthiness in relation to past performance, which demonstrates their competence and reliability and perhaps their selflessness in favour of collective action.
  • Common reference points. The trustee and truster may use shortcuts to collective action, such as a reference to something they have in common (e.g. their beliefs or social background), their past interactions, or the authority of the trustee.

2. Social and political rules (aka institutions).

Perhaps ideally, we would learn who to trust via our experiences of working together, but we also need to trust people we have never met, and put equivalent trust in organisations and ‘systems’.

In that context, approaches such as the Institutional Analysis and Development (IAD) framework identify the role of many different kinds of rules in relation to trust:

  • Rules can be formal, written, and widely understood (e.g. to help assign authority regardless of levels of interaction) or informal, unwritten, and only understood by some (e.g. resulting from interactions in some contexts).
  • Rules can represent low levels of trust and a focus on deterring breaches (e.g. creating and enforcing contracts) or high levels of trust (e.g. to formalize ‘effective practices built on reciprocity, emotional bonds, and/or positive expectations’).

3. Societal necessity and interdependence.

Trust is a functional requirement. We need to trust people because we cannot maintain a functional society or political system without working together. Trust-building underpins the study of collaboration (or cooperation and bargaining), such as in the Ecology of Games approach (which draws on the IAD).

  • In that context, trust is a resource (to develop) that is crucial to a required outcome.

Is trust good and distrust bad?

We describe trust as ‘necessary for cooperation’ and distrust as a ‘potent motivator’ that may prompt people to ignore advice or defy cooperation or instruction. Yet, neither is necessarily good or bad. Too much trust may be a function of: (1) the abdication of our responsibility to engage critically with leaders in political systems, (2) vulnerability to manipulation, and/or (3) excessive tribalism, prompting people to romanticise their own cause and demonise others, each of which could lead us to accept uncritically the cynical choices of policymakers.

Further reading

Trust is a slippery concept, and academics often make it slipperier by assuming rather than providing a definition. In that context, why not read all of the 500 Words series and ask yourself where trust and distrust fit in?


Filed under 500 words, public policy

Policy Analysis in 750 Words: power and knowledge

This post adapts Policy in 500 Words: Power and Knowledge (the body of this post) to inform the Policy Analysis in 750 words series (the top and tails).

One take home message from the 750 Words series is to avoid seeing policy analysis simply as a technical (and ‘evidence-based’) exercise. Mainstream policy analysis texts break down the process into technical-looking steps, but also show how each step relates to a wider political context. Critical policy analysis texts focus more intensely on the role of politics in the everyday choices that we might otherwise take for granted or consider to be innocuous. The latter connect strongly to wider studies of the links between power and knowledge.

Power and ideas

Classic studies suggest that the most profound and worrying kinds of power are the hardest to observe. We often witness highly visible political battles and can use pluralist methods to identify who has material resources, how they use them, and who wins. However, key forms of power ensure that many such battles do not take place. Actors often use their resources to reinforce social attitudes and policymakers’ beliefs, to establish which issues are policy problems worthy of attention and which populations deserve government support or punishment. Key battles may not arise because not enough people think they are worthy of debate. Attention and support for debate may rise, only to be crowded out of a political agenda in which policymakers can only debate a small number of issues.

Studies of power relate these processes to the manipulation of ideas or shared beliefs under conditions of bounded rationality (see for example the NPF). Manipulation might describe some people getting other people to do things they would not otherwise do. They exploit the beliefs of people who do not know enough about the world, or themselves, to know how to identify and pursue their best interests. Or, they encourage social norms – in which we describe some behaviour as acceptable and some as deviant – which are enforced by (1) the state (for example, via criminal justice and mental health policy), (2) social groups, and (3) individuals who govern their own behaviour with reference to what they feel is expected of them (and the consequences of not living up to expectations).

Such beliefs, norms, and rules are profoundly important because they often remain unspoken and taken for granted. Indeed, some studies equate them with the social structures that appear to close off some action. If so, we may not need to identify manipulation to find unequal power relationships: strong and enduring social practices help some people win at the expense of others, by luck or design.

Relating power to policy analysis: whose knowledge matters?

The concept of ‘epistemic violence’ is one way to describe the act of dismissing an individual, social group, or population by undermining the value of their knowledge or claim to knowledge. Specific discussions include: (a) the colonial West’s subjugation of colonized populations, diminishing the voice of the subaltern; (b) privileging scientific knowledge and dismissing knowledge claims via personal or shared experience; and (c) erasing the voices of women of colour from the history of women’s activism and intellectual history.

It is in this context that we can understand ‘critical’ research designed to ‘produce social change that will empower, enlighten, and emancipate’ (p51). Powerlessness can relate to a visible lack of material economic resources and to factors such as the lack of opportunity to mobilise and be heard.

750 Words posts examining this link between power and knowledge

Some posts focus on the role of power in research and/ or policy analysis:

These posts ask questions such as: who decides what evidence will be policy-relevant, whose knowledge matters, and who benefits from this selective use of evidence? They help to (1) identify the exercise of power to maintain evidential hierarchies (or prioritise scientific methods over other forms of knowledge gathering and sharing), and (2) situate this action within a wider context (such as when focusing on colonisation and minoritization). They reflect on how (and why) analysts should respect a wider range of knowledge sources, and how to produce more ethical research with an explicit emancipatory role. As such, they challenge the – naïve or cynical – argument that science and scientists are objective and that science-informed analysis is simply a technical exercise (see also Separating facts from values).

Many posts incorporate these discussions into many policy analysis themes.

See also

Policy Concepts in 1000 Words: Power and Ideas

Education equity policy: ‘equity for all’ as a distraction from race, minoritization, and marginalization. It discusses studies of education policy (many draw on critical policy analysis).

There are also many EBPM posts that slip this discussion of power and politics into discussions of evidence and policy. They don’t always use the word ‘power’ though (see Evidence-informed policymaking: context is everything)


Filed under 750 word policy analysis, agenda setting, Evidence Based Policymaking (EBPM), public policy

Policy Analysis in 750 Words: Separating facts from values

This post begins by reproducing Can you separate the facts from your beliefs when making policy? (based on the 1st edition of Understanding Public Policy) …

A key argument in policy studies is that it is impossible to separate facts and values when making policy. We often treat our beliefs as facts, or describe certain facts as objective, but perhaps only to simplify our lives or support a political strategy (a ‘self-evident’ fact is very handy for an argument). People make empirical claims infused with their values and often fail to realise just how their values or assumptions underpin their claims.

This is not an easy argument to explain. One strategy is to use extreme examples to make the point. For example, Herbert Simon points to Hitler’s Mein Kampf as the ultimate example of value-based claims masquerading as facts. We can also identify historic academic research which asserts that men are more intelligent than women and some races are superior to others. In such cases, we would point out, for example, that the design of the research helped produce such conclusions: our values underpin our (a) assumptions about how to measure intelligence or other measures of superiority, and (b) interpretations of the results.

‘Wait a minute, though’ (you might say). ‘What about simple examples in which you can state facts with relative certainty – such as the statement “there are X number of words in this post”?’ ‘Fair enough’, I’d say (you will have to speak with a philosopher to get a better debate about the meaning of your X words claim; I would simply say that it is trivially true). But this statement doesn’t take you far in policy terms. Instead, you’d want to say that there are too many or too few words, before you decided what to do about it.

In that sense, we have the most practical explanation of the unclear fact/ value distinction: the use of facts in policy is to underpin evaluations (assessments based on values). For example, we might point to the routine uses of data to argue that a public service is in ‘crisis’ or that there is a public health related epidemic (note: I wrote the post before COVID-19; it referred to crises of ‘non-communicable diseases’). We might argue that people only talk about ‘policy problems’ when they think we have a duty to solve them.

Or, facts and values often seem the hardest to separate when we evaluate the success and failure of policy solutions, since the measures used for evaluation are as political as any other part of the policy process. The gathering and presentation of facts is inherently a political exercise, and our use of facts to encourage a policy response is inseparable from our beliefs about how the world should work.

It continues with an edited excerpt from p59 of Understanding Public Policy, which explores the implications of bounded rationality for contemporary accounts of ‘evidence-based policymaking’:

‘Modern science remains value-laden … even when so many people employ so many systematic methods to increase the replicability of research and reduce the reliance of evidence on individual scientists. The role of values is fundamental. Anyone engaging in research uses professional and personal values and beliefs to decide which research methods are the best; generate research questions, concepts and measures; evaluate the impact and policy relevance of the results; decide which issues are important problems; and assess the relative weight of ‘the evidence’ on policy effectiveness. We cannot simply focus on ‘what works’ to solve a problem without considering how we used our values to identify a problem in the first place. It is also impossible in practice to separate two choices: (1) how to gather the best evidence and (2) whether to centralize or localize policymaking. Most importantly, the assertion that ‘my knowledge claim is superior to yours’ symbolizes one of the most worrying exercises of power. We may decide to favour some forms of evidence over others, but the choice is value-laden and political rather than objective and innocuous’.

Implications for policy analysis

Many highly-intelligent and otherwise-sensible people seem to get very bothered with this kind of argument. For example, it gets in the way of (a) simplistic stories of heroic-objective-fact-based-scientists speaking truth to villainous-stupid-corrupt-emotional-politicians, (b) the ill-considered political slogan that you can’t argue with facts (or ‘science’), (c) the notion that some people draw on facts while others only follow their feelings, and (d) the idea that you can divide populations into super-facty versus post-truthy people.

A more sensible approach is to (1) recognise that all people combine cognition and emotion when assessing information, (2) treat politics and political systems as valuable and essential processes (rather than obstacles to technocratic policymaking), and (3) find ways to communicate evidence-informed analyses in that context. This article and this 750 Words post explore how to reflect on this kind of communication.

Most relevant posts in the 750 series

Linda Tuhiwai Smith (2012) Decolonizing Methodologies 

Carol Bacchi (2009) Analysing Policy: What’s the problem represented to be? 

Deborah Stone (2012) Policy Paradox

Who should be involved in the process of policy analysis?

William Riker (1986) The Art of Political Manipulation

Using Statistics and Explaining Risk (David Spiegelhalter and Gerd Gigerenzer)

Barry Hindess (1977) Philosophy and Methodology in the Social Sciences

See also

To think further about the relevance of this discussion, see this post on policy evaluation, this page on the use of evidence in policymaking, this book by Douglas, and this short commentary on ‘honest brokers’ by Jasanoff.


Filed under 750 word policy analysis, Academic innovation or navel gazing, agenda setting, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy

Policy Analysis in 750 Words: How to communicate effectively with policymakers

This post forms one part of the Policy Analysis in 750 words series overview. The title comes from this article by Cairney and Kwiatkowski on ‘psychology based policy studies’.

One aim of this series is to combine insights from policy research (1000, 500) and policy analysis texts. How might we combine insights to think about effective communication?

1. Insights from policy analysis texts

Most texts in this series relate communication to understanding your audience (or client) and the political context. Your audience has limited attention or time to consider problems. They may have good antennae for the political feasibility of any solution, but less knowledge of (or interest in) the technical details. In that context, your aim is to help them treat the problem as worthy of their energy (e.g. as urgent and important) and the solution as doable. Examples include:

  • Bardach: communicating with a client requires coherence, clarity, brevity, and minimal jargon.
  • Dunn: argumentation involves defining the size and urgency of a problem, assessing the claims made for each solution, synthesising information from many sources into a concise and coherent summary, and tailoring reports to your audience.
  • Smith: your audience makes a quick judgement on whether or not to read your analysis. Ask yourself questions including: how do I frame the problem to make it relevant, what should my audience learn, and how does each solution relate to what has been done before? Maximise interest by keeping communication concise, polite, and tailored to a policymaker’s values and interests.

2. Insights from studies of policymaker psychology

These insights emerged from the study of bounded rationality: policymakers do not have the time, resources, or cognitive ability to consider all information, possibilities, solutions, or consequences of their actions. They use two types of informational shortcut associated with concepts such as cognition and emotion, thinking ‘fast and slow’, ‘fast and frugal heuristics’, or, if you like more provocative terms:

  • ‘Rational’ shortcuts. Goal-oriented reasoning based on prioritizing trusted sources of information.
  • ‘Irrational’ shortcuts. Emotional thinking, or thought fuelled by gut feelings, deeply held beliefs, or habits.

We can use such distinctions to examine the role of evidence-informed communication, to reduce:

  • Uncertainty, or a lack of policy-relevant knowledge. Focus on generating ‘good’ evidence and concise communication as you collate and synthesise information.
  • Ambiguity, or the ability to entertain more than one interpretation of a policy problem. Focus on argumentation and framing as you try to maximise attention to (a) one way of defining a problem, and (b) your preferred solution.

Many policy theories describe the latter, in which actors: combine facts with emotional appeals, appeal to people who share their beliefs, tell stories to appeal to the biases of their audience, and exploit dominant ways of thinking or social stereotypes to generate attention and support. These possibilities produce ethical dilemmas for policy analysts.

3. Insights from studies of complex policymaking environments

None of this advice matters if it is untethered from reality.

Policy analysis texts focus on political reality to note that even a perfectly communicated solution is worthless if technically feasible but politically unfeasible.

Policy process texts focus on policymaking reality: showing that ideal-types such as the policy cycle do not guide real-world action, and describing more accurate ways to guide policy analysts.

For example, they help us rethink the ‘know your audience’ mantra by:

  • Identifying a tendency for most policy to be processed in policy communities or subsystems
  • Showing that many policymaking ‘centres’ create the instruments that produce policy change

Gone are the mythical days of a small number of analysts communicating to a single core executive (and of the heroic researcher changing the world by speaking truth to power). Instead, we have many analysts engaging with many centres, creating a need to not only (a) tailor arguments to different audiences, but also (b) develop wider analytical skills (such as to foster collaboration and the use of ‘design principles’).

How to communicate effectively with policymakers

In that context, we argue that effective communication requires analysts to:

1. Understand your audience and tailor your response (using insights from psychology)

2. Identify ‘windows of opportunity’ for influence (while noting that these windows are outside of anyone’s control)

3. Engage with real world policymaking rather than waiting for a ‘rational’ and orderly process to appear (using insights from policy studies).

See also:

Why don’t policymakers listen to your evidence?

3. How to combine principles on ‘good evidence’, ‘good governance’, and ‘good practice’

Entrepreneurial policy analysis


Filed under 750 word policy analysis, agenda setting, Evidence Based Policymaking (EBPM), public policy, Storytelling

Policy in 500 Words: Peter Hall’s policy paradigms

Several 500 Word and 1000 Word (a, b, c) posts try to define and measure policy change.

Most studies agree that policymaking systems produce huge amounts of minor change and rare instances of radical change, but they disagree on how to explain these patterns. For example:

  • Debates on incrementalism questioned whether radical change could be managed via non-radical steps.
  • Punctuated equilibrium theory describes policy change as a function of disproportionately low or high attention to problems, and akin to the frequency of earthquakes (a huge number of tiny changes, and more major changes than we would see in a ‘normal distribution’).

One of the most famous accounts of major policy change is by Peter Hall. ‘Policy paradigms’ help explain a tendency towards inertia, punctuated rarely by radical change (compare with discussions of path dependence and critical junctures).

A policy paradigm is a dominant and often taken-for-granted worldview (or collection of beliefs) about: policy goals, the nature of a policy problem, and the instruments to address it.

Paradigms can operate for long periods, subject to minimal challenge or defended successfully during events that call current policies into question. Adherence to a paradigm produces two ‘orders’ of change:

  • 1st order: frequent routine bureaucratic changes to instruments while maintaining policy goals.
  • 2nd order: less frequent, non-routine changes (or use of new instruments) while maintaining policy goals.

Radical and rare – 3rd order – policy change may only follow a crisis in which policymakers cannot solve a policy problem or explain why policy is failing. It prompts a reappraisal and rejection of the dominant paradigm, by a new government with new ways of thinking and/or a government rejecting current experts in favour of new ones. Hall’s example was the rapid paradigm shift in UK economic policy – from ‘Keynesianism’ to ‘Monetarism’ – within a few years.

Hall’s account prompted two different debates:

1. Some describe Hall’s case study as unusual.

Many scholars produced different phrases to describe a more likely pattern of (a) non-radical policy changes contributing to (b) long-term paradigm change and (c) institutional change, perhaps over decades. They include: ‘gradual change with transformative results’ and ‘punctuated evolution’ (see also 1000 Words: Evolution).

2. Some describe Hall’s case study as inaccurate.

This UK paradigm change did not actually happen. Instead, there was:

(a) A sudden and profound policy change that did not represent a paradigm shift (the UK experiment with Monetarism was short-lived).

(b) A series of less radical changes that produced paradigm change over decades: from Keynesianism to ‘neo-Keynesianism’, or from state intervention to neoliberalism (such as to foster economic growth via private rather than public borrowing and spending).

These debates connect strongly to issues in policy analysis, particularly if analysts seek transformative policy change to challenge unequal and unfair outcomes (such as in relation to racism or the climate crisis):

  1. Is paradigm change generally only possible over decades?
  2. How will we know if this transformation is actually taking place and here to stay (if even the best of us can be fooled by temporary developments)?

See also:

1. Beware the use of the word ‘evolution’

2. This focus on the endurance of policy instrument change connects to studies of policy success (see Great Policy Successes).

3. Paul Cairney and Chris Weible (2015) ‘Comparing and Contrasting Peter Hall’s Paradigms and Ideas with the Advocacy Coalition Framework’ in (eds) M. Howlett and J. Hogan Policy Paradigms in Theory and Practice (Basingstoke: Palgrave) PDF


Filed under 500 words, public policy

The future of education equity policy: ‘neoliberal’ versus ‘social justice’ approaches

This post summarises Cairney and Kippin’s qualitative systematic review of peer-reviewed research on education equity policy. See also: The future of equity policy in education and health: will intersectoral action be the solution? and posts on ‘Health in All Policies’ and health inequalities.

Governments, international organisations, and researchers all express a high and enduring commitment to ‘education equity’. Yet, this is where the agreement ends.

The definition of the problem of inequity and the feasibility of solutions are highly contested, to the extent that it is common to identify two competing approaches:

1. A ‘neoliberal’ approach, focusing on education’s role in the economy, market-based reforms, and ‘new public management’ reforms to schools.

2. A ‘social justice’ approach, focusing on education’s role in student wellbeing and life opportunities, and state-led action to address the wider social determinants of education outcomes.

Almost all of the research included in our review suggests that the neoliberal approach dominates international and domestic policy agendas at the expense of the wider focus on social justice.

We describe education equity researchers as the narrators of cautionary tales of education inequity. Most employ critical policy analysis to challenge what they call the dominant stories of education that hinder meaningful equity policies.

First, many describe common settings, including a clear sense that unfair inequalities endure despite global and domestic equity rhetoric.

They also describe the multi-level nature of the governance of education, but with less certainty about relationships across levels. A small number of international organisations and countries are key influencers of a global neoliberal agenda, and there is discretion to influence policy at local and school levels. In that context, some studies relate the lack of progress to the malign influence of one or more levels, such as global and central government agendas undermining local change, or local actors disrupting central initiatives.

Second, studies describe similar plots. Many describe stymied progress on equity caused by the negative impacts of neoliberalism: undermining equity by (1) equating it with narrow definitions of equal access to well-performing schools and test-based attainment outcomes, and (2) taking attention from social justice to focus on economic competitiveness.

Many describe policymakers using a generic focus on equity as a facade, to ignore and reproduce inequalities in relation to minoritized populations. Or, equity is a ‘wicked’ issue that defies simple solutions. Many plots involve a contrast between agency-focused narratives that emphasise hopefulness (e.g. among ‘change agents’) and systemic or structural narratives that emphasise helplessness.

Third, they present common ideas about characters. In global narratives, researchers challenge the story told by international organisations that they are the heroes, providing funding backed by crucial instructions to make education systems and economies competitive. Most education articles portray neoliberal international organisations and central governments as the villains: narrowing equity to simplistic measures of performance at the expense of more meaningful outcomes.

At a national and local level, they criticise the dominant stories of equity within key countries, such as the US, that continue to reproduce highly unequal outcomes while projecting a sense of progress. The most vividly told story is of white parents, who portray their ‘gifted’ children as most deserving of advantage in the school system, and therefore the victims of attempts to widen access or redistribute scarce resources (high quality classes and teachers). Rather, these parents are the villains standing – sometimes unintentionally, but mostly intentionally – in the way of progress.

The only uncertainty regards the role of local and school leaders. In some cases, they are the initially-heroic figures, able to find ways to disrupt a damaging national agenda and become the ‘change agents’ that shift well-established rules and norms before being thwarted by community and parental opposition. In others, they are perhaps-unintentional villains who reproduce racialised, gendered, or class-based norms regarding which students are ‘gifted’ and worthy of investment versus which students need remedial classes or disrupt other learners.

Fourth, the moral of the story is mostly clear. Almost all studies criticise the damaging impact of neoliberal definitions of equity and the performance management and quasi-market techniques that support it. They are sold as equity measures but actually exacerbate inequalities. As such, the moral is to focus our efforts elsewhere: on social justice, the social and economic determinants of education, and the need to address head-on the association between inequalities and minoritized populations (to challenge ‘equity for all’ messages). However, it is difficult to pinpoint the source of much-needed change. In some cases, strong direction from central governments is necessary to overcome obstacles to change. In others, only bottom-up action by local and school leaders will induce change.

Perhaps the starkest difference in approaches relates to expectations for the future. For ‘neoliberal’ advocates, solutions such as market incentives or education system reforms will save schools and the next generation of students. In contrast, ‘social justice’ advocates expect these reforms to fail and cause irreparable damage to the prospect of education equity.


Filed under COVID-19, education policy, Policy learning and transfer, public policy

Education equity policy: ‘equity for all’ as a distraction from race, minoritization, and marginalization

By Paul Cairney and Sean Kippin

This post summarizes a key section of our review of education equity policymaking [see the full article for references to the studies summarized here].

One of the main themes is that many governments present a misleading image of their education policies. There are many variations on this theme, in which policymakers:

  1. Describe the energetic pursuit of equity, and use the right language, as a way to hide limited progress.
  2. Pursue ‘equity for all’ initiatives that ignore or downplay the specific importance of marginalization and minoritization, such as in relation to race and racism, immigration, ethnic minorities, and indigenous populations.
  3. Pursue narrow definitions of equity in terms of access to schools, at the expense of definitions that pay attention to ‘out of school’ factors and social justice.

Minoritization is a strong theme in US studies in particular. US experiences help us categorise multiple modes of marginalisation in relation to race and migration, driven by witting and unwitting action and explicit and implicit bias:

  • The social construction of students and parents. Examples include: framing white students as ‘gifted’ and more deserving of merit-based education (or victims of equity initiatives); framing non-white students as less intelligent, more in need of special needs or remedial classes, and having cultural or other learning ‘deficits’ that undermine them and disrupt white students; and, describing migrant parents as unable to participate until they learn English.
  • Maintaining or failing to challenge inequitable policies. Examples include higher funding for schools and colleges with higher white populations, and tracking (segregating students according to perceived ability), which benefit white students disproportionately.
  • Ignoring social determinants or ‘out of school’ factors.
  • Creating the illusion of equity with measures that exacerbate inequalities. For example, promoting school choice policies while knowing that the rules restrict access to sought-after schools.
  • Promoting initiatives to ignore race, including so-called ‘color blind’ or ‘equity for all’ initiatives.
  • Prioritizing initiatives at the expense of racial or socio-economic equity, such as measures to boost overall national performance at the expense of targeted measures.
  • Game playing and policy subversion, including school and college selection rules to restrict access and improve metrics.

The wider international – primarily Global North – experience suggests that minoritization and marginalization in relation to race, ethnicity, and migration are routine impediments to equity strategies, albeit with some uncertainty about which policies would have the most impact.

Other country studies describe the poor treatment of citizens in relation to immigration status or ethnicity, often while presenting the image of a more equitable system. Until recently, Finland’s global reputation for education equity, built on universalism and comprehensive schools, has contrasted with its historic ‘othering’ of immigrant populations. Japan’s reputation for containing a homogeneous population, allowing its governments to present an image of classless egalitarianism and a harmonious society, contrasts with its discrimination against foreign students. Multiple studies of Canadian provinces provide the strongest accounts of the symbolic and cynical use of multiculturalism for political gains and economic ends.

As in the US, many countries use ‘special needs’ categories to segregate immigrant and ethnic minority populations. Mainstreaming versus special needs debates have a clear racial and ethnic dimension when (1) some groups are more likely to be categorised as having learning disabilities or behavioural disorders, and (2) language and cultural barriers are listed as disabilities in many countries. Further, ‘commonwealth’ country studies identify the marginalisation of indigenous populations in ways comparable to the US marginalisation of students of colour.

Overall, these studies generate the sense that the frequently used language of education equity policy can signal a range of possibilities, from (1) high energy and sincere commitment to social justice, to (2) the cynical use of rhetoric and symbolism to protect historic inequalities.

Examples:

  • Turner, E.O., and Spain, A.K., (2020) ‘The Multiple Meanings of (In)Equity: Remaking School District Tracking Policy in an Era of Budget Cuts and Accountability’, Urban Education, 55, 5, 783-812 https://doi.org/10.1177%2F0042085916674060
  • Thorius, K.A. and Maxcy, B.D. (2015) ‘Critical Practice Analysis of Special Education Policy: An RTI Example’, Remedial and Special Education, 36, 2, 116-124 https://doi.org/10.1177%2F0741932514550812
  • Felix, E.R. and Trinidad, A. (2020) ‘The decentralization of race: tracing the dilution of racial equity in educational policy’, International Journal of Qualitative Studies in Education, 33, 4, 465-490 https://doi.org/10.1080/09518398.2019.1681538
  • Alexiadou, N. (2019) ‘Framing education policies and transitions of Roma students in Europe’, Comparative Education, 55, 3, https://doi.org/10.1080/03050068.2019.1619334

See also: https://paulcairney.wordpress.com/2017/09/09/policy-concepts-in-500-words-social-construction-and-policy-design/


Filed under education policy, Evidence Based Policymaking (EBPM), Policy learning and transfer, Prevention policy, public policy

Perspectives on academic impact and expert advice to policymakers

A blog post prompted by this fascinating post by Dr Christiane Gerblinger: Are experts complicit in making their advice easy for politicians to ignore?

There is a lot of advice out there for people seeking to make an ‘impact’ on policy with their research, but some kinds of advice must seem like they are a million miles apart.

For the sake of brevity, here are some exemplars of the kinds of discussion that you might find:

Advice from former policymakers

Here is what you could have done to influence my choices when I was in office. Almost none of you did it.

(for a nicer punchline see How can we demonstrate the public value of evidence-based policy making when government ministers declare that the people ‘have had enough of experts’?)

Advice from former civil servants

If you don’t know and follow the rules here, people will ignore your research. We despair when you just email your articles.

(for nicer advice see Creating and communicating social research for policymakers in government)

Advice from training courses on communication

Be concise and engaging.

Advice from training courses on policy impact

Find out where the action is, learn the rules, build up relationships and networks, become a trusted guide, be in the right place at the right time to exploit opportunities, give advice rather than sitting on the fence.

(see for example Knowledge management for policy impact: the case of the European Commission’s Joint Research Centre)

Advice from researchers with some experience of engagement

Do great research, make it relevant and readable, understand your policymaking context, decide how far you want to go to have an impact, be accessible, build relationships, be entrepreneurial.

(see Beware the well-intentioned advice of unusually successful academics)

Advice from academic-practitioner exchanges

Note the different practices and incentives that undermine routine and fruitful exchanges between academics, practitioners, and policymakers.

(see Theory and Practice: How to Communicate Policy Research beyond the Academy and ANZOG Wellington).

Advice extrapolated from policy studies

Your audience decides if your research will have impact; policymakers will necessarily ignore almost all of it; a window of opportunity may never arise; and, your best shot may be to tailor your research findings to policymakers whose beliefs you may think are abhorrent.

(discussed in how much impact can you expect from your analysis? and book The Politics of Policy Analysis)

Inference from my study of UK COVID-19 policy

Very few expert advisers had a continuous impact on policy, some had decent access, but almost all were peripheral players or outsiders by choice.

(see The UK government’s COVID-19 policy: what does ‘guided by the science’ mean in practice? and COVID-19 page)

Inference from Dr Gerblinger

Experts ensure that they are ignored when: ‘focussing extensively on one strand of enquiry while sidestepping the wider context; expunging complexity; and routinely raising the presence of inconclusiveness’.

What can we make of all of this advice?

One way to navigate all of this material is to make some basic distinctions between:

Sensible basic advice to early career researchers

Know your audience, and tailor your communication accordingly; see academic-practitioner exchange as two-way conversation rather than one-way knowledge transfer.

Take home message: here are some sensible ways to share experiences with people who might find your research useful.

Reflections from people with experience

Their experience will likely not reflect your own position or context (but might be useful sometimes).

Take home message: I think this stuff worked for me, but I am not really sure, and I doubt you will have the same resources.

Reflections from studies of academic-practitioner exchange

These studies tend to find minimal evidence that people are (a) evaluating research engagement projects, and (b) finding tangible evidence of success (see Research engagement with government: insights from research on policy analysis and policymaking)

Take home message: there is a lot of ‘impact’ work going on, but no one is sure what it all adds up to.

Policy initiatives such as the UK Research Excellence Framework, which requires case studies of policy (or other) impact to arise directly from published research.

Take home message: I have my own thoughts, but see Rethinking policy ‘impact’: four models of research-policy relations

Reflections from people like me

Policy studies can be quite dispiriting. It often looks like I am saying that none of these activities will make much of a difference to policy or policymaking. Rather, I am saying to beware the temptation to turn studies that describe policymaking complexity (e.g. 500 Words) into an agent-centred story of heroically impactful researchers (see for example the Discussion section of this article on health equity policy).

Take home message: don’t confuse studies of policymaking with advice for policy participants.

In other words, identify what you are after before you start to process all of this advice. If you want to engage more with policymakers, you will find some sensible practical advice. If you want to be responsible for a fundamental change of public policy in your field, I doubt any of the available advice will help (unless you seek an explanation for failure).


Filed under Academic innovation or navel gazing, Evidence Based Policymaking (EBPM), public policy

The future of public health policymaking after COVID-19: lessons from Health in All Policies

Paul Cairney, Emily St Denny, Heather Mitchell 

This post summarises new research on the health equity strategy Health in All Policies. As our previous post suggests, it is common to hope that a major event will create a ‘window of opportunity’ for such strategies to flourish, but the current COVID-19 experience suggests otherwise. If so, what do HIAP studies tell us about how to respond, and do they offer any hope for future strategies? The full report is on Open Research Europe, accompanied by a brief interview on its contribution to the Horizon 2020 project – IMAJINE – on spatial justice.

COVID-19 should have prompted governments to treat health improvement as fundamental to public policy

Many had made strong rhetorical commitments to public health strategies focused on preventing a pandemic of non-communicable diseases (NCDs). To do so, they would address the ‘social determinants’ of health, defined by the WHO as ‘the unfair and avoidable differences in health status’ that are ‘shaped by the distribution of money, power and resources’ and ‘the conditions in which people are born, grow, live, work and age’.

COVID-19 reinforces the impact of the social determinants of health. Health inequalities result from factors such as income and social and environmental conditions, which influence people’s ability to protect and improve their health. COVID-19 had a visibly disproportionate impact on people with (a) underlying health conditions associated with NCDs, and (b) less ability to live and work safely.

Yet, the opposite happened. The COVID-19 response side-lined health improvement

Health departments postponed health improvement strategies and moved resources to health protection.

This experience shows that the evidence does not speak for itself

The evidence on social determinants is clear to public health specialists, but the idea of social determinants is less well known or convincing to policymakers.

It also challenges the idea that the logic of health improvement is irresistible

Health in All Policies (HIAP) is the main vehicle for health improvement policymaking, underpinned by: a commitment to health equity by addressing the social determinants of health; the recognition that the most useful health policies are not controlled by health departments; the need for collaboration across (and outside) government; and, the search for high level political commitment to health improvement.

Its logic is undeniable to HIAP advocates, but not policymakers. A government’s public commitment to HIAP does not lead inevitably to the roll-out of a fully-formed HIAP model. There is a major gap between the idea of HIAP and its implementation. It is difficult to generate HIAP momentum, and it can be lost at any time.

Instead, we need to generate more realistic lessons from health improvement and promotion policy

However, most HIAP research does not provide these lessons. Most HIAP research combines:

  1. functional logic (here is what we need)
  2. programme logic (here is what we think we need to do to achieve it), and
  3. hope.

Policy theory-informed empirical studies of policymaking could help produce a more realistic agenda, but very few HIAP studies seem to exploit their insights.

To that end, this review identifies lessons from studies of HIAP and policymaking

It summarises a systematic qualitative review of HIAP research. It includes 113 articles (2011-2020) that refer to policymaking theories or concepts while discussing HIAP.

We produced these conclusions from pre-COVID-19 studies of HIAP and policymaking, but our new policymaking context – and its ironic impact on HIAP – is impossible to ignore.

It suggests that HIAP advocates produced a 7-point playbook for the wrong game

The seven most common pieces of advice add up to a plausible but incomplete strategy:

  1. adopt a HIAP model and toolkit
  2. raise HIAP awareness and support in government
  3. seek win-win solutions with partners
  4. avoid the perception of ‘health imperialism’ when fostering intersectoral action
  5. find HIAP policy champions and entrepreneurs
  6. use HIAP to support the use of health impact assessments (HIAs)
  7. challenge the traditional cost-benefit analysis approach to valuing HIAP.

Yet, two emerging pieces of advice highlight the limits to the current playbook and the search for its replacement:

  1. treat HIAP as a continuous commitment to collaboration and health equity, not a uniform model; and,
  2. address the contradictions between HIAP aims.

As a result, most country studies report a major, unexpected, and disappointing gap between HIAP commitment and actual outcomes

These general findings are apparent in almost all relevant studies. They stand out in the ‘best case’ examples where: (a) there is high political commitment and strategic action (such as South Australia), or (b) political and economic conditions are conducive to HIAP (such as Nordic countries).

These studies show that the HIAP playbook has unanticipated results, such as when the win-win strategy leads to HIAP advocates giving ground but receiving little in return.

HIAP strategies to challenge the status quo are also overshadowed by more important factors, including (a) a far higher commitment to existing healthcare policies and the core business of government, and (b) state retrenchment. Additional studies of decentralised HIAP models find major gaps between (a) national strategic commitment (backed by national legislation) and (b) municipal government progress.

Some studies acknowledge the need to use policymaking research to produce new ways to encourage and evaluate HIAP success

Studies of South Australia situate HIAP in a complex policymaking system in which the link between policy activity and outcomes is not linear.  

Studies of Nordic HIAP show that a commitment to municipal responsibility and stakeholder collaboration rules out the adoption of a national uniform HIAP model.

However, most studies do not use policymaking research effectively or appropriately

Almost all HIAP studies only scratch the surface of policymaking research (some try to synthesise its insights, but at the cost of clarity).

Most HIAP studies use policy theories to:

  1. produce practical advice (such as to learn from ‘policy entrepreneurs’), or
  2. supplement their programme logic (to describe what they think causes policy change and better health outcomes).

Most policy theories were not designed for this purpose.

Policymaking research helps primarily to explain the HIAP ‘implementation gap’

Its main lesson is that policy outcomes are beyond the control of policymakers and HIAP advocates. This explanation does not show how to close implementation gaps.

Its practical lessons come from critical reflection on dilemmas and politics, not the reinvention of a playbook

It prompts advocates to:

  • Treat HIAP as a political project, not a technical exercise or puzzle to be solved.
  • Re-examine the likely impact of a focus on intersectoral action and collaboration, to recognise the impact of imbalances of power and the logic of policy specialisation.
  • Revisit the meaning-in-practice of the vague aims that they take for granted without explaining, such as co-production, policy learning, and organisational learning.
  • Engage with key trade-offs, such as between a desire for uniform outcomes (to produce health equity) but acceptance of major variations in HIAP policy and policymaking.
  • Avoid reinventing phrases or strategies when facing obstacles to health improvement.

We describe these points in more detail here:

Our Open Research Europe article (peer reviewed) The future of public health policymaking… (europa.eu)

Paul summarises the key points as part of a HIAP panel: Health in All Policies in times of COVID-19

ORE blog on the wider context of this work: forthcoming


Filed under agenda setting, COVID-19, Evidence Based Policymaking (EBPM), Public health, public policy

The COVID-19 exams fiasco across the UK: why did policymaking go so wrong?

This post first appeared on the LSE British Politics and Policy blog, and it summarises our new article: Sean Kippin and Paul Cairney (2021) ‘The COVID-19 exams fiasco across the UK: four nations and two windows of opportunity’, British Politics, PDF Annex. The focus on inequalities of attainment is part of the IMAJINE project on spatial justice and territorial inequalities.

In the summer of 2020, after cancelling exams, the UK and devolved governments sought teacher estimates on students’ grades, but supported an algorithm to standardise the results. When the results produced a public outcry over unfair consequences, they initially defended their decision but reverted quickly to teacher assessment. These experiences, argue Sean Kippin and Paul Cairney, highlight the confluence of events and choices in which an imperfect and rejected policy solution became a ‘lifeline’ for four beleaguered governments. 

In 2020, the UK and devolved governments performed a ‘U-turn’ on their COVID-19 school exams replacement policies. The experience was embarrassing for education ministers and damaging to students. There are significant differences between (and often within) the four nations in terms of the structure, timing, weight, and relationship between the different examinations. However, in general, the A-level (England, Northern Ireland, Wales) and Higher/ Advanced Higher (Scotland) examinations have similar policy implications, dictating entry to further and higher education, and influencing employment opportunities. The Priestley review, commissioned by the Scottish Government after its U-turn, described replacing exams in this way as an ‘impossible task’.

Initially, each government defined the new policy problem in relation to the need to ‘credibly’ replicate the purpose of exams to allow students to progress to tertiary education or employment. All four quickly announced their intentions to allocate grades to students in some form, rather than replace the assessments with, for example, remote examinations. However, mindful of the long-term credibility of the examinations system and of ensuring fairness, each government opted to maintain the qualifications and seek a similar distribution of grades to previous years. A key consideration was that UK universities accept large numbers of students from across the UK.

One potential solution open to policymakers was to rely solely on centre assessed grades (CAGs), i.e. teacher grading. CAGs are ‘based on a range of evidence including mock exams, non-exam assessment, homework assignments and any other record of student performance over the course of study’. Potential problems included the risk of high variation and discrepancies between different centres, the potential overload of the higher education system, and the tendency for teacher predicted grades to reward already privileged students and punish disabled, non-white, and economically deprived children.

A second option was to take CAGs as a starting point, then use an algorithm to produce ‘standardisation’, which was potentially attractive to each government as it allowed students to complete secondary education and to progress to the next level in similar ways to previous (and future) cohorts. Further, an emphasis on the technical nature of this standardisation, with qualifications agencies taking the lead in designing the process by which grades would be allocated and opting not to share the details of the algorithm, was a key part of its (temporary) viability. Each government then made similar claims when defending the problem definition and selecting the solution. Yet this approach reduced both the debate on the unequal impact of this process on students, and the chance for other experts to examine whether the algorithm would produce the desired effect. Policymakers in all four governments assured students that the grading would be accurate and fair, with teacher discretion playing a large role in the calculation of grades.
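The algorithms’ details were withheld at the time, but the general logic of standardisation described here (fitting teacher rankings to a school’s historical grade distribution) can be sketched as a toy example. The function and data below are hypothetical illustrations, not the qualification agencies’ actual method:

```python
# Toy sketch of grade standardisation: assign grades to teacher-ranked
# students so that the grade distribution matches the school's past results.
# Entirely hypothetical; the real 2020 algorithms were more complex and unpublished.

def standardise(ranked_students, historical_shares, grades):
    """Assign grades so their distribution matches historical shares.

    ranked_students: student names, best first (teacher ranking).
    historical_shares: fraction of past cohorts awarded each grade,
                       in the same order as `grades`.
    """
    n = len(ranked_students)
    results, i = {}, 0
    for grade, share in zip(grades, historical_shares):
        quota = round(share * n)  # places available at this grade
        for student in ranked_students[i:i + quota]:
            results[student] = grade
        i += quota
    # Any students left over after rounding receive the lowest grade.
    for student in ranked_students[i:]:
        results[student] = grades[-1]
    return results

cohort = ["s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8", "s9", "s10"]
shares = [0.2, 0.3, 0.3, 0.2]  # school's past share of A/B/C/D grades
print(standardise(cohort, shares, ["A", "B", "C", "D"]))
```

This mechanical fitting is what produced the criticised effect: an individual’s standardised grade depends heavily on their school’s past results, not only on their own assessed performance.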

To these governments, it appeared at first that they had found a fair and efficient (or at least defensible) way to allocate grades, and public opinion did not respond negatively to its announcement. However, these appearances proved to be profoundly deceptive and vanished as each set of exam results was published. The Scottish national mood shifted so intensely that, after a few days, pursuing standardisation no longer seemed politically feasible. The intense criticism centred on the unequal level of grade reductions after standardisation, rather than the unequal overall rise in grade performance after teacher assessment and standardisation (which advantaged poorer students).

Despite some recognition that similar problems were afoot elsewhere, this shift of problem definition did not happen in the rest of the UK until (a) their published exam results highlighted similar problems regarding the role of previous school performance on standardised results, and (b) the Scottish Government had already changed course. Upon the release of grades outside Scotland, it became clear that downgrades were also concentrated in more deprived areas. For instance, in Wales, 42% of students saw their A-Level results lowered from their Centre Assessed Grades, with the figure close to a third for Northern Ireland.

Each government thus faced similar choices between defending the original system by challenging the emerging consensus around its apparent unfairness; modifying the system by changing the appeal process; or abandoning it altogether and reverting to solely teacher assessed grades. Ultimately, the remaining three governments followed the same path. Initially, they opted to defend their original policy choice. However, by 17 August, the UK, Welsh, and Northern Irish education secretaries announced (separately) that examination grades would be based solely on CAGs – unless the standardisation process had generated a higher grade (students would receive whichever was highest).

Scotland’s initial experience was instructive to the rest of the UK and its example provided the UK government with a blueprint to follow (eventually). It began with a new policy choice – reverting to teacher assessed grades – sold as fairer to victims of the standardisation process. Once this precedent had been set, a different course for policymakers at the UK level became difficult to resist, particularly when faced with a similar backlash. The UK’s government’s decision in turn influenced the Welsh and Northern Irish governments.

In short, we can see that the particular ordering of choices created a cascading effect across the four governments, producing initially one policy solution before triggering a U-turn. This focus on order and timing should not be lost during the inevitable inquiries and reports on the examinations systems. The take-home message is not to ignore the policy process when evaluating the long-term effect of these policies. A focus on why the standardisation processes went wrong is welcome, but we should also ask why the policymaking process malfunctioned, producing a wildly inconsistent approach to the same policy choice in such a short space of time. Examining both aspects of this fiasco will be crucial to the grading process in 2021, given that governments will be seeking an alternative to exams for a second year.

__________________________

Note: the above draws on the authors’ published work in British Politics.


Filed under IMAJINE, Policy learning and transfer, public policy, UK politics and policy

What have we learned so far from the UK government’s COVID-19 policy?

This post first appeared on LSE British Politics and Policy (27.11.20) and is based on this article in British Politics.

Paul Cairney assesses government policy in the first half of 2020. He identifies the intense criticism of its response so far, and encourages more systematic assessments grounded in policy research.

In March 2020, COVID-19 prompted policy change in the UK at a speed and scale only seen during wartime. According to the UK government, policy was informed heavily by science advice. Prime Minister Boris Johnson argued that, ‘At all stages, we have been guided by the science, and we will do the right thing at the right time’. Further, key scientific advisers such as Sir Patrick Vallance emphasised the need to gather evidence continuously to model the epidemic and identify key points at which to intervene, to reduce the size of the peak of population illness initially, then manage the spread of the virus over the longer term.

Both ministers and advisors emphasised the need for individual behavioural change, supplemented by government action, in a liberal democracy in which direct imposition is unusual and unsustainable. However, for its critics, the government experience has quickly become an exemplar of policy failure.

Initial criticisms include that ministers did not take COVID-19 seriously enough in relation to existing evidence, when its devastating effect was apparent in China in January and Italy from February; act as quickly as other countries to test for infection to limit its spread; or introduce swift-enough measures to close schools, businesses, and major social events. Subsequent criticisms highlight problems in securing personal protective equipment (PPE), testing capacity, and an effective test-trace-and-isolate system. Some suggest that the UK government was responding to the ‘wrong pandemic’, assuming that COVID-19 could be treated like influenza. Others blame ministers for not pursuing an elimination strategy to minimise its spread until a vaccine could be developed. Some criticise their over-reliance on models which underestimated the reproduction number (R) and ‘doubling time’ of cases, and contributed to a 2-week delay of lockdown. Many describe these problems and delays as the contributors to the UK’s internationally high number of excess deaths.
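The scale of a two-week delay can be illustrated with simple doubling-time arithmetic. The numbers below are illustrative only, not the actual model estimates debated at the time:

```python
import math

def growth_over(days, doubling_time):
    """Multiplicative growth in cases over `days`, given a doubling time in days."""
    return 2 ** (days / doubling_time)

# Illustrative comparison: an assumed doubling time of ~6 days versus a
# faster ~3 days, compounded over a 14-day delay to lockdown.
assumed = growth_over(14, 6.0)  # roughly a 5-fold rise in cases
faster = growth_over(14, 3.0)   # roughly a 25-fold rise in cases
print(f"assumed: {assumed:.1f}x; faster: {faster:.1f}x")

# The same relationship expressed via the exponential growth rate:
# doubling_time = ln(2) / daily_growth_rate
daily_rate = math.log(2) / 3.0  # ~0.23 per day for a 3-day doubling time
```

If cases double twice as fast as a model assumes, a two-week delay multiplies the eventual caseload several times over, which is why the criticism of underestimated doubling times carries such weight.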

How can we hold ministers to account in a meaningful way?

I argue that these debates are often fruitless and too narrow because they do not involve systematic policy analysis, take into account what policymakers can actually do, or widen debate to consider whose lives matter to policymakers. Drawing on three policy analysis perspectives, I explore the questions that we should ask to hold ministers to account in a way that encourages meaningful learning from early experience.

These questions include:

Was the government’s definition of the problem appropriate?
Much analysis of UK government competence relates to specific deficiencies in preparation (such as shortages in PPE), immediate action (such as to discharge people from hospitals to care homes without testing them for COVID-19), and implementation (such as an imperfect test-trace-and-isolate system). The broader issue relates to its focus on intervening in late March to protect healthcare capacity during a peak of infection, rather than taking a quicker and more precautionary approach. This judgment relates largely to its definition of the policy problem which underpins every subsequent policy intervention.

Did the government select the right policy mix at the right time? Who benefits most from its choices?

Most debates focus on the ‘lock down or not?’ question without exploring fully the unequal impact of any action. The government initially relied on exhortation, based on voluntarism and an appeal to social responsibility. Initial policy inaction had unequal consequences on social groups, including people with underlying health conditions, black and ethnic minority populations more susceptible to mortality at work or discrimination by public services, care home residents, disabled people unable to receive services, non-UK citizens obliged to pay more to live and work while less able to access public funds, and populations (such as prisoners and drug users) that receive minimal public sympathy. Then, in March, its ‘stay at home’ requirement initiated a major new policy and different unequal impacts in relation to the income, employment, and wellbeing of different groups. These inequalities are lost in more general discussions of impacts on the whole population.

Did the UK government make the right choices on the trade-offs between values, and what impacts could the government have reasonably predicted?

Initially, the most high-profile value judgment related to freedom from state coercion to reduce infection versus freedom from the harm of infection caused by others. Then, values underpinned choices on the equitable distribution of measures to mitigate the economic and wellbeing consequences of lockdown. A tendency for the UK government to project centralised and ‘guided by the science’ policymaking has undermined public deliberation on these trade-offs between policies. Such deliberation will be crucial to ongoing debates on the trade-offs associated with national and regional lockdowns.

Did the UK government combine good policy with good policymaking?

A problem like COVID-19 requires trial-and-error policymaking on a scale that seems incomparable to previous experiences. It requires further reflection on how to foster transparent and adaptive policymaking and widespread public ownership for unprecedented policy measures, in a political system characterised by (a) accountability focused incorrectly on strong central government control and (b) adversarial politics that is not conducive to consensus seeking and cooperation.

These additional perspectives and questions show that too-narrow questions – such as whether the UK government was ‘following the science’ – do not help us understand the longer-term development and wider consequences of UK COVID-19 policy. Indeed, such a narrow focus on science marginalises wider discussions of values and the populations that are most disadvantaged by government policy.

_____________________


Filed under COVID-19, Evidence Based Policymaking (EBPM), POLU9UK, Public health, public policy, UK politics and policy

Policy learning to reduce inequalities: a practical framework

This post first appeared on LSE BPP on 16.11.2020 and it describes the authors’ published work in Territory, Politics, Governance (for IMAJINE)

While policymakers often want to learn how other governments have responded to similar policy problems, policy learning is characterized by contestation. Policymakers compete to define the problem, set the parameters for learning, and determine which governments should take the lead. Emily St.Denny, Paul Cairney, and Sean Kippin discuss a framework that would encourage policy learning in multilevel systems.

Governments face similar policy problems and there is great potential for mutual learning and policy transfer. Yet, most policy research highlights the political obstacles to learning and the weak link between research and transfer. One solution may be to combine academic insights from policy research with practical insights from people with experience of learning in political environments. In that context, our role is to work with policy actors to produce pragmatic strategies to encourage realistic research-informed learning.

Pragmatic policy learning

Producing concepts, research questions, and methods that are interesting to both academics and practitioners is challenging. It requires balancing different approaches to gathering and considering ‘evidence’ when seeking to solve a policy problem. Practitioners need to gather evidence quickly, focusing on ‘what works’ or positive experiences from a small number of relevant countries. Policy scholars may seek more comprehensive research and warn against simple solutions. Further, they may do so without offering a feasible alternative to their audience.

To bridge these differences and facilitate policy learning, we encourage a pragmatic approach to policy learning that requires:

  • Seeing policy learning through the eyes of participants, to understand how they define and seek to solve this problem;
  • Incorporating insights from policy research to construct a feasible approach;
  • Reflecting on this experience to inform research.

Our aim is not ‘evidence-based policymaking’. Rather, it is to incorporate the fact that researchers and evidence form only one small component of a policymaking system characterized by complexity. Additionally, policy actors enjoy less control over these systems than we might like to admit. Learning is therefore best understood as a contested process in which actors combine evidence and beliefs to define policy problems, identify technically and politically feasible solutions, and negotiate who should be responsible for their adoption and delivery in multilevel policymaking systems. Taking seriously the contested, context-specific, and political nature of policymaking is crucial for producing effective advice from which to learn.

Policy learning to reduce inequalities

We apply these insights as part of the EU Horizon 2020 project Integrative Mechanisms for Addressing Spatial Justice and Territorial Inequalities in Europe (IMAJINE). Its overall aim is to research how national and territorial governments across the European Union pursue ‘spatial justice’ and try to reduce inequalities.

Our role is to facilitate policy learning and consider the transfer of policy solutions from successful experiences. Yet, we are confronted by the usual challenges. They include the need to: identify appropriate exemplars from where to draw lessons; help policy practitioners control for differences in context; and translate between academic and practitioner communities.

Additionally, we work on an issue – inequality – which is notoriously ambiguous and contested. It involves not only scientific information about the lives and experiences of people, but also political disagreement about the legitimate role of the state in intervening in people’s lives or redistributing resources. Developing a policy learning framework that can generate practically useful insights for policy actors is difficult but key to ensuring policy effectiveness and coherence.

Drawing on work we carried out for the Scottish Government’s National Advisory Council on Women and Girls on approaches to reducing inequalities in relation to gender mainstreaming, we apply the IMAJINE framework to support policy learning. The IMAJINE framework guides such academic–practitioner analysis in four steps:

Step 1: Define the nature of policy learning in political systems.

Preparing for learning requires taking into account the interaction between:

  • Politics, in which actors contest the nature of problems and the feasibility of solutions;
  • Bounded rationality, which requires actors to use organizational and cognitive shortcuts to gather and use evidence;
  • ‘Multi-centric’ policymaking systems, which limit a single central government’s control over choices and outcomes.

These dynamics play out in different ways in each territory, which means that the importers and exporters of lessons are operating in different contexts and addressing inequalities in different ways. Therefore, we must ask how the importers and exporters of lessons define the problem, decide what policies are feasible, establish which level of government should be responsible for policy, and identify criteria to evaluate policy success.

Step 2: Map policymaking responsibilities for the selection of policy instruments.

The Council of Europe defines gender mainstreaming as ‘the (re)organisation, improvement, development and evaluation of policy processes, so that a gender equality perspective is incorporated in all policies at all levels and at all stages’.

Such definitions help explain why mainstreaming approaches often appear to be incoherent. To map the sheer weight of possible measures, and the spread of responsibility across many levels of government (such as local, Scottish, UK and EU), is to identify a potentially overwhelming scale of policymaking ambition. Further, governments tend to address this potential by breaking policymaking into manageable sectors. Each sector has its own rules and logics, producing coherent policymaking in each ‘silo’ but a sense of incoherence overall, particularly if the overarching aim is a low priority in government. Mapping these dynamics and responsibilities is necessary to ensure lessons learned can be effectively applied in similarly complex domestic systems.

Step 3: Learn from experience.

Policy actors want to draw lessons from the most relevant exemplars. Often, they will have implicit or explicit ideas concerning which countries they would like to learn more from. Negotiating which cases to explore, so that the selection takes into consideration both policy actors’ interests and the need to generate appropriate and useful lessons, is vital.

In the case of mainstreaming, we focused on three exemplar approaches, selected by members of our audience according to perceived levels of ambition: maximal (Sweden), medial (Canada) and minimal (the UK, which controls aspects of Scottish policy). These cases were also justified with reference to the academic literature which often uses these countries as exemplars of different approaches to policy design and implementation.

Step 4: Deliberate and reflect.

Work directly with policy participants to reflect on the implications for policy in their context. Research has many important insights on the challenges to and limitations of policy learning in complex systems. In particular, it suggests that learning cannot be comprehensive and does not lead to the importation of a well-defined package of measures. Bringing these sorts of insights to bear on policy actors’ practical discussions of how lessons can be drawn and applied from elsewhere is necessary, though ultimately insufficient. In our experience so far, step 4 is the biggest obstacle to our impact.

___________________


Filed under agenda setting, Evidence Based Policymaking (EBPM), feminism, IMAJINE, Policy learning and transfer, public policy

The UK government’s lack of control of public policy

This post first appeared as Who controls public policy? on the UK in a Changing Europe website. There is also a 1-minute video, but you would need to be a completist to want to watch it.

Most coverage of British politics focuses on the powers of a small group of people at the heart of government. In contrast, my research on public policy highlights two major limits to those powers, related to the enormous number of problems that policymakers face, and to the sheer size of the government machine.

First, elected policymakers simply do not have the ability to properly understand, let alone solve, the many complex policy problems they face. They deal with this limitation by paying unusually high attention to a small number of problems and effectively ignoring the rest.

Second, policymakers rely on a huge government machine and network of organisations (containing over 5 million public employees) essential to policy delivery, and oversee a statute book which they could not possibly understand.

In other words, they have limited knowledge and even less control of the state, and have to make choices without knowing how they relate to existing policies (or even what happens next).

These limits to ministerial powers should prompt us to think differently about how to hold them to account. If they only have the ability to influence a small proportion of government business, should we blame them for everything that happens in their name?

My approach is to apply these general insights to specific problems in British politics. Three examples help to illustrate their ability to inform British politics in new ways.

First, policymaking can never be ‘evidence based’. Some scientists cling to the idea that the ‘best’ evidence should always catch the attention of policymakers, and assume that ‘speaking truth to power’ helps evidence win the day.

As such, researchers in fields like public health and climate change wonder why policymakers seem to ignore their evidence.

The truth is that policymakers only have the capacity to consider a tiny proportion of all available information. Therefore, they must find efficient ways to ignore almost all evidence to make timely choices.

They do so by setting goals and identifying trusted sources of evidence, but also using their gut instinct and beliefs to rule out most evidence as irrelevant to their aims.

Second, the UK government cannot ‘take back control’ of policy following Brexit simply because it was not in control of policy before the UK joined. The idea of control is built on the false image of a powerful centre of government led by a small number of elected policymakers.

This way of thinking assumes that sharing power is simply a choice. However, sharing power and responsibility is born of necessity because the British state is too large to be manageable.

Governments manage this complexity by breaking down their responsibilities into many government departments. Still, ministers can only pay attention to a tiny proportion of issues managed by each department. They delegate most of their responsibilities to civil servants, agencies, and other parts of the public sector.

In turn, those organisations rely on interest groups and experts to provide information and advice.

As a result, most public policy is conducted through small and specialist ‘policy communities’ that operate out of the public spotlight and with minimal elected policymaker involvement.

The logical conclusion is that senior elected politicians are less important than people think. While we like to think of ministers sitting in Whitehall and taking crucial decisions, most of these decisions are taken in their name but without their intervention.

Third, the current pandemic underlines all too clearly the limits of government power. Of course people are pondering the degree to which we can blame UK government ministers for poor choices in relation to Covid-19, or learn from their mistakes to inform better policy.

Many focus on the extent to which ministers were ‘guided by the science’. However, at the onset of a new crisis, government scientists face the same uncertainty about the nature of the policy problem, and ministers are not really able to tell if a Covid-19 policy would work as intended or receive enough public support.

Some examples from the UK experience expose the limited extent to which policymakers can understand, far less control, an emerging crisis.

Prior to the lockdown, neither scientists nor ministers knew how many people were infected, nor when levels of infection would peak.

They had limited capacity to test. They did not know how often (and how well) people wash their hands. They did not expect people to accept and follow strict lockdown rules so readily, and did not know which combination of measures would have the biggest impact.

When supporting businesses and workers during ‘furlough’, they did not know who would be affected and therefore how much the scheme would cost.

In short, while Covid-19 has prompted policy change and state intervention on a scale not witnessed outside of wartime, the government has never really known what impact its measures would have.

Overall, the take-home message is that the UK narrative of strong central government control is damaging to political debate and undermines policy learning. It suggests that every poor outcome is simply the consequence of bad choices by powerful leaders. If so, we are unable to distinguish between the limited competence of some leaders and the limited powers of them all.


Filed under COVID-19, Evidence Based Policymaking (EBPM), POLU9UK, public policy, UK politics and policy

The UK Government’s COVID-19 policy: assessing evidence-informed policy analysis in real time

This post is the abstract of a longer (25,000-word) paper.

On the 23rd March 2020, the UK Government’s Prime Minister Boris Johnson declared: ‘From this evening I must give the British people a very simple instruction – you must stay at home’. He announced measures to help limit the impact of COVID-19, including new regulations on behaviour, police powers to support public health, budgetary measures to support businesses and workers during their economic inactivity, the almost-complete closure of schools, and the major expansion of healthcare capacity via investment in technology, discharge to care homes, and a consolidation of national, private, and new health service capacity (note that many of these measures relate only to England, with devolved governments responsible for public health in Northern Ireland, Scotland, and Wales). Overall, the coronavirus prompted almost-unprecedented policy change, towards state intervention, at a speed and magnitude that seemed unimaginable before 2020.

Yet, many have criticised the UK government’s response as slow and insufficient. Criticisms include that UK ministers and their advisors did not:

  • take the coronavirus seriously enough in relation to existing evidence (when its devastating effect was increasingly apparent in China in January and Italy from February)
  • act as quickly as some countries to test for infection to limit its spread, and/ or introduce swift measures to close schools, businesses, and major social events, and regulate social behaviour (such as in Taiwan, South Korea, or New Zealand)
  • introduce strict-enough measures to stop people coming into contact with each other at events and in public transport.

They blame UK ministers for pursuing a ‘mitigation’ strategy, allegedly based on reducing the rate of infection and impact of COVID-19 until the population developed ‘herd immunity’, rather than an elimination strategy to minimise its spread until a vaccine or antiviral could be developed. Or, they criticise the over-reliance on specific models, which underestimated the R (rate of transmission) and ‘doubling time’ of cases and contributed to a 2-week delay of lockdown.
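The stakes of the alleged 2-week delay follow from simple exponential-growth arithmetic: if cases double every Td days, a delay of d days multiplies the size of the epidemic at the point of intervention by 2^(d/Td). A minimal sketch (the doubling times below are illustrative assumptions for the arithmetic, not figures taken from the sources cited):

```python
# Illustrative only: how a fixed delay interacts with the doubling time of cases.
# The doubling times tried below are assumptions for the sketch, not SAGE estimates.

def growth_factor(delay_days: float, doubling_time_days: float) -> float:
    """Multiplicative growth in case numbers over a delay, given a doubling time."""
    return 2 ** (delay_days / doubling_time_days)

# A 2-week delay with a 3-day doubling time implies roughly 25x more cases at
# lockdown; with a 7-day doubling time, only 4x. Underestimating the doubling
# time therefore understates the cost of waiting.
for td in (3, 5, 7):
    print(f"doubling time {td} days: x{growth_factor(14, td):.1f}")
```

The point of the sketch is that the same calendar delay is far more costly when the doubling time is short, which is why criticism focuses on the models' estimates of R and doubling time rather than on the delay in isolation.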

Many cite this delay, compounded by insufficient personal protective equipment (PPE) in hospitals and fatal errors in the treatment of care homes, as the biggest contributor to the UK’s unusually high number of excess deaths (Campbell et al, 2020; Burn-Murdoch and Giles, 2020; Scally et al, 2020; Mason, 2020; Ball, 2020; compare with Freedman, 2020a; 2020b and Snowden, 2020).

In contrast, scientific advisers to UK ministers have emphasised the need to gather evidence continuously to model the epidemic and identify key points at which to intervene, to reduce the size of the peak of population illness initially, then manage the spread of the virus over the longer term (e.g. Vallance). Throughout, they emphasised the need for individual behavioural change (hand washing and social distancing), supplemented by government action, in a liberal democracy in which direct imposition is unusual and, according to UK ministers, unsustainable in the long term.

We can relate these debates to the general limits to policymaking identified in policy studies (summarised in Cairney, 2016; 2020a; Cairney et al, 2019) and underpinning the ‘governance thesis’ that dominates the study of British policymaking (Kerr and Kettell, 2006: 11; Jordan and Cairney, 2013: 234).

First, policymakers must ignore almost all evidence. Individuals combine cognition and emotion to help them make choices efficiently, and governments have equivalent rules to prioritise only some information.

Second, policymakers have a limited understanding, and even less control, of their policymaking environments. No single centre of government has the power to control policy outcomes. Rather, there are many policymakers and influencers spread across a political system, and most choices in government are made in subsystems, with their own rules and networks, over which ministers have limited knowledge and influence. Further, the social and economic context, and events such as a pandemic, often appear to be largely out of their control.

Third, even though they lack full knowledge and control, governments must still make choices. Therefore, their choices are necessarily flawed.

Fourth, their choices produce unequal impacts on different social groups.

Overall, the idea that policy is controlled by a small number of UK government ministers, with the power to solve major policy problems, is still popular in media and public debate, but dismissed in policy research.

Hold the UK government to account via systematic analysis, not trials by social media

To make more sense of current developments in the UK, we need to understand how UK policymakers address these limitations in practice, and widen the scope of debate to consider the impact of policy on inequalities.

A policy theory-informed and real-time account helps us avoid after-the-fact wisdom and bad-faith trials by social media.

UK government action has been deficient in important ways, but we need careful and systematic analysis to help us separate (a) well-informed criticism to foster policy learning and hold ministers to account, from (b) a naïve and partisan rush to judgement that undermines learning and helps let ministers off the hook.

To that end, I combine insights from policy analysis guides, policy theories, and critical policy analysis to analyse the UK government’s initial coronavirus policy. I use the lens of 5-step policy analysis models to identify what analysts and policymakers need to do, the limits to their ability to do it, and the distributional consequences of their choices.

I focus on sources in the public record, including oral evidence to the House of Commons Health and Social Care committee, and the minutes and meeting papers of the UK Government’s Scientific Advisory Group for Emergencies (SAGE) (and NERVTAG), transcripts of TV press conferences and radio interviews, and reports by professional bodies and think tanks.

The short version is here. The long version – containing a huge list of sources and ongoing debates – is here. Both are on the COVID-19 page.


Filed under COVID-19, Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy

COVID-19 policy in the UK: SAGE Theme 1. The language of intervention

This post is part 5 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

There is often a clear distinction between a strategy designed to (a) eliminate a virus/ the spread of disease quickly, and (b) manage the spread of infection over the long term (see The overall narrative).

However, generally, the language of virus management is confusing. We need to be careful when interpreting the language used in these minutes and in other sources, such as oral evidence to House of Commons committees, particularly when comparing the language at the beginning (when people were also unsure what to call SARS-CoV-2 and COVID-19) to present-day debates.

For example, in January, it is tempting to contrast ‘slow down the spread of the outbreak domestically’ (28.1.20: 2) with a strategy towards ‘extinction’, but the proposed actions may be the same even if the expectations of impact are different. Some people interpret these differences as indicative of a profoundly different approach (delay versus eradicate); others dismiss them as merely semantic.

By February, SAGE’s expectation is of an inevitable epidemic and inability to contain COVID-19, prompting it to describe the inevitable series of stages:

‘Priorities will shift during a potential outbreak from containment and isolation on to delay and, finally, to case management … When there is sustained transmission in the UK, contact tracing will no longer be useful’ (18.2.20: 1; its discussion on 20.2.20: 2 also concludes that ‘individual cases could already have been missed – including individuals advised that they are not infectious’).

Mitigation versus suppression

On the face of it, it looks like there is a major difference in the ways in which (a) the Imperial College COVID-19 Response Team and (b) SAGE describe possible policy responses. The Imperial paper makes a distinction between mitigation and suppression:

  1. Its ‘mitigation strategy scenarios’ highlight the relative effects of partly-voluntary measures on mortality and demand for ‘critical care beds’ in hospitals: (voluntary) ‘case isolation in the home’ (people with symptoms stay at home for 7 days), ‘voluntary home quarantine’ (all members of the household stay at home for 14 days if one member has symptoms), (government enforced) ‘social distancing of those over 70’ or ‘social distancing of entire population’ (while still going to work, school or University), and closure of most schools and universities. It omits ‘stopping mass gatherings’ because ‘the contact-time at such events is relatively small compared to the time spent at home, in schools or workplaces and in other community locations such as bars and restaurants’ (2020a: 8). Assuming 70-75% compliance, it describes the combination of ‘case isolation, home quarantine and social distancing of those aged over 70’ as the most impactful, but predicts that ‘mitigation is unlikely to be a viable option without overwhelming healthcare systems’ (2020a: 8-10). These measures would only ‘reduce peak critical care demand by two-thirds and halve the number of deaths’ (to approximately 250,000).
  2. Its ‘suppression strategy scenarios’ describe what it would take to reduce the rate of infection (R) from the estimated 2.0-2.6 to 1 or below (in other words, the game-changing point at which one person would infect no more than one other person) and reduce ‘critical care requirements’ to manageable levels. It predicts that a combination of four options – ‘case isolation’, ‘social distancing of the entire population’ (the measure with the largest impact), ‘household quarantine’ and ‘school and university closure’ – would reduce critical care demand from its peak ‘approximately 3 weeks after the interventions are introduced’, and contribute to a range of 5,600-48,000 deaths over two years (depending on the current R and the ‘trigger’ for action in relation to the number of occupied critical care beds) (2020a: 13-14).
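The ‘game-changing’ property of bringing R to 1 or below can be illustrated with a minimal branching-process sketch. This is not the Imperial team's model; the seed size, generation count, and specific R values are chosen purely for illustration, with R = 2.0 inside the paper's estimated pre-intervention range and R = 0.5 standing in for successful suppression:

```python
# Minimal branching-process sketch (not the Imperial College model): expected
# new infections per generation from an illustrative seed of 100 infections.

def cases_by_generation(seed: float, r: float, generations: int) -> list:
    """Expected new infections in each generation when each case infects r others."""
    out, current = [], float(seed)
    for _ in range(generations):
        out.append(current)
        current *= r  # each generation is r times the size of the last
    return out

print(cases_by_generation(100, 2.0, 5))  # grows: [100.0, 200.0, 400.0, 800.0, 1600.0]
print(cases_by_generation(100, 0.5, 5))  # shrinks: [100.0, 50.0, 25.0, 12.5, 6.25]
```

With R above 1 each generation is larger than the last, so the epidemic grows without limit until intervention or immunity changes R; with R below 1 each generation shrinks and the outbreak dies out. That threshold, rather than any particular case count, is what makes R = 1 the target of the suppression scenarios.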

In comparison, the SAGE meeting paper (26.2.20b: 1-3), produced 2-3 weeks earlier, pretty much assumes away the possible distinction between mitigation versus suppression measures (which Vallance has described as semantic rather than substantive – scroll down to The distinction between mitigation and suppression measures). In other words, it assumes ‘high levels of compliance over long periods of time’ (26.2.20b: 1). As such, we can interpret SAGE’s discussion as (a) requiring high levels of compliance for these measures to work (the equivalent of Imperial’s description of suppression), while (b) not describing how to use (more or less voluntary versus impositional) government policy to secure compliance. In comparison, Imperial equates suppression with the relatively-short-term measures associated with China and South Korea (while noting uncertainty about how to maintain such measures until a vaccine is produced).

One reason for SAGE to assume compliance in its scenario building is to focus on the contribution of each measure, generally taking place over 13 weeks, to delaying the peak of infection (while stating that ‘It will likely not be feasible to provide estimates of the effectiveness of individual control measures, just the overall effectiveness of them all’, 26.2.20b: 1), while taking into account their behavioural implications (26.2.20b: 2-3).

  • School closures could contribute to a 3-week delay, especially if combined with FE/ HE closures (but with an unequal impact on ‘Those in lower socio-economic groups … more reliant on free school meals or unable to rearrange work to provide childcare’).
  • Home isolation (65% of symptomatic cases stay at home for 7 days) could contribute to a 2-3 week delay (and is the ‘Easiest measure to explain and justify to the public’).
  • ‘Voluntary household quarantine’ (all members of the household isolate for 14 days) would have a similar effect – assuming 50% compliance – but with far more implications for behavioural public policy:

‘Resistance & non-compliance will be greater if impacts of this policy are inequitable. For those on low incomes, loss of income means inability to pay for food, heating, lighting, internet. This can be addressed by guaranteeing supplies during quarantine periods.

Variable compliance, due to variable capacity to comply, may lead to dissatisfaction.

Ensuring supplies flow to households is essential. A desire to help among the wider community (e.g. taking on chores, delivering supplies) could be encouraged and scaffolded to support quarantined households.

There is a risk of stigma, so ‘voluntary quarantine’ should be portrayed as an act of altruistic civic duty’.

  • ‘Social distancing’ (‘enacted early’), in which people restrict themselves to essential activity (work and school), could produce a 3-5 week delay (and is likely to be supported in relation to mass leisure events, albeit less so when work activities involve a lot of contact).

[Note that it is not until May that SAGE addresses this issue of feasibility directly (and, even then, it does not distinguish between technical and political feasibility): ‘It was noted that a useful addition to control measures SAGE considers (in addition to scientific uncertainty) would be the feasibility of monitoring/ enforcement’ (7.5.20: 3).]

As theme 2 suggests, there is a growing recognition that these measures should have been introduced by early March (such as via the Coronavirus Act 2020, not passed until 25.3.20), and likely would have been if the UK government and SAGE had had more information (or interpreted their information in a different way). However, by mid-March, SAGE expresses a mixture of (a) growing urgency and (b) the need to stick to the plan, to reduce the peak and avoid a second peak of infection. On 13th March, it states:

‘There are no strong scientific grounds to hasten or delay implementation of either household isolation or social distancing of the elderly or the vulnerable in order to manage the epidemiological curve compared to previous advice. However, there will be some minor gains from going early and potentially useful reinforcement of the importance of taking personal action if symptomatic. Household isolation is modelled to have the biggest effect of the three interventions currently planned, but with some risks. SAGE therefore thinks there is scientific evidence to support household isolation being implemented as soon as practically possible’ (13.3.20: 1)

‘SAGE further agreed that one purpose of behavioural and social interventions is to enable the NHS to meet demand and therefore reduce indirect mortality and morbidity. There is a risk that current proposed measures (individual and household isolation and social distancing) will not reduce demand enough: they may need to be coupled with more intensive actions to enable the NHS to cope, whether regionally or nationally’ (13.3.20: 2)

On 16th March, it states:

‘On the basis of accumulating data, including on NHS critical care capacity, the advice from SAGE has changed regarding the speed of implementation of additional interventions. SAGE advises that there is clear evidence to support additional social distancing measures be introduced as soon as possible’ (16.3.20: 1)

Overall, we can conclude two things about the language of intervention:

  1. There is now a clear difference between the ways in which SAGE and its critics describe policy: to manage an inevitably long-term epidemic, versus to try to eliminate it within national borders.
  2. There is a less clear difference between terms such as suppress and mitigate, largely because SAGE focused primarily on a comparison of different measures (and their combination) rather than the question of compliance.

See also: There is no ‘herd immunity strategy’, which argues that this focus on each intervention was lost in radio and TV interviews with Vallance.

The full list of SAGE posts:

COVID-19 policy in the UK: yes, the UK Government did ‘follow the science’

Did the UK Government ‘follow the science’? Reflections on SAGE meetings

The role of SAGE and science advice to government

The overall narrative underpinning SAGE advice and UK government policy

SAGE meetings from January-June 2020

SAGE Theme 1. The language of intervention

SAGE Theme 2. Limited capacity for testing, forecasting, and challenging assumptions

SAGE Theme 3. Communicating to the public

COVID-19 policy in the UK: Table 2: Summary of SAGE minutes, January-June 2020


Filed under COVID-19, Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, UK politics and policy

COVID-19 policy in the UK: SAGE meetings from January-June 2020

This post is part 4 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

SAGE began a series of extraordinary meetings from 22nd January 2020. The first was described as ‘precautionary’ (22.1.20: 1) and includes updates from NERVTAG which met from 13th January. Its minutes state that ‘SAGE is unable to say at this stage whether it might be required to reconvene’ (22.1.20: 2). The second meeting notes that SAGE will meet regularly (e.g. 2-3 times per week in February) and coordinate all relevant science advice to inform domestic policy, including from NERVTAG and SPI-M (Scientific Pandemic Influenza Group on Modelling) which became a ‘formal sub-group of SAGE for the duration of this outbreak’ (SPI-M-O) (28.1.20: 1). It also convened an additional Scientific Pandemic Influenza subgroup (SPI-B) in February. I summarise these developments by month, but you can see that, by March, it is worth summarising each meeting. The main theme is uncertainty.

January 2020

The first meeting highlights immense uncertainty. Its description of WN-CoV (Wuhan Coronavirus), and statements such as ‘There is evidence of person-to-person transmission. It is unknown whether transmission is sustainable’, sum up the profound lack of information on what is to come (22.1.20: 1-2). It notes high uncertainty on how to identify cases, rates of infection, infectiousness in the absence of symptoms, and which previous experience (such as MERS) offers the most useful guidance. Only 6 days later, it estimates an R between 2-3, doubling rate of 3-4 days, incubation period of around 5 days, 14-day window of infectivity, varied symptoms such as coughing and fever, and a respiratory transmission route (different from SARS and MERS) (28.1.20: 1). These estimates are fairly constant from then, albeit qualified with reference to uncertainty (e.g. about asymptomatic transmission), some key outliers (e.g. the duration of illness in one case was 41 days – 4.2.20: 1), and some new estimates (e.g. of a 6-day ‘serial interval’, or ‘time between successive cases in a chain of transmission’, 11.2.20: 1). By now, it is preparing a response: modelling a ‘reasonable worst case scenario’ (RWC) based on the assumption of an R of 2.5 and no known treatment or vaccine, considering how to slow the spread, and considering how behavioural insights can be used to encourage self-isolation.

February 2020

SAGE began to focus on what measures might delay or reduce the impact of the epidemic. It described travel restrictions from China as low value, since a 95% reduction would have to be draconian to achieve and only secure a one month delay, which might be better achieved with other measures (3.2.20: 1-2). It, and supporting papers, suggested that the evidence was so limited that they could draw ‘no meaningful conclusions … as to whether it is possible to achieve a delay of a month’ by using one or a combination of these measures: international travel restrictions, domestic travel restrictions, quarantine people coming from infected areas, close schools, close FE/ HE, cancel large public events, contact tracing, voluntary home isolation, facemasks, hand washing. Further, some could undermine each other (e.g. school closures impact on older people or people in self-isolation) and have major societal or opportunity costs (SPI-M-O, 3.2.20b: 1-4). For example, the ‘SPI-M-O: Consensus view on public gatherings’ (11.2.20: 1) notes the aim to reduce duration and closeness of (particularly indoor) contact. Large outdoor gatherings are not worse than small, and stopping large events could prompt people to go to pubs (worse).

Throughout February, the minutes emphasize high uncertainty:

  • whether there will be an epidemic outside of China (4.2.20: 2)
  • whether it spreads through ‘air conditioning systems’ (4.2.20: 3)
  • the spread from, and impact on, children and therefore the impact of closing schools (4.2.20: 3; discussed in a separate paper by SPI-M-O, 10.2.20c: 1-2)
  • ‘SAGE heard that NERVTAG advises that there is limited to no evidence of the benefits of the general public wearing facemasks as a preventative measure’, while ‘symptomatic people should be encouraged to wear a surgical face mask, providing that it can be tolerated’ (4.2.20: 3)

At the same time, its meeting papers emphasized a delay in accurate figures during an initial outbreak: ‘Preliminary forecasts and accurate estimates of epidemiological parameters will likely be available in the order of weeks and not days following widespread outbreaks in the UK’ (SPI-M-O, 3.2.20a: 3).

This problem proved to be crucial to the timing of government intervention. A key learning point will be the disconnect between the following statement and the subsequent realisation (3-4 weeks later) that the lockdown measures from mid-to-late March came too late to prevent an unanticipated number of excess deaths:

‘SAGE advises that surveillance measures, which commenced this week, will provide actionable data to inform HMG efforts to contain and mitigate spread of Covid-19 … PHE’s surveillance approach provides sufficient sensitivity to detect an outbreak in its early stages. This should provide evidence of an epidemic around 9-11 weeks before its peak … increasing surveillance coverage beyond the current approach would not significantly improve our understanding of incidence’ (25.2.20: 1)

It also seems clear from the minutes and papers that SAGE highlighted a reasonable worst case scenario on 26.2.20. It was as worrying as the Imperial College COVID-19 Response Team report dated 16.3.20 that allegedly changed the UK Government’s mind on the 16th March. Meeting paper 26.2.20a described the assumption of an 80% infection attack rate and 50% clinical attack rate (i.e. 50% of the UK population would experience symptoms), which underpins the assumption of 3.6 million requiring hospital care of at least 8 days (11% of symptomatic), and 541,200 requiring ventilation (1.65% of symptomatic) for 16 days. While it lists excess deaths as unknown, its 1% infection mortality rate suggests 524,800 deaths. This RWC replaces a previous projection (in Meeting paper 10.2.20a: 1-3, based on pandemic flu assumptions) of 820,000 excess deaths (27.2.20: 1).
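The arithmetic behind these figures can be reconstructed in a few lines. A minimal sketch (the UK population figure is my back-calculation from the stated totals, not a number given in the paper; the rates are from the minutes):

```python
# Reconstruction of the RWC arithmetic in Meeting paper 26.2.20a.
# NOTE: the population figure is an assumption implied by the stated
# totals (1% of 80% of it equals 524,800); it is not in the source.
uk_population = 65_600_000

infection_attack_rate = 0.80    # 80% of the population infected
clinical_attack_rate = 0.50     # 50% of the population symptomatic
hospitalisation_rate = 0.11     # 11% of symptomatic need hospital care (8+ days)
ventilation_rate = 0.0165       # 1.65% of symptomatic need ventilation (16 days)
infection_fatality_rate = 0.01  # 1% of those infected die

infected = uk_population * infection_attack_rate
symptomatic = uk_population * clinical_attack_rate
hospitalised = symptomatic * hospitalisation_rate
ventilated = symptomatic * ventilation_rate
deaths = infected * infection_fatality_rate

print(f"hospitalised: {hospitalised:,.0f}")  # 3,608,000 (the '3.6 million')
print(f"ventilated:   {ventilated:,.0f}")    # 541,200
print(f"deaths:       {deaths:,.0f}")        # 524,800
```

The 541,200 and 524,800 figures in the minutes fall out exactly under this population assumption, which is what makes the back-calculation plausible.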

As such, the more important difference could come from SAGE’s discussion of ‘non-pharmaceutical interventions (NPIs)’ if it recommends ‘mitigation’ while the Imperial team recommends ‘suppression’. However, the language to describe each approach is too unclear to tell (see Theme 1. The language of intervention; also note that NPIs were often described from March as ‘behavioural and social interventions’ following an SPI-B recommendation, Meeting paper 3.2.20: 1, but the language of NPI seems to have stuck).

March 2020

In March, SAGE focused initially (Meetings 12-14) on preparing for the peak of infection on the assumption that it had time to transition towards a series of isolation and social distancing measures that would be sustainable (and therefore unlikely to contribute to a second peak if lifted too soon). Early meetings and meeting papers express caution about the limited evidence for intervention and the potential for their unintended consequences. This approach began to change somewhat from mid-March (Meeting 15), and accelerate from Meetings 16-18, when it became clear that incidence and virus transmission were much larger than expected, before a new phase began from Meeting 19 (after the UK lockdown was announced on the 23rd).

Meeting 12 (3.3.20) describes preparations to gather and consolidate information on the epidemic and the likely relative effect of each intervention, while its meeting papers emphasise:

  • ‘It is highly likely that there is sustained transmission of COVID-19 in the UK at present’, and a peak of infection ‘might be expected approximately 3-5 months after the establishment of widespread sustained transmission’ (SPI-M Meeting paper 2.3.20: 1)
  • the need to prepare the public while giving ‘clear and transparent reasons for different strategies’ and reducing ambiguity whenever giving guidance (SPI-B Meeting paper 3.2.20: 1-2)
  • the need to combine different measures (e.g. school closure, self-isolation, household isolation, isolating over-65s) at the right time; ‘implementing a subset of measures would be ideal. Whilst this would have a more moderate impact it would be much less likely to result in a second wave’ (Meeting paper 4.3.20a: 3).

Meeting 13 (5.3.20) describes staying in the ‘containment’ phase (which, I think, means isolating people with positive tests at home or in hospital), and introducing: a 12-week period of individual and household isolation measures in 1-2 weeks, on the assumption of 50% compliance; and a longer period of shielding over-65s 2 weeks later. It describes ‘no evidence to suggest that banning very large gatherings would reduce transmission’, while closing bars and restaurants ‘would have an effect, but would be very difficult to implement’, and ‘school closures would have smaller effects on the epidemic curve than other options’ (5.3.20: 1). Its SPI-B Meeting paper (4.3.20b) expresses caution about limited evidence and reliance on expert opinion, while identifying:

  • potential displacement problems (e.g. school closures prompt people to congregate elsewhere, or be looked after by vulnerable older people, while parents lose the chance to work)
  • the visibility of groups not complying
  • the unequal impact on poorer and single parent families of school closure and loss of school meals, lost income, lower internet access, and isolation
  • how to reduce discontent about only isolating at-risk groups (the view that ‘explaining that members of the community are building some immunity will make this acceptable’ is not unanimous) (4.3.20b: 2).

Meeting 14 (10.3.20) states that the UK may have 5-10,000 cases and be ‘10-14 weeks from the epidemic peak if no mitigations are introduced’ (10.3.20: 2). It restates the focus on isolation first, followed by additional measures in April, and emphasizes the need to transition to measures that are acceptable and sustainable for the long term:

‘SAGE agreed that a balance needs to be struck between interventions that theoretically have significant impacts and interventions which the public can feasibly and safely adopt in sufficient numbers over long periods’ … ‘the public will face considerable challenges in seeking to comply with these measures (e.g. poorer households, those relying on grandparents for childcare)’ (10.3.20: 2)

Meeting 15 (13.3.20: 1) describes an update to its data, suggesting ‘more cases in the UK than SAGE previously expected at this point, and we may therefore be further ahead on the epidemic curve, but the UK remains on broadly the same epidemic trajectory and time to peak’. It states that ‘household isolation and social distancing of the elderly and vulnerable should be implemented soon, provided they can be done well and equitably’, noting that there are ‘no strong scientific grounds’ to accelerate key measures but ‘there will be some minor gains from going early and potentially useful reinforcement of the importance of taking personal action if symptomatic’ (13.3.20: 1) and ‘more intensive actions’ will be required to maintain NHS capacity (13.3.20: 2).

*******

On the 16th March, the UK Prime Minister Boris Johnson describes an ‘emergency’ (one week before declaring a ‘national emergency’ and UK-wide lockdown).

*******

Meeting 16 (16.3.20) describes the possibility that there are 5-10,000 new cases in the UK (‘there is great uncertainty on the estimate’), doubling every 5-6 days. Therefore, to stay within NHS capacity, ‘the advice from SAGE has changed regarding the speed of implementation of additional interventions. SAGE advises that there is clear evidence to support additional social distancing measures be introduced as soon as possible’ (16.3.20: 1). SPI-M Meeting paper (16.3.20: 1) describes:

‘a combination of case isolation, household isolation and social distancing of vulnerable groups is very unlikely to prevent critical care facilities being overwhelmed … it is unclear whether or not the addition of general social distancing measures to case isolation, household isolation and social distancing of vulnerable groups would curtail the epidemic by reducing the reproduction number to less than 1 … the addition of both general social distancing and school closures to case isolation, household isolation and social distancing of vulnerable groups would be likely to control the epidemic when kept in place for a long period. SPI-M-O agreed that this strategy should be followed as soon as practical’
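The change of advice follows directly from the doubling arithmetic. A hedged illustration (the 5-10,000 daily case range and 5-6 day doubling time are from the minutes; the four-week horizon and constant doubling time are my simplifying assumptions, not a SAGE projection):

```python
# Illustrative exponential projection from Meeting 16's figures.
# Assumes a constant doubling time, i.e. no interventions take effect.

def project(daily_cases: float, doubling_days: float, days: int) -> float:
    """New daily cases after `days`, given a constant doubling time."""
    return daily_cases * 2 ** (days / doubling_days)

for week in (1, 2, 3, 4):
    low = project(5_000, 6, week * 7)    # lower start, slower doubling
    high = project(10_000, 5, week * 7)  # higher start, faster doubling
    print(f"week {week}: {low:,.0f} to {high:,.0f} new daily cases")
```

Even the lower bound implies six-figure daily case numbers within a month, which is the logic behind ‘as soon as possible’.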

Meeting 17 (18.3.20) marks a major acceleration of plans, and a de-emphasis of the low-certainty/ beware-the-unintended-consequences approach of previous meetings (on the assumption that the UK was now 2-4 weeks behind Italy). It recommends school closures as soon as possible (and it, and SPI-M Meeting paper 17.3.20b, now downplays the likely displacement effect). It focuses particularly on London, as the place with the largest initial numbers:

‘Measures with the strongest support, in terms of effect, were closure of a) schools, b) places of leisure (restaurants, bars, entertainment and indoor public spaces) and c) indoor workplaces. … Transport measures such as restricting public transport, taxis and private hire facilities would have minimal impact on reducing transmission’ (18.3.20: 2)

Meeting 18 (23.3.20) states that the R is higher than expected (2.6-2.8), requiring ‘high rates of compliance for social distancing’ to get it below 1 and stay under NHS capacity (23.3.20: 1). There is an urgent need for more community testing/ surveillance (and to address the global shortage of test supplies). In the meantime, it needs a ‘clear rationale for prioritising testing for patients and health workers’ (the latter ‘should take priority’) (23.3.20: 3). Closing UK borders ‘would have a negligible effect on spread’ (23.3.20: 2).

*******

The lockdown. On the 23rd March 2020, the UK Prime Minister Boris Johnson declared: ‘From this evening I must give the British people a very simple instruction – you must stay at home’. He announced measures to help limit the impact of coronavirus, including police powers to support public health, such as to disperse gatherings of more than two people (unless they live together), close events and shops, and limit outdoor exercise to once per day (at a distance of two metres from others).

*******

Meeting 19 (26.3.20) follows the lockdown. SAGE describes its priorities if the R goes below 1 and NHS capacity remains under 100%: ‘monitoring, maintenance and release’ (based on higher testing); public messaging on mass testing and varying interventions; understanding nosocomial transmission and immunology; clinical trials (avoiding ‘hasty decisions’ on new drug treatment in the absence of good data); and ‘how to minimise potential harms from the interventions, including those arising from postponement of normal services, mental ill health and reduced ability to exercise. It needs to consider in particular health impacts on poorer people’ (26.3.20: 1-2). The optimistic scenario is 10,000 deaths from the first wave (SPI-M-O Meeting paper 25.3.20: 4).

Meeting 20 (29.3.20) confirms the RWC and optimistic scenarios (Meeting paper 25.3.20), but notes the need for a ‘clearer narrative, clarifying areas subject to uncertainty and sensitivities’ and to clarify that scenarios (with different assumptions on, for example, the R, which should be explained more fully) are not predictions.

Meeting 21 (31.3.20) seeks to establish SAGE ‘scientific priorities’ (e.g. the long term health impacts of COVID-19, including the socioeconomic impact on health and mental health; community testing; and international work on ‘comorbidities such as malaria and malnutrition’) (31.3.20: 1-2). The NHS is to set up an interdisciplinary group (including science and engineering) to ‘understand and tackle nosocomial transmission’ in the context of its growth and the urgent need to define and track it (31.3.20: 1-2). SAGE is to focus on testing requirements, not operational issues. It notes the need to identify a single source of information on deaths.

April 2020

The meetings in April highlight five recurring themes.

First, it stresses that it will not know the impact of lockdown measures for some time, that it is too soon to understand the impact of releasing them, and there is high risk of failure: ‘There is a danger that lifting measures too early could cause a second wave of exponential epidemic growth – requiring measures to be re-imposed’ (2.4.20: 1; see also 14.4.20: 1-2). This problem remains even if a reliable testing and contact tracing system is in place, and if there are environmental improvements to reduce transmission (by keeping people apart).

Second, it notes signals from multiple sources (including CO-CIN and the RCGP) on the higher risk of major illness and death among black people, the ongoing investigation of higher risk to ‘BAME’ health workers (16.4.20), and further (high priority) work on ‘ethnicity, deprivation, and mortality’ (21.4.20: 1) (see also: Race, ethnicity, and the social determinants of health).

Third, it highlights the need for a ‘national testing strategy’ to cover NHS patients, staff, an epidemiological survey, and the community (2.4.20). The need for far more testing is a feature of almost every meeting (see also The need to ramp up testing).

Fourth, SAGE describes the need for more short and long-term research, identifying nosocomial infection as a short term priority, and long term priorities in areas such as the long term health impacts of COVID-19 (including socioeconomic impacts on physical and mental health), community testing, and international work (31.3.20: 1-2).

Finally, it reflects shifting advice on the precautionary use of face masks. Previously, advisory bodies emphasized limited evidence of a clear benefit to the wearer, and worried that public mask use would reduce the supply to healthcare professionals and generate a false sense of security (compare with this Greenhalgh et al article on the precautionary principle, the subsequent debate, and work by the Royal Society). Even by April: ‘NERVTAG concluded that the increased use of masks would have minimal effect’ on general population infection (7.4.20: 1), while the WHO described limited evidence that facemasks are beneficial for community use (9.4.20). Still, general face mask use could have a small positive effect, particularly in ‘enclosed environments with poor ventilation, and around vulnerable people’ (14.4.20: 2), and ‘on balance, there is enough evidence to support recommendation of community use of cloth face masks, for short periods in enclosed spaces where social distancing is not possible’ (partly because people can be infectious with no symptoms), as long as people know that it is no substitute for social distancing and handwashing (21.4.20).

May 2020

In May, SAGE continues to discuss high uncertainty on relaxing lockdown measures, the details of testing systems, and the need for research.

Generally, it advises that relaxations should not happen before there is more understanding of transmission in hospitals and care homes, and ‘until effective outbreak surveillance and test and trace systems are up and running’ (14.5.20). It advises specifically ‘against reopening personal care services, as they typically rely on highly connected workers who may accelerate transmission’ (5.5.20: 3) and warns against the too-quick introduction of social bubbles. Relaxation runs the risk of diminishing public adherence to social distancing, and of overwhelming any contact tracing system put in place:

‘SAGE participants reaffirmed their recent advice that numbers of Covid-19 cases remain high (around 10,000 cases per day with wide confidence intervals); that R is 0.7-0.9 and could be very close to 1 in places across the UK; and that there is very little room for manoeuvre especially before a test, trace and isolate system is up and running effectively. It is not yet possible to assess the effect of the first set of changes which were made on easing restrictions to lockdown’ (28.5.20: 3).

It recommends extensive testing in hospitals and care homes (12.5.20: 3) and ‘remains of the view that a monitoring and test, trace & isolate system needs to be put in place’ (12.5.20: 1).

June 2020

In June, SAGE identifies the importance of clusters of infection (super-spreading events) and the importance of a contact tracing system that focuses on clusters (rather than simply individuals) (11.6.20: 3). It reaffirms the value of a 2-metre distance rule. It also notes that the research on immunology remains unclear, which makes immunity passports a bad idea (4.6.20).

It describes the result of multiple meeting papers on the unequal impact of COVID-19:

‘There is an increased risk from Covid-19 to BAME groups, which should be urgently investigated through social science research and biomedical research, and mitigated by policy makers’ … ‘SAGE also noted the importance of involving BAME groups in framing research questions, participating in research projects, sharing findings and implementing recommendations’ (4.6.20: 1-3)

See also: Race, ethnicity, and the social determinants of health


Filed under COVID-19, Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, UK politics and policy

COVID-19 policy in the UK: The role of SAGE and science advice to government

This post is part 2 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

The issue of science advice to government, and the role of SAGE in particular, became unusually high profile in the UK, particularly in relation to four factors:

  1. Ministers described ‘following the science’ to project a certain form of authority and control.
  2. The SAGE minutes and papers – including a record of SAGE members and attendees – were initially unpublished, in line with the previous convention of government to publish after, rather than during, a crisis.

‘SAGE is keen to make the modelling and other inputs underpinning its advice available to the public and fellow scientists’ (13.3.20: 1)

When it agrees to publish SAGE papers/ documents, it stresses: ‘It is important to demonstrate the uncertainties scientists have faced, how understanding of Covid-19 has developed over time, and the science behind the advice at each stage’ (16.3.20: 2)

‘SAGE discussed plans to release the academic models underpinning SAGE and SPI-M discussions and judgements. Modellers agreed that code would become public but emphasised that the effort to do this immediately would distract from other analyses. It was agreed that code should become public as soon as practical, and SPI-M would return to SAGE with a proposal on how this would be achieved. ACTION: SPI-M to advise on how to make public the source code for academic models, working with relevant partners’ (18.3.20: 2).

SAGE welcomes releasing names of SAGE participants (if willing) and notes role of Ian Boyd as ‘independent challenge function’ (28.4.20: 1)

SAGE also describes the need for a better system to allow SAGE participants to function effectively and with proper support (given the immense pressure/ strain on their time and mental health) (7.5.20: 1)

  3. There were growing concerns that ministers would blame their advisers for poor choices (compare Freedman and Snowdon), or at least use science advice as ‘an insurance policy’, and
  4. There was some debate about the appropriateness of Dominic Cummings (Prime Minister Boris Johnson’s special adviser) attending some meetings.

Therefore, its official description reflects its initial role plus a degree of clarification on the role of science advice mechanisms during the COVID-19 pandemic. The SAGE webpage on the gov.uk site describes its role as:

‘provides scientific and technical advice to support government decision makers during emergencies … SAGE is responsible for ensuring that timely and coordinated scientific advice is made available to decision makers to support UK cross-government decisions in the Cabinet Office Briefing Room (COBR). The advice provided by SAGE does not represent official government policy’.

Its more detailed explainer describes:

‘SAGE’s role is to provide unified scientific advice on all the key issues, based on the body of scientific evidence presented by its expert participants. This includes everything from latest knowledge of the virus to modelling the disease course, understanding the clinical picture, and effects of and compliance with interventions. This advice together with a descriptor of uncertainties is then passed onto government ministers. The advice is used by Ministers to allow them to make decisions and inform the government’s response to the COVID-19 outbreak …

The government, naturally, also considers a range of other evidence including economic, social, and broader environmental factors when making its decisions…

SAGE is comprised of leading lights in their representative fields from across the worlds of academia and practice. They do not operate under government instruction and expert participation changes for each meeting, based on the expertise needed to address the crisis the country is faced with …

SAGE is also attended by official representatives from relevant parts of government. There are roughly 20 such officials involved in each meeting and they do not frequently contribute to discussions, but can play an important role in highlighting considerations such as key questions or concerns for policymakers that science needs to help answer or understanding Civil Service structures. They may also ask for clarification on a scientific point’ (emphasis added by yours truly).

Note that the number of participants can be around 60, which makes SAGE more like an assembly, with presentations and a modest amount of discussion, than a decision-making body (the Zoom meeting on 4.6.20 lists 76 participants). Even a Cabinet meeting has about 20 participants, and that is too many for coherent discussion and action (hence separate, smaller, committees).

Further, each set of now-published minutes contains an ‘addendum’ to clarify its operation. For example, its first minutes in 2020 seek to clarify the role of participants. Note that the participants change somewhat at each meeting (see the full list of members/ attendees), and some names are redacted. Dominic Cummings’ name only appears (I think) on 5.3.20, 14.4.20, and two meetings on 1.5.20 (although, as Freedman notes, ‘his colleague Ben Warner was a more regular presence’).

SAGE minutes 1 addendum 22.1.20

More importantly, the minutes from late February begin to distinguish between three types of potential science advice:

  1. to describe the size of the problem (e.g. surveillance of cases and trends, estimating a reasonable worst case scenario)
  2. to estimate the relative impact of many possible interventions (e.g. restrictions on travel, school closures, self-isolation, household quarantine, and social distancing measures)
  3. to recommend the level and timing of state action to achieve compliance in relation to those interventions.

SAGE focused primarily on roles 1 and 2, arguing against role 3 on the basis that state intervention is a political choice to be taken by ministers. Ministers are responsible for weighing up the potential public health benefits of each measure in relation to their social and economic costs (see also: The relationship between science, science advice, and policy).

Example 1: setting boundaries between advice and strategy

  • ‘It is a political decision to consider whether it is preferable to enact stricter measures at first, lifting them gradually as required, or to start with fewer measures and add further measures if required. Surveillance data streams will allow real-time monitoring of epidemic growth rates and thus allow approximate evaluation of the impact of whatever package of interventions is implemented’ (Meeting paper 26.2.20b: 1)

This example highlights a limitation in performing role 2 to inform 3: SAGE would not be able to compare the relative impact of measures without knowing their level of imposition and its impact on compliance. Further, the way in which it addressed this problem is crucial to our interpretation and evaluation of the timing and substance of the UK government’s response.

In short, it simultaneously assumed away and maintained attention to this problem by stating:

  • ‘The measures outlined below assume high levels of compliance over long periods of time. This may be unachievable in the UK population’ (26.2.20b: 1).
  • ‘advice on interventions should be based on what the NHS needs and what modelling of those interventions suggests, not on the (limited) evidence on whether the public will comply with the interventions in sufficient numbers and over time’ (16.3.20: 1)

The assumption of high compliance reduces the need for SAGE to make distinctions between terms such as mitigation versus suppression (see also: Confusion about the language of intervention and stages of intervention). However, it contributes to confusion within wider debates on UK action (see Theme 1. The language of intervention).

Example 2: setting boundaries between advice and value judgements

  • ‘SAGE has not provided a recommendation of which interventions, or package of interventions, that Government may choose to apply. Any decision must consider the impacts these interventions may have on society, on individuals, the workforce and businesses, and the operation of Government and public services’ (Meeting paper 4.3.20a: 1).

To all intents and purposes, SAGE is noting that governments need to make value-based choices to:

  1. Weigh up the costs and benefits of any action (as described by Layard et al, with reference to wellbeing measures and the assumed price of a life), and
  2. Decide whose wellbeing, and lives, matter the most (because any action or inaction will have unequal consequences across a population).

In other words, policy analysis is one part evidence and one part value judgement. Both elements are contested in different ways, and different questions inform political choices (e.g. whose knowledge counts versus whose wellbeing counts?).

[see also:

  • ‘Determining a tolerable level of risk from imported cases requires consideration of a number of non-science factors and is a policy question’ (28.4.20: 3)
  • ‘SAGE reemphasises that its own focus should always be on providing clear scientific advice to government and the principles behind that advice’ (7.5.20: 1)]

Future reflections

Any future inquiry will be heavily contested, since policy learning and evaluation are political acts (and the best way to gather and use evidence during a pandemic is itself disputed). Still, hopefully, it will promote reflection on how, in practice, governments and advisory bodies negotiate the blurry boundary between scientific advice and political choice when they are so interdependent and rely so heavily on judgement in the face of ambiguity and uncertainty (or ‘radical uncertainty’). I discuss this issue in the next post, which highlights the ways in which UK ministers relied on SAGE (and advisers) to define the policy problem.

The full list of SAGE posts:

COVID-19 policy in the UK: yes, the UK Government did ‘follow the science’

Did the UK Government ‘follow the science’? Reflections on SAGE meetings

The role of SAGE and science advice to government

The overall narrative underpinning SAGE advice and UK government policy

SAGE meetings from January-June 2020

SAGE Theme 1. The language of intervention

SAGE Theme 2. Limited capacity for testing, forecasting, and challenging assumptions

SAGE Theme 3. Communicating to the public

COVID-19 policy in the UK: Table 2: Summary of SAGE minutes, January-June 2020


Filed under COVID-19, Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, UK politics and policy

COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

SAGE explainer

SAGE is the Scientific Advisory Group for Emergencies. The text up there comes from the UK Government description. SAGE is the main venue to coordinate science advice to the UK government on COVID-19, including from NERVTAG (the New and Emerging Respiratory Virus Threats Advisory Group, reporting to Public Health England) and from two sub-groups that supply meeting papers to SAGE: SPI-M (the Scientific Pandemic Influenza Group on Modelling) and SPI-B (its counterpart on behavioural public policy).

I have summarized SAGE’s minutes (41 meetings, from 22 January to 11 June) and meeting/background papers (125 papers, estimated range 1-51 pages, median 4, not peer-reviewed, often produced a day after a request) in a ridiculously long table. This thing is huge (40 pages and 20,000 words). It is the sequoia table. It is the humongous fungus. Even Joey Chestnut could not eat this table in one go. To make your SAGE meal more palatable, here is a series of blog posts that situate these minutes and papers in their wider context. This initial post is unusually long, so I’ve put in a photo to break it up a bit.

Did the UK government ‘follow the science’?

I use the overarching question Did the UK Government ‘follow the science’? initially for the clickbait. I reckon that, like a previous favourite (people have ‘had enough of experts’), ‘following the science’ is a phrase used more frequently by commentators than by its original users. It is easy to google and find some valuable commentaries with that hook (Devlin & Boseley, Siddique, Ahuja, Stevens, Flinders, Walker, FT; see also Vallance), but also to find ministers using a wider range of messages with more subtle verbs and metaphors:

  • ‘We will take the right steps at the right time, guided by the science’ (Prime Minister Boris Johnson, 3.20)
  • ‘We will be guided by the science’ (Health Secretary Matt Hancock, 2.20)
  • ‘At all stages, we have been guided by the science, and we will do the right thing at the right time’ (Johnson, 3.20)
  • ‘The plan is driven by the science and guided by the expert recommendations of the 4 UK Chief Medical Officers and the Scientific Advisory Group for Emergencies’ (Hancock, 3.20)
  • ‘The plan does not set out what the government will do, it sets out the steps we could take at the right time along the basis of the scientific advice’ (Johnson, 3.20).

Still, they are clearly using ‘the science’ as a rhetorical device, which raises many questions and objections, including:

  1. There is no such thing as ‘the science’.

Rather, there are many studies described as scientific (generally with reference to a narrow range of accepted methods), and many people described as scientists (with reference to their qualifications and expertise). The same can be said for the rhetorical phrase ‘the evidence’ and the political slogan ‘evidence based policymaking’ (which often comes with its notionally opposite political slogan ‘policy based evidence’). In both cases, a reference to ‘the science’ or ‘the evidence’ often signals one or both of:

  • a particular, restrictive, way to describe evidence that lives up to a professional quality standard created by some disciplines (e.g. based on a hierarchy of evidence, in which the systematic review of randomized control trials is often at the top)
  • an attempt by policymakers to project their own governing competence, relative certainty, control, and authority, with reference to another source of authority

2. Ministers often mean ‘following our scientists’


When Johnson (12.3.20) describes being ‘guided by the science’, he is accompanied by Professor Patrick Vallance (Government Chief Scientific Adviser) and Professor Chris Whitty (the UK government’s Chief Medical Adviser). Hancock (3.3.20) describes being ‘guided by the expert recommendations of the 4 UK Chief Medical Officers and the Scientific Advisory Group for Emergencies’.

In other words, following ‘the science’ means ‘following the advice of our scientific advisors’, via mechanisms such as SAGE.

As the SAGE minutes and meeting papers show, government scientists and SAGE participants necessarily tell a partial story about the relevant evidence from a particular perspective (note: this is not a criticism of SAGE; it is a truism). Other interpreters of evidence, and sources of advice, are available.

Therefore, the phrase ‘guided by the science’ is, in practice, a way to:

  • narrow the search for information (and pay selective attention to it)
  • close down, or set the terms of, debate
  • associate policy with particular advisors or advisory bodies, often to give ministerial choices more authority, and often as ‘an insurance policy’ to take the heat off ministers.

3. What exactly is ‘the science’ guiding?

Let’s make a simple distinction between two types of science-guided action. Scientists provide evidence and advice on:

  1. the scale and urgency of a potential policy problem, such as describing and estimating the incidence and transmission of coronavirus
  2. the likely impact of a range of policy interventions, such as contact tracing, self-isolation, and regulations to oblige social distancing

In both cases, let’s also distinguish between science advice to reduce uncertainty and ambiguity:

  • Uncertainty describes a lack of knowledge or a worrying lack of confidence in one’s knowledge.
  • Ambiguity describes the ability to entertain more than one interpretation of a policy problem.

Put both together to produce a wide range of possibilities for policy ‘guided by the science’, from (a) simply providing facts to help reduce uncertainty on the incidence of coronavirus (minimal), to (b) providing information and advice on how to define and try to solve the policy problem (maximal).

If so, note that being guided by science does not signal more or less policy change. Ministers can use scientific uncertainty to defend limited action, or use evidence selectively to propose rapid change. In either case, a government can argue – sincerely – that it is guided by science. Therefore, analyzing critically the phraseology of ministers is only a useful first step. Next, we need to identify the extent to which scientific advisors and advisory bodies, such as SAGE, guided ministers.

The role of SAGE: advice on evidence versus advice on strategy and values

In that context, the next post examines the role of SAGE.

It shows that, although science advice to government is necessarily political, the coronavirus has heightened attention to science and advice, and you can see the (subtle and not so subtle) ways in which SAGE members and its secretariat are dealing with this unusually high level of politicization. SAGE has responded by clarifying its role, and trying to set boundaries between:

  • Advice versus strategy
  • Advice versus value judgements

These aims are understandable, but difficult to achieve in theory (the fact/value distinction is impossible to sustain) and in practice (policymakers may not go along with the distinction anyway). I argue that this boundary-setting also had some unintended consequences, which should prompt further reflection on facts-versus-values science advice during crises.

The ways in which UK ministers followed SAGE advice

With these caveats in mind, my reading of this material is that UK government policy was largely consistent with SAGE evidence and advice in the following ways:

  1. Defining the policy problem

This post (and a post on oral evidence to the Health and Social Care Committee) identifies the consistency of the overall narrative underpinning SAGE advice and UK government policy. It can be summed up as follows (although the post provides a more expansive discussion):

  1. coronavirus represents a long term problem with no immediate solution (such as a vaccine) and minimal prospect of extinction/ eradication
  2. use policy measures – on isolation and social distancing – to flatten the first peak of infection and avoid overwhelming health service capacity
  3. don’t impose or relax measures too quickly (which will cause a second peak of infection)
  4. reflect on the balance between (a) the positive impact of lockdown (on the incidence and rate of transmission), (b) the negative impact of lockdown (on freedom, physical and mental health, and the immediate economic consequences).

While SAGE minutes suggest a general reluctance to comment too much on point 4, government discussions were underpinned by points 1-3. For me, this context is the most important. It provides a lens through which to understand all of SAGE advice: how it shapes, and is shaped by, UK government policy.

  2. The timing and substance of interventions before lockdown, maintenance of lockdown for several months, and gradual release of lockdown measures

This post presents a long chronological story of SAGE minutes and papers, divided by month (and, in March, by each meeting). Note the unusually high levels of uncertainty from the beginning. The lack of solid evidence, available to SAGE at each stage, can only be appreciated fully if you read the minutes from 1 to 41. Or, you know, take my word for it.

In January, SAGE discusses uncertainty about human-to-human transmission and associates coronavirus strongly with Wuhan in China (albeit while developing initially-good estimates of R, doubling rate, incubation period, window of infectivity, and symptoms). In February, it had more data on transmission but described high uncertainty on what measures might delay or reduce the impact of the epidemic. In March, it focused on preparing for the peak of infection on the assumption that it had time to transition gradually towards a series of isolation and social distancing measures. This approach began to change from mid-March when it became clear that the number of people infected, and the rate of transmission, was much larger and faster than expected.

In other words, the Prime Minister’s declarations – of emergency on 16.3.20 and of lockdown on 23.3.20 – did not lag behind SAGE advice (and it would not be outrageous to argue that it went ahead of it).

It is more difficult to describe the consistency between UK government policy and SAGE advice in relation to the relaxation of lockdown measures.

SAGE’s minutes and meeting papers describe very low certainty about what will happen after the release of lockdown. The models do not hide this unusually high level of uncertainty: they are built on assumptions and used to generate scenarios rather than to predict what will happen. In this sense, ‘following the science’ could relate to (a) a level of buy-in for this kind of approach, and (b) making choices when scientific groups cannot offer much (if any) advice on what to do or what will happen. Reopening schools is a key example, since SPI-M and SPI-B focused intensely on the issue, but their conclusions could not underpin a specific UK government choice.

There are two ways to interpret what happened next.

First, there will always be a mild gap between hesitant SAGE advice and ministerial action. SAGE advice tends to be based on the amount and quality of evidence to support a change, which meant it was hesitant to recommend (a) a full lockdown and (b) a release from lockdown. Just as UK government policy seemed to go ahead of the evidence to enter lockdown on the 23rd March, so too does it seem to go ahead of the cautious approach to relaxing it.

Second, UK ministers are currently going too far ahead of the evidence. SPI-M papers state repeatedly that a too-quick release of measures will cause R to go above 1 (some papers describe R reaching 1.7; some graphs model values up to 3).
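To see why those figures matter, here is a toy, back-of-envelope sketch (my illustration, not a calculation from the SPI-M papers): R is the average number of people each infected person goes on to infect, so new infections scale as R raised to the number of ‘generations’ of transmission.

```python
# Toy illustration (not from the SPI-M papers) of why R above 1 matters.
# Each infected person infects R others per 'generation' of transmission,
# so new infections after g generations scale as R**g.

def new_infections(seed: int, r: float, generations: int) -> float:
    """New infections in generation `generations`, from `seed` initial cases."""
    return seed * r ** generations

# Starting from 1,000 new infections, after 5 generations of transmission:
for r in (0.9, 1.7, 3.0):
    print(f"R = {r}: about {new_infections(1000, r, 5):,.0f} new infections")
# R = 0.9 shrinks the epidemic to ~590; R = 1.7 reaches ~14,199;
# R = 3.0 reaches 243,000.
```

The point of the sketch is the asymmetry: values just below 1 make the epidemic fade slowly, while values like 1.7 or 3 explode within a few generations, which is why the papers treat 1 as the critical threshold.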

  3. The use of behavioural insights to inform and communicate policy

In March, you can find a lot of external debate about the appropriate role for ‘behavioural science’ and ‘behavioural public policy’ (BPP) (in other words, using insights from psychology to inform policy). Part of the initial problem related to the lack of transparency of the UK government, which prompted concerns that ministers were basing choices on limited evidence (see Hahn et al, Devlin, Mills). Oliver also describes initial confusion about the role of BPP when David Halpern became mildly famous for describing the concept of ‘herd immunity’ rather than sticking to psychology.

External concern focused primarily on the argument that the UK government (and many other governments) used the idea of ‘behavioural fatigue’ to justify delayed or gradual lockdown measures. In other words, if you do it too quickly and for too long, people will tire of it and break the rules.

Yet, this argument about fatigue is not a feature of the SAGE minutes and SPI-B papers (indeed, Oliver wonders if the phrase came from Whitty, based on his experience of people tiring of taking medication).

Rather, the papers tend to emphasise:

  • There is high uncertainty about behavioural change in key scenarios, and this reference to uncertainty should inform any choice on what to do next.
  • The need for effective and continuous communication with citizens, emphasizing transparency, honesty, clarity, and respect, to maintain high trust in government and promote a sense of community action (‘we are all in this together’).

John and Stoker argue that ‘much of behavioural science lends itself to’ a ‘top-down approach because its underlying thinking is that people tend to be limited in cognitive terms, and that a paternalistic expert-led government needs to save them from themselves’. Yet, my overall impression of the SPI-B (and related) work is that (a) although SPI-B is often asked to play that role, to address how to maximize adherence to interventions (such as social distancing), (b) its participants try to encourage the more deliberative or collaborative mechanisms favoured by John and Stoker (particularly when describing how to reopen schools and redesign work spaces). If so, my hunch is that they would not be as confident that UK ministers were taking their advice consistently (for example, throughout table 2, have a look at the need to provide a consistent narrative on two different propositions: we are all in this together, but the impact of each action/inaction will be profoundly unequal).

Expanded themes in SAGE minutes

Throughout this period, I think that one – often implicit – theme is that members of SAGE focused quite heavily on what seemed politically feasible to suggest to ministers, and for ministers to suggest to the public (while also describing technical feasibility – i.e. will it work as intended if implemented?). Generally, SAGE seemed to anticipate policymaker concern about, and any unintended public reactions to, a shift towards more social regulation. For example:

‘Interventions should seek to contain, delay and reduce the peak incidence of cases, in that order. Consideration of what is publicly perceived to work is essential in any decisions’ (25.2.20: 1)

Put differently, it seemed to operate within the general confines of what might work in a UK-style liberal democracy characterised by relatively low social regulation. This approach is already a feature of The overall narrative underpinning SAGE advice and UK government policy, and the remaining posts highlight key themes that arise in that context.

They include the language of intervention (Theme 1), limited capacity for testing, forecasting, and challenging assumptions (Theme 2), and communicating to the public (Theme 3).

Delaying the inevitable

All of these shorter posts delay your reading of a ridiculously long table summarizing each meeting’s discussion and advice/action points (Table 2, which also explains the referencing in the blog posts: dates alone refer to SAGE minutes; meeting papers with the same date stamp, rather than the same authors, are listed as a, b, c).


Further reading

This series is part of a wider project, in which you can also read about:

  • The early minutes from NERVTAG (the New and Emerging Respiratory Virus Threats Advisory Group)
  • Oral evidence to House of Commons committees, beginning with Health and Social Care

I hope to get through all of this material (and equivalent material in the devolved governments) somehow, but also to find time to live, love, eat, and watch TV, so please bear with me if you want to know what happened but don’t want to do all of the reading to find out.

If you would rather just read all of this discussion in one document:

The whole thing in PDF

Table 2 in PDF

The whole thing as a Word document

Table 2 as a Word document

If you would like some other analyses, compare with:

  • Freedman (7.6.20) ‘Where the science went wrong. Sage minutes show that scientific caution, rather than a strategy of “herd immunity”, drove the UK’s slow response to the Covid-19 pandemic’. Concludes that ‘as the epidemic took hold the government was largely following Sage’s advice’, and that the government should have challenged key parts of that advice (to ensure an earlier lockdown).
  • More or Less (1.7.20) ‘Why Did the UK Have Such a Bad Covid-19 Epidemic?’. Relates the delays in ministerial action to inaccurate scientific estimates of the doubling time of infection (discussed further in Theme 2).
  • Both Freedman and More or Less focus on the mishandling of care home safety, exacerbated by transfers from hospital without proper testing.
  • Snowden (28.5.20) ‘The lockdown’s founding myth. We’ve forgotten that the Imperial model didn’t even call for a full lockdown’. Challenges the argument that ministers dragged their feet while scientists were advising quick and extensive interventions (an argument he associates with Calvert et al (23.5.20) ‘22 days of dither and delay on coronavirus that cost thousands of British lives’). Rather, ministers were following SAGE advice, and the lockdown in Italy had a far bigger impact on ministers (since it changed what seemed politically feasible).
  • Greg Clark MP (chair of the House of Commons Science and Technology Committee) Between science and policy – Scrutinising the role of SAGE in providing scientific advice to government


Filed under COVID-19, Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, UK politics and policy

COVID-19 policy in the UK: yes, the UK Government did ‘follow the science’

In this post, ‘following the science’ describes UK ministers taking the advice of their scientific advisers and SAGE (the Scientific Advisory Group for Emergencies).

If so, were UK ministers ‘guided by the expert recommendations of the 4 UK Chief Medical Officers and the Scientific Advisory Group for Emergencies’?

The short answer is yes.

They followed advice in two profoundly important ways:

  1. Defining coronavirus as a policy problem.

My reading of the SAGE minutes and meeting papers identifies the consistency of the overall narrative underpinning SAGE advice and UK government policy. It can be summed up as follows:

  1. coronavirus represents a long term problem with no immediate solution (such as a vaccine) and minimal prospect of extinction/ eradication
  2. use policy measures – on isolation and social distancing – to flatten the first peak of infection and avoid overwhelming health service capacity
  3. don’t impose or relax measures too quickly (which will cause a second peak of infection)
  4. reflect on the balance between (a) the positive impact of lockdown (on the incidence and rate of transmission), (b) the negative impact of lockdown (on freedom, physical and mental health, and the immediate economic consequences).

If you examine UK ministerial speeches and SAGE minutes, you will find very similar messages: a coronavirus epidemic is inevitable, we need to ease gradually into suppression measures to avoid a second peak of infection as big as the first, and our focus is exhortation and encouragement over imposition.

  2. The timing and substance of interventions before lockdown

I describe a long chronological story of SAGE minutes and papers. Its main theme is unusually high levels of uncertainty from the beginning. The lack of solid evidence available to SAGE at each stage should not be dismissed.

In January, SAGE discusses uncertainty about human-to-human transmission and associates coronavirus strongly with Wuhan in China. In February, it had more data on transmission but described high uncertainty on what measures might delay or reduce the impact of the epidemic. In March, it focused on preparing for the peak of infection on the assumption that it had time to transition gradually towards a series of isolation and social distancing measures. This approach began to change from mid-March when it became clear that the number of people infected, and the rate of transmission, was much larger and faster than expected.

Therefore, the Prime Minister’s declarations – of emergency on 16.3.20 and of lockdown on 23.3.20 – did not lag behind SAGE advice. It would not be outrageous to argue that it went ahead of that advice, at least as recorded in SAGE minutes and meeting papers (compare with Freedman, Snowden, More or Less).

The long answer

If you would like the long answer, I can offer you 35280 words, including a 22380-word table summarizing the SAGE minutes and meeting papers (meetings 1-41, 22.1.20-11.6.20).


Further reading

So far, the wider project includes:

  • The early minutes from NERVTAG (the New and Emerging Respiratory Virus Threats Advisory Group)
  • Oral evidence to House of Commons committees, beginning with Health and Social Care

I am also writing a paper based on this post, but don’t hold your breath.


Filed under COVID-19, Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy