James Nicholls, Wulf Livingston, Andy Perkins, Beth Cairns, Rebecca Foster, Kirsten M. A. Trayner, Harry R. Sumnall, Tracey Price, Paul Cairney, Josh Dumbrell, and Tessa Parkes (2022) ‘Drug Consumption Rooms and Public Health Policy: Perspectives of Scottish Strategic Decision-Makers’, International Journal of Environmental Research and Public Health, 19(11), 6575; https://doi.org/10.3390/ijerph19116575
Q: if stakeholders in Scotland express high support for drug consumption rooms, and many policymakers in Scotland seem sympathetic, why is there so little prospect of policy change?
My summary of the article’s answer is as follows:
Although stakeholders support DCRs almost unanimously, they do not support them energetically.
They see this solution as one part of a much larger package rather than a magic bullet. They are not sure of the cost-effectiveness in relation to other solutions, and can envisage some potential users not using them.
The existing evidence on their effectiveness is not persuasive for people who (1) adhere to a hierarchy of evidence which prioritizes evidence from randomized controlled trials or (2) advocate alternative ways to use evidence.
There are competing ways to frame this policy solution. This suggests that there are some unresolved issues among stakeholders which have not yet come to the fore (since the lack of need to implement something specific reduces the need to engage with a more concrete problem definition).
This method invites local policymakers and practitioners to try out new solutions, work with stakeholders and service users during delivery, reflect on the results, and use this learning to design the next iteration. It is a pragmatic, small-scale approach that appeals to the (small-c conservative) Scottish Government, which uses pilots to delay major policy changes, and is keen on its image as not too centralist and quite collaboration-minded.
3. This approach is not politically feasible in this case.
Some factors suggest that the general argument has almost been won, including positive informal feedback from policymakers, and increasingly sympathetic media coverage (albeit using problematic ways to describe drug use).
However, this level of support is not enough to support experimentation. Drug consumption rooms would need a far stronger steer from the Scottish Government.
In this case, it can’t experiment now and decide later. It needs to make a strong choice (with inevitable negative blowback) and stay the course, knowing that one failed political experiment could set back progress for years.
4. The multi-level policymaking system is not conducive to overcoming these obstacles.
The issue of drugs policy is often described politically (and in policy circles) as a public health – and therefore devolved – issue.
However, the legal/formal division of responsibilities suggests that UK government consent is necessary and not forthcoming.
It is possible that the Scottish Government could take a chance and act alone. Indeed, the example of smoking in public places showed that it shifted its position after a slow start (it initially described the issue as reserved to the UK, then took charge of its own legislation, albeit with UK support).
However, the Scottish Government seems unwilling to take that chance, partly because it has been stung by legal challenges in other areas, and is reluctant to engage in more of the same (see minimum unit pricing for alcohol).
Local policymakers could experiment on their own, but they won’t do it without proper authority from a central government.
This experience is part of a more general issue: people may describe multi-level policymaking as a source of venues for experimentation (‘laboratories of democracy’) to encourage policy learning and collaboration. However, this case, and cases like fracking, show that they can actually be sites of multiple veto points and multi-level reluctance.
If so, the remaining question for reflection is: what would it take to overcome these obstacles? The election of a Labour UK government? Scottish independence? Or, is there some other way to make it happen in the current context?
By James Nicholls and Paul Cairney, for the University of Stirling MPH and MPP programmes.
There are strong links between the study of public health and public policy. For example, public health scholars often draw on policy theories to help explain (often low amounts of) policy change to foster population health or reduce health inequalities. Studies include a general focus on public health strategies (such as HiAP) or specific policy instruments (such as a ban on smoking in public places). While public health scholars may seek to evaluate or influence policy, policy theories tend to focus on explaining processes and outcomes.
To demonstrate these links, we present:
A long-read blog post to (a) use an initial description of a key alcohol policy instrument (minimum unit pricing, adopted by the Scottish Government but not the UK Government) to (b) describe the application of policy concepts and theories and reflect on the empirical and practical implications, followed by some examples of further reading.
A 45-minute podcast to describe and explain these developments (click below or scroll to the end)
Minimum Unit Pricing in Scotland: background and development
Minimum Unit Pricing for alcohol was introduced in Scotland in 2018. In 2012, the UK Government had also announced plans to introduce MUP, but within a year dropped the policy following intense industry pressure. What do these two journeys tell us about policy processes?
When MUP was first proposed by Scottish Health Action on Alcohol Problems in 2007, it was a novel policy idea. Public health advocates had long argued that raising the price of alcohol could help tackle harmful consumption. However, conventional tax increases were not always passed on to consumers, so would not necessarily raise prices in the shops (and the Scottish Government did not have such taxation powers). MUP appeared to present a neat solution to this problem. It quickly became a prominent policy goal of public health advocates in Scotland and across the UK, while gaining increasing attention, and support, from the global alcohol policy community.
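To make the pricing mechanism concrete, here is a minimal illustrative sketch (not drawn from the article): under a minimum unit price, the floor price of a product depends only on its alcohol content, so it cannot be absorbed by retailers in the way a duty increase can. The 50p-per-unit rate is the level eventually set in Scotland, and UK units are calculated as volume in litres multiplied by ABV.

```python
def alcohol_units(volume_litres: float, abv_percent: float) -> float:
    """UK alcohol units in a container: volume in litres x ABV (%)."""
    return volume_litres * abv_percent


def minimum_price(volume_litres: float, abv_percent: float, mup_pence: float = 50) -> float:
    """Lowest legal shelf price (in pounds) under a minimum unit price, default 50p per unit."""
    return alcohol_units(volume_litres, abv_percent) * mup_pence / 100


# A 700ml bottle of 40% ABV spirits holds 28 units (floor price £14.00);
# a 2-litre bottle of 7.5% ABV white cider holds 15 units (floor price £7.50).
for name, vol, abv in [("spirits, 700ml at 40%", 0.7, 40), ("cider, 2 litres at 7.5%", 2.0, 7.5)]:
    print(f"{name}: {alcohol_units(vol, abv):.0f} units, floor price £{minimum_price(vol, abv):.2f}")
```

Cheap, strong products sold well below this floor are affected most, while drinks already priced above it are untouched – which is why MUP promised to raise shop prices in a way that duty increases could not guarantee.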
In 2008, the UK Minister for Health, Dawn Primarolo, had commissioned researchers at the University of Sheffield to look into links between alcohol pricing and harm. The Sheffield team developed economic models to analyse the predicted impact of different pricing systems. MUP was included, and the ‘Sheffield Model’ would go on to play a decisive role in developing the case for the policy.
What problem would MUP help to solve?
Descriptions of the policy problem often differed in relation to each government. In the mid-2000s, alcohol harm had become a political problem for the UK government. Increasing consumption, alongside changes to the night-time economy, had started to gain widespread media attention. In 2004, just as a major liberalisation of the licensing system was underway in England, news stories began documenting the apparent horrors of ‘Binge Britain’: focusing on public drunkenness and disorder, but also growing rates of liver disease and alcohol-related hospital admissions.
In 2004, influential papers such as the Daily Mail began to target New Labour’s alcohol policy.
Politicians began to respond, and the issue became especially useful for the Conservatives, who were developing a narrative that Britain was ‘broken’ under New Labour. Labour’s liberalising reforms of alcohol licensing could conveniently be linked to this political framing. The newly formed Alcohol Health Alliance, a coalition set up under the leadership of Professor Sir Ian Gilmore, was also putting pressure on the UK Government to introduce stricter controls. In Scotland, while much of the debate on alcohol also focused on crime and disorder, advocates sought to frame the problem as one of public health. Emerging evidence showed that Scotland had dramatically higher rates of alcohol-related illness and death than the rest of Europe – a situation strikingly captured in a chart published in the Lancet.
Source: Leon, D. and McCambridge, J. (2006). Liver cirrhosis mortality rates in Britain from 1950 to 2002: an analysis of routine data. Lancet 367
The notion that Scotland faced an especially acute public health problem with alcohol was supported by key figures in the increasingly powerful Scottish National Party (in government since 2007), which, around this time, had developed working relationships with Alcohol Focus Scotland and other advocacy groups.
What happened next?
The SNP first announced that it would support MUP in 2008, but it did not implement this change until 2018. There are two key reasons for the delay:
Its minority government did not achieve enough parliamentary support to pass legislation. It then formed a majority government in 2011, and its legislation to bring MUP into law was passed in 2012.
Court action took years to resolve. The alcohol industry, which is historically powerful in Scotland, was vehemently opposed. A coalition of industry bodies, led by the Scotch Whisky Association, took the Scottish Government to court in an attempt to prove the policy was illegal. Ultimately, this process would take years, and conclude in rulings by the European Court of Justice (2016), Scottish Court of Session Inner House (2016), and UK Supreme Court (2017) which found in favour of the Scottish Government.
Meanwhile, after the UK Government announced its own plans in 2012, the alcohol industry once again swung into action, launching a campaign led by the Wine and Spirits Trade Association asking ‘Why should moderate drinkers pay more?’
This public campaign was accompanied by intense behind-the-scenes lobbying, aided by the fact that the leadership of industry groups had close ties to Government and that the All-Party Parliamentary Group on Beer had the largest membership of any APPG in Westminster. The industry campaign made much of the fact that there was little evidence to suggest MUP would reduce crime, but also argued strongly that the modelling produced by Sheffield University was not valid evidence in the first place. A year after adopting the policy, the UK Government announced a U-turn and MUP was dropped.
How can we use policy theories and concepts to interpret these dynamics?
Here are some examples of using policy theories and concepts as a lens to interpret these developments.
1. What was the impact of evidence in the case for policy change?
First, many political actors (including policymakers) have many different ideas about what counts as good evidence.
The assessment, promotion, and use of evidence are highly contested; evidence never speaks for itself.
Second, policymakers have to ignore almost all evidence to make choices.
They address ‘bounded rationality’ by using two cognitive shortcuts: ‘rational’ measures set goals and identify trusted sources, while ‘irrational’ measures use gut instinct, emotions, and firmly held beliefs.
Third, policymakers do not control the policy process.
There is no centralised and orderly policy cycle. Rather, policymaking involves policymakers and influencers spread across many authoritative ‘venues’, with each venue having its own rules, networks, and ways of thinking.
In that context, policy theories identify the importance of contestation between policy actors, and describe the development of policy problems, and how evidence fits in. Approaches include:
The acceptability of a policy solution will often depend on how the problem is described. Policymakers use evidence to reduce uncertainty, or a lack of information around problems and how to solve them. However, politics is about exercising power to reduce ambiguity, or the ability to interpret the same problem in different ways.
By suggesting MUP would solve problems around crime, the UK Government made it easier for opponents to claim the policy wasn’t evidence-based. In Scotland, policymakers and advocates focused on health, where the evidence was stronger. In addition, the SNP’s approach fitted within a wider political independence frame, in which more autonomy meant more innovation.
Policy actors tell stories to appeal to the beliefs (or exploit the cognitive shortcuts) of their audiences. A narrative contains a setting (the policy problem), characters (such as the villain who caused it, or the victim of its effects), plot (e.g. a heroic journey to solve the problem), and moral (e.g. the solution to the problem).
Supporters of MUP tended to tell the story that there was an urgent public health crisis, caused largely by the alcohol industry, and with many victims, but that higher alcohol prices pointed to one way out of this hole. Meanwhile opponents told the story of an overbearing ‘nanny state’, whose victims – ordinary, moderate drinkers – should be left alone by government.
Policymakers make strategic and emotional choices, to identify ‘good’ populations deserving of government help, and ‘bad’ populations deserving punishment or little help. These judgements inform policy design (government policies and practices) and provide positive or dispiriting signals to citizens.
For example, opponents of MUP rejected the idea that alcohol harms existed throughout the population. They focused instead on dividing the majority of moderate drinkers from an irresponsible minority of binge drinkers, suggesting that MUP would harm the former more than help the latter.
This competition to frame policy problems takes place in political systems that contain many ‘centres’, or venues for authoritative choice. Some diffusion of power is by choice, such as to share responsibilities with devolved and local governments. Some is by necessity, since policymakers can only pay attention to a small proportion of their responsibilities, and delegate the rest to unelected actors such as civil servants and public bodies (who often rely on interest groups to process policy).
For example, ‘alcohol policy’ is really a collection of instruments made or influenced by many bodies, including (until Brexit) European organisations deciding on the legality of MUP, UK and Scottish governments, as well as local governments responsible for alcohol licensing. In Scotland, this delegation of powers worked in favour of MUP, since Alcohol Focus Scotland was funded by the Scottish Government to help deliver some of its alcohol policy goals, giving it more privileged access than would otherwise have been the case.
The role of evidence in MUP
In the case of MUP, similar evidence was available and communicated to policymakers, but used and interpreted differently, in different centres, by the politicians who favoured or opposed MUP.
In Scotland, the promotion, use of, and receptivity to research evidence – on the size of the problem and potential benefit of a new solution – played a key role in increasing political momentum. The forms of evidence were complementary. The ‘hard’ science on a potentially effective solution seemed authoritative (although few understood the details), and was preceded by easily communicated and digested evidence on a concrete problem:
There was compelling evidence of a public health problem put forward by a well-organised ‘advocacy coalition’ (see below) which focused clearly on health harms. In government, there was strong attention to this evidence, such as the Lancet chart which one civil servant described as ‘look[ing] like the north face of the Eiger’. There were also influential ‘champions’ in Government willing to frame action as supporting the national wellbeing.
Reports from Sheffield University appeared to provide robust evidence that MUP could reduce harm, and advocacy was supported by research from Canada suggesting that similar policies had been successful there.
Advocacy in England was also well-organised and influential, but was dealing with a larger – and less supportive – Government machine, and the dominant political frame for alcohol harms remained crime and disorder rather than health.
Debates on MUP modelling exemplify these differences in evidence communication and use. Those in favour appealed to econometric models, but sometimes simplified their complexity and blurred the distinction between projected outcomes and proof of efficacy. Opponents went the other way and dismissed the modelling as mere speculation. What is striking is the extent to which an incredibly complex, and often poorly understood, set of econometric models – the ’Sheffield Model’ in particular – came to occupy centre stage in a national policy debate. Katikireddi and colleagues described this as an example of evidence as rhetoric:
Support became less about engagement with the econometric modelling, and more an indicator of general concern about alcohol harm and the power of the industry.
Scepticism was often viewed as the ‘industry position’, and an indicator of scepticism towards public health policy more broadly.
2. Who influences policy change?
Advocacy plays a key role in alcohol policy, with industry and other actors competing with public health groups to define and solve alcohol policy problems. It prompts our attention to policy networks, or the actors who make and influence policy.
People engage in politics to turn their beliefs into policy. They form advocacy coalitions with people who share their beliefs, and compete with other coalitions. The action takes place within a subsystem devoted to a policy issue, and a wider policymaking process that provides constraints and opportunities to coalitions. Beliefs about how to interpret policy problems act as a glue to bind actors together within coalitions. If the policy issue is technical and humdrum, there may be room for routine cooperation. If the issue is highly charged, then people romanticise their own cause and demonise their opponents.
MUP became a highly charged focus of contestation between a coalition of public health advocates, who saw themselves as fighting for the wellbeing of the wider community (and who believed fundamentally that government had a duty to promote population health), and a coalition of industry actors who were defending their commercial interests, while depicting public health policies as illiberal and unfair.
3. Was there a ‘window of opportunity’ for MUP?
Policy theories – including Punctuated Equilibrium Theory – describe a tendency for policy change to be minor in most cases and major in a few. Paradigmatic policy change is rare and may take place over decades, as in the case of UK tobacco control, where many different policy instruments changed from the 1980s. Therefore, a major change in one instrument could represent a sea-change overall or a modest adjustment to the overall approach.
Multiple Streams Analysis is a popular way to describe the adoption of a new policy solution such as MUP. It describes disorderly policymaking, in which attention to a policy problem does not produce the inevitable development, implementation, and evaluation of solutions. Rather, these ‘stages’ should be seen as separate ‘streams’. A ‘window of opportunity’ for policy change occurs when the three ‘streams’ come together:
Problem stream. There is high attention to one way to define a policy problem.
Policy stream. A technically and politically feasible solution already exists (and is often pushed by a ‘policy entrepreneur’ with the resources and networks to exploit opportunities).
Politics stream. Policymakers have the motive and opportunity to choose that solution.
However, these windows open and close, often quickly, and often without producing policy change.
This approach can help to interpret different developments in relation to Scottish and UK governments:
Problem stream
The Scottish Government paid high attention to public health crises, including the role of high alcohol consumption.
The UK government often paid high attention to alcohol’s role in crime and anti-social behaviour (‘Binge Britain’ and ‘Broken Britain’).
Policy stream
In Scotland, MUP connected strongly to the dominant framing, offering a technically feasible solution that became politically feasible in 2011.
The UK Prime Minister David Cameron made a surprising bid to adopt MUP in 2012, but ministers were divided on its technical feasibility (to address the problem they described), and its political feasibility seemed to be more about distracting from other crises than about public health.
Politics stream
The Scottish Government was highly motivated to adopt MUP. MUP was a flagship policy for the SNP; an opportunity to prove its independent credentials, and to be seen to address a national public health problem. It had the opportunity from 2011, then faced interest group opposition that delayed implementation.
The Coalition Government was ideologically more committed to defending commercial interests, and to framing alcohol harm as a matter of individual (rather than corporate) responsibility. It took less than a year for the alcohol industry to successfully push for a UK government U-turn.
As a result, MUP became policy (eventually) in Scotland, but the window closed (without resolution) in England.
Paul Cairney and Donley Studlar (2014) ‘Public Health Policy in the United Kingdom: After the War on Tobacco, Is a War on Alcohol Brewing?’, World Medical and Health Policy, 6, 3, 308-323. PDF
Niamh Fitzgerald and Paul Cairney (2022) ‘National objectives, local policymaking: public health efforts to translate national legislation into local policy in Scottish alcohol licensing’, Evidence and Policy, https://doi.org/10.1332/174426421X16397418342227, PDF
Podcast
You can listen directly here:
You can also listen on Spotify or iTunes via Anchor
In the summer of 2020, after cancelling exams, the UK and devolved governments sought teacher estimates of students’ grades, but backed an algorithm to standardise the results. When the results produced a public outcry over unfair consequences, they initially defended their decision but quickly reverted to teacher assessment. These experiences, argue Sean Kippin and Paul Cairney, highlight the confluence of events and choices in which an imperfect and rejected policy solution became a ‘lifeline’ for four beleaguered governments.
In 2020, the UK and devolved governments performed a ‘U-turn’ on their COVID-19 school exams replacement policies. The experience was embarrassing for education ministers and damaging to students. There are significant differences between (and often within) the four nations in terms of the structure, timing, weight, and relationship between the different examinations. However, in general, the A-level (England, Northern Ireland, Wales) and Higher/Advanced Higher (Scotland) examinations have similar policy implications, dictating entry to further and higher education and influencing employment opportunities. The Priestley review, commissioned by the Scottish Government after its U-turn, described the task of replacing these exams credibly as an ‘impossible task’.
Initially, each government defined the new policy problem in relation to the need to ‘credibly’ replicate the purpose of exams, allowing students to progress to tertiary education or employment. All four quickly announced their intention to allocate grades to students in some form, rather than replace the assessments with, for example, remote examinations. However, mindful of the long-term credibility of the examinations system and of ensuring fairness, each government opted to maintain the qualifications and seek a similar distribution of grades to previous years. A key consideration was that UK universities accept large numbers of students from across the UK.
One potential solution open to policymakers was to rely solely on teacher grading via centre assessed grades (CAGs). CAGs are ‘based on a range of evidence including mock exams, non-exam assessment, homework assignments and any other record of student performance over the course of study’. Potential problems included the risk of high variation and discrepancies between different centres, the potential overload of the higher education system, and the tendency for teacher-predicted grades to reward already privileged students and punish disabled, non-white, and economically deprived children.
A second option was to take CAGs as a starting point, then use an algorithm to produce ‘standardisation’. This was potentially attractive to each government, as it allowed students to complete secondary education and progress to the next level in similar ways to previous (and future) cohorts. Further, an emphasis on the technical nature of this standardisation – with qualifications agencies taking the lead in designing the process by which grades would be allocated, and opting not to share the details of their algorithms – was a key part of its (temporary) viability. Each government then made similar claims when defending the problem definition and selecting the solution. Yet this approach reduced both the debate on the unequal impact of the process on students and the chance for other experts to examine whether the algorithm would produce the desired effect. Policymakers in all four governments assured students that the grading would be accurate and fair, with teacher discretion playing a large role in the calculation of grades.
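To illustrate why this approach carried such distributional risk, here is a deliberately crude sketch of the general ‘distribution matching’ idea described above. It is not the algorithm used by any of the qualifications agencies (whose methods were more elaborate); it simply ranks students within a centre by their teacher-assessed grades and reassigns grades so that the centre’s results match its historical distribution.

```python
from typing import Dict, List


def standardise_centre(cag_ranked: List[str], historical_distribution: Dict[str, float]) -> List[str]:
    """
    cag_ranked: centre-assessed grades, ordered from strongest to weakest student
                (only the rank order is used).
    historical_distribution: share of each grade the centre awarded in past years,
                             listed from best to worst, e.g. {"A": 0.2, "B": 0.3, "C": 0.5}.
    Returns grades reassigned so the centre's distribution matches its history.
    """
    n = len(cag_ranked)
    awarded: List[str] = []
    for grade, share in historical_distribution.items():
        awarded.extend([grade] * round(share * n))
    # Rounding can leave too few or too many grades; pad with the lowest grade or trim.
    lowest = list(historical_distribution)[-1]
    return (awarded + [lowest] * n)[:n]


# A centre whose teachers submit six As and four Bs, but whose past results were weaker,
# sees most students downgraded regardless of their individual performance:
print(standardise_centre(["A"] * 6 + ["B"] * 4, {"A": 0.2, "B": 0.3, "C": 0.5}))
# -> ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'C', 'C', 'C']
```

On this logic, a strong student in a historically low-attaining (and often more deprived) centre is downgraded regardless of their own work, which is essentially the pattern that provoked the backlash described below.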
To these governments, it appeared at first that they had found a fair and efficient (or at least defensible) way to allocate grades, and public opinion did not respond negatively to the announcement. However, these appearances proved profoundly deceptive and vanished on the day each set of exam results was published. The Scottish national mood shifted so intensely that, after a few days, pursuing standardisation no longer seemed politically feasible. The intense criticism centred on the unequal downgrading of results after standardisation, rather than on the unequal overall rise in grade performance after teacher assessment and standardisation (which advantaged poorer students).
Despite some recognition that similar problems were afoot elsewhere, this shift of problem definition did not happen in the rest of the UK until (a) their published exam results highlighted similar problems regarding the role of previous school performance on standardised results, and (b) the Scottish Government had already changed course. Upon the release of grades outside Scotland, it became clear that downgrades were also concentrated in more deprived areas. For instance, in Wales, 42% of students saw their A-Level results lowered from their Centre Assessed Grades, with the figure close to a third for Northern Ireland.
Each government thus faced a similar choice: defend the original system by challenging the emerging consensus around its apparent unfairness; modify it by changing the appeals process; or abandon it altogether and revert to solely teacher assessed grades. Ultimately, the three remaining governments followed the same path. Initially, they opted to defend their original policy choice. However, by 17 August, the UK, Welsh, and Northern Irish education secretaries announced (separately) that examination grades would be based solely on CAGs – unless the standardisation process had generated a higher grade (students would receive whichever was highest).
Scotland’s initial experience was instructive to the rest of the UK, and its example provided the UK government with a blueprint to follow (eventually). It began with a new policy choice – reverting to teacher assessed grades – sold as fairer to the victims of the standardisation process. Once this precedent had been set, a different course for policymakers at the UK level became difficult to sustain, particularly when faced with a similar backlash. The UK government’s decision in turn influenced the Welsh and Northern Irish governments.
In short, the particular ordering of choices created a cascading effect across the four governments, which initially produced one policy solution before triggering a U-turn. This focus on order and timing should not be lost during the inevitable inquiries and reports on the examinations systems. The take-home message is not to ignore the policy process when evaluating the long-term effect of these policies. A focus on why the standardisation processes went wrong is welcome, but we should also ask why the policymaking process malfunctioned, producing a wildly inconsistent approach to the same policy choice in such a short space of time. Examining both aspects of this fiasco will be crucial to the grading process in 2021, given that governments will be seeking an alternative to exams for a second year.
__________________________
Note: the above draws on the authors’ published work in British Politics.
This post first appeared on LSE British Politics and Policy (27.11.20) and is based on this article in British Politics.
Paul Cairney assesses government policy in the first half of 2020. He identifies the intense criticism of its response so far, and encourages more systematic assessments grounded in policy research.
In March 2020, COVID-19 prompted policy change in the UK at a speed and scale only seen during wartime. According to the UK government, policy was informed heavily by science advice. Prime Minister Boris Johnson argued that, ‘At all stages, we have been guided by the science, and we will do the right thing at the right time’. Further, key scientific advisers such as Sir Patrick Vallance emphasised the need to gather evidence continuously to model the epidemic and identify key points at which to intervene, to reduce the size of the peak of population illness initially, then manage the spread of the virus over the longer term.
Both ministers and advisors emphasised the need for individual behavioural change, supplemented by government action, in a liberal democracy in which direct imposition is unusual and unsustainable. However, for its critics, the government experience has quickly become an exemplar of policy failure.
Initial criticisms include that ministers did not take COVID-19 seriously enough in relation to existing evidence, when its devastating effect was apparent in China in January and Italy from February; act as quickly as other countries to test for infection to limit its spread; or introduce swift-enough measures to close schools, businesses, and major social events. Subsequent criticisms highlight problems in securing personal protective equipment (PPE), testing capacity, and an effective test-trace-and-isolate system. Some suggest that the UK government was responding to the ‘wrong pandemic’, assuming that COVID-19 could be treated like influenza. Others blame ministers for not pursuing an elimination strategy to minimise its spread until a vaccine could be developed. Some criticise their over-reliance on models which underestimated the R (rate of transmission) and ‘doubling time’ of cases and contributed to a 2-week delay of lockdown. Many describe these problems and delays as the contributors to the UK’s internationally high number of excess deaths.
How can we hold ministers to account in a meaningful way?
I argue that these debates are often fruitless and too narrow because they do not involve systematic policy analysis, take into account what policymakers can actually do, or widen debate to consider whose lives matter to policymakers. Drawing on three policy analysis perspectives, I explore the questions that we should ask to hold ministers to account in a way that encourages meaningful learning from early experience.
These questions include:
Was the government’s definition of the problem appropriate? Much analysis of UK government competence relates to specific deficiencies in preparation (such as shortages in PPE), immediate action (such as to discharge people from hospitals to care homes without testing them for COVID-19), and implementation (such as an imperfect test-trace-and-isolate system). The broader issue relates to its focus on intervening in late March to protect healthcare capacity during a peak of infection, rather than taking a quicker and more precautionary approach. This judgment relates largely to its definition of the policy problem which underpins every subsequent policy intervention.
Did the government select the right policy mix at the right time? Who benefits most from its choices?
Most debates focus on the ‘lock down or not?’ question without exploring fully the unequal impact of any action. The government initially relied on exhortation, based on voluntarism and an appeal to social responsibility. Initial policy inaction had unequal consequences on social groups, including people with underlying health conditions, black and ethnic minority populations more susceptible to mortality at work or discrimination by public services, care home residents, disabled people unable to receive services, non-UK citizens obliged to pay more to live and work while less able to access public funds, and populations (such as prisoners and drug users) that receive minimal public sympathy. Then, in March, its ‘stay at home’ requirement initiated a major new policy and different unequal impacts in relation to the income, employment, and wellbeing of different groups. These inequalities are lost in more general discussions of impacts on the whole population.
Did the UK government make the right choices on the trade-offs between values, and what impacts could the government have reasonably predicted?
Initially, the most high-profile value judgment related to freedom from state coercion to reduce infection versus freedom from the harm of infection caused by others. Then, values underpinned choices on the equitable distribution of measures to mitigate the economic and wellbeing consequences of lockdown. A tendency for the UK government to project centralised and ‘guided by the science’ policymaking has undermined public deliberation on these trade-offs. Such deliberation will be crucial to ongoing debates on the trade-offs associated with national and regional lockdowns.
Did the UK government combine good policy with good policymaking?
A problem like COVID-19 requires trial-and-error policymaking on a scale that seems incomparable to previous experiences. It requires further reflection on how to foster transparent and adaptive policymaking and widespread public ownership for unprecedented policy measures, in a political system characterised by (a) accountability focused incorrectly on strong central government control and (b) adversarial politics that is not conducive to consensus seeking and cooperation.
These additional perspectives and questions show that too-narrow questions – such as whether the UK government was ‘following the science’ – do not help us understand the longer-term development and wider consequences of UK COVID-19 policy. Indeed, such a narrow focus on science marginalises wider discussions of values and of the populations most disadvantaged by government policy.
This post first appeared as Who controls public policy? on the UK in a Changing Europe website. There is also a 1-minute video, but you would need to be a completist to want to watch it.
Most coverage of British politics focuses on the powers of a small group of people at the heart of government. In contrast, my research on public policy highlights two major limits to those powers, related to the enormous number of problems that policymakers face, and to the sheer size of the government machine.
First, elected policymakers simply do not have the ability to properly understand, let alone solve, the many complex policy problems they face. They deal with this limitation by paying unusually high attention to a small number of problems and effectively ignoring the rest.
Second, policymakers rely on a huge government machine and network of organisations (containing over 5 million public employees) essential to policy delivery, and oversee a statute book which they could not possibly understand.
In other words, they have limited knowledge and even less control of the state, and have to make choices without knowing how they relate to existing policies (or even what happens next).
These limits to ministerial powers should prompt us to think differently about how to hold them to account. If they only have the ability to influence a small proportion of government business, should we blame them for everything that happens in their name?
My approach is to apply these general insights to specific problems in British politics. Three examples help to illustrate their ability to inform British politics in new ways.
First, policymaking can never be ‘evidence based’. Some scientists cling to the idea that the ‘best’ evidence should always catch the attention of policymakers, and assume that ‘speaking truth to power’ helps evidence win the day.
As such, researchers in fields like public health and climate change wonder why policymakers seem to ignore their evidence.
The truth is that policymakers only have the capacity to consider a tiny proportion of all available information. Therefore, they must find efficient ways to ignore almost all evidence to make timely choices.
They do so by setting goals and identifying trusted sources of evidence, but also using their gut instinct and beliefs to rule out most evidence as irrelevant to their aims.
Second, the UK government cannot ‘take back control’ of policy following Brexit, simply because it was not in control of policy before the UK joined the European Union. The idea of control is built on the false image of a powerful centre of government led by a small number of elected policymakers.
This way of thinking assumes that sharing power is simply a choice. However, sharing power and responsibility is borne of necessity because the British state is too large to be manageable.
Governments manage this complexity by breaking down their responsibilities into many government departments. Still, ministers can only pay attention to a tiny proportion of issues managed by each department. They delegate most of their responsibilities to civil servants, agencies, and other parts of the public sector.
In turn, those organisations rely on interest groups and experts to provide information and advice.
As a result, most public policy is conducted through small and specialist ‘policy communities’ that operate out of the public spotlight and with minimal elected policymaker involvement.
The logical conclusion is that senior elected politicians are less important than people think. While we like to think of ministers sitting in Whitehall and taking crucial decisions, most of these decisions are taken in their name but without their intervention.
Third, the current pandemic underlines all too clearly the limits of government power. Of course people are pondering the degree to which we can blame UK government ministers for poor choices in relation to Covid-19, or learn from their mistakes to inform better policy.
Many focus on the extent to which ministers were ‘guided by the science’. However, at the onset of a new crisis, government scientists face the same uncertainty about the nature of the policy problem, and ministers are not really able to tell if a Covid-19 policy would work as intended or receive enough public support.
Some examples from the UK experience expose the limited extent to which policymakers can understand, far less control, an emerging crisis.
Prior to the lockdown, neither scientists nor ministers knew how many people were infected, nor when levels of infection would peak.
They had limited capacity to test. They did not know how often (and how well) people wash their hands. They did not expect people to accept and follow strict lockdown rules so readily, and did not know which combination of measures would have the biggest impact.
When supporting businesses and workers during ‘furlough’, they did not know who would be affected and therefore how much the scheme would cost.
In short, while Covid-19 has prompted policy change and state intervention on a scale not witnessed outside of wartime, the government has never really known what impact its measures would have.
Overall, the take-home message is that the UK narrative of strong central government control is damaging to political debate and undermines policy learning. It suggests that every poor outcome is simply the consequence of bad choices by powerful leaders. If so, we are unable to distinguish between the limited competence of some leaders and the limited powers of them all.
SAGE’s emphasis on uncertainty and limited knowledge extended to the evidence on how to influence behaviour via communication:
‘there is limited evidence on the best phrasing of messages, the barriers and stressors that people will encounter when trying to follow guidance, the attitudes of the public to the interventions, or the best strategies to promote adherence in the long-term’ (SPI-B Meeting paper 3.3.20: 2)
Early on, SAGE minutes continuously described the potential problems of communicating risk and encouraging behavioural change through communication (in other words, based on low expectations for the types of quarantine measures associated with China and South Korea).
It sought ‘behavioural science input on public communication’ and ‘agreed on the importance of behavioural science informing policy – and on the importance of public trust in HMG’s approach’ (28.1.20: 2).
It worried about how the public might interpret ‘case fatality rate’, given the different ways to describe and interpret frequencies and risks (4.2.20: 3).
It stated that ‘Epidemiological terms need to be made clearer in the planning documents to avoid ambiguity’ (11.2.20: 3).
Its extensive discussion of behavioural science (13.2.20: 2-3) includes: there will be public scepticism and inaction until the first deaths are confirmed; the main aim is to motivate people by relating behavioural change to their lives; messaging should stress ‘personal responsibility and responsibility to others’ and be clear on which measures are effective; and ‘National messaging should be clear and definitive: if such messaging is presented as both precautionary and sufficient, it will reduce the likelihood of the public adopting further unnecessary or contradictory behaviours’ (13.2.20: 2-3)
Banning large public events could signal the need to change behaviour more generally, but evidence for its likely impact is unavailable (SPI-M-O, 11.2.20: 1).
Generally speaking, the assumption underpinning communication is that behavioural change will come largely from communication (encouragement and exhortation) rather than imposition. Hence, for example, the SPI-B (25.2.20: 2) recommendation on limiting the ‘risk of public disorder’:
‘Provide clear and transparent reasons for different strategies: The public need to understand the purpose of the Government’s policy, why the UK approach differs to other countries and how resources are being allocated. SPI-B agreed that government should prioritise messaging that explains clearly why certain actions are being taken, ahead of messaging designed solely for reassuring the public.
This should also set clear expectations on how the response will develop, e.g. ensuring the public understands what they can expect as the outbreak evolves and what will happen when large numbers of people present at hospitals. The use of early messaging will help, as a) individuals are likely to be more receptive to messages before an issue becomes controversial and b) it will promote a sense the Government is following a plan.
Promote a sense of collectivism: All messaging should reinforce a sense of community, that “we are all in this together.” This will avoid increasing tensions between different groups (including between responding agencies and the public); promote social norms around behaviours; and lead to self-policing within communities around important behaviours’.
The underpinning assumption is that the government should treat people as ‘rational actors’: explain risk and how to reduce it, support existing measures by the public to socially distance, be transparent, explain if UK is doing things differently to other countries, and recognise that these measures are easier for some more than others (13.3.20: 3).
In that context, SPI-B Meeting paper 22.3.20 describes how to enable social distancing with reference to the ‘behaviour change wheel’ (Michie et al, 2011): ‘There are nine broad ways of achieving behaviour change: Education, Persuasion, Incentivisation, Coercion, Enablement, Training, Restriction, Environmental restructuring, and Modelling’ and many could reinforce each other (22.3.20: 1). The paper comments on current policy in relation to 5 elements:
Education – clarify guidance (generally, and for shielding), e.g. through interactive website, tailored to many audiences
Persuasion – increase perceived threat among ‘those who are complacent, using hard-hitting emotional messaging’ while providing clarity and positive messaging (tailored to your audience’s motivation) on what action to take (22.3.20: 1-2).
Incentivisation – emphasise social approval as a reward for behaviour change
Coercion – ‘Consideration should be given to enacting legislation, with community involvement, to compel key social distancing measures’, combined with encouraging ‘social disapproval but with a strong caveat around unwanted negative consequences’ (22.3.20: 2)
Enablement – make sure that people feeling the unequal impact of lockdown have alternative access to social contact, food, and other resources (particularly vulnerable people who are shielding, aided by community support).
Apparently, section 3 of SPI-B’s meeting paper (1.4.20b: 2) had been redacted because it was critical of a UK Government ‘Framework’ containing 4 new proposals for greater compliance: ‘17) increasing the financial penalties imposed; 18) introducing self-validation for movements; 19) reducing exercise and/or shopping; 20) reducing non-home working’. On 17, it suggests that there is limited evidence for measures such as fining someone for exercising more than 1km from their home, and that they could contribute to lower support for policy overall. On 17-19, it suggests that most people are already complying, so there is no evidence to support more targeted measures. It is more positive about 20, since it could reduce non-home working (especially if financially supported). Generally, it suggests that ministers should ‘also consider the role of rewards and facilitations in improving adherence’ and use organisational changes, such as staggered work hours and new use of space, rather than simply focusing on individuals.
Communication after the lockdown
SAGE suggests that communication problems are more complicated during the release of lockdown measures (in other words, without the ability to present the relatively-low-ambiguity message ‘stay at home’). Examples (mostly from SPI-B and its contributors) include:
When notifying people about the need to self-isolate, address the trade-offs between symptom-based and positive-test-based notifications (meeting paper 29.4.20a: 1-4; 5.5.20: 1-8)
If you are worried about public ‘disorder’, focus on clear, effective, tailored communication, using local influencers, appealing to sympathetic groups (like NHS staff), and co-producing messages between the police and public (in other words, police by consent, and do not exacerbate grievances) (meeting papers 19.4.20: 1-4; 21.4.20: 1-3; 4.5.20: 1-11)
Be wary of lockdowns specific to very small areas, which undermine the ‘all in it together’ message (REDACTED and Clifford Stott, no date: 1). If you must do it, clarify precisely who is affected and what they should do, support the people most vulnerable and impacted (e.g. financially), and redesign physical spaces (meeting paper SPI-B 22.4.20a)
When reopening schools (fully or partly), communication is key to the inevitably complex and unpredictable behavioural consequences (so, for example, work with parents, teachers, and other stakeholders to co-produce clear guidance) (29.4.20d: 1-10)
On the introduction of Alert Levels, as part of the Joint Biosecurity Centre work on local outbreaks (described in meeting paper 20.5.20a: 1-9): build public trust and understanding regarding JBC alert levels, and relate them very clearly to expected behaviour (SAGE 28.5.20). Each Alert Level should relate clearly to a required response in that area, and ‘public communications on Alert Levels needs many trusted messengers giving the same advice, many times’ (meeting paper 27.5.20b: 3).
On transmission between social networks, ‘Communicate two key principles: 1. People whose work involves large numbers of contacts with different people should avoid close, prolonged, indoor contact with anyone as far as possible … 2. People with different workplace networks should avoid meeting or sharing the same spaces’ (meeting paper 27.5.20b: 1).
On outbreaks in ‘forgotten institutional settings’ (including prisons, homeless hostels, migrant dormitories, and long-stay mental health settings): address the unusually low levels of trust in (or awareness of) government messaging among so-called ‘hard to reach groups’ (meeting paper 28.5.20a: 1).
See also:
SPI-M (Meeting paper 17.3.20b: 4) list of how to describe probabilities. This is more important than it looks, since there is a potentially major gap between the public and advisory group understanding of words like ‘probably’ (compare with the CIA’s Words of Estimative Probability).
Oral evidence to the Health and Social Care committee highlights the now-well-documented limits to UK testing capacity and PPE stocks (see also NERVTAG on PPE). SAGE does not discuss testing capacity much in the beginning, although on 10.3.20 it lists as an action point: ‘Plans for how PHE can move from 1,000 serology tests to 10,000 tests per week’ and by 16.3.20 it describes the urgent need to scale up testing – perhaps with commercial involvement and to test at home (if can ensure accuracy) – and to secure sufficient data to track the epidemic well enough to inform operational decisions. From April, it highlights the need for a ‘national testing strategy’ to cover NHS patients, staff, an epidemiological survey, and the community (2.4.20), and the need for far more testing is a feature of almost every meeting from then.
Limited contact tracing
Initially, SAGE describes a quite-low contact tracing capacity: ‘Currently, PHE can cope with five new cases a week (requiring isolation of 800 contacts). Modelling suggests this capacity could be increased to 50 new cases a week (8,000 contact isolations)’ (18.2.20: 1).
Previously, it had noted that the point would come when transmission was too high to make contact tracing worthwhile, particularly since many (e.g. asymptomatic) cases may already have been missed (20.2.20: 2) and the necessary testing capacity was not in place (16.4.20): ‘PHE to work with SPI-M to develop criteria for when contact tracing is no longer worthwhile. This should include consideration of any limiting factors on testing and alternative methods of identifying epidemic evolution and characteristics’ (11.2.20: 3; see also Testing and contact tracing).
It returned to the feasibility question after the lockdown, with:
SPI-M (meeting paper 4.20d: 1-3) estimating that effective contact tracing (80% of non-household cases, in 2 days) could reduce the R by 30-60% if you could quarantine many people, multiple times; and,
SPI-B (meeting paper 4.20a: 1-3) advising on the need to clarify to people how it would work and what they should do, redesign physical spaces, and conduct new qualitative research and stakeholder engagement to ‘help us to understand more clearly the specific drivers, enablers and barriers for new behavioural recommendations’ to address an unprecedented problem in the UK (22.4.20a: 2). SPI-B also describes the trade-offs between app-informed systems (notification based on symptoms would suit people seeking to be precautionary, but could reduce compliance among people who believe the risk to be low) (see meeting papers 29.4.20: 3 and 5.5.20: 1-8)
SAGE noting ongoing work on clusters and super-spreading events, which necessitate cluster-based contact tracing (11.6.20: 3)
A more general message that contact tracing will be overwhelmed if lockdown measures are released too soon, raising R well above 1 and causing incidence to rise too quickly (e.g. 14.5.20)
Low capacity to achieve high levels of information necessary for forecasting
This type of discussion exemplifies a general and continuous focus on the lack of data to inform advice:
‘24. Real-time forecasting models rely on deriving information on the epidemic from surveillance. If transmission is established in the UK there will necessarily be a delay before sufficiently accurate forecasts in the UK are available. 25. Decisions being made on whether to modify or lift non-pharmaceutical interventions require accurate understanding of the state of the epidemic. Large-scale serological data would be ideal, especially combined with direct monitoring of contact behaviour. 26. Preliminary forecasts and accurate estimates of epidemiological parameters will likely be available in the order of weeks and not days following widespread outbreaks in the UK (or a similar country). While some estimates may be available before this time their accuracy will be much more limited. 27. The UK hospitalisation rate and CFR will be very important for operational planning and will be estimated over a similar timeframe. They may take longer depending on the availability of data’ (Meeting paper 2.3.20: 3-4).
A limited capacity to reach a relatively cautious consensus?
These limitations to information contributed to the gap between SAGE’s estimates of UK transmission (such as comparisons with Italy) and the UK’s actual, much faster, rate of transmission:
‘the UK likely has thousands of cases – as many as 5,000 to 10,000 – which are geographically spread nationally … The UK is considered to be 4-5 weeks behind Italy but on a similar curve (6-8 weeks behind if interventions are applied)’ (10.3.20: 1)
‘Based on limited available evidence, SAGE considers that the UK is 2 to 4 weeks behind Italy in terms of the epidemic curve’ (18.3.20: 1)
At the heart of this estimate was the under-estimated doubling time of infection (‘the time it takes for the number of cases to double in size’, Meeting paper 3.2.20a):
although described as 3-4 days (28.1.20: 1) then 4-6 days (Meeting paper 2.3.20) based on Wuhan, and 3-5 days based on Hubei (Meeting paper 3.2.20a),
SAGE estimates ‘every 5-6 days’ (16.3.20: 1) and states that ‘Assuming a doubling time of around 5-7 days continues to be reasonable’ (18.3.20: 1).
Only by meeting 18 does SAGE estimate the doubling time (ICU patients) at 3-4 days (23.3.20). By meeting 19, it describes the doubling time in hospitals as 3.3 days (26.3.20: 1).
Kit Yates suggests that (a) the UK exhibited a 3-day doubling time during this period (Huffington Post), and (b) many members of SAGE and SPI-M would have preferred to model on the assumption of 3 days:
Having spoken to some of the modellers on SPI-M, not all of them were missing this. Many of the groups had fitted models to data and come up with shorter and more realistic doubling times, maybe around the 3-day mark, but their estimates never found consensus within the group, so some members of SPI-M have communicated their concerns to me that some of the modelling groups had more influence over the consensus decision than others, which meant that some opinions or estimates which might have been valid, didn’t get heard, and consequently weren’t passed on up the line to SAGE, and then further towards the government, so an over-reliance on certain models or modelling groups might have been costly in this situation (interview, Kit Yates, More or Less, 10.6.20: 4m47s-5m27s)
Yates then suggests that the most listened-to model – led by Neil Ferguson, published 16.3.20 – estimates a doubling time of 5-days, based on early data from Wuhan, using estimate of R2.4 (and generation time of 6.5 days), ‘which we now know to be way too low’ when we look at the UK data:
‘If they had just plotted the early trajectory of the epidemics against the current UK data at that point, they would have seen [by 14.3.20] that their model was starting to underestimate the number of cases and then the number of deaths which were occurring in the UK’ (interview, Kit Yates, More or Less, 10.6.20: 7m2s-7m15s)
Yates’ account highlights not only
the effect of uncertainty and limited capacity to generate more information, but also
‘If you thought SAGE and the way SAGE works was a cosy consensus of agreeing scientists, you would be very mistaken. It is a lively, robust discussion, with multiple inputs. We do not try to get everybody saying exactly the same thing’.
There is often a clear distinction between a strategy designed to (a) eliminate a virus/ the spread of disease quickly, and (b) manage the spread of infection over the long term (see The overall narrative).
However, generally, the language of virus management is confusing. We need to be careful with interpreting the language used in these minutes, and other sources such as oral evidence to House of Commons committees, particularly when comparing the language at the beginning (when people were also unsure what to call SARS-CoV-2 and COVID-19) to present day debates.
For example, in January, it is tempting to contrast ‘slow down the spread of the outbreak domestically’ (28.1.20: 2) with a strategy towards ‘extinction’, but the proposed actions may be the same even if the expectations of impact are different. Some people interpret these differences as indicative of a profoundly different approach (delay versus eradicate); some describe the semantic differences as semantics.
By February, SAGE’s expectation is of an inevitable epidemic and inability to contain COVID-19, prompting it to describe the inevitable series of stages:
‘Priorities will shift during a potential outbreak from containment and isolation on to delay and, finally, to case management … When there is sustained transmission in the UK, contact tracing will no longer be useful’ (18.2.20: 1; its discussion on 20.2.20: 2 also concludes that ‘individual cases could already have been missed – including individuals advised that they are not infectious’).
Mitigation versus suppression
On the face of it, it looks like there is a major difference in the ways on which (a) the Imperial College COVID-19 Response Team and (b) SAGE describe possible policy responses. The Imperial paper makes a distinction between mitigation and suppression:
Its ‘mitigation strategy scenarios’ highlight the relative effects of partly-voluntary measures on mortality and demand for ‘critical care beds’ in hospitals: (voluntary) ‘case isolation in the home’ (people with symptoms stay at home for 7 days), ‘voluntary home quarantine’ (all members of the household stay at home for 14 days if one member has symptoms), (government enforced) ‘social distancing of those over 70’ or ‘social distancing of entire population’ (while still going to work, school or University), and closure of most schools and universities. It omits ‘stopping mass gatherings’ because ‘the contact-time at such events is relatively small compared to the time spent at home, in schools or workplaces and in other community locations such as bars and restaurants’ (2020a: 8). Assuming 70-75% compliance, it describes the combination of ‘case isolation, home quarantine and social distancing of those aged over 70’ as the most impactful, but predicts that ‘mitigation is unlikely to be a viable option without overwhelming healthcare systems’ (2020a: 8-10). These measures would only ‘reduce peak critical care demand by two-thirds and halve the number of deaths’ (to approximately 250,000).
Its ‘suppression strategy scenarios’ describe what it would take to reduce the rate of infection (R) from the estimated 2.0-2.6 to 1 or below (in other words, the game-changing point at which one person would infect no more than one other person) and reduce ‘critical care requirements’ to manageable levels. It predicts that a combination of four options – ‘case isolation’, ‘social distancing of the entire population’ (the measure with the largest impact), ‘household quarantine’ and ‘school and university closure’ – would reduce critical care demand from its peak ‘approximately 3 weeks after the interventions are introduced’, and contribute to a range of 5,600-48,000 deaths over two years (depending on the current R and the ‘trigger’ for action in relation to the number of occupied critical care beds) (2020a: 13-14).
In comparison, the SAGE meeting paper (26.2.20b: 1-3), produced 2-3 weeks earlier, pretty much assumes away the possible distinction between mitigation versus suppression measures (which Vallance has described as semantic rather than substantive – scroll down to The distinction between mitigation and suppression measures). In other words, it assumes ‘high levels of compliance over long periods of time’ (26.2.20b: 1). As such, we can interpret SAGE’s discussion as (a) requiring high levels of compliance for these measures to work (the equivalent of Imperial’s description of suppression), while (b) not describing how to use (more or less voluntary versus impositional) government policy to secure compliance. In comparison, Imperial equates suppression with the relatively-short-term measures associated with China and South Korea (while noting uncertainty about how to maintain such measures until a vaccine is produced).
It is taking forever, but I think I am now managing to get into the rhythm of the February SAGE minutes (in relation to media interviews and oral evidence in March), at least enough to identify the need to interpret the text in a relatively thoughtful/ sympathetic way ….
— Professor Paul Cairney (@CairneyPaul) June 29, 2020
One reason for SAGE to assume compliance in its scenario building is to focus on the contribution of each measure, generally taking place over 13 weeks, to delaying the peak of infection (while stating that ‘It will likely not be feasible to provide estimates of the effectiveness of individual control measures, just the overall effectiveness of them all’, 26.2.20b: 1), while taking into account their behavioural implications (26.2.20b: 2-3).
School closures could contribute to a 3-week delay, especially if combined with FE/ HE closures (but with an unequal impact on ‘Those in lower socio-economic groups … more reliant on free school meals or unable to rearrange work to provide childcare’).
Home isolation (65% of symptomatic cases stay at home for 7 days) could contribute to a 2-3 week delay (and is the ‘Easiest measure to explain and justify to the public’).
‘Voluntary household quarantine’ (all member of the household isolate for 14 days) would have a similar effect – assuming 50% compliance – but with far more implications for behavioural public policy:
‘Resistance & non-compliance will be greater if impacts of this policy are inequitable. For those on low incomes, loss of income means inability to pay for food, heating, lighting, internet. This can be addressed by guaranteeing supplies during quarantine periods.
Variable compliance, due to variable capacity to comply, may lead to dissatisfaction.
Ensuring supplies flow to households is essential. A desire to help among the wider community (e.g. taking on chores, delivering supplies) could be encouraged and scaffolded to support quarantined households.
There is a risk of stigma, so ‘voluntary quarantine’ should be portrayed as an act of altruistic civic duty’.
‘Social distancing’ (‘enacted early’), in which people restrict themselves to essential activity (work and school) could produce a 3-5 week delay (and likely to be supported in relation to mass leisure events, albeit less so when work activities involve a lot of contact.
[Note that it is not until May that it addresses this issue of feasibility directly (and, even then, it does not distinguish between technical and political feasibility: ‘It was noted that a useful addition to control measures SAGE considers (in addition to scientific uncertainty) would be the feasibility of monitoring/ enforcement’ (7.5.20: 3)]
As theme 2 suggests, there is a growing recognition that these measures should have been introduced by early March (such as via the Coronavirus Act 2020 not passed until 25.3.20), and likely would if the UK government and SAGE had more information (or interpreted its information in a different way). However, by mid-March, SAGE expresses a mixture of (a) growing urgency, but also (b) the need to stick to the plan, to reduce the peak and avoid a second peak of infection). On 13th March, it states:
‘There are no strong scientific grounds to hasten or delay implementation of either household isolation or social distancing of the elderly or the vulnerable in order to manage the epidemiological curve compared to previous advice. However, there will be some minor gains from going early and potentially useful reinforcement of the importance of taking personal action if symptomatic. Household isolation is modelled to have the biggest effect of the three interventions currently planned, but with some risks. SAGE therefore thinks there is scientific evidence to support household isolation being implemented as soon as practically possible’ (13.3.20: 1)
‘SAGE further agreed that one purpose of behavioural and social interventions is to enable the NHS to meet demand and therefore reduce indirect mortality and morbidity. There is a risk that current proposed measures (individual and household isolation and social distancing) will not reduce demand enough: they may need to be coupled with more intensive actions to enable the NHS to cope, whether regionally or nationally’ (13.3.20: 2)
On 16th March, it states:
‘On the basis of accumulating data, including on NHS critical care capacity, the advice from SAGE has changed regarding the speed of implementation of additional interventions. SAGE advises that there is clear evidence to support additional social distancing measures be introduced as soon as possible’ (16.3.20: 1)
Overall, we can conclude two things about the language of intervention:
There is now a clear difference between the ways in which SAGE and its critics describe policy: to manage an inevitably long-term epidemic, versus to try to eliminate it within national borders.
There is a less clear difference between terms such as suppress and mitigate, largely because SAGE focused primarily on a comparison of different measures (and their combination) rather than the question of compliance.
See also: There is no ‘herd immunity strategy’, which argues that this focus on each intervention was lost in radio and TV interviews with Vallance.
SAGE began a series of extraordinary meetings from 22nd January 2020. The first was described as ‘precautionary’ (22.1.20: 1) and includes updates from NERVTAG which met from 13th January. Its minutes state that ‘SAGE is unable to say at this stage whether it might be required to reconvene’ (22.1.20: 2). The second meeting notes that SAGE will meet regularly (e.g. 2-3 times per week in February) and coordinate all relevant science advice to inform domestic policy, including from NERVTAG and SPI-M (Scientific Pandemic Influenza Group on Modelling) which became a ‘formal sub-group of SAGE for the duration of this outbreak’ (SPI-M-O) (28.1.20: 1). It also convened an additional Scientific Pandemic Influenza subgroup (SPI-B) in February. I summarise these developments by month, but you can see that, by March, it is worth summarising each meeting. The main theme is uncertainty.
January 2020
The first meeting highlights immense uncertainty. Its description of WN-CoV (Wuhan Coronavirus), and statements such as ‘There is evidence of person-to-person transmission. It is unknown whether transmission is sustainable’, sum up the profound lack of information on what is to come (22.1.20: 1-2). It notes high uncertainty on how to identify cases, rates of infection, infectiousness in the absence of symptoms, and which previous experience (such as MERS) offers the most useful guidance. Only 6 days later, it estimates an R between 2-3, doubling rate of 3-4 days, incubation period of around 5 days, 14-day window of infectivity, varied symptoms such as coughing and fever, and a respiratory transmission route (different from SARS and MERS) (28.1.20: 1). These estimates are fairly constant from then, albeit qualified with reference to uncertainty (e.g. about asymptomatic transmission), some key outliers (e.g. the duration of illness in one case was 41 days – 4.2.20: 1), and some new estimates (e.g. of a 6-day ‘serial interval’, or ‘time between successive cases in a chain of transmission’, 11.2.20: 1). By now, it is preparing a response: modelling a ‘reasonable worst case scenario’ (RWC) based on the assumption of an R of 2.5 and no known treatment or vaccine, considering how to slow the spread, and considering how behavioural insights can be used to encourage self-isolation.
February 2020
SAGE began to focus on what measures might delay or reduce the impact of the epidemic. It described travel restrictions from China as low value, since a 95% reduction would have to be draconian to achieve and only secure a one month delay, which might be better achieved with other measures (3.2.20: 1-2). It, and supporting papers, suggested that the evidence was so limited that they could draw ‘no meaningful conclusions … as to whether it is possible to achieve a delay of a month’ by using one or a combination of these measures: international travel restrictions, domestic travel restrictions, quarantine people coming from infected areas, close schools, close FE/ HE, cancel large public events, contact tracing, voluntary home isolation, facemasks, hand washing. Further, some could undermine each other (e.g. school closures impact on older people or people in self-isolation) and have major societal or opportunity costs (SPI-M-O, 3.2.20b: 1-4). For example, the ‘SPI-M-O: Consensus view on public gatherings’ (11.2.20: 1) notes the aim to reduce duration and closeness of (particularly indoor) contact. Large outdoor gatherings are not worse than small, and stopping large events could prompt people to go to pubs (worse).
Throughout February, the minutes emphasize high uncertainty:
if there will be an epidemic outside of China (4.2.20: 2)
if it spreads through ‘air conditioning systems’ (4.2.20: 3)
the spread from, and impact on, children and therefore the impact of closing schools (4.2.20: 3; discussed in a separate paper by SPI-M-O, 10.2.20c: 1-2)
‘SAGE heard that NERVTAG advises that there is limited to no evidence of the benefits of the general public wearing facemasks as a preventative measure’ (while ‘symptomatic people should be encouraged to wear a surgical face mask, providing that it can be tolerated’ (4.2.20: 3)
At the same time, its meeting papers emphasized a delay in accurate figures during an initial outbreak: ‘Preliminary forecasts and accurate estimates of epidemiological parameters will likely be available in the order of weeks and not days following widespread outbreaks in the UK’ (SPI-M-O, 3.2.20a: 3).
This problem proved to be crucial to the timing of government intervention. A key learning point will be the disconnect between the following statement and the subsequent realisation (3-4 weeks later) that the lockdown measures from mid-to-late March came too late to prevent an unanticipated number of excess deaths:
‘SAGE advises that surveillance measures, which commenced this week, will provide
actionable data to inform HMG efforts to contain and mitigate spread of Covid-19’ … PHE’s surveillance approach provides sufficient sensitivity to detect an outbreak in its early stages. This should provide evidence of an epidemic around 9- 11 weeks before its peak … increasing surveillance coverage beyond the current approach would not significantly improve our understanding of incidence’ (25.2.20: 1)
It also seems clear from the minutes and papers that SAGE highlighted a reasonable worst case scenario on 26.2.20. It was as worrying as the Imperial College COVID-19 Response Team report dated 16.3.20 that allegedly changed the UK Government’s mind on the 16th March. Meeting paper 26.2.20a described the assumption of an 80% infection attack rate and 50% clinical attack rate (i.e. 50% of the UK population would experience symptoms), which underpins the assumption of 3.6 million requiring hospital care of at least 8 days (11% of symptomatic), and 541,200 requiring ventilation (1.65% of symptomatic) for 16 days. While it lists excess deaths as unknown, its 1% infection mortality rate suggests 524,800 deaths. This RWC replaces a previous projection (in Meeting paper 10.2.20a: 1-3, based on pandemic flu assumptions) of 820,000 excess deaths (27.2.20: 1).
As such, the more important difference could come from SAGE’s discussion of ‘non-pharmaceutical interventions (NPIs)’ if it recommends ‘mitigation’ while the Imperial team recommends ‘suppression’. However, the language to describe each approach is too unclear to tell (see Theme 1. The language of intervention; also note that NPIs were often described from March as ‘behavioural and social interventions’ following an SPI-B recommendation, Meeting paper 3.2.20: 1, but the language of NPI seems to have stuck).
March 2020
In March, SAGE focused initially (Meetings 12-14) on preparing for the peak of infection on the assumption that it had time to transition towards a series of isolation and social distancing measures that would be sustainable (and therefore unlikely to contribute to a second peak if lifted too soon). Early meetings and meeting papers express caution about the limited evidence for intervention and the potential for their unintended consequences. This approach began to change somewhat from mid-March (Meeting 15), and accelerate from Meetings 16-18, when it became clear that incidence and virus transmission were much larger than expected, before a new phase began from Meeting 19 (after the UK lockdown was announced on the 23rd).
Meeting 12 (3.3.18) describes preparations to gather and consolidate information on the epidemic and the likely relative effect of each intervention, while its meeting papers emphasise:
‘It is highly likely that there is sustained transmission of COVID-19 in the UK at present’, and a peak of infection ‘might be expected approximately 3-5 months after the establishment of widespread sustained transmission’ (SPI-M Meeting paper 2.3.20: 1)
the need the prepare the public while giving ‘clear and transparent reasons for different strategies’ and reducing ambiguity whenever giving guidance (SPI-B Meeting paper 3.2.20: 1-2)
The need to combine different measures (e.g. school closure, self-isolation, household isolation, isolating over-65s) at the right time; ‘implementing a subset of measures would be ideal. Whilst this would have a more moderate impact it would be much less likely to result in a second wave’ (Meeting paper 4.3.20a: 3).
Meeting 13 (5.3.20) describes staying in the ‘containment’ phase (which, I think, means isolating people with positive tests at home or in hospital) , and introducing: a 12-week period of individual and household isolation measures in 1-2 weeks, on the assumption of 50% compliance; and a longer period of shielding over-65s 2 weeks later. It describes ‘no evidence to suggest that banning very large gatherings would reduce transmission’, while closing bars and restaurants ‘would have an effect, but would be very difficult to implement’, and ‘school closures would have smaller effects on the epidemic curve than other options’ (5.3.20: 1). Its SPI-B Meeting paper (4.3.20b) expresses caution about limited evidence and reliance on expert opinion, while identifying:
potential displacement problems (e.g. school closures prompt people to congregate elsewhere, or be looked after by vulnerable older people, while parents to lose the chance to work)
the visibility of groups not complying
the unequal impact on poorer and single parent families of school closure and loss of school meals, lost income, lower internet access, and isolation
how to reduce discontent about only isolating at-risk groups (the view that ‘explaining that members of the community are building some immunity will make this acceptable’ is not unanimous) (4.3.20b: 2).
Meeting 14 (10.3.20) states that the UK may have 5-10000 cases and ‘10-14 weeks from the epidemic peak if no mitigations are introduced’ (10.3.20: 2). It restates the focus on isolation first, followed by additional measures in April, and emphasizes the need to transition to measures that are acceptable and sustainable for the long term:
‘SAGE agreed that a balance needs to be struck between interventions that theoretically have significant impacts and interventions which the public can feasibly and safely adopt in sufficient numbers over long periods’ …’the public will face considerable challenges in seeking to comply with these measures, (e.g. poorer households, those relying on grandparents for childcare)’ (10.3.20: 2)
Meeting 15 (13.3.20: 1) describes an update to its data, suggesting ‘more cases in the UK than SAGE previously expected at this point, and we may therefore be further ahead on the epidemic curve, but the UK remains on broadly the same epidemic trajectory and time to peak’. It states that ‘household isolation and social distancing of the elderly and vulnerable should be implemented soon, provided they can be done well and equitably’, noting that there are ‘no strong scientific grounds’ to accelerate key measures but ‘there will be some minor gains from going early and potentially useful reinforcement of the importance of taking personal action if symptomatic’ (13.3.20: 1) and ‘more intensive actions’ will be required to maintain NHS capacity (13.3.20: 2).
*******
On the 16th March, the UK Prime Minister Boris Johnson describes an ‘emergency’ (one week before declaring a ‘national emergency’ and UK-wide lockdown)
*******
Meeting 16 (16.3.20) describes the possibility that there are 5-10000 new cases in the UK (there is great uncertainty on the estimate’), doubling every 5-6 days. Therefore, to stay within NHS capacity, ‘the advice from SAGE has changed regarding the speed of implementation of additional interventions. SAGE advises that there is clear evidence to support additional social distancing measures be introduced as soon as possible’ (16.3.20: 1). SPI-M Meeting paper (16.3.20: 1) describes:
‘a combination of case isolation, household isolation and social distancing of vulnerable groups is very unlikely to prevent critical care facilities being overwhelmed … it is unclear whether or not the addition of general social distancing measures to case isolation, household isolation and social distancing of vulnerable groups would curtail the epidemic by reducing the reproduction number to less than 1 … the addition of both general social distancing and school closures to case isolation, household isolation and social distancing of vulnerable groups would be likely to control the epidemic when kept in place for a long period. SPI-M-O agreed that this strategy should be followed as soon as practical’
Meeting 17 (18.3.20) marks a major acceleration of plans, and a de-emphasis of the low-certainty/ beware-the-unintended-consequences approach of previous meetings (on the assumption that it was now 2-4 weeks behind Italy). It recommends school closures as soon as possible (and it, and SPIM Meeting paper 17.3.20b, now downplays the likely displacement effect). It focuses particularly on London, as the place with the largest initial numbers:
‘Measures with the strongest support, in terms of effect, were closure of a) schools, b) places of leisure (restaurants, bars, entertainment and indoor public spaces) and c) indoor workplaces. … Transport measures such as restricting public transport, taxis and private hire facilities would have minimal impact on reducing transmission’ (18.3.20: 2)
Meeting 18 (23.3.20) states that the R is higher than expected (2.6-2.8), requiring ‘high rates of compliance for social distancing’ to get it below 1 and stay under NHS capacity (23.3.20: 1). There is an urgent need for more community testing/ surveillance (and to address the global shortage of test supplies). In the meantime, it needs a ‘clear rationale for prioritising testing for patients and health workers’ (the latter ‘should take priority’) (23.3.20: 3) Closing UK borders ‘would have a negligible effect on spread’ (23.3.20: 2).
*******
The lockdown. On the 23rd March 2020, the UK Prime Minister Boris Johnson declared: ‘From this evening I must give the British people a very simple instruction – you must stay at home’. He announced measures to help limit the impact of coronavirus, including police powers to support public health, such as to disperse gatherings of more than two people (unless they live together), close events and shops, and limit outdoor exercise to once per day (at a distance of two metres from others).
*******
Meeting 19 (26.3.20) follows the lockdown. SAGE describes its priorities if the R goes below 1 and NHS capacity remains under 100%: ‘monitoring, maintenance and release’ (based on higher testing); public messaging on mass testing and varying interventions; understanding nosocomial transmission and immunology; clinical trials (avoiding hasty decisions’ on new drug treatment in absence of good data) and ‘how to minimise potential harms from the interventions, including those arising from postponement of normal services, mental ill health and reduced ability to exercise. It needs to consider in particular health impacts on poorer people’ (26.3.20: 1-2). The optimistic scenario is 10,000 deaths from the first wave (SPIM-O Meeting paper 25.3.20: 4).
Meeting 20 Confirms RWC and optimistic scenarios (Meeting paper 25.3.20), but it needs a ‘clearer narrative, clarifying areas subject to uncertainty and sensitivities’ and to clarify that scenarios (with different assumptions on, for example, the R, which should be explained more) are not predictions (29.3.20).
Meeting 21 seeks to establish SAGE ‘scientific priorities’ (e.g. long term health impacts of COVID-19, including socioeconomic impact on health (including mental health), community testing, international work (‘comorbidities such as malaria and malnutrition) (31.3.20: 1-2). NHS to set up an interdisciplinary group (including science and engineering) to ‘understand and tackle nosocomial transmission’ in the context of its growth and urgent need to define/ track it (31.3.20: 1-2). SAGE to focus on testing requirements, not operational issues. It notes the need to identify a single source of information on deaths.
April 2020
The meetings in April highlight four recurring themes.
First, it stresses that it will not know the impact of lockdown measures for some time, that it is too soon to understand the impact of releasing them, and there is high risk of failure: ‘There is a danger that lifting measures too early could cause a second wave of exponential epidemic growth – requiring measures to be re-imposed’ (2.4.20: 1; see also 14.4.20: 1-2). This problem remains even if a reliable testing and contact tracing system is in place, and if there are environmental improvements to reduce transmission (by keeping people apart).
Second, it notes signals from multiple sources (including CO-CIN and the RCGP) on the higher risk of major illness and death among black people, the ongoing investigation of higher risk to ‘BAME’ health workers (16.4.20), and further (high priority) work on ‘ethnicity, deprivation, and mortality’ (21.4.20: 1) (see also: Race, ethnicity, and the social determinants of health).
Third, it highlights the need for a ‘national testing strategy’ to cover NHS patients, staff, an epidemiological survey, and the community (2.4.20). The need for far more testing is a feature of almost every meeting (see also The need to ramp up testing).
Fourth, SAGE describes the need for more short and long-term research, identifying nosocomial infection as a short term priority, and long term priorities in areas such as the long term health impacts of COVID-19 (including socioeconomic impacts on physical and mental health), community testing, and international work (31.3.20: 1-2).
Finally, it reflects shifting advice on the precautionary use of face masks. Previously, advisory bodies emphasized limited evidence of a clear benefit to the wearer, and worried that public mask use would reduce the supply to healthcare professionals and generate a false sense of security (compare with this Greenhalgh et al article on the precautionary principle, the subsequent debate, and work by the Royal Society). Even by April: ‘NERVTAG concluded that the increased use of masks would have minimal effect’ on general population infection (7.4.20: 1), while the WHO described limited evidence that facemasks are beneficial for community use (9.4.20). Still, general face mask use but could have small positive effect, particularly in ‘enclosed environments with poor ventilation, and around vulnerable people’ (14.4.20: 2) and ‘on balance, there is enough evidence to support recommendation of community use of cloth face masks, for short periods in enclosed spaces where social distancing is not possible’ (partly because people can be infectious with no symptoms), as long as people know that it is no substitute for social distancing and handwashing (21.4.20)
May 2020
In May, SAGE continues to discuss high uncertainty on relaxing lockdown measures, the details of testing systems, and the need for research.
Generally, it advises that relaxations should not happen before there is more understanding of transmission in hospitals and care homes, and ‘until effective outbreak surveillance and test and trace systems are up and running’ (14.5.20). It advises specifically ‘against reopening personal care services, as they typically rely on highly connected workers who may accelerate transmission’ (5.5.20: 3) and warns against the too-quick introduction of social bubbles. Relaxation runs the risk of diminishing public adherence to social distancing, and to overwhelm any contact tracing system put in place:
‘SAGE participants reaffirmed their recent advice that numbers of Covid-19 cases remain high (around 10,000 cases per day with wide confidence intervals); that R is 0.7-0.9 and could be very close to 1 in places across the UK; and that there is very little room for manoeuvre especially before a test, trace and isolate system is up and running effectively. It is not yet possible to assess the effect of the first set of changes which were made on easing restrictions to lockdown’ (28.5.20: 3).
It recommends extensive testing in hospitals and care homes (12.5.20: 3) and ‘remains of the view that a monitoring and test, trace & isolate system needs to be put in place’ (12.5.20: 1)
June 2020
In June, SAGE identifies the importance of clusters of infection (super-spreading events) and the importance of a contact tracing system that focuses on clusters (rather than simply individuals) (11.6.20: 3). It reaffirms the value of a 2-metre distance rule. It also notes that the research on immunology remains unclear, which makes immunity passports a bad idea (4.6.20).
It describes the result of multiple meeting papers on the unequal impact of COVID-19:
‘There is an increased risk from Covid-19 to BAME groups, which should be urgently investigated through social science research and biomedical research, and mitigated by policy makers’ … ‘SAGE also noted the importance of involving BAME groups in framing research questions, participating in research projects, sharing findings and implementing recommendations’ (4.6.20: 1-3)
We need to use a suppression strategy to reduce infection enough to avoid overwhelming health service capacity, and shield the people most vulnerable to major illness or death caused by COVID-19, to minimize deaths during at least one peak of infection.
We need to maintain suppression for a period of time that is difficult to predict, subject to compliance levels that are difficult to predict and monitor.
We need to avoid panicking the public in the lead up to suppression, avoid too-draconian enforcement, and maintain wide public trust in the government.
We need to avoid (a) excessive and (b) insufficient suppression measures, either of which could contribute to a second wave of the epidemic of the same magnitude as the first.
We need to transition safely from suppression measures to foster economic activity, find safe ways for people to return to work and education, and reinstate the full use of NHS capacity for non-COVID-19 illness.
In the absence of a vaccine, this strategy will likely involve social distancing and (voluntary) track-and-trace measures to isolate people with COVID-19.
This understanding in the UK, informed strongly by SAGE, also informs the ways in which SAGE (a) deals with uncertainty, and (b) describes the likely impact of each stage of action.
Manage suppression during the first peak to avoid a second peak
Most importantly, it stresses continuously the need to avoid excessive suppressive measures on the first peak that would contribute to a second peak [my emphasis added]:
‘Any combination of [non-pharmaceutical] measures would slow but not halt an epidemic’, 25.2.20: 1).
‘Mitigations can be expected to change the shape of the epidemic curve or the timing of a first or second peak, but are not likely to reduce the overall number of total infections’. Therefore, identify whose priorities matter (such as NHS England) on the assumption that, ‘The optimal shape of the epidemic curve will differ according to sectoral or organisational priorities’ (27.2.20: 2).
‘A combination of these measures [school closures, household isolation, social distancing] is expected to have a greater impact: implementing a subset of measures would be ideal. Whilst this would have a more moderate impact it would be much less likely to result in a second wave. In comparison combining stringent social distancing measures, school closures and quarantining cases, as a long-term policy, may have a similar impact to that seen in Hong Kong or Singapore, but this could result in a large second epidemic wave once the measures were lifted’ (Meeting paper 4.3.20a: 3).
‘SAGE was unanimous that measures seeking to completely suppress spread of Covid-19 will cause a second peak. SAGE advises that it is a near certainty that countries such as China, where heavy suppression is underway, will experience a second peak once measures are relaxed’ (also: ‘It was noted that Singapore had had an effective “contain phase” but that now new cases had appeared) (13.3.20: 2)
Its visual of each possible peak of infection emphasises the risk of a second peak (Meeting paper 4.3.20: 2).
‘The objective is to avoid critical cases exceeding NHS intensive care and other respiratory support bed capacity’ … SAGE ‘advice on interventions should be based on what the NHS needs’ (16.3.20: 1)
The fewer cases that happen as a result of the policies enacted, the larger subsequent waves are expected to be when policies are lifted (SPI-M-O Meeting paper 25.3.20: 1)
‘There is a danger that lifting measures too early could cause a second wave of exponential epidemic growth – requiring measures to be re-imposed’ (2.4.20: 1)
Avoid the unintended consequences of epidemic suppression
This understanding intersects with (c) an emphasis of the loss of benefits caused by certain interventions (such as schools closures).
SPI-B (Meeting paper 4.3.20b: 1-4) expresses reluctance to close schools, partly to avoid the unintended consequences, including: displacement problems (e.g. school closures prompt children to be looked after by vulnerable older people, or parents to lose the chance to work); and, the unequal impact on poorer and single parent families (loss of school meals, lost income, lower internet access, exacerbating isolation and mental ill health). It then states that: ‘The importance of schools during a crisis should not be overlooked. This includes: Acting as a source of emotional support for children; Providing education (e.g. on hand hygiene) which is conveyed back to families; Provision of social service (e.g. free school meals, monitoring wellbeing); Acting as a point of leadership and communication within communities’ (4.3.20b: 4).
‘Long periods of social isolation may have significant risks for vulnerable people … SAGE agreed that a balance needs to be struck between interventions that theoretically have significant impacts and interventions which the public can feasibly and safely adopt in sufficient numbers over long periods. Input from behavioural scientists is essential to policy development of cocooning measures, to increase public practicability and likelihood of compliance … the public will face considerable challenges in seeking to comply with these measures, (e.g. poorer households, those relying on grandparents for childcare)’ (10.3.20: 2).
After the lockdown (23.3.20), SAGE describes a priority regarding: ‘how to minimise potential harms from the interventions, including those arising from postponement of normal services, mental ill health and reduced ability to exercise. It needs to consider in particular health impacts on poorer people’ (26.3.20: 1-2).
Exhort and encourage, rather than impose
It also intersects with (d) a primary focus on exhortation and encouragement rather than the imposition of behavioural change (Table 1), largely based on the belief that the UK government would be unwilling or unable to enforce behavioural change in ways associated with China. In that context, the government’s willingness and ability to enforce social distancing and business closure from the 23rd March is striking.
Examples include:
when recommending ‘individual home isolation (symptomatic individuals to stay at home for 14 days) and whole family isolation (fellow household members of symptomatic individuals to stay at home for 14 days after last family member becomes unwell)’, it assumes a 50% compliance rate, and notes that ‘closing bars and restaurants ‘would have an effect, but would be very difficult to implement’ (5.3.20: 1).
It also contrasts with the approach described by several of the UK’s (expert) critics, including Professor Devi Sridhar (Professor of Global Public Health), who is critical of SAGE specifically, and more generally of the UK government’s rejection of an ‘elimination’ strategy:
Eradication -> getting rid of every case of COVID on the planet. Only possible in long-term w/vaccine.
Elimination -> moving towards 0 infections within national borders. Yes, imported cases will occur, but if identified quickly, can mean return to largely ‘normal’ daily life.
We need a concerted push across the 4 nations for elimination. Here is how New Zealand did it & why it's feasible over the summer with political will, leadership & public willingness to comply. Only way to get back to 'normal' life. https://t.co/7BXf5gZJjzhttps://t.co/Qk7PM13XrO
Table 1 sets out one way to describe the distinction between these approaches:
The UK government is addressing a chronic problem, being cautious about policy change without supportive evidence, identifying trigger points to new approaches (based on incidence), and assuming initially that the approach is based largely on exhortation.
One alternative is to pursue elimination aggressively, adopting a precautionary principle before there is supportive evidence of a major problem and the effectiveness of solutions, backed by measures such as contact tracing and quarantine, and assuming that the imposition of behaviour should be a continuous expectation.
One approach highlights the lack of evidence to support major policy change, and therefore gives primacy to the status quo. The other is more preventive, giving primacy to the precautionary principle until there is more clarity or certainty on the available evidence.
In that context, note (in Table 2) how frequently the SAGE minutes state that there is limited evidence to support policy change, and that an epidemic is inevitable (in other words, elimination without a vaccine is near-impossible). Both statements tend to support a UK government policy that was, until mid-March, based on reluctance to enforce a profound lockdown to impose social distancing.
As the next post describes, the chronology of Table 2 is instructive, since it demonstrates a degree of path dependence based on initial uncertainty and hesitancy. This approach was understandable at first (particularly when connected to an argument about reducing the peak of infection then avoiding a second wave), before being so heavily criticised only two months later.
Dominic Cummings’ tweets 38-55 (22-24 May 2021) describe much of the initial UK Government approach (described above) as a ‘herd immunity’ strategy:
38/ Media generally abysmal on covid but even I’ve been surprised by 1 thing: how many hacks have parroted Hancock’s line that ‘herd immunity wasn’t the plan’ when 'herd immunity by Sep' was *literally the official plan in all docs/graphs/meetings* until it was ditched
51/ It was in week of 9/3 that we started to figure out Plan B to dodge herd immunity until vaccines. Even AFTER we shifted to PlanB, COBR documents had the ‘OPTIMAL single peak strategy’ graphs showing 260k dead cos the system was so confused in the chaos, see below pic.twitter.com/kXsgfkQdbF
I discuss here why I think ‘herd immunity’ has become a damagingly ambiguous term, used too loosely and misleadingly by too many people to help us understand what happened:
However, clearly these tweets are crucial to our understanding of the influence of initial advice and strategies, based on the idea of acting to mitigate a first peak while avoiding a second.
The issue of science advice to government, and the role of SAGE in particular, became unusually high profile in the UK, particularly in relation to four factors:
The SAGE minutes and papers – including a record of SAGE members and attendees – were initially unpublished, in line with the previous convention of government to publish after, rather than during, a crisis.
‘SAGE is keen to make the modelling and other inputs underpinning its advice available to the public and fellow scientists’ (13.3.20: 1)
When it agrees to publish SAGE papers/ documents, it stresses: ‘It is important to demonstrate the uncertainties scientists have faced, how understanding of Covid-19 has developed over time, and the science behind the advice at each stage’ (16.3.20: 2)
‘SAGE discussed plans to release the academic models underpinning SAGE and SPI-M discussions and judgements. Modellers agreed that code would become public but emphasised that the effort to do this immediately would distract from other analyses. It was agreed that code should become public as soon as practical, and SPI-M would return to SAGE with a proposal on how this would be achieved. ACTION: SPI-M to advise on how to make public the source code for academic models, working with relevant partners’ (18.3.20: 2).
SAGE welcomes releasing names of SAGE participants (if willing) and notes role of Ian Boyd as ‘independent challenge function’ (28.4.20: 1)
SAGE also describes the need for a better system to allow SAGE participants to function effectively and with proper support (given the immense pressure/ strain on their time and mental health) (7.5.20: 1)
There were growing concerns that ministers would blame their advisers for poor choices (compare Freedman and Snowdon) or at least use science advice as ‘an insurance policy’, and
There was some debate about the appropriateness of Dominic Cummings (Prime Minister Boris Johnson’s special adviser) attending some meetings.
Therefore, its official description reflects its initial role plus a degree of clarification on the role of science advice mechanisms during the COVID-19 pandemic. The SAGE webpage on the gov.uk sites describes its role as:
‘provides scientific and technical advice to support government decision makers during emergencies … SAGE is responsible for ensuring that timely and coordinated scientific advice is made available to decision makers to support UK cross-government decisions in the Cabinet Office Briefing Room (COBR). The advice provided by SAGE does not represent official government policy’.
‘SAGE’s role is to provide unified scientific advice on all the key issues, based on the body of scientific evidence presented by its expert participants. This includes everything from latest knowledge of the virus to modelling the disease course, understanding the clinical picture, and effects of and compliance with interventions. This advice together with a descriptor of uncertainties is then passed onto government ministers. The advice is used by Ministers to allow them to make decisions and inform the government’s response to the COVID-19 outbreak …
The government, naturally, also considers a range of other evidence including economic, social, and broader environmental factors when making its decisions…
SAGE is comprised of leading lights in their representative fields from across the worlds of academia and practice. They do not operate under government instruction and expert participation changes for each meeting, based on the expertise needed to address the crisis the country is faced with …
SAGE is also attended by official representatives from relevant parts of government. There are roughly 20 such officials involved in each meeting and they do not frequently contribute to discussions, but can play an important role in highlighting considerations such as key questions or concerns for policymakers that science needs to help answer or understanding Civil Service structures. They may also ask for clarification on a scientific point’ (emphasis added by yours truly).
Note that the number of participants can be around 60 people, which is more like an assembly with presentations and a modest amount of discussion, than a decision-making function (the Zoom meeting on 4.6.20 lists 76 participants). Even a Cabinet meeting is about 20 and that is too much for coherent discussion/ action (hence separate, smaller, committees).
Further, each set of now-published minutes contains an ‘addendum’ to clarify its operation. For example, its first minutes in 2020 seek to clarify the role of participants. Note that the participants change somewhat at each meeting (see the full list of members/ attendees), and some names are redacted. Dominic Cummings’ name only appears (I think) on 5.3.20, 14.4.20, and two meetings on 1.5.20 (although, as Freedman notes, ‘his colleague Ben Warner was a more regular presence’).
More importantly, the minutes from late February begin to distinguish between three types of potential science advice:
to describe the size of the problem (e.g. surveillance of cases and trends, estimating a reasonable worst case scenario)
to estimate the relative impact of many possible interventions (e.g. restrictions on travel, school closures, self-isolation, household quarantine, and social distancing measures)
to recommend the level and timing of state action to achieve compliance in relation to those interventions.
SAGE focused primarily on roles 1 and 2, arguing against role 3 on the basis that state intervention is a political choice to be taken by ministers. Ministers are responsible for weighing up the potential public health benefits of each measure in relation to their social and economic costs (see also: The relationship between science, science advice, and policy).
Example 1: setting boundaries between advice and strategy
‘It is a political decision to consider whether it is preferable to enact stricter measures at first, lifting them gradually as required, or to start with fewer measures and add further measures if required. Surveillance data streams will allow real-time monitoring of epidemic growth rates and thus allow approximate evaluation of the impact of whatever package of interventions is implemented’ (Meeting paper 26.2.20b: 1)
This example highlights a limitation in performing role 2 to inform 3: SAGE would not be able to compare the relative impact of measures without knowing their level of imposition and its impact on compliance. Further, the way in which it addressed this problem is crucial to our interpretation and evaluation of the timing and substance of the UK government’s response.
In short, it simultaneously assumed awayandmaintained attention to this problem by stating:
‘The measures outlined below assume high levels of compliance over long periods of time. This may be unachievable in the UK population’ (26.2.20b: 1).
‘advice on interventions should be based on what the NHS needs and what modelling of those interventions suggests, not on the (limited) evidence on whether the public will comply with the interventions in sufficient numbers and over time’ (16.3.20: 1)
Example 2: setting boundaries between advice and value judgements
‘SAGE has not provided a recommendation of which interventions, or package of interventions, that Government may choose to apply. Any decision must consider the impacts these interventions may have on society, on individuals, the workforce and businesses, and the operation of Government and public services’ (Meeting paper 4.3.20a: 1).
To all intents and purposes, SAGE is noting that governments need to make value-based choices to:
Weigh up the costs and benefits of any action (as described by Layard et al, with reference to wellbeing measures and the assumed price of a life), and
Decide whose wellbeing, and lives, matter the most (because any action or inaction will have unequal consequences across a population).
In other words, policy analysis is one part evidence and one part value judgement. Both elements are contested in different ways, and different questions inform political choices (e.g. whose knowledge counts versus whose wellbeing counts?).
[see also:
‘Determining a tolerable level of risk from imported cases requires consideration of a number of non-science factors and is a policy question’ (28.4.20: 3)
‘SAGE reemphasises that its own focus should always be on providing clear scientific advice to government and the principles behind that advice’ (7.5.20: 1)]
Future reflections
Any future inquiry will be heavily contested, since policy learning and evaluation are political acts (and the best way to gather and use evidence during a pandemic is highly contested). Still, hopefully, it will promote reflection on how, in practice, governments and advisory bodies negotiate the blurry boundary between scientific advice and political choice when they are so interdependent and rely so heavily on judgement in the face of ambiguity and uncertainty (or ‘radical uncertainty’). I discuss this issue in the next post, which highlights the ways in which UK ministers relied on SAGE (and advisers) to define the policy problem.
I have summarized SAGE’s minutes (41 meetings, from 22 January to 11 June) and meeting/ background papers (125 papers, estimated range 1-51 pages, median 4, not-peer-reviewed, often produced a day after a request) in a ridiculously long table. This thing is huge (40 pages and 20000 words). It is the sequoia table. It is the humongous fungus. Even Joey Chestnut could not eat this table in one go. To make your SAGE meal more palatable, here is a series of blog posts that situate these minutes and papers in their wider context. This initial post is unusually long, so I’ve put in a photo to break it up a bit.
Did the UK government ‘follow the science’?
I use the overarching question Did the UK Government ‘follow the science’? initially for the clickbait. I reckon that, like a previous favourite (people have ‘had enough of experts’), ‘following the science’ is a phrase used by commentators more frequently than the original users of the phrase. It is easy to google and find some valuable commentaries with that hook (Devlin & Boseley, Siddique, Ahuja, Stevens, Flinders, Walker, Lancet Infectious Diseases, FT; see also Vallance) but also find ministers using a wider range of messages with more subtle verbs and metaphors:
‘We will take the right steps at the right time, guided by the science’ (Prime Minister Boris Johnson, 3.20)
‘We will be guided by the science’ (Health Secretary Matt Hancock, 2.20)
‘At all stages, we have been guided by the science, and we will do the right thing at the right time’ (Johnson, 3.20)
‘The plan is driven by the science and guided by the expert recommendations of the 4 UK Chief Medical Officers and the Scientific Advisory Group for Emergencies’ (Hancock, 3.20)
‘The plan does not set out what the government will do, it sets out the steps we could take at the right time along the basis of the scientific advice’ (Johnson, 3.20).
Still, clearly they are saying ‘the science’ as a rhetorical device, and it raises many questions or objections, including:
There is no such thing as ‘the science’.
Rather, there are many studies described as scientific (generally with reference to a narrow range of accepted methods), and many people described as scientists (with reference to their qualifications and expertise). The same can be said for the rhetorical phrase ‘the evidence’ and the political slogan ‘evidence based policymaking’ (which often comes with its notionally opposite political slogan ‘policy based evidence’). In both cases, a reference to ‘the science’ or ‘the evidence’ often signals one or both of:
a particular, restrictive, way to describe evidence that lives up to a professional quality standard created by some disciplines (e.g. based on a hierarchy of evidence, in which the systematic review of randomized control trials is often at the top)
2. Ministers often mean ‘following our scientists’
When Johnson (12.3.20) describes being ‘guided by the science’, he is accompanied by Professor Patrick Vallance (Government Chief Scientific Adviser) and Professor Chris Whitty (the UK government’s Chief Medical Adviser). Hancock (3.3.20) describes being ‘guided by the expert recommendations of the 4 UK Chief Medical Officers and the Scientific Advisory Group for Emergencies’ (Hancock, 3.3.20).
In other words, following ‘the science’ means ‘following the advice of our scientific advisors’, via mechanisms such as SAGE.
As the SAGE minutes and meeting papers show, government scientists and SAGE participants necessarily tell a partial story about the relevant evidence from a particular perspective (note: this is not a criticism of SAGE; it is a truism). Other interpreters of evidence, and sources of advice, are available.
Therefore, the phrase ‘guided by the science’ is, in practice, a way to:
associate policy with particular advisors or advisory bodies, often to give ministerial choices more authority, and often as ‘an insurance policy’ to take the heat off ministers.
What exactly is ‘the science’ guiding?
Let’s make a simple distinction between two types of science-guided action. Scientists provide evidence and advice on:
the scale and urgency of a potential policy problem, such as describing and estimating the incidence and transmission of coronavirus
the likely impact of a range of policy interventions, such as contact tracing, self-isolation, and regulations to oblige social distancing
Uncertainty describes a lack of knowledge or a worrying lack of confidence in one’s knowledge.
Ambiguity describes the ability to entertain more than one interpretation of a policy problem.
Put both together to produce a wide range of possibilities for policy ‘guided by the science’, from (a) simply providing facts to help reduce uncertainty on the incidence of coronavirus (minimal), to (b) providing information and advice on how to define and try to solve the policy problem (maximal).
If so, note that being guided by science does not signal more or less policy change. Ministers can use scientific uncertainty to defend limited action, or use evidence selectively to propose rapid change. In either case, it can argue – sincerely – that it is guided by science. Therefore, analyzing critically the phraseology of ministers is only a useful first step. Next, we need to identify the extent to which scientific advisors and advisory bodies, such as SAGE, guided ministers.
The role of SAGE: advice on evidence versus advice on strategy and values
It shows that, although science advice to government is necessarily political, the coronavirus has heightened attention to science and advice, and you can see the (subtle and not subtle) ways in SAGE members and its secretariat are dealing with its unusually high level of politicization. SAGE has responded by clarifying its role, and trying to set boundaries between:
Advice versus strategy
Advice versus value judgements
These aims are understandable, but difficult to achieve in theory (a clear fact/value distinction is impossible) and in practice (plus, policymakers may not go along with the distinction anyway). I argue that this boundary-setting also had some unintended consequences, which should prompt some further reflection on facts-versus-values science advice during crises.
The ways in which UK ministers followed SAGE advice
With these caveats in mind, my reading of this material is that UK government policy was largely consistent with SAGE evidence and advice in the following ways:
Defining the policy problem
This post (and a post on oral evidence to the Health and Social Care Committee) identifies the consistency of the overall narrative underpinning SAGE advice and UK government policy. It can be summed up as follows (although the post provides a more expansive discussion):
1. coronavirus represents a long term problem with no immediate solution (such as a vaccine) and minimal prospect of extinction/eradication
2. use policy measures – on isolation and social distancing – to flatten the first peak of infection and avoid overwhelming health service capacity
3. don’t impose or relax measures too quickly (which will cause a second peak of infection)
4. reflect on the balance between (a) the positive impact of lockdown (on the incidence and rate of transmission) and (b) the negative impact of lockdown (on freedom, physical and mental health, and the immediate economic consequences).
While SAGE minutes suggest a general reluctance to comment too much on point 4, government discussions were underpinned by points 1-3. For me, this context is the most important. It provides a lens through which to understand all of SAGE’s advice: how it shapes, and is shaped by, UK government policy.
The timing and substance of interventions before lockdown, maintenance of lockdown for several months, and gradual release of lockdown measures
This post presents a long chronological story of SAGE minutes and papers, divided by month (and, in March, by each meeting). Note the unusually high levels of uncertainty from the beginning. The lack of solid evidence available to SAGE at each stage can only be appreciated fully if you read the minutes from 1 to 41. Or, you know, take my word for it.
In January, SAGE discussed uncertainty about human-to-human transmission and associated coronavirus strongly with Wuhan in China (albeit while developing initially good estimates of R, doubling rate, incubation period, window of infectivity, and symptoms). In February, it had more data on transmission but described high uncertainty about what measures might delay or reduce the impact of the epidemic. In March, it focused on preparing for the peak of infection on the assumption that it had time to transition gradually towards a series of isolation and social distancing measures. This approach began to change from mid-March, when it became clear that the number of people infected was much larger, and the rate of transmission much faster, than expected.
In other words, the Prime Minister’s declarations – of emergency on 16.3.20 and of lockdown on 23.3.20 – did not lag behind SAGE advice (and it would not be outrageous to argue that it went ahead of it).
It is more difficult to describe the consistency between UK government policy and SAGE advice in relation to the relaxation of lockdown measures.
SAGE’s minutes and meeting papers describe very low certainty about what will happen after the release of lockdown. Their models do not hide this unusually high level of uncertainty, and they use models (built on assumptions) to generate scenarios rather than estimate what will happen. In this sense, ‘following the science’ could relate to (a) a level of buy-in for this kind of approach, and (b) making choices when scientific groups cannot offer much (if any) advice on what to do or what will happen. The example of reopening schools is a key example, since SPI-M and SPI-B focused intensely on the issue, but their conclusions could not underpin a specific UK government choice.
There are two ways to interpret what happened next.
First, there will always be a mild gap between hesitant SAGE advice and ministerial action. SAGE advice tends to be based on the amount and quality of evidence to support a change, which meant it was hesitant to recommend (a) a full lockdown and (b) a release from lockdown. Just as UK government policy seemed to go ahead of the evidence to enter lockdown on the 23rd March, so too does it seem to go ahead of the cautious approach to relaxing it.
Second, UK ministers are currently going too far ahead of the evidence. SPI-M papers state repeatedly that a too-quick release of measures will cause R to go above 1 (some papers describe R reaching 1.7; some graphs model values up to 3).
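To see why SPI-M treats R = 1 as the critical threshold, here is a minimal, illustrative sketch (not SPI-M’s model; the 5-day generation interval and 1,000 starting infections are assumptions chosen only for illustration) of how different values of R translate into infections over two months:

```python
# Illustrative only: a toy discrete-generation model, not SPI-M's model.
# Assumed values for illustration: a 5-day generation interval and 1,000 current infections.
GENERATION_INTERVAL_DAYS = 5
INITIAL_INFECTIONS = 1_000

def infections_after(r: float, days: int) -> float:
    """Project the size of the infected generation after `days`, assuming each case causes r new cases."""
    generations = days / GENERATION_INTERVAL_DAYS
    return INITIAL_INFECTIONS * (r ** generations)

# 1.7 and 3 are the values mentioned in the SPI-M papers above; 0.9 and 1.0 are added for contrast.
for r in (0.9, 1.0, 1.7, 3.0):
    print(f"R = {r}: ~{infections_after(r, days=60):,.0f} infections in the generation at day 60")
```

The exact numbers are meaningless (the toy model ignores immunity and behaviour change), but the qualitative point is the one SPI-M makes: below R = 1 the epidemic shrinks, at 1 it plateaus, and anywhere above 1 growth is exponential, which is why small differences in the pace of release matter so much.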
The use of behavioural insights to inform and communicate policy
In March, you can find a lot of external debate about the appropriate role for ‘behavioural science’ and ‘behavioural public policy’ (BPP) (in other words, using insights from psychology to inform policy). Part of the initial problem related to the lack of transparency of the UK government, which prompted concerns that ministers were basing choices on limited evidence (see Hahn et al, Devlin, Mills). Oliver also describes initial confusion about the role of BPP when David Halpern became mildly famous for describing the concept of ‘herd immunity’ rather than sticking to psychology.
External concern focused primarily on the argument that the UK government (and many other governments) used the idea of ‘behavioural fatigue’ to justify delayed or gradual lockdown measures. In other words, if you impose measures too early and keep them for too long, people will tire of them and break the rules.
Yet, this argument about fatigue is not a feature of the SAGE minutes and SPI-B papers (indeed, Oliver wonders if the phrase came from Whitty, based on his experience of people tiring of taking medication).
Rather, the papers tend to emphasise:
The high uncertainty about behavioural change in key scenarios, and the need for this uncertainty to inform any choice on what to do next.
The need for effective and continuous communication with citizens, emphasizing transparency, honesty, clarity, and respect, to maintain high trust in government and promote a sense of community action (‘we are all in this together’).
John and Stoker argue that ‘much of behavioural science lends itself to’ a ‘top-down approach because its underlying thinking is that people tend to be limited in cognitive terms, and that a paternalistic expert-led government needs to save them from themselves’. Yet, my overall impression of the SPI-B (and related) work is that (a) although SPI-B is often asked to play that role, to address how to maximize adherence to interventions (such as social distancing), (b) its participants try to encourage the more deliberative or collaborative mechanisms favoured by John and Stoker (particularly when describing how to reopen schools and redesign work spaces). If so, my hunch is that SPI-B participants would not be as confident that UK ministers were taking their advice consistently (for example, throughout Table 2, have a look at the need to provide a consistent narrative on two different propositions: we are all in this together, but the impact of each action/inaction will be profoundly unequal).
Expanded themes in SAGE minutes
Throughout this period, I think that one – often implicit – theme is that members of SAGE focused quite heavily on what seemed politically feasible to suggest to ministers, and for ministers to suggest to the public (while also describing technical feasibility – i.e. will it work as intended if implemented?). Generally, SAGE seemed to anticipate policymaker concerns about, and any unintended public reactions to, a shift towards more social regulation. For example:
‘Interventions should seek to contain, delay and reduce the peak incidence of cases, in that order. Consideration of what is publicly perceived to work is essential in any decisions’ (25.2.20: 1)
All of these shorter posts delay your reading of a ridiculously long table summarizing each meeting’s discussion and advice/ action points (Table 2, which also includes a way to chase up the referencing in the blog posts: dates alone refer to SAGE minutes; multiple meeting papers are listed as a, b, c if they have the same date stamp rather than same authors).
I hope to get through all of this material (and equivalent material in the devolved governments) somehow, but also to find time to live, love, eat, and watch TV, so please bear with me if you want to know what happened but don’t want to do all of the reading to find out.
If you would rather just read all of this discussion in one document:
If you would like some other analyses, compare with:
Freedman (7.6.20) ‘Where the science went wrong. Sage minutes show that scientific caution, rather than a strategy of “herd immunity”, drove the UK’s slow response to the Covid-19 pandemic’. Concludes that ‘as the epidemic took hold the government was largely following Sage’s advice’, and that the government should have challenged key parts of that advice (to ensure an earlier lockdown).
More or Less (1.7.20) ‘Why Did the UK Have Such a Bad Covid-19 Epidemic?’. Relates the delays in ministerial action to inaccurate scientific estimates of the doubling time of infection (discussed further in Theme 2).
Snowden (28.5.20) ‘The lockdown’s founding myth. We’ve forgotten that the Imperial model didn’t even call for a full lockdown’. Challenges the argument that ministers dragged their feet while scientists were advising quick and extensive interventions (an argument he associates with Calvert et al (23.5.20) ‘22 days of dither and delay on coronavirus that cost thousands of British lives’). Rather, ministers were following SAGE advice, and the lockdown in Italy had a far bigger impact on ministers (since it changed what seemed politically feasible).
Paul Cairney (2020) ‘The UK Government’s COVID-19 policy: assessing evidence-informed policy analysis in real time’, British Politics, https://rdcu.be/b9zAk (PDF)
The coronavirus feels like a new policy problem that requires new policy analysis. The analysis should be informed by (a) good evidence, translated into (b) good policy. However, don’t be fooled into thinking that either of those things are straightforward. There are simple-looking steps to go from defining a problem to making a recommendation, but this simplicity masks the profoundly political process that must take place. Each step in analysis involves political choices to prioritise some problems and solutions over others, and therefore prioritise some people’s lives at the expense of others.
My article in British Politics takes us through those steps in the UK, and situates them in a wider political and policymaking context. This post is shorter, and only scratches the surface of analysis.
5 steps to policy analysis
Define the problem.
Perhaps we can sum up the initial UK government approach as: (a) the impact of this virus could be a level of death and illness that overwhelms the population and exceeds the capacity of public services, so (b) we need to contain the virus enough to make sure it spreads in the right way at the right time, so (c) we need to encourage or coerce people to change their behaviour (primarily via hygiene and social distancing). However, there are many ways to frame this problem to emphasise the importance of some populations over others, and some impacts over others.
Identify technically and politically feasible solutions.
Solutions are not really solutions: they are policy instruments that address one aspect of the problem, including taxation and spending, delivering public services, funding research, giving advice to the population, and regulating or encouraging changes to social behaviour. Each new instrument contributes to an existing mix, with unpredictable and unintended consequences. Some instruments seem technically feasible (they will work as intended if implemented), but will not be adopted unless politically feasible (enough people support their introduction). Or vice versa. From the UK government’s perspective, this dual requirement rules out a lot of responses.
Use values and goals to compare solutions.
Typical judgements combine: (a) broad descriptions of values such as efficiency, fairness, freedom, security, and human dignity, (b) instrumental goals, such as sustainable policymaking (can we do it, and for how long?), and political feasibility (will people agree to it, and will it make me more or less popular or trusted?), and (c) the process to make choices, such as the extent to which a policy process involves citizens or stakeholders (alongside experts) in deliberation. They combine to help policymakers come to high profile choices (such as the balance between individual freedom and state coercion), and low profile but profound choices (to influence the level of public service capacity, and level of state intervention, and therefore who and how many people will die).
Predict the outcome of each feasible solution.
It is difficult to envisage a way for the UK Government to publicise all of the thinking behind its choices (Step 3) and predictions (Step 4) in a way that would encourage effective public deliberation. People often call for the UK Government to publicise its expert advice and operational logic, but I am not sure how they would separate it from their normative logic about who should live or die, or provide a frank account without unintended consequences for public trust or anxiety. If so, one aspect of government policy is to keep some choices implicit and avoid a lot of debate on trade-offs. Another is to make choices continuously without knowing what their impact will be (the most likely scenario right now).
Make a choice, or recommendation to your client.
Your recommendation or choice would build on these four steps. Define the problem with one framing at the expense of the others. Romanticise some people and not others. Decide how to support some people, and coerce or punish others. Prioritise the lives of some people in the knowledge that others will suffer or die. Do it despite your lack of expertise and profoundly limited knowledge and information. Learn from experts, but don’t assume that only scientific experts have relevant knowledge (decolonise; coproduce). Recommend choices that, if damaging, could take decades to fix after you’ve gone. Consider if a policymaker is willing and able to act on your advice, and if your proposed action will work as intended. Consider if a government is willing and able to bear the economic and political costs. Protect your client’s popularity, and trust in your client, at the same time as protecting lives. Consider if your advice would change if the problem seemed to change. If you are writing your analysis, maybe keep it down to one sheet of paper (in other words, fewer words than in this post up to this point).
Policy analysis is not as simple as these steps suggest, and further analysis of the wider policymaking environment helps describe two profound limitations to simple analytical thought and action.
Policymakers must ignore almost all evidence
The amount of policy relevant information is infinite, and capacity is finite. So, individuals and governments need ways to filter out almost all of it. Individuals combine cognition and emotion to help them make choices efficiently, and governments have equivalent rules to prioritise only some information. They include: define a problem and a feasible response, seek information that is available, understandable, and actionable, and identify credible sources of information and advice. In that context, the vague idea of trusting or not trusting experts is nonsense, and the larger post highlights the many flawed ways in which all people decide whose expertise counts.
They do not control the policy process.
Policymakers engage in a messy and unpredictable world in which no single ‘centre’ has the power to turn a policy recommendation into an outcome.
There are many policymakers and influencers spread across a political system. For example, consider the extent to which each government department, devolved governments, and public and private organisations are making their own choices that help or hinder the UK government approach.
Most choices in government are made in ‘subsystems’, with their own rules and networks, over which ministers have limited knowledge and influence.
The social and economic context, and events, are largely out of their control.
The take home messages (if you accept this line of thinking)
The coronavirus is an extreme example of a general situation: policymakers will always have very limited knowledge of policy problems and control over their policymaking environment. They make choices to frame problems narrowly enough to seem solvable, rule out most solutions as not feasible, make value judgements to try to help some more than others, try to predict the results, and respond when the results do not match their hopes or expectations.
This is not a message of doom and despair. Rather, it encourages us to think about how to influence government, and hold policymakers to account, in a thoughtful and systematic way that does not mislead the public or exacerbate the problem we are seeing. No one is helping their government solve the problem by saying stupid shit on the internet (OK, that last bit was a message of despair).
Further reading:
The article (PDF) sets out these arguments in much more detail, with some links to further thoughts and developments.
This series of ‘750 words’ posts summarises key texts in policy analysis and tries to situate policy analysis in a wider political and policymaking context. Note the focus on whose knowledge counts, which is not yet a big feature of this crisis.
These series of 500 words and 1000 words posts (with podcasts) summarise concepts and theories in policy studies.
This is the long version. It is long. Too long to call a blog post. Let’s call it a ‘living document’ that I update and amend as new developments arise (then start turning into a more organised paper). In most cases, I am adding tweets, so the date of the update is embedded. If I add a new section, I will add a date. If you seek specific topics (like ‘herd immunity’), it might be worth doing a search. The short version is shorter.
The coronavirus feels like a new policy problem. Governments already have policies for public health crises, but the level of uncertainty about the spread and impact of this virus seems to be taking it to a new level of policy, media, and public attention. The UK Government’s Prime Minister calls it ‘the worst public health crisis for a generation’.
As such, there is no shortage of opinions on what to do, but there is a shortage of well-considered opinions, producing little consensus. Many people are rushing to judgement and expressing remarkably firm opinions about the best solutions, but their contributions add up to contradictory evaluations, in which:
the government is doing precisely the right thing or the completely wrong thing,
we should listen to this expert saying one thing or another expert saying the opposite.
Lots of otherwise-sensible people are doing what they bemoan in politicians: rushing to judgement, largely accepting or sharing evidence only if it reinforces that judgement, and/or using their interpretation of any new development to settle scores with their opponents.
Yet, anyone who feels, without uncertainty, that they have the best definition of, and solution to, this problem is a fool. If people are also sharing bad information and advice, they are dangerous fools. Further, as Professor Medley puts it (in the video below), ‘anyone who tells you they know what’s going to happen over the next six months is lying’.
In that context, how can we make sense of public policy to address the coronavirus in a more systematic way?
Studies of policy analysis and policymaking do not solve a policy problem, but they at least give us a language to think it through.
In each step, note how quickly it is possible to be overwhelmed by uncertainty and ambiguity, even when the issue seems so simple at first.
Note how difficult it is to move from Step 1, and to separate Step 1 from the others. It is difficult to define the problem without relating it to the solution (or to the ways in which we will evaluate each solution).
Let’s relate that analysis to research on policymaking, to understand the wider context in which people pay attention to, and try to address, important problems that are largely out of their control.
Throughout, note that I am describing a thought process as simply as I can, not a full examination of relevant evidence. I am highlighting the problems that people face when ‘diagnosing’ policy problems, not trying to diagnose them myself. To do so, I draw initially on common advice from the key policy analysis texts (summaries of the texts that policy analysis students are most likely to read), which simplify the process a little too much. Still, the thought process that this advice encourages took me hours (spread over three days) to produce no real conclusion. Policymakers and advisers, in the thick of this problem, do not have that luxury of time or uncertainty.
In our latest guest blog, Jonny Pearson-Stuttard, RSPH Trustee and Public Health Doctor @imperialcollege sets out what we know about the spread of coronavirus to date, and why the Government has taken the measures it has https://t.co/XM7zKKjwtE
Provide a diagnosis of a policy problem, using rhetoric and eye-catching data to generate attention.
Identify its severity, urgency, cause, and our ability to solve it. Don’t define the wrong problem, such as by oversimplifying.
Problem definition is a political act of framing, as part of a narrative to evaluate the nature, cause, size, and urgency of an issue.
Define the nature of a policy problem, and the role of government in solving it, while engaging with many stakeholders.
‘Diagnose the undesirable condition’ and frame it as ‘a market or government failure (or maybe both)’.
Coronavirus as a physical problem is not the same as a coronavirus policy problem. To define the physical problem is to identify the nature, spread, and impact of a virus and illness on individuals and populations. To define a policy problem, we identify the physical problem and relate it (implicitly or explicitly) to what we think a government can, and should, do about it. Put more provocatively, it is only a policy problem if policymakers are willing and able to offer some kind of solution.
This point may seem semantic, but it raises a profound question about the capacity of any government to solve a problem like an epidemic, or for governments to cooperate to solve a pandemic. It is easy for an outsider to exhort a government to ‘do something!’ (or ‘ACT NOW!’) and express certainty about what would happen. However, policymakers inside government:
Do not enjoy the same confidence that they know what is happening, or that their actions will have their intended consequences, and
Will think twice about trying to regulate social behaviour under those circumstances, especially when they
Know that any action or inaction will benefit some and punish others.
For example, can a government make people wash their hands? Or, if it restricts gatherings at large events, can it stop people gathering somewhere else, with worse impact? If it closes a school, can it stop children from going to their grandparents to be looked after until it reopens? There are 101 similar questions and, in each case, I reckon the answer is no. Maybe government action has some of the desired impact; maybe not. If you agree, then the question might be: what would it really take to force people to change their behaviour?
The answer is: often too much for a government to consider (in a liberal democracy), particularly if policymakers are informed that it will not have the desired impact.
A couple of key takeaways from our analysis of early COVID-19 dynamics in Wuhan:
1. We estimated that the control measures introduced – unprecedented interventions that will have had a huge social and psychological toll – reduced transmission by around 55% in space of 2 weeks 1/
If so, the UK government’s definition of the policy problem will incorporate this implicit question: what can we do if we can influence, but not determine (or even predict well), how people behave?
Uncertainty about the coronavirus plus uncertainty about policy impact
Now, add that general uncertainty about the impact of government to this specific uncertainty about the likely nature and spread of the coronavirus:
The ideal spread involves all well people sharing the virus first, while all vulnerable people (e.g. older people, and/or people with existing health problems that affect their immune systems) are protected in one isolated space, but it won’t happen like that; so, we are trying to minimise damage in the real world.
We mainly track the spread via deaths, with data showing a major spike appearing one month later, so the problem may only seem real to most people when it is too late to change behaviour.
A lot of the spread will happen inside homes, where the role of government is minimal (compared to public places). So, for example, the impact of school closures could be good (isolation) or make things worse (children spreading the virus to vulnerable relatives) (see also ‘we don’t know [if the UKG decision not to close schools] was brilliant or catastrophic’). [Update 18.3.20: as it turned out, the First Minister’s argument for closing Scottish schools was that there were too few teachers available.]
The choice in theory is between a rapid epidemic with a high peak and a slowed-down epidemic over a longer period, but ‘anyone who tells you they know what’s going to happen over the next six months is lying’.
Maybe this epidemic will be so memorable as to shift social behaviour, but so much depends on trying to predict (badly) if individuals will actually change (see also Spiegelhalter on communicating risk).
None of this account tells policymakers what to do, but at least it helps them clarify three key aspects of their policy problem:
The impact of this virus and illness could overwhelm the population, to the extent that it causes mass deaths, causes a level of illness that exceeds the capacity of health services to treat, and contributes to an unpredictable amount of social and economic damage.
We need to contain the virus enough to make sure it (a) spreads at the right speed and/or (b) peaks at the right time. The right speed seems to be: a level that allows most people to recover alone, while the most vulnerable are treated well in healthcare settings that have enough capacity. The right time seems to be the part of the year with the lowest demand on health services (e.g. summer is better than winter). In other words, (a) reduce the size of the peak by ‘flattening the curve’, and/or (b) find the right time of year to address the peak, while (c) anticipating more than one peak. (A toy simulation sketch illustrating this ‘flattening’ logic appears a little further below.)
My impression is that the most frequently-expressed aim is (a) …
Yesterday we entered the Delay phase of our #COVID_19uk Action Plan. @UKScienceChief explained why this is important.
It allows us to #FlattenTheCurve, which means reducing the impact in the short-term to ensure our health & care system can effectively protect vulnerable people pic.twitter.com/1I45C3v38V
— Department of Health and Social Care (@DHSCgovuk) March 13, 2020
… while the UK Government’s Deputy Chief Medical Officer also seems to be describing (b):
Dr Jenny Harries, Deputy Chief Medical Officer, came into Downing Street to answer some of the most commonly asked questions on coronavirus. pic.twitter.com/KCdeHsaz6a
We need to encourage or coerce people to change their behaviour, to look after themselves (e.g. by handwashing) and forsake their individual preferences for the sake of public health (e.g. by self-isolating or avoiding vulnerable people). Perhaps we can foster social trust and empathy to encourage responsible individual action. Perhaps people will only protect others if obliged to do so (compare Stone; Ostrom; game theory).
See also: From across the Ditch: How Australia has to decide on the least worst option for COVID-19 (Prof Tony Blakely on three bad options: (1) the likelihood of ‘elimination’ of the virus before vaccination is low; (2) an 18-month lock-down will help ‘flatten the curve’; (3) ‘to prepare meticulously for allowing the pandemic to wash through society over a period of six or so months. To tool up the production of masks and medical supplies. To learn as quickly as possible which treatments of people sick with COVID-19 saves lives. To work out our strategies for protection of the elderly and those with a chronic condition (for whom the mortality from COVID-19 is much higher)’).
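As flagged under the second aspect above, here is a minimal, illustrative ‘flatten the curve’ sketch: a textbook SIR model, not any government or SAGE model, with the population size, initial infections, infectious period, and transmission rates all assumed purely for illustration:

```python
# Illustrative only: a textbook SIR model, not a government or SAGE model.
# Assumed for illustration: population 1,000,000; 100 initial infections;
# a 10-day infectious period (gamma = 0.1); two transmission rates to compare.

def sir_peak(beta: float, gamma: float = 0.1, n: float = 1_000_000, days: int = 365):
    """Run a daily-step SIR model and return (peak number infected at once, day of that peak)."""
    s, i = n - 100, 100.0
    peak_i, peak_day = i, 0
    for day in range(1, days + 1):
        new_infections = beta * s * i / n   # people moving from susceptible to infected
        new_recoveries = gamma * i          # people moving from infected to recovered
        s -= new_infections
        i += new_infections - new_recoveries
        if i > peak_i:
            peak_i, peak_day = i, day
    return peak_i, peak_day

for label, beta in [("unmitigated (R0 = 2.5)", 0.25), ("with distancing (R0 = 1.4)", 0.14)]:
    peak, day = sir_peak(beta)
    print(f"{label}: peak of ~{peak:,.0f} people infected at once, around day {day}")
```

Again, the numbers are not forecasts; the shape is the point. Lower transmission produces a smaller and later peak, which is the logic behind (a) reducing the size of the peak and (b) choosing when the health service has to face it.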
Why politicians fear being accused of over reaction. Which in turn might prevent them from reacting appropriately when a real crisis hits 👇🏽👇🏽 https://t.co/UrxHTAs2z5
If you are still with me, I reckon you would have worded those aims slightly differently, right? There is some ambiguity about these broad intentions, partly because there is some uncertainty, and partly because policymakers need to set rather vague intentions to generate the highest possible support for them. However, vagueness is not our friend during a crisis involving such high anxiety. Further, vague aims only delay the inevitable choices that people need to make to turn a complex multi-faceted problem into something simple enough to describe and manage. The problem may be complex, but our attention focuses only on a small number of aspects, at the expense of the rest. Examples that have arisen, so far, include whether to accentuate:
The health of the whole population or people who would be affected disproportionately by the illness.
For example, the difference in emphasis affects the health advice for the relatively vulnerable (and the balance between exhortation and reassurance)
Inequalities in relation to health, socio-economic status (e.g. income, gender, race, ethnicity), or the wider economy.
For example, restrictive measures may reduce the risk of harm to some, but increase the burden on people with no savings or reliable sources of income.
For example, some people are hoarding large quantities of home and medical supplies that (a) other people cannot afford, and (b) some people cannot access, despite having higher need.
For example, social distancing will limit the spread of the virus (see the nascent evidence), but also produce highly unequal forms of social isolation that increase the risk of domestic abuse (possibly exacerbated by school closures) and undermine wellbeing. Or, there will be major policy changes, such as to the rules to detain people under mental health legislation, regarding abortion, or in relation to asylum (note: some of these tweets are from the US, partly because I’m seeing more attention to race – and the consequence of systematic racism on the socioeconomic inequalities so important to COVID-19 mortality – than in the UK).
COVID-19 has brought new focus to women’s continued inequality. Without a gendered response to both the health and economic crises, gender inequality will be further cemented. Read more on the blog: https://t.co/zYxSFpUTNE
“The epidemic has had a huge impact on domestic violence,” said Wan. “According to our statistics, 90% of the causes of violence are related to the COVID-19 epidemic.” https://t.co/xswemtf548
I just asked a DC cop what he’s noticed since the coronavirus sent people home. “More domestic violence,” he said, without missing a beat. https://t.co/kv9zH5VNj1
While black people make up about 12% of Michigan’s population, they make up about 40% of all COVID-19 deaths reported.
A social epidemiologist says the numbers don’t say everything, but there's something that can’t be ignored: inequality. @MichiganRadio https://t.co/bWsqFaCrUJ
Available evidence (though injuriously limited) shows that Black people are being infected & dying of #coronavirus at higher rates. Disproportionate Black suffering is what many of us have suspected and feared because it is consistent with the entirety of American history. https://t.co/qzmXvGCGvV
#Coronavirus is not the 'great equalizer'—race matters:
“I believe that the actions and omissions of world leaders in charge of fighting the #COVID19 pandemic will reveal historical and current impacts of colonial violence and continued health inequities” https://t.co/nUuBIKfrVL
— Dr. Malinda S. Smith (@MalindaSmith) April 6, 2020
BAME lives matter, so far they account for:
– 100% of Dr deaths – 50% nurse deaths – 35% of Patients in ICU
Yet account for only 14% of population and account for 44% of NHS staff. Who is asking the questions, why the disparity? https://t.co/VOL8FAmy45
BBC news reports on the disproportionate deaths of African Americans & minorities in the US from #COVID19, but silence on similar issues in the UK. Why? Where is the reporting? Where is the accountability? https://t.co/DkGPjfnWG1
What the coronavirus bill will do: https://t.co/qoBdKKr64H Mental Health Act – detention implemented using just one doctor’s opinion (not 2) & AMHP, & temporarily allow extension or removal of time limits to allow for greater flexibility where services are less able to respond
English obviously, but fascinating that have issued an explicitly ethical framework for COVID decisions re mental health and incapacity. Can Scotland do same? https://t.co/WccPntZOwf
WOW – government has relaxed restrictions on WHERE abortions can take place, temporary inclusion of 'the home' as a legal site for abortion: https://t.co/Vw714fWXEM
Abortion services for women from Northern Ireland remain available free of charge in England. This provision will continue until services are available to meet these needs in Northern Ireland. For more information, visit: https://t.co/YYjop5lSgU pic.twitter.com/M8k95aIisM
BREAKING NEWS!!!! The Home Office have confirmed that ALL evictions and terminations of asylum support have been paused for 3 months. Find out more and read the letter from Home Office Minister Chris Philp confirming this on our website at: https://t.co/KDlVr4PHyP
NEW Editorial: While responding to #COVID19, policy makers should consider the risk of deepening health inequalities. If vulnerable groups are not properly identified, the consequences of this pandemic will be even more devastating https://t.co/BrypuXH6vS pic.twitter.com/hka3nLzxdv
In relation to Prison Rule Changes – these would only ever be used as an absolute last resort, in order to protect staff & those in our care. I can confirm that emergency changes to showering have not been implemented in any establishment.
For example, governments cannot ignore the impact of their actions on the economy, however much they emphasise mortality, health, and wellbeing. Most high-profile emphasis was initially on the fate of large and small businesses, and people with mortgages, but a long period of crisis will tip the balance from low income to unsustainable poverty (even prompting Iain Duncan Smith to propose policy change). And why favour people who can afford a mortgage over people scraping the money together for rent?
So…. Govt income protection package includes….. 1. 80% of wage costs up to £2500 2. Deferred VAT. 3. £7 billion uplift to Universal Credit and Woring Tax crdit. 4. £1 billion to cover 30% of house rental costs. 5. Self employed to get same as sickness benefit payments.
A need for more communication and exhortation, or for direct action to change behaviour.
The short term (do everything possible now) or long term (manage behaviour over many months).
The Imperial College COVID report is being discussed. But a major takeaway from it will likely survive discussion: the human cost of a pure mitigation strategy is inacceptable, whilst a pure suppression strategy is unsustainable over time (thread)
How to maintain trust in the UK government when (a) people are more or less inclined to trust the current party of government, and general trust may be quite low, and (b) so many other governments are acting differently from the UK.
For example, note the visible presence of the Prime Minister, but also his unusually high deference to unelected experts such as (a) UK Government senior scientists providing direct advice to ministers and the public, and (b) scientists drawing on limited information to model behaviour and produce realistic scenarios (we can return to the idea of ‘evidence-based policymaking’ later). This approach is not uncommon with epidemics/pandemics (Liam Donaldson was then the UK Government’s Chief Medical Officer):
For example, note how often people are second guessing and criticising the UK Government position (and questioning the motives of Conservative ministers).
For example, people often try to lay blame for viruses on certain populations, based on their nationality, race, ethnicity, sexuality, or behaviour (e.g. with HIV).
For example, the (a) association between the coronavirus and China and Chinese people (e.g. restrict travel to/ from China; e.g. exacerbate racism), initially overshadowed (b) the general role of international travellers (e.g. place more general restrictions on behaviour), and (c) other ways to describe who might be responsible for exacerbating a crisis.
For social scientists wondering “what can I do now?” here’s a challenge: @cp_roth @LukasHenselEcon & others ran a survey with 2500 Italians yday & found that:
Under ‘normal’ policymaking circumstances, we would expect policymakers to resolve this ambiguity by exercising power to set the agenda and make choices that close off debate. Attention rises at first, a choice is made, and attention tends to move on to something else. With the coronavirus, attention to many different aspects of the problem has been lurching remarkably quickly. The definition of the policy problem often seems to be changing daily or hourly, and more quickly than the physical problem. It will also change many more times, particularly when attention to each personal story of illness or death prompts people to question government policy every hour. If the policy problem keeps changing in these ways, how could a government solve it?
@alexwickham doing fine work as a journalist again. Gets right into Government somehow and tells people what is going on.
10 Days That Changed Britain: "Heated" Debate Between Scientists Forced Boris Johnson To Act On Coronavirus https://t.co/hDLEAPT3Z0
Potential responses draw on a familiar mix of policy instruments, including:
Public expenditure (e.g. to boost spending for emergency care, crisis services, medical equipment)
Economic incentives and disincentives (e.g. to reduce the cost of business or borrowing, or tax unhealthy products)
Linking spending to entitlement or behaviour (e.g. social security benefits conditional on working or seeking work, perhaps with the rules modified during crises)
Formal regulations versus voluntary agreements (e.g. making organisations close, or encouraging them to close)
Public services: universal or targeted, free or with charges, delivered directly or via non-governmental organisations
As a result, what we call ‘policy’ is really a complex mix of instruments adopted by one or more governments. A truism in policy studies is that it is difficult to define or identify exactly what policy is because (a) each new instrument adds to a pile of existing measures (with often-unpredictable consequences), and (b) many instruments designed for individual sectors tend, in practice, to intersect in ways that we cannot always anticipate. When you think through any government response to the coronavirus, note how every measure is connected to many others.
Further, it is a truism in public policy that there is a gap between technical and political feasibility: the things that we think will be most likely to work as intended if implemented are often the things that would receive the least support or most opposition. For example:
Redistributing income and wealth to reduce socio-economic inequalities (e.g. to allay fears about the impact of current events on low-income and poverty) seems to be less politically feasible than distributing public services to deal with the consequences of health inequalities.
Providing information and exhortation seems more politically feasible than the direct regulation of behaviour. Indeed, compared to many other countries, the UK Government seems reluctant to introduce ‘quarantine’ style measures to restrict behaviour.
Under ‘normal’ circumstances, governments may be using these distinctions as simple heuristics to help them make modest policy changes while remaining sufficiently popular (or at least looking competent). If so, they are adding or modifying policy instruments during individual ‘windows of opportunity’ for specific action, or perhaps contributing to the sense of incremental change towards an ambitious goal.
Right now, we may be pushing the boundaries of what seems possible, since crises – and the need to address public anxiety – tend to change what seems politically feasible. However, many options that seem politically feasible may not be possible (e.g. to buy a lot of extra medical/ technology capacity quickly), or may not work as intended (e.g. to restrict the movement of people). Think of technical and political feasibility as necessary but insufficient on their own, which is a requirement that rules out a lot of responses.
Add in the UK legislation and we see that it is a major feat simply to account for all of the major moving parts (while noting that much policy change is not legislative) https://t.co/gKsIx7aHr2 pic.twitter.com/Ms6fjaDbhF
A few 'somewhat overwritten' newspaper stories today using some of our quotes on PPE. Here is exactly what we are saying, in the tone in which we are saying it: https://t.co/j6PO420WSF
Typical value judgements relate to efficiency, equity and fairness, the trade-off between individual freedom and collective action, and the extent to which a policy process involves citizens in deliberation.
Normative assessments are based on values such as ‘equality, efficiency, security, democracy, enlightenment’ and beliefs about the preferable balance between state, communal, and market/ individual solutions
‘Specify the objectives to be attained in addressing the problem and the criteria to evaluate the attainment of these objectives as well as the satisfaction of other key considerations (e.g., equity, cost, feasibility)’.
‘Effectiveness, efficiency, fairness, and administrative efficiency’ are common.
Identify (a) the values to prioritise, such as ‘efficiency’, ‘equity’, and ‘human dignity’, and (b) ‘instrumental goals’, such as ‘sustainable public finance or political feasibility’, to generate support for solutions.
Instrumental questions may include: Will this intervention produce the intended outcomes? Is it easy to get agreement and maintain support? Will it make me popular, or diminish trust in me even further?
How to weigh the many future health problems and deaths caused by the lockdown against those saved? How to account for the worse effects of the lockdown on the young and the poor? Near impossible ethical choices that government will have to make. https://t.co/DJgwE4b3rd
Step 3 is the most simple-looking, but most difficult, task. Remember that it is a political, not technical, process. It is also a political process that most people would like to avoid doing (at least publicly) because it involves making explicit the ways in which we prioritise some people over others. Public policy is the choice to help some people and punish or refuse to help others (and includes the choice to do nothing).
Policy analysis texts describe a relatively simple procedure of identifying criteria and producing a table (with a solution in each row, and criteria in each column) to compare the trade-offs between each solution. However, these criteria are notoriously difficult to define, and people resolve that problem by exercising power to decide what each term means, and whose interests should be served when they resolve trade-offs. For example, see Stone on whose needs come first, who benefits from each definition of fairness, and how technical-looking processes such as ‘cost benefit analysis’ mask political choices.
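To make that procedure concrete, here is a minimal, hypothetical sketch of such a criteria-alternatives table in code. The solutions, criteria, weights, and scores below are invented for illustration only (they are not drawn from any government analysis); the point is that the political choices hide in the weights and scores, not in the arithmetic:

```python
# Hypothetical weighted criteria-alternatives matrix; every number is invented for illustration.
criteria_weights = {"effectiveness": 0.4, "equity": 0.3, "cost": 0.2, "political feasibility": 0.1}

# Scores from 1 (worst) to 5 (best) against each criterion: the contested political judgements.
solutions = {
    "voluntary guidance": {"effectiveness": 2, "equity": 3, "cost": 5, "political feasibility": 5},
    "targeted shielding": {"effectiveness": 3, "equity": 2, "cost": 4, "political feasibility": 4},
    "full lockdown":      {"effectiveness": 5, "equity": 2, "cost": 1, "political feasibility": 2},
}

for name, scores in solutions.items():
    weighted = sum(criteria_weights[criterion] * scores[criterion] for criterion in criteria_weights)
    print(f"{name}: weighted score {weighted:.2f}")
```

Change the weights (whose values count?) or the scores (whose needs come first?) and the ‘best’ solution changes, which is exactly Stone’s point about technical-looking procedures masking political choices.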
Right now, the most obvious and visible trade-off, accentuated in the UK, is between individual freedom and collective action, or the balance between state, communal, and market/individual solutions. In comparison with many countries (and China and Italy in particular), the UK Government seems to be favouring individual action over state quarantine measures. However, most trade-offs are more difficult to categorise. For example:
What should be the balance between efforts to minimise the deaths of some (generally in older populations) and maximise the wellbeing of others? This is partly about human dignity during crisis, how we treat different people fairly, and the balance of freedom and coercion.
How much should a government spend to keep people alive using intensive care or expensive medicines, when the money could be spent improving the lives of far more people? This is partly about human dignity, the relative efficiency of policy measures, and fairness.
If you are like me, you don’t really want to answer such questions (indeed, even writing them looks callous). If so, one way to resolve them is to elect policymakers to make such choices on our behalf (perhaps aided by experts in moral philosophy, or with access to deliberative forums). To endure, this unusually high level of deference to elected ministers requires some kind of reciprocal act:
"I hope the UK government will be transparent about its decision-making; willing to listen to NHS staff concerns; humble in learning from other countries’ experiences; and pro-active in building relationships with them."https://t.co/CYUyvij2bK
I agree. There is a need to show that divergent opinons in the public health/virology expert sector have been heard, debates have been had and conclusions explained. This is what I need as a citizen. Also casting the public not a bog roll stowing mob is not helpful or kind. https://t.co/g61Nypcqlc
The Guardian calls this document a “secret” briefing from Public Health England. At a time of national crisis there is no place for secrecy from health experts. If you want public support, share your data, scenarios, and forecasts. Now. https://t.co/O8BpDlCJ7H
I am glad Johnson has listened, but we shouldn't have to drag him kicking and screaming to these decisions. A daily update is a basic step. Transparency, honesty, compassion are vital in this time of a global crisis! no more secret briefings PM. https://t.co/eMxZnMehUp
The CSA and CMO say they will publish the models underlying their strategy on Covid-19. Sharing the data and models is important for accountability, testing and learning. https://t.co/rOuJWwy93i
Dear Boris – Number 10 needs a professional communications operation, immediately. (Open letter to the Prime Minister. Britain has some great comms specialists. He needs to hire one of them urgently) https://t.co/8w6MBYHHbm
Still, I doubt that governments are making reportable daily choices with reference to a clear and explicit view of what the trade-offs and priorities should be, because their choices are about who will die, and their ability to predict outcomes is limited.
Focus on the outcomes that key actors care about (such as value for money), and quantify and visualise your predictions if possible. Compare the pros and cons of each solution, such as how much of a bad service policymakers will accept to cut costs.
‘Assess the outcomes of the policy options in light of the criteria and weigh trade-offs between the advantages and disadvantages of the options’.
Estimate the cost of a new policy, in comparison with current policy, and in relation to factors such as savings to society or benefits to certain populations. Use your criteria and projections to compare each alternative in relation to their likely costs and benefits.
Explain potential solutions in sufficient detail to predict the costs and benefits of each ‘alternative’ (including current policy).
Short deadlines dictate that you use ‘logic and theory, rather than systematic empirical evidence’ to make predictions efficiently.
Monitoring is crucial because it is difficult to predict policy success, and unintended consequences are inevitable. Try to measure the outcomes of your solution, while noting that evaluations are contested.
It is difficult to envisage a way for the UK Government to publicise the thinking behind its choices (Step 3) and predictions (Step 4) in a way that would encourage effective public deliberation, rather than a highly technical debate between a small number of academics:
Ferguson et al (link) simulate outbreak response. Positive: They show suppression (lockdown R0<1) is essential as mitigation (R0>1, “flattening the curve”) results in massive hospital overload and many dead. BUT 1/3 (review attached) https://t.co/srbBS7F1s5 pic.twitter.com/qbEymBdOqm
I’m conscious that lots of people would like to see and run the pandemic simulation code we are using to model control measures against COVID-19. To explain the background – I wrote the code (thousands of lines of undocumented C) 13+ years ago to model flu pandemics…
Further, people often call for the UK Government to publicise its expert advice and operational logic, but I am not sure how they would separate it from their normative logic, or provide a frank account without unintended consequences for public trust or anxiety. If so, government policy involves (a) keeping some choices implicit to avoid a lot of debate on trade-offs, and (b) making general statements about choices when they do not know what their impact will be.
Examine your case through the eyes of a policymaker. Keep it simple and concise.
Make a preliminary recommendation to inform an iterative process, drawing feedback from clients and stakeholder groups
Client-oriented advisors identify the beliefs of policymakers and tailor accordingly.
‘Unless your client asks you not to do so, you should explicitly recommend one policy’
I now invite you to make a recommendation (step 5) based on our discussion so far (steps 1-4). Define the problem with one framing at the expense of the others. Romanticise some people and not others. Decide how to support some people, and coerce or punish others. Prioritise the lives of some people in the knowledge that others will suffer or die. Do it despite your lack of expertise and profoundly limited knowledge and information. Learn from experts, but don’t assume that only scientific experts have relevant knowledge (decolonise; coproduce). Recommend choices that, if damaging, could take decades to fix after you’ve gone. Consider if a policymaker is willing and able to act on your advice, and if your proposed action will work as intended. Consider if a government is willing and able to bear the economic and political costs. Protect your client’s popularity, and trust in your client, at the same time as protecting lives. Consider if your advice would change if the problem seemed to change. If you are writing your analysis, maybe keep it down to one sheet of paper (and certainly far fewer words than in this post). Better you than me.
Please now watch this video before I suggest that things are not so simple.
Would that policy analysis were so simple
Imagine writing policy analysis in an imaginary world, in which there is a single powerful ‘rational’ policymaker at the heart of government, making policy via an orderly series of stages.
Your audience would be easy to identify at each stage, your analysis would be relatively simple, and you would not need to worry about what happens after you make a recommendation for policy change (since the selection of a solution would lead to implementation). You could adopt a simple 5 step policy analysis method, use widely-used tools such as cost-benefit analysis to compare solutions, and know where the results would feed into the policy process.
Studies of policy analysts describe how unrealistic this expectation tends to be (Radin, Brans, Thissen).
For example, there are many policymakers, analysts, influencers, and experts spread across political systems, and engaging with 101 policy problems simultaneously, which suggests that it is not even clear how everyone fits together and interacts in what we call (for the sake of simplicity) ‘the policy process’.
Instead, we can describe real world policymaking with reference to two factors.
The wider policymaking environment: 1. Limiting the use of evidence
First, policymakers face ‘bounded rationality’, in which they only have the ability to pay attention to a tiny proportion of available facts, are unable to separate those facts from their values (since we use our beliefs to evaluate the meaning of facts), struggle to make clear and consistent choices, and do not know what impact they will have. The consequences can include:
Limited attention, and lurches of attention. Policymakers can only pay attention to a tiny proportion of their responsibilities, and policymaking organizations struggle to process all policy-relevant information. They prioritize some issues and information and ignore the rest.
Power and ideas. Some ways of understanding and describing the world dominate policy debate, helping some actors and marginalizing others.
Beliefs and coalitions. Policymakers see the world through the lens of their beliefs. They engage in politics to turn their beliefs into policy, form coalitions with people who share them, and compete with coalitions who don’t.
Dealing with complexity. They engage in ‘trial-and-error strategies’ to deal with uncertain and dynamic environments (see the new section on trial and error at the end).
Framing and narratives. Policy audiences are vulnerable to manipulation when they rely on other actors to help them understand the world. People tell simple stories to persuade their audience to see a policy problem and its solution in a particular way.
The social construction of populations. Policymakers draw on quick emotional judgements, and social stereotypes, to propose benefits to some target populations and punishments for others.
Rules and norms. Institutions are the formal rules and informal understandings that represent a way to narrow information searches efficiently to make choices quickly.
Learning. Policy learning is a political process in which actors engage selectively with information, not a rational search for truth.
Evidence-based or expert-informed policymaking
Don’t think science can or should make decisions Donna. In conditions of uncertainty, it must inform decision makers who must be transparent about the choices they make and be held to account for them https://t.co/Wj4s9IS6fO
Put simply, policymakers cannot oversee a simple process of ‘evidence-based policymaking’. Rather, to all intents and purposes:
They need to find ways to ignore most evidence so that they can focus disproportionately on some. Otherwise, they will be unable to focus well enough to make choices. The cognitive and organisational shortcuts, described above, help them do it almost instantly.
They also use their experience to help them decide – often very quickly – what evidence is policy-relevant under the circumstances. Relevance can include:
How it relates to the policy problem as they define it (Step 1).
If it relates to a feasible solution (Step 2).
If it is timely, available, understandable, and actionable.
If it seems credible, such as from groups representing wider populations, or from people they trust.
They use a specific shortcut: relying on expertise.
However, the vague idea of trusting or not trusting experts is a nonsense, largely because it is virtually impossible to set a clear boundary between relevant/irrelevant experts and find a huge consensus on (exactly) what is happening and what to do. Instead, in political systems, we define the policy problem or find other ways to identify the most relevant expertise and exclude other sources of knowledge.
In the UK Government’s case, it appears to be relying primarily on expertise from its own general scientific advisers, medical and public health advisers, and – perhaps more controversially – advisers on behavioural public policy.
Not a thread but an interesting exchange on #coronavirus & Behavioural Sciences including readings from @LiamDelaneyEcon https://t.co/7Yn89XwOk6
Here’s my article on why I wish my fellow psychologists and “behavioural scientists” would just stop talking about the coronavirus: https://t.co/ofjJWdIY9v
Right now, it is difficult to tell exactly how and why it relies on each expert, at least when the expert is not in a clearly defined role (in which case it would be irresponsible not to consider their advice). Further, there are regular calls on Twitter for ministers to be more open about their decisions.
Key point from @jameswilsdon 'It is problematic if political choices are being made and then the science advice system has to front them up. There needs to be a clearer sense of where science advice ends and political judgement begins.' https://t.co/TjLCJDZijO via @timeshighered
However, don’t underestimate the problems of identifying why we make choices, then justifying one expert or another (while avoiding pointless arguments), or prioritising one form of advice over another. Look, for example, at the kind of short-cuts that intelligent people use, which seem sensible enough, but would receive much more intense scrutiny if presented in this way by governments:
Sophisticated speculation by experts in a particular field, shared widely (look at the RTs), but questioned by other experts in another field:
2. This all assumes I'm correct in what I think the govt are doing and why. I could be wrong – and wouldn't be surprised. But it looks to me like. . .
— Professor Ian Donald 3.5% (@iandonald_psych) March 13, 2020
As many have said, it would be good to get an official version of this, with acknowledged uncertainties and sources of evidence https://t.co/jxgoysYb3L
But what happened is that they have as a group fallen into a logical error in their attempts to model what will bring this epidemic under control. They have not appreciated that the answer to this question is adaptive behavior change. 3/17
It would be really helpful to project risk of covid death with and without mitigation strategies? Possible to map with inside / outside projections (ie what we gain/ don’t gain with current measures ?)
Experts in one field trusting certain experts in another field based on personal or professional interaction:
Lots of concern about UK's approach to #COVID19. I'm not an epidemiologist or a virologist (=> can't judge the detail) but I knew Patrick Vallance before he was famous and I believe he is a man of integrity. Same for Chris Whitty. Science, not politics, is driving their thinking.
— Trisha Greenhalgh 😷 #BlackLivesMatter (@trishgreenhalgh) March 14, 2020
Experts in one field not trusting a government’s approach based on its use of one (of many) sources of advice:
Why is UK government listening to the ‘nudge unit’ on the pandemic, and not expert epidemiologists and the WHO? You would think the ‘anti-experts’ approach would have at least on this occasion, with so many lives at risk, given way to a scientific approach https://t.co/QZIicXYpsj
Experts representing a community of experts, criticising another expert (Prof John Ashton), for misrepresenting the amount of expert scepticism of government experts (yes, I am trying to confuse you):
The Chief Medical Officer @CMO_England and his team have the 100% support and backing of the Public Health community. Every DPH I know thinks he is doing an amazing job in difficult circumstances Sorry but JRA is just demonstrating he is out of touch on this https://t.co/ExmOjEgum0
Expert debate on how well policymakers are making policy based on expert advice:
Disagree.
Not much audible consensus amongst scientists anywhere for UK approach. Science can only illuminate value judgements yet now used a shield for determining them. UK science advice has always been characterised by old boys, political operators. Blurring is concerning. https://t.co/iBt07QfvqH
Finding quite-sensible ways to trust certain experts over others, such as because they can be held to account in some way (and may be relatively worried about saying any old shit on the internet):
My current approach to making sense of conflicting expert opinion on #coronavirus: no expert is infallible, but some are accountable and others are not, and I will value the opinions of those who are accountable above the opinions of those who are not.
There are many more examples in which the shortcut to expertise is fine, but not particularly better than another shortcut (and likely to include a disproportionately high number of white men with STEM backgrounds).
Update: of course, they are better than the ‘volume trumps expertise’ approach:
This meme is spreading (you could say, in a not very funny joke, that it has gone viral). The WHO Director-General did not say this (brief thread). https://t.co/3eMfy70tKZ
For what it’s worth, I tend to favour experts who:
(a) establish the boundaries of their knowledge, (b) admit to high uncertainty about the overall problem:
After having spent considerable time thinking how to mitigate and manage this pandemic, and analysing the available data. I failed to identify the best course of action. Even worse, I'm not sure there is such a thing as an acceptable solution to the problem we are facing. (2/12)
— Prof Francois Balloux (@BallouxFrancois) March 14, 2020
I would challenge anyone to provide an accurate estimate of prevalence. The difference between models & real life is that with models we can set the parameters as if they are known. In real life these parameters are as clear as mud. Extract 04/13/2020 https://t.co/Qg2OrCo8tR
(c) (in this case) make it clear that they are working on scenarios, not simple prediction:
I am deeply uncomfortable with the message that UK is actively pursuing ‘herd immunity’ as the main COVID-19 strategy. Our group’s scenario modelling has focused on reducing two main things: peak healthcare demand and deaths… 1/
"Prediction models are just estimates of what might happen and a model is only as good as the data that goes into it." https://t.co/KXDILsbZgr via @ConversationUK
(d) examine critically the too-simple ideas that float around, such as the idea that the UK Government should emulate ‘what works’ somewhere else:
It's easy to say 'let's just do what Wuhan did', but the measures there have involved a change to daily life that really has been unimaginable in scale and impact. And as we've seen, China cannot sustain them indefinitely. 3/
A lot of my colleagues in the @LSHTM modelling centre (@cmmid_lshtm) have been working extremely hard to help expand the COVID-19 evidence base over the past two months. I'd like to take a moment to highlight some of their work… 1/
8. There's no gotcha-ism. Updating your models and predictions in light of new evidence and new inferential methods and insightful counterpoints from colleagues isn't a sign of weakness, it's *doing science*.
I do not agree with this interpretation. Multiple papers that tested people at high risk found that asymptomatic infection is relatively uncommon, in the range of 6-32%. https://t.co/gv5e2upEwz
(e) situate their own position (in Prof Sridhar’s case, for mass testing) within a broader debate:
Scientific community is well-intentioned but split in two camps: one argues why sacrifice short-term social/economic well-being if everyone will get virus regardless, & other which says we have to buy time in short-term & save lives now while figuring out exit plan.
How much effort does your govt want to put into suppressing this outbreak? There is no quick fix or easy solution. S.Korea & Germany show what huge govt effort, planning, strong leadership, & doing utmost to protect population look like. Do everything v. do minimum.
Been saying 3 objectives for weeks. Not to attack anyone, but to highlight what we have learned so far: 1. Testing, tracing, isolating 2. Protect health workers with PPE & testing 3. Buy time for NHS
Two weeks ago Boris Johnson said Britain was aiming to eventually test 250,000 people a day. The reality is still far off the aspiration https://t.co/2SHX40B9Ul
My new blog on whether Covid raises everyone’s relative risk of dying by a similar amount. https://t.co/76NSNuDJ3i Latest ONS data shows that, of recent death registrations, the proportion linked to Covid does not depend on age.
However, note that most of these experts are from a very narrow social background, and from very narrow scientific fields (first in modelling, then likely in testing), despite the policy problem being largely about (a) who, and how many people, a government should try to save, and (b) how far a government should go to change behaviour to do it (Update 2.4.20: I wrote that paragraph before adding so many people to the list). It is understandable to defer in this way during a crisis, but it also contributes to a form of ‘depoliticisation’ that masks profound choices that benefit some people and leave others vulnerable to harm.
See also: ‘What’s important is social distancing’ coronavirus testing ‘is a side issue’, says Deputy Chief Medical Officer [Professor Jonathan Van-Tam talks about the important distinction between a currently available test to see if someone has contracted the virus (an antigen test) and a forthcoming test to see if someone has had and recovered from COVID-19 (an antibody test)]. The full interview is here (please feel free to ignore the editorialising of the uploader):
We might need to change our criteria to decide on capacity and resources. COVID-19 showed that the standard CEO approach of doing more with less is no good. German planners have apparently safely ignored this holy managerial mantra. @Breconomics https://t.co/MKi3f1Pueq
Cross country comparisons of the efficacy of anti covid19 policies are going to be hard. There are so many likely inputs; and data on them is scarce and noisy.
The UK Govts chief medical officer has conceded that Germany “got ahead” in testing people for Covid-19 and said the UK needed to learn from that. Ministers have been challenged repeatedly during the pandemic over their failure to increase testing. https://t.co/V0bgcMR7l0
He says there is not as much scrutiny as we might normally wish and says concerns raised about human rights, the length of powers and need for safeguards should be heeded in Westminster. He also commits to legislate for reporting requirements for use of powers by SG 4/5
Glad Scottish Government recognise need for ethical guidance on Covid 19, and hope they can say more on human rights in next version https://t.co/GiyTd2Xksu
This is an excellent initiative from @policescotland – commissioning @johndscott to provide independent scrutiny of new Coronavirus Emergency Powers. Policing is by consent of the people, this step hopefully gives further public reassurance on the application of powers https://t.co/6MtrqdTqIm
Unprecedented restrictions are in force in order to limit social contact and slow the spread of the coronavirus. But the govt and police must make clear what is enforceable and what is guidance if they are to retain the trust and confidence of the public https://t.co/ieLcg2qVE5 pic.twitter.com/mBOK2fppH2
— Institute for Gov (@instituteforgov) April 5, 2020
The wider policymaking environment: 2. Limited control
Second, policymakers engage in a messy and unpredictable world in which no single ‘centre’ has the power to turn a policy recommendation into an outcome. I normally use the following figure to think through the nature of a complex and unwieldy policymaking environment of which no ‘centre’ of government has full knowledge or control.
It helps us identify (further) the ways in which we can reject the idea that the UK Prime Minister and colleagues can fully understand and solve policy problems:
Actors. The environment contains many policymakers and influencers spread across many levels and types of government (‘venues’).
For example, consider how many key decisions (a) have been made by organisations outside of UK central government, and (b) are more or less consistent with its advice, including:
Devolved governments announcing their own healthcare and public health responses (although the level of UK coordination seems more significant than the level of autonomy).
Public sector employers initiating or encouraging at-home working (and many Universities moving quickly from in-person to online teaching)
Private organisations cancelling cultural and sporting events.
There’s some coverage today suggesting Scotland proposing different policy to rest of UK on over 70s. This isn’t so. The policy of social distancing, not isolation, set out here by @jasonleitch is the policy all 4 nations have been discussing at COBR – and will do so again today. https://t.co/D89nwUDZTb
This is interesting, particularly the contrast with the approach to Brexit. The key difference is that Brexit blurred the boundaries between reserved and devolved competences in a way that health does not. https://t.co/4kSIcQFmJf
Context and events. Policy solutions relate to socioeconomic context and events which can be impossible to ignore and out of the control of policymakers. The coronavirus, and its impact on so many aspects of population health and wellbeing, is an extreme example of this problem.
Networks, Institutions, and Ideas. Policymakers and influencers operate in subsystems (specialist parts of political systems). They form networks or coalitions built on the exchange of resources or facilitated by trust underpinned by shared beliefs or previous cooperation. Many different parts of government have practices driven by their own formal and informal rules. Formal rules are often written down or known widely. Informal rules are the unwritten rules, norms and practices that are difficult to understand, and may not even be understood in the same way by participants. Political actors relate their analysis to shared understandings of the world – how it is, and how it should be – which are often so established as to be taken for granted. These dominant frames of reference establish the boundaries of the political feasibility of policy solutions. These kinds of insights suggest that most policy decisions are considered, made, and delivered in the name of – but not in the full knowledge of – government ministers.
Trial and error policymaking in complex policymaking systems (17.3.20)
One way of viewing the UK's COVID-19 policy is that it changed to reflect changing evidence. That is fair; it's both how science-guided policy *should* work, and how I think the govt's advisors are seeing it, as per the Imperial paper. But… 1/
There are many ways to conceptualise this policymaking environment, but few theories provide specific advice on what to do, or how to engage effectively in it. One notable exception is the general advice that comes from complexity theory, including:
Law-like behaviour is difficult to identify – so a policy that was successful in one context may not have the same effect in another.
Policymaking systems are difficult to control; policy makers should not be surprised when their policy interventions do not have the desired effect.
Policy makers in the UK have been too driven by the idea of order, maintaining rigid hierarchies and producing top-down, centrally driven policy strategies. An attachment to performance indicators, to monitor and control local actors, may simply result in policy failure and demoralised policymakers.
Policymaking systems or their environments change quickly. Therefore, organisations must adapt quickly and not rely on a single policy strategy.
On this basis, there is a tendency in the literature to encourage the delegation of decision-making to local actors:
Rely less on central government driven targets, in favour of giving local organisations more freedom to learn from their experience and adapt to their rapidly-changing environment.
To deal with uncertainty and change, encourage trial-and-error projects, or pilots, that can provide lessons, or be adopted or rejected, relatively quickly.
Encourage better ways to deal with alleged failure by treating ‘errors’ as sources of learning (rather than a means to punish organisations) or setting more realistic parameters for success/ failure (although see this example and this comment).
Encourage a greater understanding, within the public sector, of the implications of complex systems and terms such as ‘emergence’ or ‘feedback loops’.
In other words, this literature, when applied to policymaking, tends to encourage a movement from centrally driven targets and performance indicators towards a more flexible understanding of rules and targets by local actors who are more able to understand and adapt to rapidly-changing local circumstances.
Now, just imagine the UK Government taking that advice during this crisis. I think it is fair to say that it would be condemned continuously (even more so than it is now). Maybe that is because it is the wrong way to make policy in times of crisis. Maybe it is because too few people are willing and able to accept that the role of a small group of people at the centre of government is necessarily limited, and that effective policymaking requires trial-and-error rather than a single, fixed, grand strategy to be communicated to the public. The former perspective highlights policy that changes with new information; the latter sees only errors of judgement, incompetence, and U-turns. In either case, the advice is changing as estimates of the coronavirus’ impact change:
I think this tension, in the way that we understand UK government, helps explain some of the criticism that it faces when changing its advice to reflect changes in its data or advice. This criticism becomes intense when people also question the competence or motives of ministers (and even people reporting the news) more generally, leading to criticism that ranges from mild to outrageous:
Incredible detail in this FT story: up until last week, the UK was basing its coronavirus control policy on a model based on hospitalisation rates for 😲a different disease😲 with lower rates of intensive care need than coronavirus pic.twitter.com/7rJYh9sqg2
Laura Kuenssberg says (BBC) that, “The science has changed.” This is not true. The science has been the same since January. What has changed is that govt advisors have at last understood what really took place in China and what is now taking place in Italy. It was there to see.
We can’t keep changing our #COVID19 control policies whenever the results of the “mathematical modelling” change. We need to implement standard WHO-approved epidemic control policies hard and fast, as well as providing more support to frontline NHS staff. https://t.co/HAM9OqbmqW
There may be perfectly valid or at least debatable reasons for each but obfuscation does not help public to understand uncertainty around decisions. In other words, not communicating rationale = incompetence (as in incompetent in terms of state craft, not nec individual decision)
One wonders if Brit leaders have decided that the ultimate way to cut national budgets is to cull the herd of the weak, those who require costly NHS care, and pray for "herd immunity" among the rest. Cruel, cost effective #COVID19 strategy? @richardhorton1
For me, this casual reference to a government policy to ‘cull the herd of the weak’ is outrageous, but you can find much worse on Twitter. It reflects wider debate on whether ‘herd immunity’ is or is not government policy. Much of it relates to interpretation of government statements, based on levels of trust/distrust in the UK Government, its Prime Minister and Secretaries of State, and the Prime Minister’s special adviser.
I have enormous respect for the SAGE team and scientific advisors trying to understand the situation & inform the UK's response. If this article is accurate & partisan hacks were deliberately sacrificing lives based on their information, its scandalous. A week ago I was saying… https://t.co/WYsHbj6o0a
If you read the whole article you will see that Dominic Cummings has been, for the last 10 days, the most zealous advocate of a tough lockdown. Which is what his critics seem to want. The world is not black and white
1. Wilful misinterpretation (particularly on Twitter). For example, in the early development and communication of policy, Boris Johnson was accused (in an irresponsibly misleading way) of advocating for herd immunity rather than restrictive measures.
Below is one of the most misleading videos of its type. Look at how it cuts each segment into a narrative not provided by ministers or their advisors (see also this stinker):
The herd immunity strategy would’ve likely caused hundreds of thousands of deaths. They even told us so.
2. The accentuation of a message not being emphasised by government spokespeople.
See for example this interview, described by Sky News (13.3.20) as: The government’s chief scientific adviser Sir Patrick Vallance has told Sky News that about 60% of people will need to become infected with coronavirus in order for the UK to enjoy “herd immunity”. You might be forgiven for thinking that he was on Sky extolling the virtues of a strategy to that end (and expressing sincere concerns on that basis). This was certainly the write-up in respected papers like the FT (UK’s chief scientific adviser defends ‘herd immunity’ strategy for coronavirus). Yet, he was saying nothing of the sort. Rather, when prompted, he discussed herd immunity in relation to the belief that COVID-19 will endure long enough to become as common as seasonal flu.
The same goes for Vallance’s interview on the same day (13.3.20) during Radio 4’s Today programme (transcribed by the Spectator, which calls Vallance the author and gives it the headline “How ‘herd immunity’ can help fight coronavirus”, as if it is his main message). The Today programme also tweeted a 30-second clip to single out that brief exchange:
Sir Patrick Vallance, the govt chief scientific adviser, says the thinking behind current approach to #coronavirus is to try and "reduce the peak" and to build up a "degree of herd immunity so that more people are immune to the disease". #R4Today
Yet, clearly his overall message – in this and other interviews – was that some interventions (e.g. staying at home; self-isolating with symptoms) would have bigger effects than others (e.g. school closures; prohibiting mass gatherings) during the ‘flattening of the peak’ strategy (‘What we don’t want is everybody to end up getting it in a short period of time so that we swamp and overwhelm NHS services’). Rather than describing ‘herd immunity’ as a strategy, he is really describing how to deal with its inevitability (‘Well, I think that we will end up with a number of people getting it’).
For anyone who thinks it was all obvious in January and February reading these minutes is a sobering experience. What comes over is the real uncertainty about what could be foretold from the Chinese experience and the ease with which the disease could be transmitted.4/n
Toby Young 'expert'. Nobody, including the Oxford team, believes this is true. Shame on The Sun for publishing this irresponsible rubbish. Shame on Toby Young for cynical misrepresentation of the science. pic.twitter.com/17hrOPW9b8
[OK, that proved to be a big departure from the trial-and-error discussion. Here we are, back again]
In some cases, maybe people are making the argument that trial-and-error is the best way to respond quickly, and adapt quickly, in a crisis, but that the UK Government version is not what, say, the WHO thinks of as a good kind of adaptive response. It is not possible to tell, at least from the general ways in which they justify acting quickly.
Dr Michael J Ryan, Executive Director at WHO. An off the cuff answer to a question at today's virtual press conference. Inspiring stuff! pic.twitter.com/Q4EUs8V1dG
The coronavirus is an extreme example of a general situation: policymakers will always have very limited knowledge of policy problems and control over their policymaking environment. They make choices to frame problems narrowly enough to seem solvable, rule out most solutions as not feasible, make value judgements to try to help some more than others, try to predict the results, and respond when the results do not match their hopes or expectations.
This is not a message of doom and despair. Rather, it encourages us to think about how to influence government, and hold policymakers to account, in a thoughtful and systematic way that does not mislead the public or exacerbate the problem we are seeing.
Further reading, until I can think of a better conclusion:
This series of ‘750 words’ posts summarises key texts in policy analysis and tries to situate policy analysis in a wider political and policymaking context. Note the focus on whose knowledge counts, which is not yet a big feature of this crisis.
These series of ‘500 words’ and ‘1000 words’ posts (with podcasts) summarise concepts and theories in policy studies.
The scientific response to COVID-19 demands speed. But changing incentives and norms in academic science may be pushing the enterprise toward fast science at the expense of good science. Read Dan Sarewitz's editor's journal in the Spring 2020 ISSUES: https://t.co/JSSS45eTze
— Issues in Science and Technology (@ISSUESinST) April 7, 2020
#politvirus Public Health has always been #political because it’s actions impact on politics, economics, commercial interests, personal freedoms – this becomes most obvious in crisis – it will be key to analyse the political responses to #Covid_19 if we want to be better prepared https://t.co/JkUZrVeAxv
An assessment of the Government's response to date – written by Chair of Global Health at Edinburgh University, Prof Devi Sridhar https://t.co/N31QtFmQ2p
This is a really important paper. Partisanship is a huge influence on timing of state public health measures- Republican governors and Trump majorities slow adoption of measures. This might have big mortality effects in a few weeks. https://t.co/BEOAM69aSw
One reason Germany has so many ventilators (and intensive care beds) given in The Times: Not just more money in the system but design of hospital payment rates through the insurance system has driven up ICU investment be hospital managers pic.twitter.com/7R062IJI2k
This is worrying. Singapore was held up as one of the models for how to control #COVID19 through a sophisticated programme of testing and tracing without having to resort to the kinds of lockdowns many other countries are going through. https://t.co/6R0LY4IhuO
Today’s reflection- A number of Swedes are pretty shit at social distancing and probably need at least a modicum of discipline- the notion that we should be so very different here is ludicrous
WATCH: "Some countries initially talked about herd immunity as a strategy. In New Zealand we never, ever considered that. It would have meant tens of thousands of New Zealanders dying" — New Zealand Prime Minister @jacindaardern pic.twitter.com/W1ei6OUUyr
An online form to report lockdown breaches undermines the trust we have in each other – unhelpful in even the most benign of situations, and downright dangerous right now, writes Michael Macaulay. https://t.co/XCrnpfEVJt
Speechless every time someone says that this was totally unexpected & nobody saw this coming. See chapter 3: 'Preparing for the Worst: A Rapidly Spreading, Lethal Respiratory Pathogen' published by the @WHO Sept 2019. https://t.co/23qTrz7dN9
People are facing uncertainty for days, weeks & months. We need a manageable way forward to keep the health, social & economic costs at a minimum. My analysis on where COVID-19 response is heading & how it could end: https://t.co/qLDm8tv8a9
I wish the late great Mick Moran were still around – it feels like the next chapter of his analysis of the modern British state urgently needs to be written. https://t.co/ffxegGKVCu
I’m writing a book about @ExtinctionR. Here are some thoughts about today’s controversy. 1. This may or may not be a legit XR group. 2. That may matter because it may be done in order to smear XR & climate activism generally 1/n https://t.co/NyQhbv53a3
Cautionary words for anyone tempted to say "this must be good for the climate" or, worse, "this shows we can tackle climate change".
COVID19 is a re-framing of the climate issues – a dramatically changed context for the response – but those climate issues haven't gone away. https://t.co/gixVwnk6gq
We are concerned about regulation rollbacks which impact the food system slipping under the radar at the moment – we are going to be keeping an eye on things and use hashtag #Covid19Watchdog https://t.co/niinfSWv6f #TuesdayThoughts
A study in politics – when leadership fails. Would those that were ready to bash the @WHO take the time to read this? The critical issue for all countries is: what did they do after the PHEIC was declared? Why did USA and China not work together to fight #COVID19 https://t.co/zK7hcEbU80
Not a single voice from the Global South – that’s not good enough if you are reporting on a global organisation – @who has 194 member states – it’s not the donors who should be running it #COVID19 #geopolitics https://t.co/xqTaFEYLap
— Professor Paul Cairney (@CairneyPaul) April 9, 2020
The Australian #COVID19 modelling was published today. My thanks to James McCaw (@j_mccaw) for checking this thread. I’ll do two threads – one explaining the results and how we might interpret them; and another to try to explain how these models work. https://t.co/O6sGwggY9W
This was so predictable. Ireland was already closing pubs and restaurants. #COVID-19. Cheltenham Festival ‘spread coronavirus across country’ | News | The Times https://t.co/QVQnJblJiH
— Andrea Catherwood (@acatherwoodnews) April 3, 2020
Expert comments about comparison between the COVID-19 situation in Ireland and the UK https://t.co/y4OBOhdbtT
This post first appeared on the MIHE blog to help sell my book.
During elections, many future leaders give the impression that they will take control of public policy. They promise major policy change and give little indication that anything might stand in their way.
This image has been a major feature of Donald Trump’s rhetoric on his US Presidency. It has also been a feature of campaigns for the UK withdrawal from the European Union (‘Brexit’) to allow its leaders to take back control of policy and policymaking. According to this narrative, Brexit would allow (a) the UK government to make profound changes to immigration and spending, and (b) Parliament and the public to hold the UK government directly to account, in contrast to a distant EU policy process less subject to direct British scrutiny.
Such promises are built on the false image of a single ‘centre’ of government, in which a small number of elected policymakers take responsibility for policy outcomes. This way of thinking is rejected continuously in the modern literature. Instead, policymaking is ‘multi-centric’: responsibility for policy outcomes is spread across many levels and types of government (‘centres’), and shared with organisations outside of government, to the extent that it is not possible to simply know who is in charge and to blame. This arrangement helps explain why leaders promise major policy change but most outcomes represent a minor departure from the status quo.
Some studies of politics relate this arrangement to the choice to share power across many centres. In the US, a written constitution ensures power sharing across different branches (executive, legislative, judicial) and between federal and state or local jurisdictions. In the UK, central government has long shared power with EU, devolved, and local policymaking organisations.
However, policy theories show that most aspects of multi-centric governance are necessary. The public policy literature provides many ways to describe such policy processes, but two are particularly useful.
The first approach is to explain the diffusion of power with reference to an enduring logic of policymaking, as follows:
The size and scope of the state is so large that it is always in danger of becoming unmanageable. Policymakers manage complexity by breaking the state’s component parts into policy sectors and sub-sectors, with power spread across many parts of government.
Elected policymakers can only pay attention to a tiny proportion of issues for which they are responsible. They pay attention to a small number and ignore the rest. They delegate policymaking responsibility to other actors such as bureaucrats, often at low levels of government.
At this level of government and specialisation, bureaucrats rely on specialist organisations for information and advice. Those organisations trade that information/advice and other resources for access to, and influence within, the government.
Most public policy is conducted primarily through small and specialist ‘policy communities’ that process issues at a level of government not particularly visible to the public, and with minimal senior policymaker involvement.
This description suggests that senior elected politicians are less important than people think, their impact on policy is questionable, and elections may not provide major changes in policy. Most decisions are taken in their name but without their intervention.
A second, more general, approach is to show that elected politicians deal with such limitations by combining cognition and emotion to make choices quickly. Although such choices allow them to be decisive, they occur within a policymaking environment over which governments have limited control. Government bureaucracies only have the coordinative capacity to direct policy outcomes in a small number of high priority areas. In most other cases, policymaking is spread across many venues, each with their own rules, networks, ways of seeing the world, and ways of responding to socio-economic factors and events.
In that context, we should always be sceptical when election candidates and referendum campaigners (or, in many cases, leaders of authoritarian governments) make such promises about political leadership and government control.
A more sophisticated knowledge of policy processes allows us to identify the limits to the actions of elected policymakers, and develop a healthier sense of pragmatism about the likely impact of government policy. The question of our age is not: how can governments take back control? Rather, it is: how can we hold policymakers to account in a complex system over which they have limited knowledge and even less control?
This post – by Dr Kathryn Oliver and me – originally appeared on the LSE Impact Blog. I have replaced the picture of a thumbs-up with a cat hanging in there.
Many academics want to see their research have an impact on policy and practice, and there is a lot of advice on how to seek it. It can be helpful to take advice from experienced and successful people. However, is this always the best advice? Guidance based on best practice and success stories, in particular, often reflects unequal access to policymakers, institutional support, and credibility attached to certain personal characteristics.
To take stock of the vast amount of advice being offered to academics, we decided to compare it with the more systematic analyses available in the peer-reviewed literature, on the ‘barriers’ between evidence and policy, and policy studies. This allowed us to situate this advice in a wider context, see whether it was generalisable across settings and career stages, and to think through the inconsistencies and dilemmas which underlie these suggestions.
The advice: Top tips on influencing policy
The key themes and individual recommendations we identified from the 86 most-relevant publications are:
Do high quality research: Use well-established research designs, methods, or metrics.
Make your research relevant and readable: Provide easily-understandable, clear, relevant and high-quality research. Aim for the general reader. Produce good stories based on emotional appeals or humour.
Understand the policymaking context. Note the busy and constrained lives of policy actors. Maximise established ways to engage, such as in advisory committees. Be pragmatic, accepting that research rarely translates directly into policy.
Be ‘accessible’ to policymakers. This may involve discussing topics beyond your narrow expertise. Be humble, courteous, professional, and recognise the limits to your skills.
Decide if you want to be an ‘issue advocate’. Decide whether to simply explain the evidence, remain an ‘honest broker’, or recommend specific policy options. Negative consequences may include peer criticism, being seen as an academic lightweight, being used to add legitimacy to a policy position, and burnout.
Build relationships (and ground rules) with policymakers: Relationship-building requires investment and skills, but working collaboratively is often necessary. Academics could identify policy actors to provide insights into policy problems, act as champions for their research, and identify the most helpful policy actors.
Be ‘entrepreneurial’ or find someone who is. Be a daring, persuasive scientist, comfortable in policy environments and available when needed. Or, seek brokers to act on your behalf.
Reflect continuously: should you engage, do you want to, and is it working? Academics may enjoy the work or be passionate about the issue. Even so, keep track of when and how you have had impact, and revise your practices continuously.
Inconsistencies and dilemmas
This advice tends not to address wider issues. For example, there is no consensus over what counts as good evidence for policy, or therefore how best to communicate good evidence. We know little about how to gain the wide range of skills that researchers and policymakers need to act collectively, including to: produce evidence syntheses, manage expert communities, ‘co-produce’ research and policy with a wide range of stakeholders, and be prepared to offer policy recommendations as well as scientific advice. Further, a one-size-fits-all model won’t help researchers navigate a policymaking environment where different venues have different cultures and networks. Researchers therefore need to decide what policy engagement is for—to frame problems or simply measure them according to an existing frame—and how far they should go to be useful and influential. If academics need to go ‘all in’ to secure meaningful impact, we need to reflect on the extent to which they have the resources and support to do so. This means navigating profound dilemmas:
Can academics try to influence policy? The financial costs of seeking impact are prohibitive for junior or untenured researchers, while women and people of colour may be more subject to personal abuse. Such factors undermine the diversity of voices available.
How should academics influence policy? Many of these new required skills – such as storytelling – are not a routine part of academic training, and may be looked down on by our colleagues.
What is the purpose of academics’ engagement in policymaking? To go beyond tokenistic and instrumental engagement is to build genuine rapport with policymakers, which may require us to co-produce knowledge and cede some control over the research process. It involves a fundamentally different way of doing public engagement: one with no clear aim in mind other than to listen and learn, with the potential to transform research practices and outputs.
Where is the evidence that this advice helps us improve impact?
The existing advice offered to academics on how to create impact is – although often well-meaning – not based on systematic research or comprehensive analysis of empirical evidence. Few advice-givers draw clearly on key literatures on policymaking or evidence use. This leads to significant misunderstandings, which can have potentially costly repercussions for research, researchers and policy. These limitations matter, as they lead to advice which fails to address core dilemmas for academics—whether to engage, how to engage, and why—which have profound implications for how scientists and universities should respond to the calls for increased impact.
Most tips focus on individual experience, whereas engagement between research and policy is driven by systemic factors. Many of the tips may be sensible and effective, but often only within particular settings. The advice is likely to be useful mostly to a relatively similar group of people who are confident and comfortable in policy environments, and have access and credibility within policy arenas. Thus, the current advice and structures may help reproduce and reinforce existing power dynamics and an underrepresentation of people who do not fit a very narrow mould.
The overall result may be that each generation of scientists has to fight the same battles, and learn the same lessons over again. Our best response as a profession is to interrogate current advice, shape and frame it, and to help us all to find ways to navigate the complex practical, political, moral and ethical challenges associated with being researchers today. The ‘how to’ literature can help, but only if authors are cognisant of their wider role in society and complex policymaking systems.
Kathryn Oliver is Associate Professor of Sociology and Public Health, London School of Hygiene and Tropical Medicine (@oliver_kathryn). Her interest is in how knowledge is produced, mobilized and used in policy and practice, and how this affects the practice of research. She co-runs the research collaborative Transforming Evidence with Annette Boaz (https://transformure.wordpress.com), and her writings can be found here: https://kathrynoliver.wordpress.com
Paul Cairney is Professor of Politics and Public Policy, University of Stirling, UK (@Cairneypaul). His research interests are in comparative public policy and policy theories, which he uses to explain the use of evidence in policy and policymaking, in one book (The Politics of Evidence-Based Policy Making, 2016), several articles, and many, many blog posts: https://paulcairney.wordpress.com/ebpm/
See also:
Adam Wellstead, Paul Cairney, and Kathryn Oliver (2018) ‘Reducing ambiguity to close the science-policy gap’, Policy Design and Practice, 1, 2, 115-25 PDF
Paul Cairney and Kathryn Oliver (2017) ‘Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy?’ Health Research Policy and Systems (HARPS), DOI: 10.1186/s12961-017-0192-x PDF AM
Paul Cairney, Kathryn Oliver, and Adam Wellstead (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review, 76, 3, 399–402 DOI: 10.1111/puar.12555 PDF
“The ‘Brexit’ referendum was dominated by a narrative of taking back control of policy and policy making. Control of policy would allow the UK government to make profound changes to immigration and spending. Control of policymaking would allow Parliament and the public to hold the UK government directly to account, in contrast to a more complex and distant EU policy process less subject to direct British scrutiny.
Such high level political debate is built on the false image of a small number of elected policymakers – and the Prime Minister in particular – responsible for the outcomes of the policy process.
There is a strange disconnect between the ways in which elected politicians and elected policymakers describe UK policymaking. Ministers have mostly given up the language of control; modern manifestos no longer make claims – such as to secure ‘full employment’ or eradicate health inequalities – that suggest they control the economy or can solve problems by providing public services. Yet, much Brexit rhetoric suggests that a vote to leave the EU will put control back in the hands of ministers to solve major problems.
The main problem with the latter way of thinking is that it is rejected continuously in the modern literature on policymaking. Policymaking is multi-centric: responsibility for outcomes is spread across many levels and types of government, to the extent that it is not possible to simply know who is in charge and to blame.
Some multi-level governance (MLG) relates to the choice to share power with EU, devolved, and local policymaking organisations.
However, most MLG is necessary because ministers do not have the cognitive or coordinative capacity to control policy outcomes.
They can only pay attention to a tiny proportion of their responsibilities, and have to delegate the rest. Most decisions are taken in their name but without their intervention. They occur within a policymaking environment over which ministers have limited knowledge and control.
The problem with using Brexit as a lens through which to understand British politics is that it emphasises the choice to no longer spread power across a political system, without acknowledging the necessity of doing so.
Our understanding of the future of UK policy and policymaking is incomplete without a focus on the concepts and evidence that help us understand why UK ministers must accept their limitations and act accordingly.
Yet, clearly the Westminster model archetype remains important even if it does not exist (Duggett, 2009). Policy studies have challenged successfully its image of central control, but, the model’s importance resides in its rhetorical power in wider politics when people maintain a simple argument during general election and referendum debates: we know who is – or should be – in charge. This perspective has a profound effect on the ways in which policymakers defend their actions, and political actors compete for votes, even when it is ridiculously misleading (Rhodes, 2013; Bevir, 2013)”
Notes for the #transformURE event hosted by Nuffield, 25th September 2018
I like to think that I can talk with authority on two topics that, much like a bottle of Pepsi and a pack of Mentos, you should generally keep separate:
When talking at events on the use of evidence in policy, I say that you need to understand the nature of policy and policymaking to understand the role of evidence in it.
When talking with students, we begin with the classic questions ‘what is policy?’ and ‘what is the policy process?’, and I declare that we don’t know the answer. We define policy to show the problems with all definitions of policy, and we discuss many models and theories that only capture one part of the process. There is no ‘general theory’ of policymaking.
The problem, when you put together those statements, is that you need to understand the role of evidence within a policy process that we don’t really understand.
It’s an OK conclusion if you just want to declare that the world is complicated, but not if you seek ways to change it or operate more effectively within it.
Put less gloomily:
We have ways to understand key parts of the policy process. They are not ready-made to help us understand evidence use, but we can use them intelligently.
Most policy theories exist to explain policy dynamics, not to help us adapt effectively to them, but we can derive general lessons with often-profound implications.
Put even less gloomily, it is not too difficult to extract/ synthesise key insights from policy theories, explain their relevance, and use them to inform discussions about how to promote your preferred form of evidence use.
The only remaining problem is that, although the resultant advice looks quite straightforward, it is far easier said than done. The proposed actions are more akin to the Labours of Hercules than [PAC: insert reference to something easier].
They include:
Find out where the ‘action’ is, so that you can find the right audience for your evidence. Why? There are many policymakers and influencers spread across many levels and types of government.
Learn and follow the ‘rules of the game’. Why? Each policymaking venue has its own rules of engagement and evidence gathering, and the rules are often informal and unwritten.
Gain access to ‘policy networks’. Why? Most policy is processed at a low level of government, beyond the public spotlight, between relatively small groups of policymakers and influencers. They build up trust as they work together, learning who is reliable and authoritative, and converging on how to use evidence to understand the nature and solution to policy problems.
Learn the language. Why? Each venue has its own language to reflect dominant ideas, beliefs, or ways to understand a policy problem. In some arenas, there is a strong respect for a ‘hierarchy’ of evidence. In others, the key reference point may be value for money. In some cases, the language reflects the closing-off of some policy solutions (such as redistributing resources from one activity to another).
Exploit windows of opportunity. Why? Events, and changes in socioeconomic conditions, often prompt shifts of attention to policy issues. ‘Policy entrepreneurs’ lie in wait for the right time to exploit a shift in the motive and opportunity of a policymaker to pay attention to and try to solve a problem.
So far so good, until you consider the effort it would take to achieve any of these things: you may need to devote the best part of your career to these tasks with no guarantee of success.
Put more positively, it is better to be equipped with these insights, and to appreciate the limits to our actions, than to think we can use top tips to achieve ‘research impact’ in a more straightforward way.
Kathryn Oliver and I describe these ‘how to’ tips in this post and, in this article in Political Studies Review, use a wider focus on policymaking environments to produce a more realistic sense of what individual researchers – and research-producing organisations – could achieve.
There is some sensible-enough advice out there for individuals – produce good evidence, communicate it well, form relationships with policymakers, be available, and so on – but I would exercise caution when it begins to recommend being ‘entrepreneurial’. The opportunities to be entrepreneurial are not shared equally, most entrepreneurs fail, and we can likely better explain their success with reference to their environment than their skill.
In retrospect, I think the title was too subtle and clever-clever. I wanted to convey two meanings: ‘imaginative’ as a euphemism for ridiculous (and often cynical) uses of evidence, and ‘imaginative’ in the sense that a government has to be imaginative with evidence. The latter has two meanings: imaginative (1) in the presentation and framing of an evidence-informed agenda, and (2) when facing pressure to go beyond the evidence and envisage policy outcomes.
So I describe two cases in which the government’s use of evidence seems cynical:
Declaring complete success in turning around the lives of ‘troubled families’
Exploiting vivid neuroscientific images to support ‘early intervention’
Then I describe more difficult cases in which supportive evidence is not clear:
Family intervention project evaluations are of limited value and only tentatively positive
Successful projects like FNP and Incredible Years have limited applicability or ‘scalability’
As scientists, we can shrug our shoulders about the uncertainty, but elected policymakers in government have to do something. So what do they do?
At this point of the article it will look like I have become an apologist for David Cameron’s government. Instead, I’m trying to demonstrate the value of comparing sympathetic/ unsympathetic interpretations and highlight the policy problem from a policymaker’s perspective:
I suggest that they use evidence in a mix of ways to: describe an urgent problem, present an image of success and governing competence, and provide cover for more evidence-informed long term action.
The result is the appearance of top-down ‘muscular’ government and ‘a tendency for policy to change as it is implemented, such as when mediated by local authority choices and social workers maintaining a commitment to their professional values when delivering policy’
I conclude by arguing that ‘evidence-based policy’ and ‘policy-based evidence’ are political slogans with minimal academic value. The binary divide between EBP/ PBE distracts us from more useful categories which show us the trade-offs policymakers have to make when faced with the need to act despite uncertainty.
As such, it forms part of a far wider body of work …
In both cases, the common theme is that, although (1) the world of top-down central government gets most attention, (2) central governments don’t even know what problem they are trying to solve, far less (3) how to control policymaking and outcomes.
In that wider context, it is worth comparing this talk with the one I gave at the IDS (which, I reckon, is a good primer for – or prequel to – the UK talk):
These posts introduce you to key concepts in the study of public policy. They are all designed to turn a complex policymaking world into something simple enough to understand. Some of them focus on small parts of the system. Others present ambitious ways to explain the system as a whole. The wide range of concepts should give you a sense of a variety of studies out there, but my aim is to show you that these studies have common themes.