
The UK Government’s COVID-19 policy: assessing evidence-informed policy analysis in real time


On the 23rd March 2020, the UK Government’s Prime Minister Boris Johnson declared: ‘From this evening I must give the British people a very simple instruction – you must stay at home’. He announced measures to help limit the impact of COVID-19, including new regulations on behaviour, police powers to support public health, budgetary measures to support businesses and workers during their economic inactivity, the almost-complete closure of schools, and the major expansion of healthcare capacity via investment in technology, discharge to care homes, and a consolidation of national, private, and new health service capacity (note that many of these measures relate only to England, with devolved governments responsible for public health in Northern Ireland, Scotland, and Wales). Overall, the coronavirus prompted almost-unprecedented policy change, towards state intervention, at a speed and magnitude that seemed unimaginable before 2020.

Yet, many have criticised the UK government’s response as slow and insufficient. Criticisms include that UK ministers and their advisors did not:

  • take the coronavirus seriously enough in relation to existing evidence (when its devastating effect was increasingly apparent in China in January and Italy from February)
  • act as quickly as some countries to test for infection to limit its spread, and/or introduce swift measures to close schools, businesses, and major social events, and regulate social behaviour (such as in Taiwan, South Korea, or New Zealand)
  • introduce strict-enough measures to stop people coming into contact with each other at events and in public transport.

They blame UK ministers for pursuing a ‘mitigation’ strategy, allegedly based on reducing the rate of infection and impact of COVID-19 until the population developed ‘herd immunity’, rather than an elimination strategy to minimise its spread until a vaccine or antiviral could be developed. Or, they criticise the over-reliance on specific models, which underestimated R (the reproduction number) and the ‘doubling time’ of cases and contributed to a 2-week delay of lockdown.

Many cite this delay, compounded by insufficient personal protective equipment (PPE) in hospitals and fatal errors in the treatment of care homes, as the biggest contributor to the UK’s unusually high number of excess deaths (Campbell et al, 2020; Burn-Murdoch and Giles, 2020; Scally et al, 2020; Mason, 2020; Ball, 2020; compare with Freedman, 2020a; 2020b and Snowden, 2020).

In contrast, scientific advisers to UK ministers have emphasised the need to gather evidence continuously to model the epidemic and identify key points at which to intervene, to reduce the size of the peak of population illness initially, then manage the spread of the virus over the longer term (e.g. Vallance). Throughout, they emphasised the need for individual behavioural change (hand washing and social distancing), supplemented by government action, in a liberal democracy in which direct imposition is unusual and, according to UK ministers, unsustainable in the long term.

We can relate these debates to the general limits to policymaking identified in policy studies (summarised in Cairney, 2016; 2020a; Cairney et al, 2019) and underpinning the ‘governance thesis’ that dominates the study of British policymaking (Kerr and Kettell, 2006: 11; Jordan and Cairney, 2013: 234).

First, policymakers must ignore almost all evidence. Individuals combine cognition and emotion to help them make choices efficiently, and governments have equivalent rules to prioritise only some information.

Second, policymakers have a limited understanding, and even less control, of their policymaking environments. No single centre of government has the power to control policy outcomes. Rather, there are many policymakers and influencers spread across a political system, and most choices in government are made in subsystems, with their own rules and networks, over which ministers have limited knowledge and influence. Further, the social and economic context, and events such as a pandemic, often appear to be largely out of their control.

Third, even though they lack full knowledge and control, governments must still make choices. Therefore, their choices are necessarily flawed.

Fourth, their choices produce unequal impacts on different social groups.

Overall, the idea that policy is controlled by a small number of UK government ministers, with the power to solve major policy problems, is still popular in media and public debate, but dismissed in policy research.

Hold the UK government to account via systematic analysis, not trials by social media

To make more sense of current developments in the UK, we need to understand how UK policymakers address these limitations in practice, and widen the scope of debate to consider the impact of policy on inequalities.

A policy theory-informed and real-time account helps us avoid after-the-fact wisdom and bad-faith trials by social media.

UK government action has been deficient in important ways, but we need careful and systematic analysis to help us separate (a) well-informed criticism to foster policy learning and hold ministers to account, from (b) a naïve and partisan rush to judgement that undermines learning and helps let ministers off the hook.

To that end, I combine insights from policy analysis guides, policy theories, and critical policy analysis to analyse the UK government’s initial coronavirus policy. I use the lens of 5-step policy analysis models to identify what analysts and policymakers need to do, the limits to their ability to do it, and the distributional consequences of their choices.

I focus on sources in the public record, including oral evidence to the House of Commons Health and Social Care committee, the minutes and meeting papers of the UK Government’s Scientific Advisory Group for Emergencies (SAGE) (and NERVTAG), transcripts of TV press conferences and radio interviews, and reports by professional bodies and think tanks.

The short version is here. The long version – containing a huge list of sources and ongoing debates – is here. Both are on the COVID-19 page.


COVID-19 policy in the UK: Table 2: Summary of SAGE minutes, January-June 2020

This post is part 8 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

The table is too big to reproduce here, so you have the following options:

Table 2 in PDF

Table 2 as a word document

Or, if you prefer not to read the posts individually:

The whole thing in PDF

The whole thing as a Word document

The full list of SAGE posts:

COVID-19 policy in the UK: yes, the UK Government did ‘follow the science’

Did the UK Government ‘follow the science’? Reflections on SAGE meetings

The role of SAGE and science advice to government

The overall narrative underpinning SAGE advice and UK government policy

SAGE meetings from January-June 2020

SAGE Theme 1. The language of intervention

SAGE Theme 2. Limited capacity for testing, forecasting, and challenging assumptions

SAGE Theme 3. Communicating to the public

COVID-19 policy in the UK: Table 2: Summary of SAGE minutes, January-June 2020


COVID-19 policy in the UK: SAGE Theme 3. Communicating to the public

This post is part 7 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

SAGE’s emphasis on uncertainty and limited knowledge extended to the evidence on how to influence behaviour via communication:

‘there is limited evidence on the best phrasing of messages, the barriers and stressors that people will encounter when trying to follow guidance, the attitudes of the public to the interventions, or the best strategies to promote adherence in the long-term’ (SPI-B Meeting paper 3.3.20: 2)

Early on, SAGE minutes continuously described the potential problems of communicating risk and of encouraging behavioural change through communication alone (in other words, they reflected low expectations for the types of quarantine measures associated with China and South Korea).

  • It sought ‘behavioural science input on public communication’ and ‘agreed on the importance of behavioural science informing policy – and on the importance of public trust in HMG’s approach’ (28.1.20: 2).
  • It worried about how the public might interpret ‘case fatality rate’, given the different ways to describe and interpret frequencies and risks (4.2.20: 3).
  • It stated that ‘Epidemiological terms need to be made clearer in the planning documents to avoid ambiguity’ (11.2.20: 3).
  • Its extensive discussion of behavioural science (13.2.20: 2-3) includes: there will be public scepticism and inaction until first deaths are confirmed; the main aim is to motivate people by relating behavioural change to their lives; messaging should stress ‘personal responsibility and responsibility to others’ and ‘be clear on which measures are effective’; and ‘National messaging should be clear and definitive: if such messaging is presented as both precautionary and sufficient, it will reduce the likelihood of the public adopting further unnecessary or contradictory behaviours’
  • Banning large public events could signal the need to change behaviour more generally, but evidence for its likely impact is unavailable (SPI-M-O, 11.2.20: 1).

Generally speaking, the underpinning assumption is that behavioural change will come largely from communication (encouragement and exhortation) rather than imposition. Hence, for example, the SPI-B (25.2.20: 2) recommendation on limiting the ‘risk of public disorder’:

  • ‘Provide clear and transparent reasons for different strategies: The public need to understand the purpose of the Government’s policy, why the UK approach differs to other countries and how resources are being allocated. SPI-B agreed that government should prioritise messaging that explains clearly why certain actions are being taken, ahead of messaging designed solely for reassuring the public.
  • This should also set clear expectations on how the response will develop, e.g. ensuring the public understands what they can expect as the outbreak evolves and what will happen when large numbers of people present at hospitals. The use of early messaging will help, as a) individuals are likely to be more receptive to messages before an issue becomes controversial and b) it will promote a sense the Government is following a plan.
  • Promote a sense of collectivism: All messaging should reinforce a sense of community, that “we are all in this together.” This will avoid increasing tensions between different groups (including between responding agencies and the public); promote social norms around behaviours; and lead to self-policing within communities around important behaviours’.

The underpinning assumption is that the government should treat people as ‘rational actors’: explain risk and how to reduce it, support existing measures by the public to socially distance, be transparent, explain if the UK is doing things differently to other countries, and recognise that these measures are easier for some than others (13.3.20: 3).

In that context, SPI-B Meeting paper 22.3.20 describes how to enable social distancing with reference to the ‘behaviour change wheel’ (Michie et al, 2011): ‘There are nine broad ways of achieving behaviour change: Education, Persuasion, Incentivisation, Coercion, Enablement, Training, Restriction, Environmental restructuring, and Modelling’ and many could reinforce each other (22.3.20: 1). The paper comments on current policy in relation to 5 elements:

  1. Education – clarify guidance (generally, and for shielding), e.g. through interactive website, tailored to many audiences
  2. Persuasion – increase perceived threat among ‘those who are complacent, using hard-hitting emotional messaging’ while providing clarity and positive messaging (tailored to your audience’s motivation) on what action to take (22.3.20: 1-2).
  3. Incentivisation – emphasise social approval as a reward for behaviour change
  4. Coercion – ‘Consideration should be given to enacting legislation, with community involvement, to compel key social distancing measures’ (combined with encouraging ‘social disapproval but with a strong caveat around unwanted negative consequences’) (22.3.20: 2)
  5. Enablement – make sure that people feeling the unequal impact of lockdown have alternative access to social contact, food, and other resources (particularly vulnerable people shielding, aided by community support).

Apparently, section 3 of SPI-B’s meeting paper (1.4.20b: 2) had been redacted because it was critical of a UK Government ‘Framework’ with 4 new proposals for greater compliance: ‘17) increasing the financial penalties imposed; 18) introducing self-validation for movements; 19) reducing exercise and/or shopping; 20) reducing non-home working’. On 17, it suggests that measures such as fining someone exercising more than 1km from their home lack an evidence base and could contribute to lower support for policy overall. On 17-19, it suggests that most people are already complying, so there is no evidence to support more targeted measures. It is more positive about 20, especially if non-home working is financially supported. Generally, it suggests that ministers should ‘also consider the role of rewards and facilitations in improving adherence’ and use organisational changes, such as staggered work hours and new use of space, rather than simply focusing on individuals.

Communication after the lockdown

SAGE suggests that communication problems are more complicated during the release of lockdown measures (in other words, without the ability to present the relatively-low-ambiguity message ‘stay at home’). Examples (mostly from SPI-B and its contributors) include:

  • Address potential confusion, causing false concern or reassurance, regarding antigen and antibody tests (meeting papers 1.4.20c: 3; 13.4.20b: 1-4; 22.4.20b: 1-5; 29.4.20a: 1-4)
  • When notifying people about the need to self-isolate, address the trade-offs between symptom-based and positive-test-based notifications (meeting paper 29.4.20a: 1-4; 5.5.20: 1-8)
  • If you are worried about public ‘disorder’, focus on clear, effective, tailored communication, using local influencers, appealing to sympathetic groups (like NHS staff), and co-producing messages between the police and public (in other words, police via consent, and do not exacerbate grievances) (meeting papers 19.4.20: 1-4; 21.4.20: 1-3; 4.5.20: 1-11)
  • Be wary of lockdowns specific to very small areas, which undermine the ‘all in it together’ message (REDACTED and Clifford Stott, no date: 1). If you must do it, clarify precisely who is affected and what they should do, support the people most vulnerable and impacted (e.g. financially), and redesign physical spaces (meeting paper SPI-B 22.4.20a)
  • When reopening schools (fully or partly), communication is key to the inevitably complex and unpredictable behavioural consequences (so, for example, work with parents, teachers, and other stakeholders to co-produce clear guidance) (29.4.20d: 1-10)
  • On the introduction of Alert Levels, as part of the Joint Biosecurity Centre work on local outbreaks (described in meeting paper 20.5.20a: 1-9): build public trust and understanding regarding JBC alert levels, and relate them very clearly to expected behaviour (SAGE 28.5.20). Each Alert Level should relate clearly to a required response in that area, and ‘public communications on Alert Levels needs many trusted messengers giving the same advice, many times’ (meeting paper 27.5.20b: 3).
  • On transmission between social networks, ‘Communicate two key principles: 1. People whose work involves large numbers of contacts with different people should avoid close, prolonged, indoor contact with anyone as far as possible … 2. People with different workplace networks should avoid meeting or sharing the same spaces’ (meeting paper 27.5.20b: 1).
  • On outbreaks in ‘forgotten institutional settings’ (including prisons, homeless hostels, migrant dormitories, and long-stay mental health settings): address the unusually low levels of trust in (or awareness of) government messaging among so-called ‘hard to reach groups’ (meeting paper 28.5.20a: 1).

See also:

SPI-M (Meeting paper 17.3.20b: 4) list of how to describe probabilities. This is more important than it looks, since there is a potentially major gap between the public and advisory group understanding of words like ‘probably’ (compare with the CIA’s Words of Estimative Probability).

SAGE language of probability 17.3.20b p4


COVID-19 policy in the UK: SAGE Theme 2. Limited capacity for testing, forecasting, and challenging assumptions

This post is part 6 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

Limited testing

Oral evidence to the Health and Social Care committee highlights the now-well-documented limits to UK testing capacity and PPE stocks (see also NERVTAG on PPE). SAGE does not discuss testing capacity much at the beginning, although on 10.3.20 it lists as an action point: ‘Plans for how PHE can move from 1,000 serology tests to 10,000 tests per week’, and by 16.3.20 it describes the urgent need to scale up testing – perhaps with commercial involvement and testing at home (if accuracy can be ensured) – and to secure sufficient data to track the epidemic well enough to inform operational decisions. From April, it highlights the need for a ‘national testing strategy’ to cover NHS patients, staff, an epidemiological survey, and the community (2.4.20), and the need for far more testing features in almost every meeting from then.

Limited contact tracing

Initially, SAGE describes a quite-low contact tracing capacity: ‘Currently, PHE can cope with five new cases a week (requiring isolation of 800 contacts). Modelling suggests this capacity could be increased to 50 new cases a week (8,000 contact isolations)’ (18.2.20: 1).
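The two capacity figures quoted in the minutes are consistent with one another, as a quick sanity check shows (assuming, as the minutes imply, a constant ratio of contacts per case):

```python
# Contacts-per-case ratio implied by the PHE figures quoted above.
# (Assumes the ratio is constant across both capacity levels.)
contacts_per_case = 800 / 5          # 160 contact isolations per new case
print(contacts_per_case)             # 160.0

# Scaling to the modelled capacity of 50 new cases a week:
print(50 * contacts_per_case)        # 8000.0, matching the 8,000 in the minutes
```

In other words, each confirmed case implied roughly 160 contacts to trace and isolate, which is why capacity was exhausted at such low case numbers.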

Previously, it had noted that the point would come when transmission was too high to make contact tracing worthwhile, particularly since many (e.g. asymptomatic) cases may already have been missed (20.2.20: 2) and the necessary testing capacity was not in place (16.4.20): ‘PHE to work with SPI-M to develop criteria for when contact tracing is no longer worthwhile. This should include consideration of any limiting factors on testing and alternative methods of identifying epidemic evolution and characteristics’ (11.2.20: 3; see also Testing and contact tracing).

It returned to the feasibility question after the lockdown, with:

  • SPI-M (meeting paper 4.20d: 1-3) estimating that effective contact tracing (80% of non-household cases, in 2 days) could reduce R by 30-60% if you could quarantine many people, multiple times;
  • SPI-B (meeting paper 4.20a: 1-3) advising on the need to clarify to people how it would work and what they should do, redesign physical spaces, and conduct new qualitative research and stakeholder engagement to ‘help us to understand more clearly the specific drivers, enablers and barriers for new behavioural recommendations’ to address an unprecedented problem in the UK (22.4.20a: 2). SPI-B also describes the trade-offs between app-informed systems (notification based on symptoms would suit people seeking to be precautionary, but could reduce compliance among people who believe the risk to be low) (see meeting papers 29.4.20: 3 and 5.5.20: 1-8)
  • SAGE noting ongoing work on clusters and super-spreading events, which necessitate cluster-based contact tracing (11.6.20: 3)
  • A more general message that contact tracing will be overwhelmed if lockdown measures are released too soon, raising R well above 1 and causing incidence to rise too quickly (e.g. 14.5.20)

Low capacity to achieve high levels of information necessary for forecasting

This type of discussion exemplifies a general and continuous focus on the lack of data to inform advice:

‘24. Real-time forecasting models rely on deriving information on the epidemic from surveillance. If transmission is established in the UK there will necessarily be a delay before sufficiently accurate forecasts in the UK are available. 25. Decisions being made on whether to modify or lift non-pharmaceutical interventions require accurate understanding of the state of the epidemic. Large-scale serological data would be ideal, especially combined with direct monitoring of contact behaviour. 26. Preliminary forecasts and accurate estimates of epidemiological parameters will likely be available in the order of weeks and not days following widespread outbreaks in the UK (or a similar country). While some estimates may be available before this time their accuracy will be much more limited. 27. The UK hospitalisation rate and CFR will be very important for operational planning and will be estimated over a similar timeframe. They may take longer depending on the availability of data’ (Meeting paper 2.3.20: 3-4).

A limited capacity to reach a relatively cautious consensus?

These limitations to information contributed to the gap between SAGE’s estimate of UK transmission (such as its comparison with Italy) and the UK’s much faster actual rate of transmission:

‘the UK likely has thousands of cases – as many as 5,000 to 10,000 – which are geographically spread nationally … The UK is considered to be 4-5 weeks behind Italy but on a similar curve (6-8 weeks behind if interventions are applied)’ (10.3.20: 1)

‘Based on limited available evidence, SAGE considers that the UK is 2 to 4 weeks behind Italy in terms of the epidemic curve’ (18.3.20: 1)

In fact, the UK was under 2 weeks behind Italy on 10th March, suggesting that its lockdown measures were put in place too late.

At the heart of this estimate was the under-estimated doubling time of infection (‘the time it takes for the number of cases to double in size’, Meeting paper 3.2.20a):

  • although described as 3-4 days (28.1.20: 1) then 4-6 days (Meeting paper 2.3.20) based on Wuhan, and 3-5 days based on Hubei (Meeting paper 3.2.20a),
  • SAGE estimates ‘every 5-6 days’ (16.3.20: 1) and states that ‘Assuming a doubling time of around 5-7 days continues to be reasonable’ (18.3.20: 1).
  • Only by meeting 18 does SAGE estimate the doubling time (ICU patients) at 3-4 days (23.3.20). By meeting 19, it describes the doubling time in hospitals as 3.3 days (26.3.20: 1).
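The practical stakes of this gap can be shown with a back-of-the-envelope projection under simple exponential growth (a sketch only: the starting figure of 1,000 cases is arbitrary and purely illustrative):

```python
def project(cases_now: float, doubling_days: float, horizon_days: float) -> float:
    """Project case numbers forward assuming a constant doubling time."""
    return cases_now * 2 ** (horizon_days / doubling_days)

# Two weeks of growth under the assumed 5-day versus the observed ~3-day doubling:
print(round(project(1000, 5, 14)))  # roughly 7,000 cases
print(round(project(1000, 3, 14)))  # roughly 25,000 cases: over 3x higher
```

A two-day error in the doubling time therefore translates into a several-fold error in projected cases within a fortnight, which is why the estimate mattered so much for the timing of intervention.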

Kit Yates suggests that (a) the UK exhibited a 3-day doubling time during this period (Huffington Post), and (b) many members of SAGE and SPI-M would have preferred to model on the assumption of 3 days:

Having spoken to some of the modellers on SPI-M, not all of them were missing this. Many of the groups had fitted models to data and come up with shorter and more realistic doubling times, maybe around the 3-day mark, but their estimates never found consensus within the group, so some members of SPI-M have communicated their concerns to me that some of the modelling groups had more influence over the consensus decision than others, which meant that some opinions or estimates which might have been valid, didn’t get heard, and consequently weren’t passed on up the line to SAGE, and then further towards the government, so an over-reliance on certain models or modelling groups might have been costly in this situation (interview, Kit Yates, More or Less, 10.6.20: 4m47s-5m27s)

Yates then suggests that the most listened-to model – led by Neil Ferguson, published 16.3.20 – estimates a doubling time of 5 days, based on early data from Wuhan, using an estimated R of 2.4 (and generation time of 6.5 days), ‘which we now know to be way too low’ when we look at the UK data:

‘If they had just plotted the early trajectory of the epidemics against the current UK data at that point, they would have seen [by 14.3.20] that their model was starting to underestimate the number of cases and then the number of deaths which were occurring in the UK’ (interview, Kit Yates, More or Less, 10.6.20: 7m2s-7m15s)
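Under a simple exponential-growth approximation (a textbook relation, not anything SAGE published), the doubling time implied by R and the generation time is Td = Tg·ln2/lnR, which reproduces both figures in Yates’ account:

```python
import math

def doubling_time(R: float, generation_days: float) -> float:
    """Doubling time implied by reproduction number R, assuming cases grow
    by a factor of R every generation under simple exponential growth."""
    growth_rate = math.log(R) / generation_days   # per-day exponential rate
    return math.log(2) / growth_rate

def implied_R(doubling_days: float, generation_days: float) -> float:
    """Reproduction number implied by an observed doubling time."""
    return math.exp(generation_days * math.log(2) / doubling_days)

# Ferguson-model assumptions quoted above: R = 2.4, generation time 6.5 days
print(round(doubling_time(2.4, 6.5), 1))  # ~5.1 days: the 5-day doubling time

# Conversely, the ~3-day doubling seen in UK data implies a far higher R:
print(round(implied_R(3.0, 6.5), 1))      # ~4.5
```

On this rough approximation, the 5-day assumption and the 3-day observation correspond to very different values of R, which is one way to see why ‘way too low’ mattered.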

Yates’ account highlights not only

  1. the effect of uncertainty and limited capacity to generate more information, but also
  2. the wider effect of path dependence, in which the (a) written and unwritten rules and norms of organisations, and (b) enduring ways of thinking (in individuals and groups, and political systems) place limits on new action. These limits are often necessary and beneficial, and often unnecessary and harmful.

Compare with Vallance’s oral evidence to the Health and Social Care committee (17.3.20: q96):

‘If you thought SAGE and the way SAGE works was a cosy consensus of agreeing scientists, you would be very mistaken. It is a lively, robust discussion, with multiple inputs. We do not try to get everybody saying exactly the same thing’.


COVID-19 policy in the UK: SAGE Theme 1. The language of intervention

This post is part 5 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

There is often a clear distinction between a strategy designed to (a) eliminate a virus/ the spread of disease quickly, and (b) manage the spread of infection over the long term (see The overall narrative).

However, generally, the language of virus management is confusing. We need to be careful with interpreting the language used in these minutes, and other sources such as oral evidence to House of Commons committees, particularly when comparing the language at the beginning (when people were also unsure what to call SARS-CoV-2 and COVID-19) to present day debates.

For example, in January, it is tempting to contrast ‘slow down the spread of the outbreak domestically’ (28.1.20: 2) with a strategy towards ‘extinction’, but the proposed actions may be the same even if the expectations of impact are different. Some people interpret these differences as indicative of a profoundly different approach (delay versus eradicate); others dismiss them as merely semantic.

By February, SAGE expects an epidemic to be inevitable and COVID-19 to be impossible to contain, prompting it to describe an inevitable series of stages:

‘Priorities will shift during a potential outbreak from containment and isolation on to delay and, finally, to case management … When there is sustained transmission in the UK, contact tracing will no longer be useful’ (18.2.20: 1; its discussion on 20.2.20: 2 also concludes that ‘individual cases could already have been missed – including individuals advised that they are not infectious’).

Mitigation versus suppression

On the face of it, it looks like there is a major difference in the ways in which (a) the Imperial College COVID-19 Response Team and (b) SAGE describe possible policy responses. The Imperial paper makes a distinction between mitigation and suppression:

  1. Its ‘mitigation strategy scenarios’ highlight the relative effects of partly-voluntary measures on mortality and demand for ‘critical care beds’ in hospitals: (voluntary) ‘case isolation in the home’ (people with symptoms stay at home for 7 days), ‘voluntary home quarantine’ (all members of the household stay at home for 14 days if one member has symptoms), (government enforced) ‘social distancing of those over 70’ or ‘social distancing of entire population’ (while still going to work, school or University), and closure of most schools and universities. It omits ‘stopping mass gatherings’ because ‘the contact-time at such events is relatively small compared to the time spent at home, in schools or workplaces and in other community locations such as bars and restaurants’ (2020a: 8). Assuming 70-75% compliance, it describes the combination of ‘case isolation, home quarantine and social distancing of those aged over 70’ as the most impactful, but predicts that ‘mitigation is unlikely to be a viable option without overwhelming healthcare systems’ (2020a: 8-10). These measures would only ‘reduce peak critical care demand by two-thirds and halve the number of deaths’ (to approximately 250,000).
  2. Its ‘suppression strategy scenarios’ describe what it would take to reduce the rate of infection (R) from the estimated 2.0-2.6 to 1 or below (in other words, the game-changing point at which one person would infect no more than one other person) and reduce ‘critical care requirements’ to manageable levels. It predicts that a combination of four options – ‘case isolation’, ‘social distancing of the entire population’ (the measure with the largest impact), ‘household quarantine’ and ‘school and university closure’ – would reduce critical care demand from its peak ‘approximately 3 weeks after the interventions are introduced’, and contribute to a range of 5,600-48,000 deaths over two years (depending on the current R and the ‘trigger’ for action in relation to the number of occupied critical care beds) (2020a: 13-14).

In comparison, the SAGE meeting paper (26.2.20b: 1-3), produced 2-3 weeks earlier, pretty much assumes away the possible distinction between mitigation and suppression measures (which Vallance has described as semantic rather than substantive – scroll down to The distinction between mitigation and suppression measures). In other words, it assumes ‘high levels of compliance over long periods of time’ (26.2.20b: 1). As such, we can interpret SAGE’s discussion as (a) requiring high levels of compliance for these measures to work (the equivalent of Imperial’s description of suppression), while (b) not describing how to use (more or less voluntary versus impositional) government policy to secure compliance. In comparison, Imperial equates suppression with the relatively-short-term measures associated with China and South Korea (while noting uncertainty about how to maintain such measures until a vaccine is produced).

One reason for SAGE to assume compliance in its scenario building is to focus on the contribution of each measure, generally taking place over 13 weeks, to delaying the peak of infection (while stating that ‘It will likely not be feasible to provide estimates of the effectiveness of individual control measures, just the overall effectiveness of them all’, 26.2.20b: 1), while taking into account their behavioural implications (26.2.20b: 2-3).

  • School closures could contribute to a 3-week delay, especially if combined with FE/ HE closures (but with an unequal impact on ‘Those in lower socio-economic groups … more reliant on free school meals or unable to rearrange work to provide childcare’).
  • Home isolation (65% of symptomatic cases stay at home for 7 days) could contribute to a 2-3 week delay (and is the ‘Easiest measure to explain and justify to the public’).
  • ‘Voluntary household quarantine’ (all members of the household isolate for 14 days) would have a similar effect – assuming 50% compliance – but with far more implications for behavioural public policy:

‘Resistance & non-compliance will be greater if impacts of this policy are inequitable. For those on low incomes, loss of income means inability to pay for food, heating, lighting, internet. This can be addressed by guaranteeing supplies during quarantine periods.

Variable compliance, due to variable capacity to comply, may lead to dissatisfaction.

Ensuring supplies flow to households is essential. A desire to help among the wider community (e.g. taking on chores, delivering supplies) could be encouraged and scaffolded to support quarantined households.

There is a risk of stigma, so ‘voluntary quarantine’ should be portrayed as an act of altruistic civic duty’.

  • ‘Social distancing’ (‘enacted early’), in which people restrict themselves to essential activity (work and school), could produce a 3-5 week delay (and is likely to be supported in relation to mass leisure events, albeit less so when work activities involve a lot of contact).

[Note that it is not until May that it addresses this issue of feasibility directly (and, even then, it does not distinguish between technical and political feasibility: ‘It was noted that a useful addition to control measures SAGE considers (in addition to scientific uncertainty) would be the feasibility of monitoring/ enforcement’ (7.5.20: 3)]

As theme 2 suggests, there is a growing recognition that these measures should have been introduced by early March (such as via the Coronavirus Act 2020, not passed until 25.3.20), and likely would have been if the UK government and SAGE had had more information (or interpreted their information in a different way). However, by mid-March, SAGE expresses a mixture of (a) growing urgency, but also (b) the need to stick to the plan, to reduce the peak and avoid a second peak of infection. On 13th March, it states:

‘There are no strong scientific grounds to hasten or delay implementation of either household isolation or social distancing of the elderly or the vulnerable in order to manage the epidemiological curve compared to previous advice. However, there will be some minor gains from going early and potentially useful reinforcement of the importance of taking personal action if symptomatic. Household isolation is modelled to have the biggest effect of the three interventions currently planned, but with some risks. SAGE therefore thinks there is scientific evidence to support household isolation being implemented as soon as practically possible’ (13.3.20: 1)

‘SAGE further agreed that one purpose of behavioural and social interventions is to enable the NHS to meet demand and therefore reduce indirect mortality and morbidity. There is a risk that current proposed measures (individual and household isolation and social distancing) will not reduce demand enough: they may need to be coupled with more intensive actions to enable the NHS to cope, whether regionally or nationally’ (13.3.20: 2)

On 16th March, it states:

‘On the basis of accumulating data, including on NHS critical care capacity, the advice from SAGE has changed regarding the speed of implementation of additional interventions. SAGE advises that there is clear evidence to support additional social distancing measures be introduced as soon as possible’ (16.3.20: 1)

Overall, we can conclude two things about the language of intervention:

  1. There is now a clear difference between the ways in which SAGE and its critics describe policy: to manage an inevitably long-term epidemic, versus to try to eliminate it within national borders.
  2. There is a less clear difference between terms such as suppress and mitigate, largely because SAGE focused primarily on a comparison of different measures (and their combination) rather than the question of compliance.

See also: There is no ‘herd immunity strategy’, which argues that this focus on each intervention was lost in radio and TV interviews with Vallance.

The full list of SAGE posts:

COVID-19 policy in the UK: yes, the UK Government did ‘follow the science’

Did the UK Government ‘follow the science’? Reflections on SAGE meetings

The role of SAGE and science advice to government

The overall narrative underpinning SAGE advice and UK government policy

SAGE meetings from January-June 2020

SAGE Theme 1. The language of intervention

SAGE Theme 2. Limited capacity for testing, forecasting, and challenging assumptions

SAGE Theme 3. Communicating to the public

COVID-19 policy in the UK: Table 2: Summary of SAGE minutes, January-June 2020

Filed under COVID-19, Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, UK politics and policy

COVID-19 policy in the UK: SAGE meetings from January-June 2020

This post is part 4 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

SAGE began a series of extraordinary meetings from 22nd January 2020. The first was described as ‘precautionary’ (22.1.20: 1) and includes updates from NERVTAG which met from 13th January. Its minutes state that ‘SAGE is unable to say at this stage whether it might be required to reconvene’ (22.1.20: 2). The second meeting notes that SAGE will meet regularly (e.g. 2-3 times per week in February) and coordinate all relevant science advice to inform domestic policy, including from NERVTAG and SPI-M (Scientific Pandemic Influenza Group on Modelling) which became a ‘formal sub-group of SAGE for the duration of this outbreak’ (SPI-M-O) (28.1.20: 1). It also convened an additional Scientific Pandemic Influenza subgroup (SPI-B) in February. I summarise these developments by month, but you can see that, by March, it is worth summarising each meeting. The main theme is uncertainty.

January 2020

The first meeting highlights immense uncertainty. Its description of WN-CoV (Wuhan Coronavirus), and statements such as ‘There is evidence of person-to-person transmission. It is unknown whether transmission is sustainable’, sum up the profound lack of information on what is to come (22.1.20: 1-2). It notes high uncertainty on how to identify cases, rates of infection, infectiousness in the absence of symptoms, and which previous experience (such as MERS) offers the most useful guidance. Only 6 days later, it estimates an R of 2-3, a doubling time of 3-4 days, an incubation period of around 5 days, a 14-day window of infectivity, varied symptoms such as coughing and fever, and a respiratory transmission route (different from SARS and MERS) (28.1.20: 1). These estimates are fairly constant from then, albeit qualified with reference to uncertainty (e.g. about asymptomatic transmission), some key outliers (e.g. the duration of illness in one case was 41 days – 4.2.20: 1), and some new estimates (e.g. of a 6-day ‘serial interval’, or ‘time between successive cases in a chain of transmission’, 11.2.20: 1). By now, it is preparing a response: modelling a ‘reasonable worst case scenario’ (RWC) based on the assumption of an R of 2.5 and no known treatment or vaccine, considering how to slow the spread, and considering how behavioural insights can be used to encourage self-isolation.
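A doubling-time estimate implies simple exponential growth, and a hedged back-of-envelope projection (my illustration, not a SAGE calculation) shows how quickly cases escalate under the estimated parameters:

```python
def cases_after(initial_cases, doubling_time_days, days):
    """Exponential growth: cases double every `doubling_time_days` days."""
    return initial_cases * 2 ** (days / doubling_time_days)

# With the estimated 3-4 day doubling time, 100 cases become
# roughly 1,100-2,500 within a fortnight.
slow = cases_after(100, 4, 14)   # ~1,131
fast = cases_after(100, 3, 14)   # ~2,540
print(round(slow), round(fast))
```

The same arithmetic explains why a delay of even one week matters: at a 3-day doubling time, a week is more than two doublings, a four-fold or greater increase in cases.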

February 2020

SAGE began to focus on what measures might delay or reduce the impact of the epidemic. It described travel restrictions from China as low value, since a 95% reduction would have to be draconian to achieve and would only secure a one-month delay, which might be better achieved with other measures (3.2.20: 1-2). It, and supporting papers, suggested that the evidence was so limited that they could draw ‘no meaningful conclusions … as to whether it is possible to achieve a delay of a month’ by using one or a combination of these measures: international travel restrictions, domestic travel restrictions, quarantining people coming from infected areas, closing schools, closing FE/ HE, cancelling large public events, contact tracing, voluntary home isolation, facemasks, and hand washing. Further, some could undermine each other (e.g. school closures increase children’s contact with older people or people in self-isolation) and have major societal or opportunity costs (SPI-M-O, 3.2.20b: 1-4). For example, the ‘SPI-M-O: Consensus view on public gatherings’ (11.2.20: 1) notes the aim to reduce the duration and closeness of (particularly indoor) contact. Large outdoor gatherings are no worse than small ones, and stopping large events could prompt people to go to pubs instead (a worse setting).

Throughout February, the minutes emphasize high uncertainty:

  • whether there will be an epidemic outside of China (4.2.20: 2)
  • whether it spreads through ‘air conditioning systems’ (4.2.20: 3)
  • the spread from, and impact on, children and therefore the impact of closing schools (4.2.20: 3; discussed in a separate paper by SPI-M-O, 10.2.20c: 1-2)
  • ‘SAGE heard that NERVTAG advises that there is limited to no evidence of the benefits of the general public wearing facemasks as a preventative measure’ (while ‘symptomatic people should be encouraged to wear a surgical face mask, providing that it can be tolerated’) (4.2.20: 3)

At the same time, its meeting papers emphasized a delay in accurate figures during an initial outbreak: ‘Preliminary forecasts and accurate estimates of epidemiological parameters will likely be available in the order of weeks and not days following widespread outbreaks in the UK’ (SPI-M-O, 3.2.20a: 3).

This problem proved to be crucial to the timing of government intervention. A key learning point will be the disconnect between the following statement and the subsequent realisation (3-4 weeks later) that the lockdown measures from mid-to-late March came too late to prevent an unanticipated number of excess deaths:

‘SAGE advises that surveillance measures, which commenced this week, will provide actionable data to inform HMG efforts to contain and mitigate spread of Covid-19 … PHE’s surveillance approach provides sufficient sensitivity to detect an outbreak in its early stages. This should provide evidence of an epidemic around 9-11 weeks before its peak … increasing surveillance coverage beyond the current approach would not significantly improve our understanding of incidence’ (25.2.20: 1)

It also seems clear from the minutes and papers that SAGE highlighted a reasonable worst case scenario on 26.2.20. It was as worrying as the Imperial College COVID-19 Response Team report dated 16.3.20 that allegedly changed the UK Government’s mind on the 16th March. Meeting paper 26.2.20a described the assumption of an 80% infection attack rate and 50% clinical attack rate (i.e. 50% of the UK population would experience symptoms), which underpins the assumption of 3.6 million requiring hospital care of at least 8 days (11% of symptomatic), and 541,200 requiring ventilation (1.65% of symptomatic) for 16 days. While it lists excess deaths as unknown, its 1% infection mortality rate suggests 524,800 deaths. This RWC replaces a previous projection (in Meeting paper 10.2.20a: 1-3, based on pandemic flu assumptions) of 820,000 excess deaths (27.2.20: 1).
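The quoted figures are internally consistent, as a quick check shows (assuming a UK population of roughly 65.6 million, which is the figure the quoted numbers imply rather than one stated in the paper):

```python
# Back-of-envelope check of the 26.2.20a reasonable worst case.
# The population figure is inferred from the quoted numbers,
# not stated in the meeting paper itself.
population = 65_600_000

infected = 0.80 * population        # 80% infection attack rate
symptomatic = 0.50 * population     # 50% clinical attack rate
hospitalised = 0.11 * symptomatic   # 11% of symptomatic need hospital care
ventilated = 0.0165 * symptomatic   # 1.65% of symptomatic need ventilation
deaths = 0.01 * infected            # 1% infection mortality rate

print(f"{hospitalised:,.0f} hospitalised")  # 3,608,000
print(f"{ventilated:,.0f} ventilated")      # 541,200
print(f"{deaths:,.0f} deaths")              # 524,800
```

Running the rates back through a single population figure is one way to confirm that the 3.6 million, 541,200, and 524,800 figures all derive from the same set of assumptions.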

As such, the more important difference could come from SAGE’s discussion of ‘non-pharmaceutical interventions (NPIs)’ if it recommends ‘mitigation’ while the Imperial team recommends ‘suppression’. However, the language to describe each approach is too unclear to tell (see Theme 1. The language of intervention; also note that NPIs were often described from March as ‘behavioural and social interventions’ following an SPI-B recommendation, Meeting paper 3.2.20: 1, but the language of NPI seems to have stuck).

March 2020

In March, SAGE focused initially (Meetings 12-14) on preparing for the peak of infection on the assumption that it had time to transition towards a series of isolation and social distancing measures that would be sustainable (and therefore unlikely to contribute to a second peak if lifted too soon). Early meetings and meeting papers express caution about the limited evidence for intervention and the potential for their unintended consequences. This approach began to change somewhat from mid-March (Meeting 15), and accelerate from Meetings 16-18, when it became clear that incidence and virus transmission were much larger than expected, before a new phase began from Meeting 19 (after the UK lockdown was announced on the 23rd).

Meeting 12 (3.3.20) describes preparations to gather and consolidate information on the epidemic and the likely relative effect of each intervention, while its meeting papers emphasise:

  • ‘It is highly likely that there is sustained transmission of COVID-19 in the UK at present’, and a peak of infection ‘might be expected approximately 3-5 months after the establishment of widespread sustained transmission’ (SPI-M Meeting paper 2.3.20: 1)
  • the need to prepare the public while giving ‘clear and transparent reasons for different strategies’ and reducing ambiguity whenever giving guidance (SPI-B Meeting paper 3.2.20: 1-2)
  • the need to combine different measures (e.g. school closure, self-isolation, household isolation, isolating over-65s) at the right time; ‘implementing a subset of measures would be ideal. Whilst this would have a more moderate impact it would be much less likely to result in a second wave’ (Meeting paper 4.3.20a: 3).

Meeting 13 (5.3.20) describes staying in the ‘containment’ phase (which, I think, means isolating people with positive tests at home or in hospital), and introducing: a 12-week period of individual and household isolation measures in 1-2 weeks, on the assumption of 50% compliance; and a longer period of shielding over-65s 2 weeks later. It describes ‘no evidence to suggest that banning very large gatherings would reduce transmission’, while closing bars and restaurants ‘would have an effect, but would be very difficult to implement’, and ‘school closures would have smaller effects on the epidemic curve than other options’ (5.3.20: 1). Its SPI-B Meeting paper (4.3.20b) expresses caution about limited evidence and reliance on expert opinion, while identifying:

  • potential displacement problems (e.g. school closures prompt children to congregate elsewhere or be looked after by vulnerable older people, while parents lose the chance to work)
  • the visibility of groups not complying
  • the unequal impact on poorer and single parent families of school closure and loss of school meals, lost income, lower internet access, and isolation
  • how to reduce discontent about only isolating at-risk groups (the view that ‘explaining that members of the community are building some immunity will make this acceptable’ is not unanimous) (4.3.20b: 2).

Meeting 14 (10.3.20) states that the UK may have 5,000-10,000 cases and be ‘10-14 weeks from the epidemic peak if no mitigations are introduced’ (10.3.20: 2). It restates the focus on isolation first, followed by additional measures in April, and emphasizes the need to transition to measures that are acceptable and sustainable for the long term:

‘SAGE agreed that a balance needs to be struck between interventions that theoretically have significant impacts and interventions which the public can feasibly and safely adopt in sufficient numbers over long periods’ … ‘the public will face considerable challenges in seeking to comply with these measures, (e.g. poorer households, those relying on grandparents for childcare)’ (10.3.20: 2)

Meeting 15 (13.3.20: 1) describes an update to its data, suggesting ‘more cases in the UK than SAGE previously expected at this point, and we may therefore be further ahead on the epidemic curve, but the UK remains on broadly the same epidemic trajectory and time to peak’. It states that ‘household isolation and social distancing of the elderly and vulnerable should be implemented soon, provided they can be done well and equitably’, noting that there are ‘no strong scientific grounds’ to accelerate key measures but ‘there will be some minor gains from going early and potentially useful reinforcement of the importance of taking personal action if symptomatic’ (13.3.20: 1) and ‘more intensive actions’ will be required to maintain NHS capacity (13.3.20: 2).

*******

On the 16th March, the UK Prime Minister Boris Johnson describes an ‘emergency’ (one week before declaring a ‘national emergency’ and UK-wide lockdown)

*******

Meeting 16 (16.3.20) describes the possibility that there are 5,000-10,000 new cases in the UK (‘there is great uncertainty on the estimate’), doubling every 5-6 days. Therefore, to stay within NHS capacity, ‘the advice from SAGE has changed regarding the speed of implementation of additional interventions. SAGE advises that there is clear evidence to support additional social distancing measures be introduced as soon as possible’ (16.3.20: 1). SPI-M Meeting paper (16.3.20: 1) describes:

‘a combination of case isolation, household isolation and social distancing of vulnerable groups is very unlikely to prevent critical care facilities being overwhelmed … it is unclear whether or not the addition of general social distancing measures to case isolation, household isolation and social distancing of vulnerable groups would curtail the epidemic by reducing the reproduction number to less than 1 … the addition of both general social distancing and school closures to case isolation, household isolation and social distancing of vulnerable groups would be likely to control the epidemic when kept in place for a long period. SPI-M-O agreed that this strategy should be followed as soon as practical’

Meeting 17 (18.3.20) marks a major acceleration of plans, and a de-emphasis of the low-certainty/ beware-the-unintended-consequences approach of previous meetings (on the assumption that the UK was now 2-4 weeks behind Italy). It recommends school closures as soon as possible (and it, and SPI-M Meeting paper 17.3.20b, now downplays the likely displacement effect). It focuses particularly on London, as the place with the largest initial numbers:

‘Measures with the strongest support, in terms of effect, were closure of a) schools, b) places of leisure (restaurants, bars, entertainment and indoor public spaces) and c) indoor workplaces. … Transport measures such as restricting public transport, taxis and private hire facilities would have minimal impact on reducing transmission’ (18.3.20: 2)

Meeting 18 (23.3.20) states that the R is higher than expected (2.6-2.8), requiring ‘high rates of compliance for social distancing’ to get it below 1 and stay under NHS capacity (23.3.20: 1). There is an urgent need for more community testing/ surveillance (and to address the global shortage of test supplies). In the meantime, it needs a ‘clear rationale for prioritising testing for patients and health workers’ (the latter ‘should take priority’) (23.3.20: 3). Closing UK borders ‘would have a negligible effect on spread’ (23.3.20: 2).

*******

The lockdown. On the 23rd March 2020, the UK Prime Minister Boris Johnson declared: ‘From this evening I must give the British people a very simple instruction – you must stay at home’. He announced measures to help limit the impact of coronavirus, including police powers to support public health, such as to disperse gatherings of more than two people (unless they live together), close events and shops, and limit outdoor exercise to once per day (at a distance of two metres from others).

*******

Meeting 19 (26.3.20) follows the lockdown. SAGE describes its priorities if the R goes below 1 and NHS capacity remains under 100%: ‘monitoring, maintenance and release’ (based on higher testing); public messaging on mass testing and varying interventions; understanding nosocomial transmission and immunology; clinical trials (avoiding ‘hasty decisions’ on new drug treatment in the absence of good data); and ‘how to minimise potential harms from the interventions, including those arising from postponement of normal services, mental ill health and reduced ability to exercise. It needs to consider in particular health impacts on poorer people’ (26.3.20: 1-2). The optimistic scenario is 10,000 deaths from the first wave (SPI-M-O Meeting paper 25.3.20: 4).

Meeting 20 (29.3.20) confirms the RWC and optimistic scenarios (Meeting paper 25.3.20), but notes the need for a ‘clearer narrative, clarifying areas subject to uncertainty and sensitivities’ and to clarify that scenarios (with different assumptions on, for example, the R, which should be explained more fully) are not predictions.

Meeting 21 (31.3.20) seeks to establish SAGE ‘scientific priorities’: e.g. the long-term health impacts of COVID-19 (including the socioeconomic impact on health and mental health), community testing, and international work (on ‘comorbidities such as malaria and malnutrition’) (31.3.20: 1-2). The NHS is to set up an interdisciplinary group (including science and engineering) to ‘understand and tackle nosocomial transmission’ in the context of its growth and the urgent need to define and track it (31.3.20: 1-2). SAGE is to focus on testing requirements, not operational issues. It notes the need to identify a single source of information on deaths.

April 2020

The meetings in April highlight five recurring themes.

First, it stresses that it will not know the impact of lockdown measures for some time, that it is too soon to understand the impact of releasing them, and there is high risk of failure: ‘There is a danger that lifting measures too early could cause a second wave of exponential epidemic growth – requiring measures to be re-imposed’ (2.4.20: 1; see also 14.4.20: 1-2). This problem remains even if a reliable testing and contact tracing system is in place, and if there are environmental improvements to reduce transmission (by keeping people apart).

Second, it notes signals from multiple sources (including CO-CIN and the RCGP) on the higher risk of major illness and death among black people, the ongoing investigation of higher risk to ‘BAME’ health workers (16.4.20), and further (high priority) work on ‘ethnicity, deprivation, and mortality’ (21.4.20: 1) (see also: Race, ethnicity, and the social determinants of health).

Third, it highlights the need for a ‘national testing strategy’ to cover NHS patients, staff, an epidemiological survey, and the community (2.4.20). The need for far more testing is a feature of almost every meeting (see also The need to ramp up testing).

Fourth, SAGE describes the need for more short and long-term research, identifying nosocomial infection as a short term priority, and long term priorities in areas such as the long term health impacts of COVID-19 (including socioeconomic impacts on physical and mental health), community testing, and international work (31.3.20: 1-2).

Finally, it reflects shifting advice on the precautionary use of face masks. Previously, advisory bodies emphasized limited evidence of a clear benefit to the wearer, and worried that public mask use would reduce the supply to healthcare professionals and generate a false sense of security (compare with this Greenhalgh et al article on the precautionary principle, the subsequent debate, and work by the Royal Society). Even by April: ‘NERVTAG concluded that the increased use of masks would have minimal effect’ on general population infection (7.4.20: 1), while the WHO described limited evidence that facemasks are beneficial for community use (9.4.20). Still, general face mask use could have a small positive effect, particularly in ‘enclosed environments with poor ventilation, and around vulnerable people’ (14.4.20: 2), and ‘on balance, there is enough evidence to support recommendation of community use of cloth face masks, for short periods in enclosed spaces where social distancing is not possible’ (partly because people can be infectious with no symptoms), as long as people know that it is no substitute for social distancing and handwashing (21.4.20).

May 2020

In May, SAGE continues to discuss high uncertainty on relaxing lockdown measures, the details of testing systems, and the need for research.

Generally, it advises that relaxations should not happen before there is more understanding of transmission in hospitals and care homes, and ‘until effective outbreak surveillance and test and trace systems are up and running’ (14.5.20). It advises specifically ‘against reopening personal care services, as they typically rely on highly connected workers who may accelerate transmission’ (5.5.20: 3) and warns against the too-quick introduction of social bubbles. Relaxation risks diminishing public adherence to social distancing, and overwhelming any contact tracing system put in place:

‘SAGE participants reaffirmed their recent advice that numbers of Covid-19 cases remain high (around 10,000 cases per day with wide confidence intervals); that R is 0.7-0.9 and could be very close to 1 in places across the UK; and that there is very little room for manoeuvre especially before a test, trace and isolate system is up and running effectively. It is not yet possible to assess the effect of the first set of changes which were made on easing restrictions to lockdown’ (28.5.20: 3).

It recommends extensive testing in hospitals and care homes (12.5.20: 3) and ‘remains of the view that a monitoring and test, trace & isolate system needs to be put in place’ (12.5.20: 1).

June 2020

In June, SAGE identifies the importance of clusters of infection (super-spreading events) and of a contact tracing system that focuses on clusters (rather than simply individuals) (11.6.20: 3). It reaffirms the value of a 2-metre distance rule. It also notes that the research on immunology remains unclear, which makes immunity passports a bad idea (4.6.20).

It describes the result of multiple meeting papers on the unequal impact of COVID-19:

‘There is an increased risk from Covid-19 to BAME groups, which should be urgently investigated through social science research and biomedical research, and mitigated by policy makers’ … ‘SAGE also noted the importance of involving BAME groups in framing research questions, participating in research projects, sharing findings and implementing recommendations’ (4.6.20: 1-3)

See also: Race, ethnicity, and the social determinants of health

COVID-19 policy in the UK: The overall narrative underpinning SAGE advice and UK government policy

This post is part 3 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

I discuss the UK government’s definition of the COVID-19 policy problem in some other posts (1. in a now-dated post on early developments, and 2. in relation to oral evidence to the Health and Social Care committee). It includes the following elements:

  • We need to use a suppression strategy to reduce infection enough to avoid overwhelming health service capacity, and shield the people most vulnerable to major illness or death caused by COVID-19, to minimize deaths during at least one peak of infection.
  • We need to maintain suppression for a period of time that is difficult to predict, subject to compliance levels that are difficult to predict and monitor.
  • We need to avoid panicking the public in the lead up to suppression, avoid too-draconian enforcement, and maintain wide public trust in the government.
  • We need to avoid (a) excessive and (b) insufficient suppression measures, either of which could contribute to a second wave of the epidemic of the same magnitude as the first.
  • We need to transition safely from suppression measures to foster economic activity, find safe ways for people to return to work and education, and reinstate the full use of NHS capacity for non-COVID-19 illness.
  • In the absence of a vaccine, this strategy will likely involve social distancing and (voluntary) track-and-trace measures to isolate people with COVID-19.

This understanding in the UK, informed strongly by SAGE, also informs the ways in which SAGE (a) deals with uncertainty, and (b) describes the likely impact of each stage of action.

Manage suppression during the first peak to avoid a second peak

Most importantly, it stresses continuously the need to avoid excessive suppression measures during the first peak that would contribute to a second peak [my emphasis added]:

  • ‘Any combination of [non-pharmaceutical] measures would slow but not halt an epidemic’ (25.2.20: 1).
  • ‘Mitigations can be expected to change the shape of the epidemic curve or the timing of a first or second peak, but are not likely to reduce the overall number of total infections’. Therefore, identify whose priorities matter (such as NHS England) on the assumption that, ‘The optimal shape of the epidemic curve will differ according to sectoral or organisational priorities’ (27.2.20: 2).
  • ‘A combination of these measures [school closures, household isolation, social distancing] is expected to have a greater impact: implementing a subset of measures would be ideal. Whilst this would have a more moderate impact it would be much less likely to result in a second wave. In comparison combining stringent social distancing measures, school closures and quarantining cases, as a long-term policy, may have a similar impact to that seen in Hong Kong or Singapore, but this could result in a large second epidemic wave once the measures were lifted’ (Meeting paper 4.3.20a: 3).
  • ‘SAGE was unanimous that measures seeking to completely suppress spread of Covid-19 will cause a second peak. SAGE advises that it is a near certainty that countries such as China, where heavy suppression is underway, will experience a second peak once measures are relaxed’ (also: ‘It was noted that Singapore had had an effective “contain phase” but that now new cases had appeared’) (13.3.20: 2)
  • Its visual of each possible peak of infection emphasises the risk of a second peak (Meeting paper 4.3.20: 2).

[Image: SAGE visual of first and second peaks of infection, Meeting paper 4.3.20]

  • ‘The objective is to avoid critical cases exceeding NHS intensive care and other respiratory support bed capacity’ … SAGE ‘advice on interventions should be based on what the NHS needs’ (16.3.20: 1)
  • The fewer cases that happen as a result of the policies enacted, the larger subsequent waves are expected to be when policies are lifted (SPI-M-O Meeting paper 25.3.20: 1)
  • ‘There is a danger that lifting measures too early could cause a second wave of exponential epidemic growth – requiring measures to be re-imposed’ (2.4.20: 1)

Avoid the unintended consequences of epidemic suppression

This understanding intersects with (c) an emphasis on the benefits lost through certain interventions (such as school closures).

  • SPI-B (Meeting paper 4.3.20b: 1-4) expresses reluctance to close schools, partly to avoid the unintended consequences, including: displacement problems (e.g. school closures prompt children to be looked after by vulnerable older people, or parents to lose the chance to work); and, the unequal impact on poorer and single parent families (loss of school meals, lost income, lower internet access, exacerbating isolation and mental ill health). It then states that: ‘The importance of schools during a crisis should not be overlooked. This includes: Acting as a source of emotional support for children; Providing education (e.g. on hand hygiene) which is conveyed back to families; Provision of social service (e.g. free school meals, monitoring wellbeing); Acting as a point of leadership and communication within communities’ (4.3.20b: 4).
  • ‘Long periods of social isolation may have significant risks for vulnerable people … SAGE agreed that a balance needs to be struck between interventions that theoretically have significant impacts and interventions which the public can feasibly and safely adopt in sufficient numbers over long periods. Input from behavioural scientists is essential to policy development of cocooning measures, to increase public practicability and likelihood of compliance … the public will face considerable challenges in seeking to comply with these measures, (e.g. poorer households, those relying on grandparents for childcare)’ (10.3.20: 2).
  • After the lockdown (23.3.20), SAGE describes a priority regarding: ‘how to minimise potential harms from the interventions, including those arising from postponement of normal services, mental ill health and reduced ability to exercise. It needs to consider in particular health impacts on poorer people’ (26.3.20: 1-2).

Exhort and encourage, rather than impose

It also intersects with (d) a primary focus on exhortation and encouragement rather than the imposition of behavioural change (Table 1), largely based on the belief that the UK government would be unwilling or unable to enforce behavioural change in ways associated with China. In that context, the government’s willingness and ability to enforce social distancing and business closure from the 23rd March is striking.

Examples include:

  • when recommending ‘individual home isolation (symptomatic individuals to stay at home for 14 days) and whole family isolation (fellow household members of symptomatic individuals to stay at home for 14 days after last family member becomes unwell)’, it assumes a 50% compliance rate, and notes that closing bars and restaurants ‘would have an effect, but would be very difficult to implement’ (5.3.20: 1).

See also: oral evidence to the Health and Social Care committee, which suggests that the UK government and SAGE’s problem definition contrasts with approaches in countries such as South Korea (described by Kim et al, and Kim).

It also contrasts with the approach described by several of the UK’s (expert) critics, including Professor Devi Sridhar (Professor of Global Public Health), who is critical of SAGE specifically, and more generally of the UK government’s rejection of an ‘elimination’ strategy.

Table 1 sets out one way to describe the distinction between these approaches:

  • The UK government is addressing a chronic problem, being cautious about policy change without supportive evidence, identifying trigger points to new approaches (based on incidence), and assuming initially that the approach is based largely on exhortation.
  • One alternative is to pursue elimination aggressively, adopting a precautionary principle before there is supportive evidence of a major problem and the effectiveness of solutions, backed by measures such as contact tracing and quarantine, and assuming that the imposition of behaviour should be a continuous expectation.

One approach highlights the lack of evidence to support major policy change, and therefore gives primacy to the status quo. The other is more preventive, giving primacy to the precautionary principle until there is more clarity or certainty on the available evidence.

Table 1

In that context, note (in Table 2) how frequently the SAGE minutes state that there is limited evidence to support policy change, and that an epidemic is inevitable (in other words, elimination without a vaccine is near-impossible). Both statements tend to support a UK government policy that was, until mid-March, based on reluctance to enforce a profound lockdown to impose social distancing.

As the next post describes, the chronology of Table 2 is instructive, since it demonstrates a degree of path dependence based on initial uncertainty and hesitancy. This approach was understandable at first (particularly when connected to an argument about reducing the peak of infection then avoiding a second wave), before being so heavily criticised only two months later.

The full list of SAGE posts:

COVID-19 policy in the UK: yes, the UK Government did ‘follow the science’

Did the UK Government ‘follow the science’? Reflections on SAGE meetings

The role of SAGE and science advice to government

The overall narrative underpinning SAGE advice and UK government policy

SAGE meetings from January-June 2020

SAGE Theme 1. The language of intervention

SAGE Theme 2. Limited capacity for testing, forecasting, and challenging assumptions

SAGE Theme 3. Communicating to the public

COVID-19 policy in the UK: Table 2: Summary of SAGE minutes, January-June 2020


COVID-19 policy in the UK: The role of SAGE and science advice to government

This post is part 2 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

The issue of science advice to government, and the role of SAGE in particular, became unusually high profile in the UK, particularly in relation to four factors:

  1. Ministers described ‘following the science’ to project a certain form of authority and control.
  2. The SAGE minutes and papers – including a record of SAGE members and attendees – were initially unpublished, in line with the previous convention of government to publish after, rather than during, a crisis.

‘SAGE is keen to make the modelling and other inputs underpinning its advice available to the public and fellow scientists’ (13.3.20: 1)

When it agrees to publish SAGE papers/ documents, it stresses: ‘It is important to demonstrate the uncertainties scientists have faced, how understanding of Covid-19 has developed over time, and the science behind the advice at each stage’ (16.3.20: 2)

‘SAGE discussed plans to release the academic models underpinning SAGE and SPI-M discussions and judgements. Modellers agreed that code would become public but emphasised that the effort to do this immediately would distract from other analyses. It was agreed that code should become public as soon as practical, and SPI-M would return to SAGE with a proposal on how this would be achieved. ACTION: SPI-M to advise on how to make public the source code for academic models, working with relevant partners’ (18.3.20: 2).

SAGE welcomes releasing names of SAGE participants (if willing) and notes role of Ian Boyd as ‘independent challenge function’ (28.4.20: 1)

SAGE also describes the need for a better system to allow SAGE participants to function effectively and with proper support (given the immense pressure/ strain on their time and mental health) (7.5.20: 1)

  3. There were growing concerns that ministers would blame their advisers for poor choices (compare Freedman and Snowdon) or at least use science advice as ‘an insurance policy’, and
  4. There was some debate about the appropriateness of Dominic Cummings (Prime Minister Boris Johnson’s special adviser) attending some meetings.

Therefore, its official description reflects its initial role plus a degree of clarification on the role of science advice mechanisms during the COVID-19 pandemic. The SAGE webpage on the gov.uk site describes its role as:

‘provides scientific and technical advice to support government decision makers during emergencies … SAGE is responsible for ensuring that timely and coordinated scientific advice is made available to decision makers to support UK cross-government decisions in the Cabinet Office Briefing Room (COBR). The advice provided by SAGE does not represent official government policy’.

Its more detailed explainer describes:

‘SAGE’s role is to provide unified scientific advice on all the key issues, based on the body of scientific evidence presented by its expert participants. This includes everything from latest knowledge of the virus to modelling the disease course, understanding the clinical picture, and effects of and compliance with interventions. This advice together with a descriptor of uncertainties is then passed onto government ministers. The advice is used by Ministers to allow them to make decisions and inform the government’s response to the COVID-19 outbreak …

The government, naturally, also considers a range of other evidence including economic, social, and broader environmental factors when making its decisions…

SAGE is comprised of leading lights in their representative fields from across the worlds of academia and practice. They do not operate under government instruction and expert participation changes for each meeting, based on the expertise needed to address the crisis the country is faced with …

SAGE is also attended by official representatives from relevant parts of government. There are roughly 20 such officials involved in each meeting and they do not frequently contribute to discussions, but can play an important role in highlighting considerations such as key questions or concerns for policymakers that science needs to help answer or understanding Civil Service structures. They may also ask for clarification on a scientific point’ (emphasis added by yours truly).

Note that the number of participants can be around 60 people, which makes it more like an assembly, with presentations and a modest amount of discussion, than a decision-making body (the Zoom meeting on 4.6.20 lists 76 participants). Even a Cabinet meeting has about 20 members, and that is too many for coherent discussion/ action (hence separate, smaller, committees).

Further, each set of now-published minutes contains an ‘addendum’ to clarify its operation. For example, its first minutes in 2020 seek to clarify the role of participants. Note that the participants change somewhat at each meeting (see the full list of members/ attendees), and some names are redacted. Dominic Cummings’ name only appears (I think) on 5.3.20, 14.4.20, and two meetings on 1.5.20 (although, as Freedman notes, ‘his colleague Ben Warner was a more regular presence’).

SAGE minutes 1 addendum 22.1.20

More importantly, the minutes from late February begin to distinguish between three types of potential science advice:

  1. to describe the size of the problem (e.g. surveillance of cases and trends, estimating a reasonable worst case scenario)
  2. to estimate the relative impact of many possible interventions (e.g. restrictions on travel, school closures, self-isolation, household quarantine, and social distancing measures)
  3. to recommend the level and timing of state action to achieve compliance in relation to those interventions.

SAGE focused primarily on roles 1 and 2, arguing against role 3 on the basis that state intervention is a political choice to be taken by ministers. Ministers are responsible for weighing up the potential public health benefits of each measure in relation to their social and economic costs (see also: The relationship between science, science advice, and policy).

Example 1: setting boundaries between advice and strategy

  • ‘It is a political decision to consider whether it is preferable to enact stricter measures at first, lifting them gradually as required, or to start with fewer measures and add further measures if required. Surveillance data streams will allow real-time monitoring of epidemic growth rates and thus allow approximate evaluation of the impact of whatever package of interventions is implemented’ (Meeting paper 26.2.20b: 1)

This example highlights a limitation in performing role 2 to inform 3: SAGE would not be able to compare the relative impact of measures without knowing their level of imposition and its impact on compliance. Further, the way in which it addressed this problem is crucial to our interpretation and evaluation of the timing and substance of the UK government’s response.

In short, it simultaneously assumed away and maintained attention to this problem by stating:

  • ‘The measures outlined below assume high levels of compliance over long periods of time. This may be unachievable in the UK population’ (26.2.20b: 1).
  • ‘advice on interventions should be based on what the NHS needs and what modelling of those interventions suggests, not on the (limited) evidence on whether the public will comply with the interventions in sufficient numbers and over time’ (16.3.20: 1)

The assumption of high compliance reduces the need for SAGE to make distinctions between terms such as mitigation versus suppression (see also: Confusion about the language of intervention and stages of intervention). However, it contributes to confusion within wider debates on UK action (see Theme 1. The language of intervention).

Example 2: setting boundaries between advice and value judgements

  • ‘SAGE has not provided a recommendation of which interventions, or package of interventions, that Government may choose to apply. Any decision must consider the impacts these interventions may have on society, on individuals, the workforce and businesses, and the operation of Government and public services’ (Meeting paper 4.3.20a: 1).

To all intents and purposes, SAGE is noting that governments need to make value-based choices to:

  1. Weigh up the costs and benefits of any action (as described by Layard et al, with reference to wellbeing measures and the assumed price of a life), and
  2. Decide whose wellbeing, and lives, matter the most (because any action or inaction will have unequal consequences across a population).

In other words, policy analysis is one part evidence and one part value judgement. Both elements are contested in different ways, and different questions inform political choices (e.g. whose knowledge counts versus whose wellbeing counts?).

[see also:

  • ‘Determining a tolerable level of risk from imported cases requires consideration of a number of non-science factors and is a policy question’ (28.4.20: 3)
  • ‘SAGE reemphasises that its own focus should always be on providing clear scientific advice to government and the principles behind that advice’ (7.5.20: 1)]

Future reflections

Any future inquiry will be heavily contested, since policy learning and evaluation are political acts (and the best way to gather and use evidence during a pandemic is highly contested).  Still, hopefully, it will promote reflection on how, in practice, governments and advisory bodies negotiate the blurry boundary between scientific advice and political choice when they are so interdependent and rely so heavily on judgement in the face of ambiguity and uncertainty (or ‘radical uncertainty’). I discuss this issue in the next post, which highlights the ways in which UK ministers relied on SAGE (and advisers) to define the policy problem.



COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

SAGE explainer

SAGE is the Scientific Advisory Group for Emergencies. The text up there comes from the UK Government description. SAGE is the main venue to coordinate science advice to the UK government on COVID-19, including from NERVTAG (the New and Emerging Respiratory Virus Threats Advisory Group, reporting to PHE) and the sub-groups on modelling (SPI-M, the Scientific Pandemic Influenza Group on Modelling) and behaviour (SPI-B), which supply meeting papers to SAGE.

I have summarized SAGE’s minutes (41 meetings, from 22 January to 11 June) and meeting/ background papers (125 papers, estimated range 1-51 pages, median 4, not-peer-reviewed, often produced a day after a request) in a ridiculously long table. This thing is huge (40 pages and 20000 words). It is the sequoia table. It is the humongous fungus. Even Joey Chestnut could not eat this table in one go. To make your SAGE meal more palatable, here is a series of blog posts that situate these minutes and papers in their wider context. This initial post is unusually long, so I’ve put in a photo to break it up a bit.

Did the UK government ‘follow the science’?

I use the overarching question Did the UK Government ‘follow the science’? initially for the clickbait. I reckon that, like a previous favourite (people have ‘had enough of experts’), ‘following the science’ is a phrase used by commentators more frequently than by its original users. It is easy to google and find some valuable commentaries with that hook (Devlin & Boseley, Siddique, Ahuja, Stevens, Flinders, Walker, FT; see also Vallance) but also find ministers using a wider range of messages with more subtle verbs and metaphors:

  • ‘We will take the right steps at the right time, guided by the science’ (Prime Minister Boris Johnson, 3.20)
  • ‘We will be guided by the science’ (Health Secretary Matt Hancock, 2.20)
  • ‘At all stages, we have been guided by the science, and we will do the right thing at the right time’ (Johnson, 3.20)
  • ‘The plan is driven by the science and guided by the expert recommendations of the 4 UK Chief Medical Officers and the Scientific Advisory Group for Emergencies’ (Hancock, 3.20)
  • ‘The plan does not set out what the government will do, it sets out the steps we could take at the right time along the basis of the scientific advice’ (Johnson, 3.20).

Still, ministers are clearly using ‘the science’ as a rhetorical device, and the phrase raises many questions or objections, including:

  1. There is no such thing as ‘the science’.

Rather, there are many studies described as scientific (generally with reference to a narrow range of accepted methods), and many people described as scientists (with reference to their qualifications and expertise). The same can be said for the rhetorical phrase ‘the evidence’ and the political slogan ‘evidence based policymaking’ (which often comes with its notionally opposite political slogan ‘policy based evidence’). In both cases, a reference to ‘the science’ or ‘the evidence’ often signals one or both of:

  • a particular, restrictive, way to describe evidence that lives up to a professional quality standard created by some disciplines (e.g. based on a hierarchy of evidence, in which the systematic review of randomized control trials is often at the top)
  • an attempt by policymakers to project their own governing competence, relative certainty, control, and authority, with reference to another source of authority

2. Ministers often mean ‘following our scientists’

PM press conference with Vallance and Whitty, 12.3.20

When Johnson (12.3.20) describes being ‘guided by the science’, he is accompanied by Professor Patrick Vallance (Government Chief Scientific Adviser) and Professor Chris Whitty (the UK government’s Chief Medical Adviser). Hancock (3.3.20) describes being ‘guided by the expert recommendations of the 4 UK Chief Medical Officers and the Scientific Advisory Group for Emergencies’.

In other words, following ‘the science’ means ‘following the advice of our scientific advisors’, via mechanisms such as SAGE.

As the SAGE minutes and meeting papers show, government scientists and SAGE participants necessarily tell a partial story about the relevant evidence from a particular perspective (note: this is not a criticism of SAGE; it is a truism). Other interpreters of evidence, and sources of advice, are available.

Therefore, the phrase ‘guided by the science’ is, in practice, a way to:

  • narrow the search for information (and pay selective attention to it)
  • close down, or set the terms of, debate
  • associate policy with particular advisors or advisory bodies, often to give ministerial choices more authority, and often as ‘an insurance policy’ to take the heat off ministers.

3. What exactly is ‘the science’ guiding?

Let’s make a simple distinction between two types of science-guided action. Scientists provide evidence and advice on:

  1. the scale and urgency of a potential policy problem, such as describing and estimating the incidence and transmission of coronavirus
  2. the likely impact of a range of policy interventions, such as contact tracing, self-isolation, and regulations to oblige social distancing

In both cases, let’s also distinguish between science advice to reduce uncertainty and ambiguity:

  • Uncertainty describes a lack of knowledge or a worrying lack of confidence in one’s knowledge.
  • Ambiguity describes the ability to entertain more than one interpretation of a policy problem.

Put both together to produce a wide range of possibilities for policy ‘guided by the science’, from (a) simply providing facts to help reduce uncertainty on the incidence of coronavirus (minimal), to (b) providing information and advice on how to define and try to solve the policy problem (maximal).

If so, note that being guided by science does not signal more or less policy change. Ministers can use scientific uncertainty to defend limited action, or use evidence selectively to propose rapid change. In either case, they can argue – sincerely – that they are guided by science. Therefore, critically analyzing the phraseology of ministers is only a useful first step. Next, we need to identify the extent to which scientific advisors and advisory bodies, such as SAGE, guided ministers.

The role of SAGE: advice on evidence versus advice on strategy and values

In that context, the next post examines the role of SAGE.

It shows that, although science advice to government is necessarily political, the coronavirus has heightened attention to science and advice, and you can see the (subtle and not-so-subtle) ways in which SAGE members and its secretariat are dealing with its unusually high level of politicization. SAGE has responded by clarifying its role, and trying to set boundaries between:

  • Advice versus strategy
  • Advice versus value judgements

These aims are understandable, but difficult to achieve in theory (the fact/value distinction is impossible to sustain) and in practice (policymakers may not go along with the distinction anyway). I argue that this approach also had some unintended consequences, which should prompt further reflection on facts-versus-values science advice during crises.

The ways in which UK ministers followed SAGE advice

With these caveats in mind, my reading of this material is that UK government policy was largely consistent with SAGE evidence and advice in the following ways:

  1. Defining the policy problem

This post (and a post on oral evidence to the Health and Social Care Committee) identifies the consistency of the overall narrative underpinning SAGE advice and UK government policy. It can be summed up as follows (although the post provides a more expansive discussion):

  1. coronavirus represents a long term problem with no immediate solution (such as a vaccine) and minimal prospect of extinction/ eradication
  2. use policy measures – on isolation and social distancing – to flatten the first peak of infection and avoid overwhelming health service capacity
  3. don’t impose or relax measures too quickly (which will cause a second peak of infection)
  4. reflect on the balance between (a) the positive impact of lockdown (on the incidence and rate of transmission), (b) the negative impact of lockdown (on freedom, physical and mental health, and the immediate economic consequences).

While SAGE minutes suggest a general reluctance to comment too much on point 4, government discussions were underpinned by points 1-3. For me, this context is the most important. It provides a lens through which to understand all SAGE advice: how it shapes, and is shaped by, UK government policy.

  2. The timing and substance of interventions before lockdown, maintenance of lockdown for several months, and gradual release of lockdown measures

This post presents a long chronological story of SAGE minutes and papers, divided by month (and, in March, by each meeting). Note the unusually high levels of uncertainty from the beginning. The lack of solid evidence, available to SAGE at each stage, can only be appreciated fully if you read the minutes from 1 to 41. Or, you know, take my word for it.

In January, SAGE discusses uncertainty about human-to-human transmission and associates coronavirus strongly with Wuhan in China (albeit while developing initially-good estimates of R, doubling rate, incubation period, window of infectivity, and symptoms). In February, it had more data on transmission but described high uncertainty on what measures might delay or reduce the impact of the epidemic. In March, it focused on preparing for the peak of infection on the assumption that it had time to transition gradually towards a series of isolation and social distancing measures. This approach began to change from mid-March, when it became clear that the number of people infected was much larger, and the rate of transmission much faster, than expected.

In other words, the Prime Minister’s declarations – of emergency on 16.3.20 and of lockdown on 23.3.20 – did not lag behind SAGE advice (and it would not be outrageous to argue that it went ahead of it).
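The scale of that mid-March surprise is partly arithmetic: with unchecked exponential growth, case numbers multiply by 2^(t/Td) over t days, so even a modest error in the estimated doubling time (Td) compounds quickly. A minimal sketch, using illustrative figures rather than numbers from the SAGE papers:

```python
def growth_factor(doubling_time_days: float, horizon_days: float) -> float:
    """Factor by which case numbers multiply over `horizon_days`,
    assuming unchecked exponential growth with the given doubling time."""
    return 2 ** (horizon_days / doubling_time_days)

# Illustrative (assumed) figures: an early estimate of a ~6-day doubling
# time versus a realised doubling time closer to 3 days.
slow = growth_factor(6, 14)  # roughly a 5-fold rise over a fortnight
fast = growth_factor(3, 14)  # roughly a 25-fold rise over the same fortnight
```

On these assumed numbers, two weeks of the faster growth leaves roughly five times as many cases as the slower estimate would predict, which is one way to read the criticism that underestimating R and the doubling time contributed to a two-week delay of lockdown.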

It is more difficult to describe the consistency between UK government policy and SAGE advice in relation to the relaxation of lockdown measures.

SAGE’s minutes and meeting papers describe very low certainty about what will happen after the release of lockdown. They do not hide this unusually high level of uncertainty, and they use models (built on assumptions) to generate scenarios rather than to estimate what will happen. In this sense, ‘following the science’ could relate to (a) a level of buy-in for this kind of approach, and (b) making choices when scientific groups cannot offer much (if any) advice on what to do or what will happen. Reopening schools is a key example, since SPI-M and SPI-B focused intensely on the issue, but their conclusions could not underpin a specific UK government choice.

There are two ways to interpret what happened next.

First, there will always be a mild gap between hesitant SAGE advice and ministerial action. SAGE advice tends to be based on the amount and quality of evidence to support a change, which meant it was hesitant to recommend (a) a full lockdown and (b) a release from lockdown. Just as UK government policy seemed to go ahead of the evidence to enter lockdown on the 23rd March, so too does it seem to go ahead of the cautious approach to relaxing it.

Second, UK ministers are currently going too far ahead of the evidence. SPI-M papers state repeatedly that the too-quick release of measures will cause the R to go above 1 (some papers describe it reaching 1.7; some graphs model scenarios up to 3).

  3. The use of behavioural insights to inform and communicate policy

In March, you can find a lot of external debate about the appropriate role for ‘behavioural science’ and ‘behavioural public policy’ (BPP) (in other words, using insights from psychology to inform policy). Part of the initial problem related to the lack of transparency of the UK government, which prompted concerns that ministers were basing choices on limited evidence (see Hahn et al, Devlin, Mills). Oliver also describes initial confusion about the role of BPP when David Halpern became mildly famous for describing the concept of ‘herd immunity’ rather than sticking to psychology.

External concern focused primarily on the argument that the UK government (and many other governments) used the idea of ‘behavioural fatigue’ to justify delayed or gradual lockdown measures. In other words, if you do it too quickly and for too long, people will tire of it and break the rules.

Yet, this argument about fatigue is not a feature of the SAGE minutes and SPI-B papers (indeed, Oliver wonders if the phrase came from Whitty, based on his experience of people tiring of taking medication).

Rather, the papers tend to emphasise:

  • There is high uncertainty about behavioural change in key scenarios, and this reference to uncertainty should inform any choice on what to do next.
  • The need for effective and continuous communication with citizens, emphasizing transparency, honesty, clarity, and respect, to maintain high trust in government and promote a sense of community action (‘we are all in this together’).

John and Stoker argue that ‘much of behavioural science lends itself to’ a ‘top-down approach because its underlying thinking is that people tend to be limited in cognitive terms, and that a paternalistic expert-led government needs to save them from themselves’. Yet, my overall impression of the SPI-B (and related) work is that (a) although SPI-B is often asked to play that role, to address how to maximize adherence to interventions (such as social distancing), (b) its participants try to encourage the more deliberative or collaborative mechanisms favoured by John and Stoker (particularly when describing how to reopen schools and redesign work spaces). If so, my hunch is that they would not be as confident that UK ministers were taking their advice consistently (for example, throughout Table 2, have a look at the need to provide a consistent narrative on two different propositions: we are all in this together, but the impact of each action/inaction will be profoundly unequal).

Expanded themes in SAGE minutes

Throughout this period, I think that one – often implicit – theme is that members of SAGE focused quite heavily on what seemed politically feasible to suggest to ministers, and for ministers to suggest to the public (while also describing technical feasibility – i.e. will it work as intended if implemented?). Generally, SAGE seemed to anticipate policymaker concern about, and any unintended public reactions to, a shift towards more social regulation. For example:

‘Interventions should seek to contain, delay and reduce the peak incidence of cases, in that order. Consideration of what is publicly perceived to work is essential in any decisions’ (25.2.20: 1)

Put differently, it seemed to operate within the general confines of what might work in a UK-style liberal democracy characterised by relatively low social regulation. This approach is already a feature of The overall narrative underpinning SAGE advice and UK government policy, and the remaining posts highlight key themes that arise in that context.

They include the three themes addressed in the remaining posts: the language of intervention; the limited capacity for testing, forecasting, and challenging assumptions; and communicating to the public.

Delaying the inevitable

All of these shorter posts delay your reading of a ridiculously long table summarizing each meeting’s discussion and advice/ action points (Table 2, which also includes a way to chase up the referencing in the blog posts: dates alone refer to SAGE minutes; multiple meeting papers are listed as a, b, c if they have the same date stamp rather than same authors).


Further reading

It is part of a wider project, in which you can also read about:

  • The early minutes from NERVTAG (the New and Emerging Respiratory Virus Threats Advisory Group)
  • Oral evidence to House of Commons committees, beginning with Health and Social Care

I hope to get through all of this material (and equivalent material in the devolved governments) somehow, but also to find time to live, love, eat, and watch TV, so please bear with me if you want to know what happened but don’t want to do all of the reading to find out.

If you would rather just read all of this discussion in one document:

The whole thing in PDF

Table 2 in PDF

The whole thing as a Word document

Table 2 as a Word document

If you would like some other analyses, compare with:

  • Freedman (7.6.20) ‘Where the science went wrong. Sage minutes show that scientific caution, rather than a strategy of “herd immunity”, drove the UK’s slow response to the Covid-19 pandemic’. Concludes that ‘as the epidemic took hold the government was largely following Sage’s advice’, and that the government should have challenged key parts of that advice (to ensure an earlier lockdown).
  • More or Less (1.7.20) ‘Why Did the UK Have Such a Bad Covid-19 Epidemic?’. Relates the delays in ministerial action to inaccurate scientific estimates of the doubling time of infection (discussed further in Theme 2).
  • Both Freedman and More or Less focus on the mishandling of care home safety, exacerbated by transfers from hospital without proper testing.
  • Snowden (28.5.20) ‘The lockdown’s founding myth. We’ve forgotten that the Imperial model didn’t even call for a full lockdown’. Challenges the argument that ministers dragged their feet while scientists were advising quick and extensive interventions (an argument he associates with Calvert et al (23.5.20) ‘22 days of dither and delay on coronavirus that cost thousands of British lives’). Rather, ministers were following SAGE advice, and the lockdown in Italy had a far bigger impact on ministers (since it changed what seemed politically feasible).
  • Greg Clark MP (chair of the House of Commons Science and Technology Committee), ‘Between science and policy – Scrutinising the role of SAGE in providing scientific advice to government’


Filed under COVID-19, Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, UK politics and policy

COVID-19 policy in the UK: yes, the UK Government did ‘follow the science’

In this post, ‘following the science’ describes UK ministers taking the advice of their scientific advisers and SAGE (the Scientific Advisory Group for Emergencies).

So, were UK ministers ‘guided by the expert recommendations of the 4 UK Chief Medical Officers and the Scientific Advisory Group for Emergencies’?

The short answer is yes.

They followed advice in two profoundly important ways:

  1. Defining coronavirus as a policy problem.

My reading of the SAGE minutes and meeting papers identifies the consistency of the overall narrative underpinning SAGE advice and UK government policy. It can be summed up as follows:

  1. coronavirus represents a long-term problem with no immediate solution (such as a vaccine) and minimal prospect of extinction/eradication
  2. use policy measures – on isolation and social distancing – to flatten the first peak of infection and avoid overwhelming health service capacity
  3. don’t impose or relax measures too quickly (which will cause a second peak of infection)
  4. reflect on the balance between (a) the positive impact of lockdown (on the incidence and rate of transmission), (b) the negative impact of lockdown (on freedom, physical and mental health, and the immediate economic consequences).

If you examine UK ministerial speeches and SAGE minutes, you will find very similar messages: a coronavirus epidemic is inevitable, we need to ease gradually into suppression measures to avoid a second peak of infection as big as the first, and our focus is exhortation and encouragement over imposition.

  2. The timing and substance of interventions before lockdown

I describe a long chronological story of SAGE minutes and papers. Its main theme is an unusually high level of uncertainty from the beginning. The lack of solid evidence available to SAGE at each stage should not be dismissed.

In January, SAGE discussed uncertainty about human-to-human transmission and associated coronavirus strongly with Wuhan in China. In February, it had more data on transmission but described high uncertainty about which measures might delay or reduce the impact of the epidemic. In March, it focused on preparing for the peak of infection, on the assumption that it had time to transition gradually towards a series of isolation and social distancing measures. This approach began to change from mid-March, when it became clear that the number of people infected was much larger, and the rate of transmission much faster, than expected.

Therefore, the Prime Minister’s declarations – of emergency on 16.3.20 and of lockdown on 23.3.20 – did not lag behind SAGE advice. It would not be outrageous to argue that government policy went ahead of that advice, at least as recorded in SAGE minutes and meeting papers (compare with Freedman, Snowden, More or Less).

The long answer

If you would like the long answer, I can offer you 35280 words, including a 22380-word table summarizing the SAGE minutes and meeting papers (meetings 1-41, 22.1.20-11.6.20).

It includes the full list of SAGE posts:

COVID-19 policy in the UK: yes, the UK Government did ‘follow the science’

Did the UK Government ‘follow the science’? Reflections on SAGE meetings

The role of SAGE and science advice to government

The overall narrative underpinning SAGE advice and UK government policy

SAGE meetings from January-June 2020

SAGE Theme 1. The language of intervention

SAGE Theme 2. Limited capacity for testing, forecasting, and challenging assumptions

SAGE Theme 3. Communicating to the public

COVID-19 policy in the UK: Table 2: Summary of SAGE minutes, January-June 2020

Further reading

So far, the wider project includes:

  • The early minutes from NERVTAG (the New and Emerging Respiratory Virus Threats Advisory Group)
  • Oral evidence to House of Commons committees, beginning with Health and Social Care

I am also writing a paper based on this post, but don’t hold your breath.


Filed under COVID-19, Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy

Coronavirus and the ‘social determinants’ of health inequalities: lessons from ‘Health in All Policies’ initiatives

Many public health bodies are responding to the crisis by shifting their attention and resources from (1) a long-term strategic focus on reducing non-communicable diseases (such as heart disease, cancers, and diabetes) to (2) the coronavirus pandemic.

Of course, these two activities are not mutually exclusive, and smoking provides the most high-profile example of short-term and long-term warnings coming together (see Public Health England’s statement that ‘Emerging evidence from China shows smokers with COVID-19 are 14 times more likely to develop severe respiratory disease’).

There are equally important lessons – such as on health equity – from the experiences of longer-term and lower-profile ‘preventive’ public health agendas such as ‘Health in All Policies’ (HIAP).*

What is ‘Health in All Policies’?

HIAP is a broad (and often imprecise) term to describe:

  1. The policy problem. Address the ‘social determinants’ of health, defined by the WHO as ‘the unfair and avoidable differences in health status … shaped by the distribution of money, power and resources [and] the conditions in which people are born, grow, live, work and age’.
  2. The policy solutions. Identify a range of policy instruments, including redistributive measures to reduce economic inequalities, distributive measures to improve public services and the physical environment (including housing), regulations on commercial and individual behaviour, and health promotion via education and learning.
  3. The policy style. An approach to policymaking that encourages meaningful collaboration across multiple levels and types of government, and between governmental and non-governmental actors (partly because most policy solutions to improve health are not in the gift of health departments).
  4. Political commitment and will. High level political support is crucial to the production of a holistic strategy document, and to dedicate resources to its delivery, partly via specialist organisations and the means to monitor and evaluate progress.

As two distinctive ‘Marmot reviews’ demonstrate, this problem (and its potential solutions) can be described differently in relation to a single country (such as England) or the world as a whole.

Either way, each of the 4 HIAP elements highlights issues that intersect with the impact of the coronavirus: COVID-19 has a profoundly unequal impact on populations; there will be a complex mix of policy instruments to address it, and many responses will not be by health departments; an effective response requires intersectoral government action and high stakeholder and citizen ownership; and, we should not expect current high levels of public, media, and policymaker attention and commitment to continue indefinitely or help foster health equity (indeed, even well-meaning policy responses may exacerbate health inequalities). 

A commitment to health equity, or the reduction of health inequalities

At the heart of HIAP is a commitment to health equity and to reduce health inequalities. In that context, the coronavirus provides a stark example of the impact of health inequalities, since (a) people with underlying health conditions are the most vulnerable to major illness and death, and (b) the spread of underlying health conditions is unequal in relation to factors such as income and race or ethnicity. Further, there are major inequalities in relation to exposure to physical and economic risks.

A focus on the social determinants of health inequalities

A ‘social determinants’ focus helps us to place individual behaviour in a wider systemic context. It is tempting to relate health inequalities primarily to ‘lifestyles’ and individual choices, in relation to healthy eating, exercise, and the avoidance of smoking and alcohol. However, the most profound impacts on population health can come from (a) environments largely outside of an individual’s control (e.g. in relation to threats from others, such as pollution or violence), (b) levels of education and employment, and (c) economic inequality, influencing access to warm and safe housing, high quality water and nutrition, choices on transport, and access to safe and healthy environments.

In that context, the coronavirus provides stark examples of major inequalities in relation to self-isolation and social distancing: some people have access to food, private spaces to self-isolate, and open places to exercise away from others; many people have insufficient access to food, no private space, and few places to go outside (also note the disparity in resources between countries).

The pursuit of intersectoral action

A key aspect of HIAP is to identify the ways in which non-health sectors contribute to health. Classic examples include a focus on the sectors that influence early access to high quality education, improving housing and local environments, reducing vulnerability to crime, and reforming the built environment to foster sustainable public transport and access to healthy air, water, and food.

The response to the coronavirus also appears to be a good advert for the potential for intersectoral governmental action, demonstrating that measures with profound impacts on health and wellbeing are made in non-health sectors, including: treasury departments subsidising business and wages, and funding additional healthcare; transport departments regulating international and domestic travel; social care departments responsible for looking after vulnerable people outside of healthcare settings; and, police forces regulating social behaviour.

However, most (relevant) HIAP studies identify a general lack of effective intersectoral government action, related largely to a tendency towards ‘siloed’ policymaking within each department, exacerbated by ‘turf wars’ between departments (even if they notionally share the same aims) and a tendency for health departments to be low status, particularly in relation to economic departments (also note the frequently used term ‘health imperialism’ to describe scepticism about public health in other sectors).  Some studies highlight the potential benefits of ‘win-win’ strategies to persuade non-health sectors that collaboration on health equity also helps deliver their core business (e.g. Molnar et al 2015), but the wider public administration literature is more likely to identify a history of unsuccessful initiatives with a cumulative demoralising effect (e.g. Carey and Crammond, 2015; Molenveld et al, 2020).  

The pursuit of wider collaboration

HIAP ambitions extend to ‘collaborative’ or ‘co-produced’ forms of governance, in which citizens and stakeholders work with policymakers in health and non-health sectors to define the problem of health inequalities and inform potential solutions. These methods can help policymakers make sense of broad HIAP aims through the eyes of citizens, produce priorities that were not anticipated in a desktop exercise, help non-health sector workers understand their role in reducing health inequalities, and help reinforce the importance of collaborative and respectful ways of working.

An excellent example comes from Corburn et al’s (2014) study of Richmond, California’s statutory measures to encourage HIAP. They describe ‘coproducing health equity in all policies’ with initial reference to WHO definitions, but then to social justice in relation to income and wealth, which differs markedly according to race and immigration status. The study then reports on a series of community discussions to identify key obstacles to health:

“For example, Richmond residents regularly described how, in the same day, they might experience or fear violence, environmental pollution, being evicted from housing, not being able to pay health care bills, discrimination at work or in school, challenges accessing public services, and immigration and customs enforcement (ICE) intimidation … Also emerging from the workshops and health equity discussions was that one of the underlying causes of the multiple stressors experienced in Richmond was structural racism. By structural racism we meant that seemingly neutral policies and practices can function in racist ways by disempowering communities of color and perpetuating unequal historic conditions” (2014: 627-8).

Yet, a tiny proportion of HIAP studies identify this level of collaboration and new knowledge feeding into policy agendas to address health equity.

The cautionary tale: HIAP does not cause health equity

Rather, most of the peer-reviewed academic HIAP literature identifies a major gap between high expectations and low implementation. Most studies identify an urgent and strong impetus for policy action to be proportionate to the size of the policy problem, and ideas about the potential implementation of a HIAP agenda when agreed, but no studies identify implementation success in relation to health equity. In fact, the two most-discussed examples – in Finland and South Australia – seem to describe a successful reform of processes that have a negligible impact on equity.  

A window of opportunity for what?

It is common in the public health field to try to identify ‘windows of opportunity’ to adopt (a) HIAP in principle, and (b) specific HIAP-friendly policy instruments. It is also common to try to identify the factors that would aid HIAP implementation, and to assume that this success would have a major impact on the social determinants of health inequalities. Yet, the cumulative experience from HIAP studies is that governments can pursue health promotion and intersectoral action without reducing health inequalities.

For me, this is the context for current studies of the unequal impact of the coronavirus across the globe and within each country. In some cases, there are promising discussions of major policymaking reforms, or of using the current crisis as an impetus for social justice as well as crisis response. Yet, the history of the pursuit of HIAP-style reforms should help us reject the simple notion that some people saying the right things will make that happen. Instead, right now, it seems more likely that – in the absence of significantly new action** – the same people and systems that cause inequalities will undermine attempts to reduce them. In other words, health equity will not happen simply because it seems like the right thing to do. Rather, it is a highly contested concept, and many people will use their power to make sure that it does not happen, even if they claim otherwise.

*These are my early thoughts based on work towards a (qualitative) systematic review of the HIAP literature, in partnership with Emily St Denny, Sean Kippin, and Heather Mitchell.

**No, I do not know what that action would be. There is no magic formula to which I can refer.


Filed under COVID-19, Prevention policy, Public health, public policy

The coronavirus and evidence-informed policy analysis (short version)

The coronavirus feels like a new policy problem that requires new policy analysis. The analysis should be informed by (a) good evidence, translated into (b) good policy. However, don’t be fooled into thinking that either of those things is straightforward. There are simple-looking steps to go from defining a problem to making a recommendation, but this simplicity masks the profoundly political process that must take place. Each step in analysis involves political choices to prioritise some problems and solutions over others, and therefore prioritise some people’s lives at the expense of others.

The very-long version of this post takes us through those steps in the UK, and situates them in a wider political and policymaking context. This post is shorter, and only scratches the surface of analysis.

5 steps to policy analysis

  1. Define the problem.

Perhaps we can sum it up as: (a) the impact of this virus and illness will be a level of death and illness that could overwhelm the population and exceed the capacity of public services, so (b) we need to contain the virus enough to make sure it spreads in the right way at the right time, so (c) we need to encourage and make people change their behaviour (primarily via hygiene and social distancing). However, there are many ways to frame this problem to emphasise the importance of some populations over others, and some impacts over others.

  2. Identify technically and politically feasible solutions.

Solutions are not really solutions: they are policy instruments that address one aspect of the problem, including taxation and spending, delivering public services, funding research, giving advice to the population, and regulating or encouraging changes to social behaviour. Each new instrument contributes to an existing mix, with unpredictable and unintended consequences. Some instruments seem technically feasible (they will work as intended if implemented), but will not be adopted unless politically feasible (enough people support their introduction). Or vice versa. This dual requirement rules out a lot of responses.

  3. Use values and goals to compare solutions.

Typical judgements combine: (a) broad descriptions of values such as efficiency, fairness, freedom, security, and human dignity, (b) instrumental goals, such as sustainable policymaking (can we do it, and for how long?), and political feasibility (will people agree to it, and will it make me more or less popular or trusted?), and (c) the process to make choices, such as the extent to which a policy process involves citizens or stakeholders (alongside experts) in deliberation. They combine to help policymakers make high-profile choices (such as the balance between individual freedom and state coercion), and low-profile but profound choices (to influence the level of public service capacity, and level of state intervention, and therefore who and how many people will die).

  4. Predict the outcome of each feasible solution.

It is difficult to envisage a way for the UK Government to publicise all of the thinking behind its choices (Step 3) and predictions (Step 4) in a way that would encourage effective public deliberation. People often call for the UK Government to publicise its expert advice and operational logic, but I am not sure how they would separate it from their normative logic about who should live or die, or provide a frank account without unintended consequences for public trust or anxiety. If so, one aspect of government policy is to keep some choices implicit and avoid a lot of debate on trade-offs. Another is to make choices continuously without knowing what their impact will be (the most likely scenario right now).

  5. Make a choice, or recommendation to your client.

Your recommendation or choice would build on these four steps. Define the problem with one framing at the expense of the others. Romanticise some people and not others. Decide how to support some people, and coerce or punish others. Prioritise the lives of some people in the knowledge that others will suffer or die. Do it despite your lack of expertise and profoundly limited knowledge and information. Learn from experts, but don’t assume that only scientific experts have relevant knowledge (decolonise; coproduce). Recommend choices that, if damaging, could take decades to fix after you’ve gone. Consider if a policymaker is willing and able to act on your advice, and if your proposed action will work as intended. Consider if a government is willing and able to bear the economic and political costs. Protect your client’s popularity, and trust in your client, at the same time as protecting lives. Consider if your advice would change if the problem seemed to change. If you are writing your analysis, maybe keep it down to one sheet of paper (in other words, fewer words than in this post up to this point).

Policy analysis is not as simple as these steps suggest, and further analysis of the wider policymaking environment helps describe two profound limitations to simple analytical thought and action.

  1. Policymakers must ignore almost all evidence

The amount of policy relevant information is infinite, and capacity is finite. So, individuals and governments need ways to filter out almost all of it. Individuals combine cognition and emotion to help them make choices efficiently, and governments have equivalent rules to prioritise only some information. They include: define a problem and a feasible response, seek information that is available, understandable, and actionable, and identify credible sources of information and advice. In that context, the vague idea of trusting or not trusting experts is nonsense, and the larger post highlights the many flawed ways in which all people decide whose expertise counts.

  2. They do not control the policy process.

Policymakers engage in a messy and unpredictable world in which no single ‘centre’ has the power to turn a policy recommendation into an outcome.

  • There are many policymakers and influencers spread across a political system. For example, consider the extent to which each government department, devolved governments, and public and private organisations are making their own choices that help or hinder the UK government approach.
  • Most choices in government are made in ‘subsystems’, with their own rules and networks, over which ministers have limited knowledge and influence.
  • The social and economic context, and events, are largely out of their control.

The take home messages (if you accept this line of thinking)

  1. The coronavirus is an extreme example of a general situation: policymakers will always have very limited knowledge of policy problems and control over their policymaking environment. They make choices to frame problems narrowly enough to seem solvable, rule out most solutions as not feasible, make value judgements to try to help some more than others, try to predict the results, and respond when the results do not match their hopes or expectations.
  2. This is not a message of doom and despair. Rather, it encourages us to think about how to influence government, and hold policymakers to account, in a thoughtful and systematic way that does not mislead the public or exacerbate the problem we are seeing. No one is helping their government solve the problem by saying stupid shit on the internet (OK, that last bit was a message of despair).

 

Further reading:

The longer report sets out these arguments in much more detail, with some links to further thoughts and developments.

This series of ‘750 words’ posts summarises key texts in policy analysis and tries to situate policy analysis in a wider political and policymaking context. Note the focus on whose knowledge counts, which is not yet a big feature of this crisis.

These series of 500 words and 1000 words posts (with podcasts) summarise concepts and theories in policy studies.

This page on evidence-based policymaking (EBPM) uses those insights to demonstrate why EBPM is a political slogan rather than a realistic expectation.

These recorded talks relate those insights to common questions asked by researchers: why do policymakers seem to ignore my evidence, and what can I do about it? I’m happy to record more (such as on the topic you just read about) but not entirely sure who would want to hear what.


Filed under 750 word policy analysis, agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, POLU9UK, Prevention policy, Psychology Based Policy Studies, Public health, public policy, Social change, UK politics and policy

The coronavirus and evidence-informed policy analysis (long version)

This is the long version. It is long. Too long to call a blog post. Let’s call it a ‘living document’ that I update and amend as new developments arise (then start turning into a more organised paper). In most cases, I am adding tweets, so the date of the update is embedded. If I add a new section, I will add a date. If you seek specific topics (like ‘herd immunity’), it might be worth doing a search. The short version is shorter.

The coronavirus feels like a new policy problem. Governments already have policies for public health crises, but the level of uncertainty about the spread and impact of this virus seems to be taking it to a new level of policy, media, and public attention. The UK Government’s Prime Minister calls it ‘the worst public health crisis for a generation’.

As such, there is no shortage of opinions on what to do, but there is a shortage of well-considered opinions, producing little consensus. Many people are rushing to judgement and expressing remarkably firm opinions about the best solutions, but their contributions add up to contradictory evaluations, in which:

  • the government is doing precisely the right thing or the completely wrong thing,
  • we should listen to this expert saying one thing or another expert saying the opposite.

Lots of otherwise-sensible people are doing what they bemoan in politicians: rushing to judgement, largely accepting or sharing evidence only if it reinforces that judgement, and/or using their interpretation of any new development to settle scores with their opponents.

Yet, anyone who feels, without uncertainty, that they have the best definition of, and solution to, this problem is a fool. If people are also sharing bad information and advice, they are dangerous fools. Further, as Professor Medley puts it (in the video below), ‘anyone who tells you they know what’s going to happen over the next six months is lying’.

In that context, how can we make sense of public policy to address the coronavirus in a more systematic way?

Studies of policy analysis and policymaking do not solve a policy problem, but they at least give us a language to think it through.

  1. Let’s focus on the UK as an example, and use common steps in policy analysis, to help us think through the problem and how to try to manage it.
  • In each step, note how quickly it is possible to be overwhelmed by uncertainty and ambiguity, even when the issue seems so simple at first.
  • Note how difficult it is to move from Step 1, and to separate Step 1 from the others. It is difficult to define the problem without relating it to the solution (or to the ways in which we will evaluate each solution).
  2. Let’s relate that analysis to research on policymaking, to understand the wider context in which people pay attention to, and try to address, important problems that are largely out of their control.

Throughout, note that I am describing a thought process as simply as I can, not a full examination of relevant evidence. I am highlighting the problems that people face when ‘diagnosing’ policy problems, not trying to diagnose it myself. To do so, I draw initially on common advice from the key policy analysis texts (summaries of the texts that policy analysis students are most likely to read) that simplify the process a little too much. Still, the thought process that it encourages took me hours alone (spread over three days) to produce no real conclusion. Policymakers and advisers, in the thick of this problem, do not have that luxury of time or uncertainty.

See also: Boris Johnson’s address to the nation in full (23.3.20) and press conference transcripts

https://twitter.com/BorisJohnson/status/1246358936585986048

https://twitter.com/BorisJohnson/status/1243496858095411200

https://twitter.com/R_S_P_H/status/1242833029728477188

Step 1 Define the problem

Common advice in policy analysis texts:

  • Provide a diagnosis of a policy problem, using rhetoric and eye-catching data to generate attention.
  • Identify its severity, urgency, cause, and our ability to solve it. Don’t define the wrong problem, such as by oversimplifying.
  • Problem definition is a political act of framing, as part of a narrative to evaluate the nature, cause, size, and urgency of an issue.
  • Define the nature of a policy problem, and the role of government in solving it, while engaging with many stakeholders.
  • ‘Diagnose the undesirable condition’ and frame it as ‘a market or government failure (or maybe both)’.

Coronavirus as a physical problem is not the same as a coronavirus policy problem. To define the physical problem is to identify the nature, spread, and impact of a virus and illness on individuals and populations. To define a policy problem, we identify the physical problem and relate it (implicitly or explicitly) to what we think a government can, and should, do about it. Put more provocatively, it is only a policy problem if policymakers are willing and able to offer some kind of solution.

This point may seem semantic, but it raises a profound question about the capacity of any government to solve a problem like an epidemic, or for governments to cooperate to solve a pandemic. It is easy for an outsider to exhort a government to ‘do something!’ (or ‘ACT NOW!’) and express certainty about what would happen. However, policymakers inside government:

  1. Do not enjoy the same confidence that they know what is happening, or that their actions will have their intended consequences, and
  2. Will think twice about trying to regulate social behaviour under those circumstances, especially when they
  3. Know that any action or inaction will benefit some and punish others.

For example, can a government make people wash their hands? Or, if it restricts gatherings at large events, can it stop people gathering somewhere else, with worse impact? If it closes a school, can it stop children from going to their grandparents to be looked after until it reopens? There are 101 similar questions and, in each case, I reckon the answer is no. Maybe government action has some of the desired impact; maybe not. If you agree, then the question might be: what would it really take to force people to change their behaviour?

See also: Coronavirus has not suspended politics – it has revealed the nature of power (David Runciman)

The answer is: often too much for a government to consider (in a liberal democracy), particularly if policymakers are informed that it will not have the desired impact.

https://twitter.com/AdamJKucharski/status/1238152492178976769

If so, the UK government’s definition of the policy problem will incorporate this implicit question: what can we do if we can influence, but not determine (or even predict well) how people behave?

Uncertainty about the coronavirus plus uncertainty about policy impact

Now, add that general uncertainty about the impact of government to this specific uncertainty about the likely nature and spread of the coronavirus:

https://www.youtube.com/watch?time_continue=350&v=blkDulsgh3Q&feature=emb_logo

A summary of this video suggests:

  • There will be an epidemic (a profound spread to many people in a short space of time), then the problem will be endemic (a long-term, regular feature of life) (see also UK policy on coronavirus COVID-19 assumes that the virus is here to stay).
  • In the absence of a vaccine, the only way to produce ‘herd immunity’ is for most people to be infected and recover

[Note: there is much debate on whether ‘herd immunity’ is or is not government policy. Much of it relates to interpretation, based on levels of trust/distrust in the UK Government, its Prime Minister, and the Prime Minister’s special adviser. I discuss this point below under ‘trial and error policymaking’. See also Who can you trust during the coronavirus crisis? ]

  • The ideal spread involves all well people sharing the virus first, while all vulnerable people (e.g. older people, and/or people with existing health problems that affect their immune systems) are protected in one isolated space. It won’t happen like that, so we are trying to minimise damage in the real world.
  • We mainly track the spread via deaths, with data showing a major spike appearing one month later, so the problem may only seem real to most people when it is too late to change behaviour

https://twitter.com/ChrisGiles_/status/1247458186300456960

https://twitter.com/d_spiegel/status/1248157520943857665

https://twitter.com/d_spiegel/status/1247824140645683205

https://twitter.com/EmergMedDr/status/1250039068890726400

See also: Coronavirus: Government expert defends not closing UK schools (BBC, Sir Patrick Vallance 13th March 2020)

https://twitter.com/DrSamSims/status/1247445729439895555

  • The choice in theory is between a rapid epidemic with a high peak, or a slowed-down epidemic over a longer period, but ‘anyone who tells you they know what’s going to happen over the next six months is lying’.
  • Maybe this epidemic will be so memorable as to shift social behaviour, but so much depends on trying to predict (badly) if individuals will actually change (see also Spiegelhalter on communicating risk).
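As an aside on the ‘herd immunity’ bullet above: the standard back-of-envelope version of that idea is that transmission declines once a proportion 1 − 1/R0 of the population is immune, where R0 is the basic reproduction number. A minimal sketch, assuming homogeneous mixing; the R0 values are illustrative assumptions, not data, and real populations do not mix homogeneously:

```python
# Illustrative only: the classic homogeneous-mixing approximation of the
# herd immunity threshold, HIT = 1 - 1/R0. Real populations mix
# heterogeneously, so treat these figures as rough sketches, not targets.

def herd_immunity_threshold(r0: float) -> float:
    """Proportion of the population that must be immune before each
    infection causes, on average, fewer than one new infection."""
    if r0 <= 1:
        return 0.0  # an epidemic with R0 <= 1 dies out on its own
    return 1 - 1 / r0

# Early-2020 estimates of R0 for this coronavirus clustered around 2-3
# (assumed values for illustration):
for r0 in (1.5, 2.0, 2.5, 3.0):
    print(f"R0 = {r0}: threshold = {herd_immunity_threshold(r0):.0%}")
```

The point of the arithmetic is simply that, without a vaccine, any of these R0 values implies infection in a very large share of the population before herd immunity arrives.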

None of this account tells policymakers what to do, but at least it helps them clarify three key aspects of their policy problem:

  1. The impact of this virus and illness could overwhelm the population, to the extent that it causes mass deaths, causes a level of illness that exceeds the capacity of health services to treat, and contributes to an unpredictable amount of social and economic damage.
  2. We need to contain the virus enough to make sure it (a) spreads at the right speed and/or (b) peaks at the right time. The right speed seems to be: a level that allows most people to recover alone, while the most vulnerable are treated well in healthcare settings that have enough capacity. The right time seems to be the part of the year with the lowest demand on health services (e.g. summer is better than winter). In other words, (a) reduce the size of the peak by ‘flattening the curve’, and/or (b) find the right time of year to address the peak, while (c) anticipating more than one peak.

My impression is that the most frequently-expressed aim is (a) …

https://twitter.com/STVNews/status/1238468179036459008

https://twitter.com/DHSCgovuk/status/1238540941717356548

… while the UK Government’s Deputy Chief Medical Officer also seems to be describing (b):

  3. We need to encourage or coerce people to change their behaviour, to look after themselves (e.g. by handwashing) and forsake their individual preferences for the sake of public health (e.g. by self-isolating or avoiding vulnerable people). Perhaps we can foster social trust and empathy to encourage responsible individual action. Perhaps people will only protect others if obliged to do so (compare Stone; Ostrom; game theory).
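The ‘flatten the curve’ logic in aim 2 can be sketched with a minimal SIR (susceptible-infected-recovered) model: lowering the transmission rate both reduces and delays the epidemic’s peak. All parameter values below are illustrative assumptions, not epidemiological estimates:

```python
# A minimal discrete-time SIR sketch of 'flattening the curve'.
# All parameters are illustrative assumptions, not estimates for COVID-19.

def sir_peak(beta: float, gamma: float = 0.1, days: int = 500) -> tuple[int, float]:
    """Run a simple SIR model; return (day of peak, peak infected fraction).

    beta: daily transmission rate; gamma: daily recovery rate (R0 = beta/gamma).
    """
    s, i, r = 0.999, 0.001, 0.0  # fractions of the population
    peak_day, peak_i = 0, i
    for day in range(1, days + 1):
        new_infections = beta * s * i
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        if i > peak_i:
            peak_day, peak_i = day, i
    return peak_day, peak_i

# 'Do nothing' versus restrictions that reduce contact (a lower beta):
for label, beta in [("unmitigated", 0.3), ("restricted", 0.18)]:
    day, peak = sir_peak(beta)
    print(f"{label}: peak on day {day}, {peak:.0%} infected at once")
```

The restricted run peaks later and lower, which is the whole argument for (a) and (b) in aim 2: the same epidemic, spread over a longer period, places less simultaneous demand on health services.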

See also: From across the Ditch: How Australia has to decide on the least worst option for COVID-19 (Prof Tony Blakely on three bad options: (1) the likelihood of ‘elimination’ of the virus before vaccination is low; (2) an 18-month lock-down will help ‘flatten the curve’; (3) ‘to prepare meticulously for allowing the pandemic to wash through society over a period of six or so months. To tool up the production of masks and medical supplies. To learn as quickly as possible which treatments of people sick with COVID-19 saves lives. To work out our strategies for protection of the elderly and those with a chronic condition (for whom the mortality from COVID-19 is much higher’).

https://twitter.com/luciadambruoso/status/1246361265909444608

https://twitter.com/anandMenon1/status/1246712962519310337

From uncertainty to ambiguity

If you are still with me, I reckon you would have worded those aims slightly differently, right? There is some ambiguity about these broad intentions, partly because there is some uncertainty, and partly because policymakers need to set rather vague intentions to generate the highest possible support for them. However, vagueness is not our friend during a crisis involving such high anxiety. Further, vague aims only delay the inevitable choices that policymakers need to make to turn a complex, multi-faceted problem into something simple enough to describe and manage. The problem may be complex, but our attention focuses on only a small number of its aspects, at the expense of the rest. Examples that have arisen, so far, include whether to accentuate:

  1. The health of the whole population, or of the people who would be affected disproportionately by the illness.
  • For example, the difference in emphasis affects the health advice for the relatively vulnerable (and the balance between exhortation and reassurance).

https://twitter.com/colinrtalbot/status/1238227267471527937?s=09

https://twitter.com/hacscot/status/1240588827829436416?s=09

https://twitter.com/lisatrigg/status/1249670660802187266

 

  2. Inequalities in relation to health, socio-economic status (e.g. income, gender, race, ethnicity), or the wider economy.
  • For example, restrictive measures may reduce the risk of harm to some, but increase the burden on people with no savings or reliable sources of income.
  • For example, some people are hoarding large quantities of home and medical supplies that (a) other people cannot afford, and (b) some people cannot access, despite having higher need.
  • For example, social distancing will limit the spread of the virus (see the nascent evidence), but also produce highly unequal forms of social isolation that increase the risk of domestic abuse (possibly exacerbated by school closures) and undermine wellbeing. Or, there will be major policy changes, such as to the rules to detain people under mental health legislation, regarding abortion, or in relation to asylum (note: some of these tweets are from the US, partly because I’m seeing more attention to race – and the consequence of systematic racism on the socioeconomic inequalities so important to COVID-19 mortality – than in the UK).

See also: COVID-19: how the UK’s economic model contributes towards a mismanagement of the crisis (Carolina Alves and Farwa Sial 30.3.20),

Economic downturn and wider NHS disruption likely to hit health hard – especially health of most vulnerable (Institute for Fiscal Studies 9.4.20),

Don’t be fooled: Britain’s coronavirus bailout will make the rich richer still (Christine Berry 13.4.20)

https://twitter.com/closethepaygap/status/1244579870392422400

https://twitter.com/heyDejan/status/1238944695260233728?s=09

https://twitter.com/TimothyNoah1/status/1240375741809938433

https://twitter.com/politicshome/status/1249236632009691136?s=09

 

https://twitter.com/NPR/status/1246837779474120705?s=09

https://twitter.com/povertyscholar/status/1246487621230092294

https://twitter.com/Yamiche/status/1248028548998344708

https://twitter.com/MalindaSmith/status/1247281226274107392

https://twitter.com/Jas_Athwal/status/1248875273568878592?s=09

https://twitter.com/GKBhambra/status/1248874500764073989

https://twitter.com/sunny_hundal/status/1247454112762990592

https://twitter.com/olivernmoody/status/1248260326140805125

https://twitter.com/boodleoops/status/1246717497308577792

https://twitter.com/MarioLuisSmall/status/1239879542094925825

https://twitter.com/kevinstoneUWE/status/1240000285046640645?s=09

https://twitter.com/colinimckay/status/1240721797731045378?s=09

https://twitter.com/heytherehurley/status/1242113416103432195

https://twitter.com/stellacreasy/status/1244022413865648128

https://twitter.com/NIOgov/status/1246482663738871811

https://twitter.com/refugeecouncil/status/1243842703680471040

https://twitter.com/libertyhq/status/1248173788598013953

https://twitter.com/TheLancet/status/1246039259880054784

https://twitter.com/profhrs/status/1247572112061222914

https://twitter.com/HumzaYousaf/status/1248262165657722885

  • For example, governments cannot ignore the impact of their actions on the economy, however much they emphasise mortality, health, and wellbeing. Most high-profile emphasis was initially on the fate of large and small businesses, and people with mortgages, but a long period of crisis will tip the balance from low income to unsustainable poverty (even prompting Iain Duncan Smith to propose policy change), and why favour people who can afford a mortgage over people scraping the money together for rent?
  3. A need for more communication and exhortation, or for direct action to change behaviour.
  4. The short term (do everything possible now) or the long term (manage behaviour over many months).
  5. How to maintain trust in the UK government when (a) people are more or less inclined to trust the current party of government, and general trust may be quite low, and (b) so many other governments are acting differently from the UK.

https://twitter.com/DrSophieHarman/status/1238893265782530059

https://twitter.com/Sander_vdLinden/status/1242168652180475906?s=09

https://twitter.com/policyatkings/status/1248318259029516289

  • For example, note the visible presence of the Prime Minister, but also his unusually high deference to unelected experts such as (a) UK Government senior scientists providing direct advice to ministers and the public, and (b) scientists drawing on limited information to model behaviour and produce realistic scenarios (we can return to the idea of ‘evidence-based policymaking’ later). This approach is not uncommon with epidemics/ pandemics (LD was then the UK Government’s Chief Medical Officer):

https://twitter.com/AndyBurnhamGM/status/1239153510903619584

  • For example, note how often people are second guessing and criticising the UK Government position (and questioning the motives of Conservative ministers).

See also: Coronavirus: meet the scientists who are now household names

  6. How policy in relation to the coronavirus relates to other priorities (e.g. Brexit, Scottish independence, trade, education, culture).

  7. Who caused, or who is exacerbating, the problem? The answers to such questions help determine which populations are most subject to policy intervention.

  • For example, people often try to lay blame for viruses on certain populations, based on their nationality, race, ethnicity, sexuality, or behaviour (e.g. with HIV).
  • For example, the (a) association between the coronavirus and China and Chinese people (e.g. restrict travel to/ from China; e.g. exacerbate racism), initially overshadowed (b) the general role of international travellers (e.g. place more general restrictions on behaviour), and (c) other ways to describe who might be responsible for exacerbating a crisis.

See also: ‘Othering the Virus‘ by Marius Meinhof

Under ‘normal’ policymaking circumstances, we would expect policymakers to resolve this ambiguity by exercising power to set the agenda and make choices that close off debate. Attention rises at first, a choice is made, and attention tends to move on to something else. With the coronavirus, attention to many different aspects of the problem has been lurching remarkably quickly. The definition of the policy problem often seems to be changing daily or hourly, and more quickly than the physical problem. It will also change many more times, particularly when attention to each personal story of illness or death prompts people to question government policy every hour. If the policy problem keeps changing in these ways, how could a government solve it?

Step 2 Identify technically and politically feasible solutions

Common advice in policy analysis texts:

  • Identify the relevant and feasible policy solutions that your audience/ client might consider.
  • Explain potential solutions in sufficient detail to predict the costs and benefits of each ‘alternative’ (including current policy).
  • Provide ‘plausible’ predictions about the future effects of current/ alternative policies.
  • Identify many possible solutions, then select the ‘most promising’ for further analysis.
  • Identify how governments have addressed comparable problems, and a previous policy’s impact.

Policy ‘solutions’ are better described as ‘tools’ or ‘instruments’, largely because (a) it is rare to expect them to solve a problem, and (b) governments use many instruments (in different ways, at different times) to make policy, including:

  1. Public expenditure (e.g. to boost spending for emergency care, crisis services, medical equipment)
  2. Economic incentives and disincentives (e.g. to reduce the cost of business or borrowing, or tax unhealthy products)
  3. Linking spending to entitlement or behaviour (e.g. social security benefits conditional on working or seeking work, perhaps with the rules modified during crises)
  4. Formal regulations versus voluntary agreements (e.g. making organisations close, or encouraging them to close)
  5. Public services: universal or targeted, free or with charges, delivered directly or via non-governmental organisations
  6. Legal sanctions (e.g. criminalising reckless behaviour)
  7. Public education or advertising (e.g. as paid adverts or via media and social media)
  8. Funding scientific research, and organisations to advise on policy
  9. Establishing or reforming policymaking units or departments
  10. Behavioural instruments, to ‘nudge’ behaviour (seemingly a big feature in the UK, such as on how to encourage handwashing).

As a result, what we call ‘policy’ is really a complex mix of instruments adopted by one or more governments. A truism in policy studies is that it is difficult to define or identify exactly what policy is because (a) each new instrument adds to a pile of existing measures (with often-unpredictable consequences), and (b) many instruments designed for individual sectors tend, in practice, to intersect in ways that we cannot always anticipate. When you think through any government response to the coronavirus, note how every measure is connected to many others.

Further, it is a truism in public policy that there is a gap between technical and political feasibility: the things that we think will be most likely to work as intended if implemented are often the things that would receive the least support or most opposition. For example:

  1. Redistributing income and wealth to reduce socio-economic inequalities (e.g. to allay fears about the impact of current events on low-income and poverty) seems to be less politically feasible than distributing public services to deal with the consequences of health inequalities.
  2. Providing information and exhortation seems more politically feasible than the direct regulation of behaviour. Indeed, compared to many other countries, the UK Government seems reluctant to introduce ‘quarantine’ style measures to restrict behaviour.

Under ‘normal’ circumstances, governments may be using these distinctions as simple heuristics to help them make modest policy changes while remaining sufficiently popular (or at least looking competent). If so, they are adding or modifying policy instruments during individual ‘windows of opportunity’ for specific action, or perhaps contributing to the sense of incremental change towards an ambitious goal.

Right now, we may be pushing the boundaries of what seems possible, since crises – and the need to address public anxiety – tend to change what seems politically feasible. However, many options that seem politically feasible may not be possible (e.g. to buy a lot of extra medical/ technology capacity quickly), or may not work as intended (e.g. to restrict the movement of people). Think of technical and political feasibility as necessary but insufficient on their own, which is a requirement that rules out a lot of responses.

https://twitter.com/CairneyPaul/status/1244970044351791104

https://twitter.com/ChrisCEOHopson/status/1249617980859744256?s=09

Step 3 Use value-based criteria and political goals to compare solutions

Common advice in policy analysis texts:

  • Typical value judgements relate to efficiency, equity and fairness, the trade-off between individual freedom and collective action, and the extent to which a policy process involves citizens in deliberation.
  • Normative assessments are based on values such as ‘equality, efficiency, security, democracy, enlightenment’ and beliefs about the preferable balance between state, communal, and market/ individual solutions
  • ‘Specify the objectives to be attained in addressing the problem and the criteria to evaluate the attainment of these objectives as well as the satisfaction of other key considerations (e.g., equity, cost, feasibility)’.
  • ‘Effectiveness, efficiency, fairness, and administrative efficiency’ are common.
  • Identify (a) the values to prioritise, such as ‘efficiency’, ‘equity’, and ‘human dignity’, and (b) ‘instrumental goals’, such as ‘sustainable public finance or political feasibility’, to generate support for solutions.
  • Instrumental questions may include: Will this intervention produce the intended outcomes? Is it easy to get agreement and maintain support? Will it make me popular, or diminish trust in me even further?

Step 3 looks the simplest but is the most difficult task. Remember that it is a political, not technical, process. It is also a political process that most people would like to avoid (at least publicly), because it involves making explicit the ways in which we prioritise some people over others. Public policy is the choice to help some people and punish or refuse to help others (and it includes the choice to do nothing).

Policy analysis texts describe a relatively simple procedure of identifying criteria and producing a table (with a solution in each row, and criteria in each column) to compare the trade-offs between each solution. However, these criteria are notoriously difficult to define, and people resolve that problem by exercising power to decide what each term means, and whose interests should be served when they resolve trade-offs. For example, see Stone on whose needs come first, who benefits from each definition of fairness, and how technical-looking processes such as ‘cost benefit analysis’ mask political choices.
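That table of solutions and criteria is often presented as a mechanical scoring exercise. A minimal sketch shows why it is nothing of the sort: every option, criterion, score, and weight below is hypothetical, and the ‘winner’ changes with the (political) choice of weights:

```python
# A sketch of the options-versus-criteria table described in policy
# analysis texts. Every option, criterion, score, and weight here is
# hypothetical: the point is that the 'best' option is decided by the
# weights, which encode value judgements, not technique.

options = {
    "exhortation only": {"effectiveness": 2, "equity": 2, "feasibility": 5},
    "partial closures": {"effectiveness": 3, "equity": 3, "feasibility": 3},
    "full lockdown":    {"effectiveness": 5, "equity": 2, "feasibility": 1},
}

def rank(weights: dict[str, float]) -> list[str]:
    """Order the options by weighted score, best first."""
    score = lambda opt: sum(weights[c] * v for c, v in options[opt].items())
    return sorted(options, key=score, reverse=True)

# The same table, two sets of value judgements, two different 'answers':
print(rank({"effectiveness": 3, "equity": 1, "feasibility": 1}))
print(rank({"effectiveness": 1, "equity": 1, "feasibility": 3}))
```

Weighting effectiveness heavily ranks the most coercive option first; weighting feasibility heavily reverses the order. The exercise of power lies in choosing the weights (and the criteria, and who scores them), which is why the table looks technical but is not.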

Right now, the most obvious and visible trade-off, accentuated in the UK, is between individual freedom and collective action, or the balance between state, communal, and market/ individual solutions. In comparison with many countries (and China and Italy in particular), the UK Government seems to be favouring individual action over state quarantine measures. However, most trade-offs are difficult to categorise:

  1. What should be the balance between efforts to minimise the deaths of some (generally in older populations) and maximise the wellbeing of others? This is partly about human dignity during crisis, how we treat different people fairly, and the balance of freedom and coercion.
  2. How much should a government spend to keep people alive using intensive care or expensive medicines, when the money could be spent improving the lives of far more people? This is partly about human dignity, the relative efficiency of policy measures, and fairness.

If you are like me, you don’t really want to answer such questions (indeed, even writing them looks callous). If so, one way to resolve them is to elect policymakers to make such choices on our behalf (perhaps aided by experts in moral philosophy, or with access to deliberative forums). To endure, this unusually high level of deference to elected ministers requires some kind of reciprocal act:

https://twitter.com/devisridhar/status/1240648925998178304

See also: We must all do everything in our power to protect lives (UK Secretary of State for Health and Social Care)

Still, I doubt that governments are making reportable daily choices with reference to a clear and explicit view of what the trade-offs and priorities should be, because their choices are about who will die, and their ability to predict outcomes is limited.

See also: Media experts despair at Boris Johnson’s coronavirus campaign (Sonia Sodha)

Step 4 Predict the outcome of each feasible solution.

Common advice in policy analysis texts:

  • Focus on the outcomes that key actors care about (such as value for money), and quantify and visualise your predictions if possible. Compare the pros and cons of each solution, such as how much of a bad service policymakers will accept to cut costs.
  • ‘Assess the outcomes of the policy options in light of the criteria and weigh trade-offs between the advantages and disadvantages of the options’.
  • Estimate the cost of a new policy, in comparison with current policy, and in relation to factors such as savings to society or benefits to certain populations. Use your criteria and projections to compare each alternative in relation to their likely costs and benefits.
  • Explain potential solutions in sufficient detail to predict the costs and benefits of each ‘alternative’ (including current policy).
  • Short deadlines dictate that you use ‘logic and theory, rather than systematic empirical evidence’ to make predictions efficiently.
  • Monitoring is crucial because it is difficult to predict policy success, and unintended consequences are inevitable. Try to measure the outcomes of your solution, while noting that evaluations are contested.

It is difficult to envisage a way for the UK Government to publicise the thinking behind its choices (Step 3) and predictions (Step 4) in a way that would encourage effective public deliberation, rather than a highly technical debate between a small number of academics.

Further, people often call for the UK Government to publicise its expert advice and operational logic, but I am not sure how they would separate it from their normative logic, or provide a frank account without unintended consequences for public trust or anxiety. If so, government policy involves (a) keeping some choices implicit, to avoid a lot of debate on trade-offs, and (b) making general statements about choices whose impact is not yet known.

Step 5 Make a recommendation to your client

Common advice in policy analysis texts:

  • Examine your case through the eyes of a policymaker. Keep it simple and concise.
  • Make a preliminary recommendation to inform an iterative process, drawing feedback from clients and stakeholder groups
  • Client-oriented advisors identify the beliefs of policymakers and tailor accordingly.
  • ‘Unless your client asks you not to do so, you should explicitly recommend one policy’

I now invite you to make a recommendation (step 5) based on our discussion so far (steps 1-4). Define the problem with one framing at the expense of the others. Romanticise some people and not others. Decide how to support some people, and coerce or punish others. Prioritise the lives of some people in the knowledge that others will suffer or die. Do it despite your lack of expertise and profoundly limited knowledge and information. Learn from experts, but don’t assume that only scientific experts have relevant knowledge (decolonise; coproduce). Recommend choices that, if damaging, could take decades to fix after you’ve gone. Consider if a policymaker is willing and able to act on your advice, and if your proposed action will work as intended. Consider if a government is willing and able to bear the economic and political costs. Protect your client’s popularity, and trust in your client, at the same time as protecting lives. Consider if your advice would change if the problem would seem to change. If you are writing your analysis, maybe keep it down to one sheet of paper (and certainly far fewer words than in this post). Better you than me.

Please now watch this video before I suggest that things are not so simple.

Would that policy analysis were so simple

Imagine writing policy analysis in an imaginary world, in which there is a single powerful ‘rational’ policymaker at the heart of government, making policy via an orderly series of stages.

[Image: the policy cycle, and a policy cycle ‘spirograph’]

Your audience would be easy to identify at each stage, your analysis would be relatively simple, and you would not need to worry about what happens after you make a recommendation for policy change (since the selection of a solution would lead to implementation).  You could adopt a simple 5 step policy analysis method, use widely-used tools such as cost-benefit analysis to compare solutions, and know where the results would feed into the policy process.

Studies of policy analysts describe how unrealistic this expectation tends to be (Radin, Brans, Thissen).

[Table: coronavirus policymaking]

For example, there are many policymakers, analysts, influencers, and experts spread across political systems, and engaging with 101 policy problems simultaneously, which suggests that it is not even clear how everyone fits together and interacts in what we call (for the sake of simplicity) ‘the policy process’.

Instead, we can describe real world policymaking with reference to two factors.

The wider policymaking environment: 1. Limiting the use of evidence

First, policymakers face ‘bounded rationality’, in which they only have the ability to pay attention to a tiny proportion of available facts, are unable to separate those facts from their values (since we use our beliefs to evaluate the meaning of facts), struggle to make clear and consistent choices, and do not know what impact they will have. The consequences can include:

  • Limited attention, and lurches of attention. Policymakers can only pay attention to a tiny proportion of their responsibilities, and policymaking organizations struggle to process all policy-relevant information. They prioritize some issues and information and ignore the rest.
  • Power and ideas. Some ways of understanding and describing the world dominate policy debate, helping some actors and marginalizing others.
  • Beliefs and coalitions. Policymakers see the world through the lens of their beliefs. They engage in politics to turn their beliefs into policy, form coalitions with people who share them, and compete with coalitions who don’t.
  • Dealing with complexity. They engage in ‘trial-and-error strategies’ to deal with uncertain and dynamic environments (see the new section on trial and error at the end).
  • Framing and narratives. Policy audiences are vulnerable to manipulation when they rely on other actors to help them understand the world. People tell simple stories to persuade their audience to see a policy problem and its solution in a particular way.
  • The social construction of populations. Policymakers draw on quick emotional judgements, and social stereotypes, to propose benefits to some target populations and punishments for others.
  • Rules and norms. Institutions are the formal rules and informal understandings that represent a way to narrow information searches efficiently to make choices quickly.
  • Learning. Policy learning is a political process in which actors engage selectively with information, not a rational search for truth.

Evidence-based or expert-informed policymaking

Put simply, policymakers cannot oversee a simple process of ‘evidence-based policymaking’. Rather, to all intents and purposes:

  1. They need to find ways to ignore most evidence so that they can focus disproportionately on some. Otherwise, they will be unable to focus well enough to make choices. The cognitive and organisational shortcuts, described above, help them do it almost instantly.
  2. They also use their experience to help them decide – often very quickly – what evidence is policy-relevant under the circumstances. Relevance can include:
  • How it relates to the policy problem as they define it (Step 1).
  • If it relates to a feasible solution (Step 2).
  • If it is timely, available, understandable, and actionable.
  • If it seems credible, such as from groups representing wider populations, or from people they trust.
  3. They use a specific shortcut: relying on expertise.

However, the vague idea of trusting or not trusting experts is a nonsense, largely because it is virtually impossible to set a clear boundary between relevant/irrelevant experts and find a huge consensus on (exactly) what is happening and what to do. Instead, in political systems, we define the policy problem or find other ways to identify the most relevant expertise and exclude other sources of knowledge.

In the UK Government’s case, it appears to be relying primarily on expertise from its own general scientific advisers, medical and public health advisers, and – perhaps more controversially – advisers on behavioural public policy.

[Image: Box 7.1]

Right now, it is difficult to tell exactly how and why it relies on each expert (at least when the expert is not in a clearly defined role, in which case it would be irresponsible not to consider their advice). Further, there are regular calls on Twitter for ministers to be more open about their decisions.

See also: Coronavirus: do governments ever truly listen to ‘the science’?

However, don’t underestimate the problems of identifying why we make choices, then justifying one expert or another (while avoiding pointless arguments), or prioritising one form of advice over another. Look, for example, at the kind of shortcuts that intelligent people use, which seem sensible enough, but would receive much more intense scrutiny if presented in this way by governments:

  • Sophisticated speculation by experts in a particular field, shared widely (look at the RTs), but questioned by other experts in another field:
  • Experts in one field trusting certain experts in another field based on personal or professional interaction:
  • Experts in one field not trusting a government’s approach based on its use of one (of many) sources of advice:
  • Experts representing a community of experts, criticising another expert (Prof John Ashton), for misrepresenting the amount of expert scepticism of government experts (yes, I am trying to confuse you):
  • Expert debate on how well policymakers are making policy based on expert advice
  • Finding quite-sensible ways to trust certain experts over others, such as because they can be held to account in some way (and may be relatively worried about saying any old shit on the internet):

There are many more examples in which the shortcut to expertise is fine, but not particularly better than another shortcut (and likely to include a disproportionately high number of white men with STEM backgrounds).

Update: of course, they are better than the 'volume trumps expertise' approach:

See also:

Further, in each case, we may be receiving this expert advice via many other people, and by the time it gets to us the meaning is lost or reversed (or there is some really sophisticated expert analysis of something rumoured – not demonstrated – to be true):

For what it’s worth, I tend to favour experts who:

(a) establish the boundaries of their knowledge, (b) admit to high uncertainty about the overall problem:

(c) (in this case) make it clear that they are working on scenarios, not simple prediction

(d) examine critically the too-simple ideas that float around, such as the idea that the UK Government should emulate ‘what works’ somewhere else

(e) situate their own position (in Prof Sridhar’s case, for mass testing) within a broader debate

See also:

See also: Prof Sir John Bell (4.3.20) on why an accurate antibody test is at least one month away and these exchanges on the problems with test ‘accuracy’:

(f) use their expertise on governance to highlight problems with thoughtless criticism

However, note that most of these experts are from a very narrow social background, and from very narrow scientific fields (first in modelling, then likely in testing), despite the policy problem being largely about (a) who, and how many people, a government should try to save, and (b) how far a government should go to change behaviour to do it (Update 2.4.20: I wrote that paragraph before adding so many people to the list). It is understandable to defer in this way during a crisis, but it also contributes to a form of ‘depoliticisation’ that masks profound choices that benefit some people and leave others vulnerable to harm.

See also: COVID-19: a living systematic map of the evidence

See also: To what extent does evidence support decision making during infectious disease outbreaks? A scoping literature review

See also: Covid-19: why is the UK government ignoring WHO’s advice? (British Medical Journal editorial)

See also: Coronavirus: just 2,000 NHS frontline workers tested so far

See also: ‘What’s important is social distancing’ coronavirus testing ‘is a side issue’, says Deputy Chief Medical Officer [Professor Jonathan Van-Tam talks about the important distinction between a currently available test to see if someone has contracted the virus (an antigen test) and a forthcoming test to see if someone has had and recovered from COVID-19 (an antibody test)]. The full interview is here (please feel free to ignore the editorialising of the uploader):

See also: Why is Germany able to test for coronavirus so much more than the UK? (mostly a focus on Germany’s innovation, and partly on the UK (Public Health England) focus on making sure its test is reliable, in the context of ‘coronavirus tests produced at great speed which have later proven to be inaccurate’, such as one with a below-30% accuracy rate, which is worse than not testing at all). Compare with The Coronavirus Hit Germany And The UK Just Days Apart But The Countries Have Responded Differently. Here’s How and the opinion piece ‘A public inquiry into the UK’s coronavirus response would find a litany of failures’.
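For context on why a very low ‘accuracy’ can make a test worse than useless: what a positive result actually tells you depends on the test’s sensitivity, its specificity, and the prevalence of infection among the people tested, via Bayes’ rule. A minimal sketch with hypothetical numbers (none of these figures come from the articles cited above):

```python
def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """Probability that a positive result is a true positive (Bayes' rule)."""
    true_pos = sensitivity * prevalence          # infected and flagged
    false_pos = (1 - specificity) * (1 - prevalence)  # healthy but flagged
    return true_pos / (true_pos + false_pos)

# Hypothetical low-quality test: misses most cases and often flags the healthy.
# Here a positive result is right only 10% of the time, i.e. no better than
# the 10% prevalence you assumed before testing anyone.
print(round(positive_predictive_value(0.3, 0.7, 0.1), 2))   # 0.1

# Hypothetical high-quality test at the same prevalence.
print(round(positive_predictive_value(0.95, 0.99, 0.1), 2))  # 0.91
```

The design point is that ‘accuracy’ is not one number: a test can look acceptable on one dimension while being uninformative, or actively misleading, overall.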

See also: Rights and responsibilities in the Coronavirus pandemic

See also: UK police warned against ‘overreach’ in use of virus lockdown powers (although note that there is no UK police force and that Scotland has its own legal system) and Coronavirus: extra police powers risk undermining public trust (Alex Oaten and Chris Allen)

See also (Calderwood resigned as CMO that night):

See also: Social Licensing of Privacy-Encroaching Policies to Address the COVID-19 Pandemic (U.K.) (research on public opinion)

The wider policymaking environment: 2. Limited control

Second, policymakers engage in a messy and unpredictable world in which no single ‘centre’ has the power to turn a policy recommendation into an outcome. I normally use the following figure to think through the nature of a complex and unwieldy policymaking environment of which no ‘centre’ of government has full knowledge or control.

[Image: the policy process, round 2, 25.10.18]

It helps us identify (further) the ways in which we can reject the idea that the UK Prime Minister and colleagues can fully understand and solve policy problems:

Actors. The environment contains many policymakers and influencers spread across many levels and types of government (‘venues’).

For example, consider how many key decisions (a) have been made by organisations outside UK central government, and (b) are more or less consistent with its advice, including:

  • Devolved governments announcing their own healthcare and public health responses (although the level of UK coordination seems more significant than the level of autonomy).
  • Public sector employers initiating or encouraging at-home working (and many Universities moving quickly from in-person to online teaching)
  • Private organisations cancelling cultural and sporting events.

Context and events. Policy solutions relate to socioeconomic context and events, which can be impossible to ignore and beyond the control of policymakers. The coronavirus, and its impact on so many aspects of population health and wellbeing, is an extreme example of this problem.

Networks, Institutions, and Ideas. Policymakers and influencers operate in subsystems (specialist parts of political systems). They form networks or coalitions built on the exchange of resources or facilitated by trust underpinned by shared beliefs or previous cooperation. Many different parts of government have practices driven by their own formal and informal rules. Formal rules are often written down or known widely. Informal rules are the unwritten rules, norms and practices that are difficult to understand, and may not even be understood in the same way by participants. Political actors relate their analysis to shared understandings of the world – how it is, and how it should be – which are often so established as to be taken for granted. These dominant frames of reference establish the boundaries of the political feasibility of policy solutions.  These kinds of insights suggest that most policy decisions are considered, made, and delivered in the name of – but not in the full knowledge of – government ministers.

Trial and error policymaking in complex policymaking systems (17.3.20)

There are many ways to conceptualise this policymaking environment, but few theories provide specific advice on what to do, or how to engage effectively in it. One notable exception is the general advice that comes from complexity theory, including:

  • Law-like behaviour is difficult to identify – so a policy that was successful in one context may not have the same effect in another.
  • Policymaking systems are difficult to control; policymakers should not be surprised when their policy interventions do not have the desired effect.
  • Policymakers in the UK have been too driven by the idea of order, maintaining rigid hierarchies and producing top-down, centrally driven policy strategies. An attachment to performance indicators, to monitor and control local actors, may simply result in policy failure and demoralised policymakers.
  • Policymaking systems or their environments change quickly. Therefore, organisations must adapt quickly and not rely on a single policy strategy.

On this basis, there is a tendency in the literature to encourage the delegation of decision-making to local actors:

  1. Rely less on central government driven targets, in favour of giving local organisations more freedom to learn from their experience and adapt to their rapidly-changing environment.
  2. To deal with uncertainty and change, encourage trial-and-error projects, or pilots, that can provide lessons, or be adopted or rejected, relatively quickly.
  3. Encourage better ways to deal with alleged failure by treating ‘errors’ as sources of learning (rather than a means to punish organisations) or setting more realistic parameters for success/failure (although see this example and this comment).
  4. Encourage a greater understanding, within the public sector, of the implications of complex systems and terms such as ‘emergence’ or ‘feedback loops’.

In other words, this literature, when applied to policymaking, tends to encourage a movement from centrally driven targets and performance indicators towards a more flexible understanding of rules and targets by local actors who are more able to understand and adapt to rapidly-changing local circumstances.

[See also: Complex systems and systems thinking]

Now, just imagine the UK Government taking that advice right now. I think it is fair to say that it would be condemned continuously (even more so than right now). Maybe that is because it is the wrong way to make policy in times of crisis. Maybe it is because too few people are willing and able to accept that the role of a small group of people at the centre of government is necessarily limited, and that effective policymaking requires trial-and-error rather than a single, fixed, grand strategy to be communicated to the public. The former interpretation frames policy change as a response to new information and perspectives. The latter frames it as errors of judgement, incompetence, and U-turns. In either case, the advice is changing as estimates of the coronavirus’ impact change:

I think this tension, in the way that we understand UK government, helps explain some of the criticism that it faces when changing its advice to reflect changes in its data or advice. This criticism becomes intense when people also question the competence or motives of ministers (and even people reporting the news) more generally, leading to criticism that ranges from mild to outrageous:

For me, this casual reference to a government policy to ‘cull the heard [sic] of the weak’ is outrageous, but you can find much worse on Twitter. It reflects wider debate on whether ‘herd immunity’ is or is not government policy. Much of it relates to interpretation of government statements, based on levels of trust/distrust in the UK Government, its Prime Minister and Secretaries of State, and the Prime Minister’s special adviser.

However, I think that some of it is also about:

1. Wilful misinterpretation (particularly on Twitter). For example, in the early development and communication of policy, Boris Johnson was accused (in an irresponsibly misleading way) of advocating for herd immunity rather than restrictive measures.

See: Here is the transcript of what Boris Johnson said on This Morning about the new coronavirus (Full Fact)

full fact coronavirus

Below is one of the most misleading videos of its type. Look at how it cuts each segment into a narrative not provided by ministers or their advisors (see also this stinker):

See also:

2. The accentuation of a message not being emphasised by government spokespeople.

See for example this interview, described by Sky News (13.3.20) as: The government’s chief scientific adviser Sir Patrick Vallance has told Sky News that about 60% of people will need to become infected with coronavirus in order for the UK to enjoy “herd immunity”. You might be forgiven for thinking that he was on Sky extolling the virtues of a strategy to that end (and expressing sincere concerns on that basis). This was certainly the write-up in respected papers like the FT (UK’s chief scientific adviser defends ‘herd immunity’ strategy for coronavirus). Yet, he was saying nothing of the sort. Rather, when prompted, he discussed herd immunity in relation to the belief that COVID-19 will endure long enough to become as common as seasonal flu.

The same goes for Vallance’s interview on the same day (13.3.20) during Radio 4’s Today programme (transcribed by the Spectator, which calls Vallance the author and gives it the headline ‘How herd immunity can help fight coronavirus’, as if that were his main message). The Today programme also tweeted only 30 seconds to single out that brief exchange:

Yet, clearly his overall message – in this and other interviews – was that some interventions (e.g. staying at home; self-isolating with symptoms) would have bigger effects than others (e.g. school closures; prohibiting mass gatherings) during the ‘flattening of the peak’ strategy (‘What we don’t want is everybody to end up getting it in a short period of time so that we swamp and overwhelm NHS services’). Rather than describing ‘herd immunity’ as a strategy, he is really describing how to deal with its inevitability (‘Well, I think that we will end up with a number of people getting it’).
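For context, the ‘about 60%’ figure quoted in these interviews follows from the standard herd-immunity threshold: if each case infects R0 others in a fully susceptible population, the epidemic stops growing once a fraction 1 - 1/R0 of the population is immune. A minimal sketch (the R0 value of 2.5 below is an illustrative early estimate for SARS-CoV-2, not a figure taken from the interviews):

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune before each
    infection causes, on average, fewer than one new infection (1 - 1/R0)."""
    if r0 <= 1:
        return 0.0  # epidemic declines on its own; no threshold needed
    return 1 - 1 / r0

# With an illustrative early estimate of R0 = 2.5:
print(herd_immunity_threshold(2.5))  # 0.6, i.e. the 'about 60%' figure
```

This is why Vallance’s 60% remark reads as a statement about the arithmetic of transmission, not necessarily as an endorsement of a strategy.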

See also: British government wants UK to acquire coronavirus ‘herd immunity’, writes Robert Peston (12.3.20) and live debates (and reports grasping at straws) on whether or not ‘herd immunity’ was the goal of the UK government:

See also: Why weren’t we ready? (Harry Lambert), which is a good exemplar of the ‘U-turn’ argument; compare with the evidence to the Health and Social Care Committee (CMO Whitty, DCMO Harries) that it describes.

A more careful forensic analysis (such as this one) will try to relate each government choice to the ways in which key advisory bodies (such as the New and Emerging Respiratory Virus Threats Advisory Group, NERVTAG) received and described evidence on the current nature of the problem:

See also: Special Report: Johnson listened to his scientists about coronavirus – but they were slow to sound the alarm (Reuters)

Some aspects may also be clearer when there is systematic qualitative interview data on which to draw. Right now, there are bits and pieces of interviews sandwiched between whopping great editorial discussions (e.g. FT Alphaville Imperial’s Neil Ferguson: “We don’t have a clear exit strategy”; compare with the more useful Let’s flatten the coronavirus confusion curve) or confused accounts by people speaking to someone who has spoken to someone else (e.g. Buzzfeed Even The US Is Doing More Coronavirus Tests Than The UK. Here Are The Reasons Why).

See also: other rabbit holes are available

[OK, that proved to be a big departure from the trial-and-error discussion. Here we are, back again]

In some cases, maybe people are making the argument that trial-and-error is the best way to respond quickly, and adapt quickly, in a crisis, but that the UK Government’s version is not what, say, the WHO thinks of as a good kind of adaptive response. It is not possible to tell, at least from the general ways in which they justify acting quickly.

See also the BBC’s provocative question (which I expect to be replaced soon):

Compare with:

The take home messages

  1. The coronavirus is an extreme example of a general situation: policymakers will always have very limited knowledge of policy problems and limited control over their policymaking environment. They make choices to frame problems narrowly enough to seem solvable, rule out most solutions as not feasible, make value judgements to try to help some more than others, try to predict the results, and respond when the results do not match their hopes or expectations.
  2. This is not a message of doom and despair. Rather, it encourages us to think about how to influence government, and hold policymakers to account, in a thoughtful and systematic way that does not mislead the public or exacerbate the problem we are seeing.

Further reading, until I can think of a better conclusion:

This series of ‘750 words’ posts summarises key texts in policy analysis and tries to situate policy analysis in a wider political and policymaking context. Note the focus on whose knowledge counts, which is not yet a big feature of this crisis.

These series of 500 words and 1000 words posts (with podcasts) summarise concepts and theories in policy studies.

This page on evidence-based policymaking (EBPM) uses those insights to demonstrate why EBPM is a political slogan rather than a realistic expectation.

These recorded talks relate those insights to common questions asked by researchers: why do policymakers seem to ignore my evidence, and what can I do about it? I’m happy to record more (such as on the topic you just read about) but not entirely sure who would want to hear what.

See also: Advisers, Governments and why blunders happen? (Colin Talbot)

See also: Why we might disagree about … Covid-19 (Ruth Dixon and Christopher Hood)

See also: Pandemic Science and Politics (Daniel Sarewitz)

See also: We knew this would happen. So why weren’t we ready? (Steve Bloomfield)

See also: Europe’s coronavirus lockdown measures compared (Politico)



Policy Analysis in 750 Words: complex systems and systems thinking

This post forms one part of the Policy Analysis in 750 words series overview and connects to previous posts on complexity. The first 750 words tick along nicely, then there is a picture of a cat hanging in there baby to signal where it can all go wrong. I updated it (22.6.20) to add category 11.

There are a million-and-one ways to describe systems and systems thinking. These terms are incredibly useful, but also at risk of meaning everything and therefore nothing (compare with planning and consultation).

Let’s explore how the distinction between policy studies and policy analysis can help us clarify the meaning of ‘complex systems’ and ‘systems thinking’ in policymaking.

For example, how might we close a potentially large gap between these two stories?

  1. Systems thinking in policy analysis.
  • Avoid the unintended consequences of too-narrow definitions of problems and processes (systems thinking, not simplistic thinking).
  • If we engage in systems thinking effectively, we can understand systems well enough to control, manage, or influence them.
  2. The study of complex policymaking systems.
  • Policy emerges from complex systems in the absence of: (a) central government control and often (b) policymaker awareness.
  • We need to acknowledge these limits properly, accept them, and avoid the mechanistic language of ‘policy levers’, which exaggerates human or government control.

https://twitter.com/apoliticalco/status/1107796576280432640

See also: Systems science and systems thinking for public health: a systematic review of the field

Six meanings of complex systems in policy and policymaking

Let’s begin by trying to clarify the many meanings of ‘complex system’ and relate them to systems thinking storylines.

For example, you will encounter three different meanings of complex system in this series alone, and each meaning presents different implications for systems thinking:

  1. A complex policymaking system

Policy outcomes seem to ‘emerge’ from policymaking systems in the absence of central government control. As such, we should rely less on central government driven targets (in favour of local discretion to adapt to environments), encourage trial-and-error learning, and rethink the ways in which we think about government ‘failure’ (see, for example, Hallsworth on ‘system stewardship’, the OECD on ‘Systemic Thinking for Policy Making‘, and this thread)

  • Systems thinking is about learning and adapting to the limits to policymaker control.

https://twitter.com/CPI_foundation/status/1227211939052445699?s=09

  2. Complex policy problems

Dunn (2017: 73) describes the interdependent nature of problems:

“Subjectively experienced problems – crime, poverty, unemployment, inflation, energy, pollution, health, security – cannot be decomposed into independent subsets without running the risk of producing an approximately right solution to the wrong problem. A key characteristic of systems of problems is that the whole is greater – that is, qualitatively different – than the simple sum of its parts” (contrast with Meltzer and Schwartz on creating a ‘boundary’ to make problems seem solvable).

  • Systems thinking is about addressing policy problems holistically.
  3. Complex policy mixes

What we call ‘policy’ is actually a collection of policy instruments. Their overall effect is ‘non-linear’, difficult to predict, and subject to emergent outcomes, rather than cumulative (compare with Lindblom’s hopes for incrementalist change).

This point is crucial to policy analysis: does it involve a rethink of all instruments, or merely add a new instrument to the pile?

  • Systems thinking is about anticipating the disproportionate effect of a new policy instrument.

These three meanings are joined by at least three more (from Munro and Cairney on energy systems):

  4. Socio-technical systems (Geels)

Used to explain the transition from unsustainable to sustainable energy systems.

  • Systems thinking is about identifying the role of new technologies, protected initially in a ‘niche’, and fostered by a supportive ‘social and political environment’.
  5. Socio-ecological systems (Ostrom)

Used to explain how and why policy actors might cooperate to manage finite resources.

  • Systems thinking is about identifying the conditions under which actors develop layers of rules to foster trust and cooperation.
  6. The metaphor of systems

Used by governments – rather loosely – to indicate an awareness of the interconnectedness of things.

  • Systems thinking is about projecting the sense that (a) policy and policymaking is complicated, but (b) governments can still look like they are in control.

Five more meanings of systems thinking

Now, let’s compare these storylines with a small sample of wider conceptions of systems thinking:

  7. The old way of establishing order from chaos

Based on the (now-diminished) faith in science and rational management techniques to control the natural world for human benefit (compare Hughes and Hughes on energy with Checkland on ‘hard’ v ‘soft’ systems approaches, then see What you need as an analyst versus policymaking reality and Radin on the old faith in rationalist governing systems).

  • Systems thinking was about the human ability to turn potential chaos into well-managed systems (such as ‘large technical systems’ to distribute energy)
  8. The new way of accepting complexity but seeking to make an impact

Based on the idea that we can identify ‘leverage points’, or the places that help us ‘intervene in a system’ (see Meadows then compare with Arnold and Wade).

  • Systems thinking is about the human ability to use a small shift in a system to produce profound changes in that system.
  9. A way to rethink cause-and-effect

Based on the idea that current research methods are too narrowly focused on linearity rather than the emergent properties of systems of behaviour (for example, Rutter et al on how to analyse the cumulative effect of public health interventions, and Greenhalgh on responding more effectively to pandemics).

  • Systems thinking is about rethinking the ways in which governments, funders, or professions conduct policy-relevant research on social behaviour.

https://twitter.com/CairneyPaul/status/1278250293843673088

  10. A way of thinking about ourselves

Embrace the limits to human cognition, and accept that all understandings of complex systems are limited.

  • Systems thinking is about developing the ‘wisdom’ and ‘humility’ to accept our limited knowledge of the world.

https://twitter.com/JoBibbyTHF/status/1207586906634104832

11. The performance of systems thinking

Policymakers can use the language of systems thinking, to give the impression that they are thinking and acting differently, but without backing up their words with tangible changes to policy instruments.

[Image: ‘hang in there, baby’ cat]

 

How can we clarify systems thinking and use it effectively in policy analysis?

Now, imagine you are in a room of self-styled systems thinkers, and that no-one has yet suggested a brief conversation to establish what you all mean by systems thinking. I reckon you can make a quick visual distinction by seeing who looks optimistic.

I’ll be the morose-looking guy sitting in the corner, waiting to complain about ambiguity, so you would probably be better off sitting next to Luke Craven who still ‘believes in the power of systems thinking’.

If you can imagine some amalgam of these pessimistic/optimistic positions, perhaps the conversation would go like this:

  1. Reasons to expect some useful collaboration.

Some of these 10 discussions seem to complement each other. For example:

  • We can use 3 and 9 to reject one narrow idea of ‘evidence-based policymaking’, in which the focus is on (a) using experimental methods to establish cause and effect in relation to one policy instrument, without showing (b) the overall impact on policy and outcomes (e.g. compare FNP with more general ‘families’ policy).
  • 1-3 and 10 might be about the need for policy analysts to show humility when seeking to understand and influence complex policy problems, solutions, and policymaking systems.

In other words, you could define systems thinking in relation to the need to rethink the ways in which we understand – and try to address – policy problems. If so, you can stop here and move on to the next post. There is no benefit to completing this post.

  2. Reasons to expect the same old frustrating discussions based on no-one defining terms well enough (collectively) to collaborate effectively (beyond using the same buzzwords).

Although all of these approaches use the language of complex systems and systems thinking, note some profound differences:

Holding on versus letting go.

  • Some are about intervening to take control of systems or, at least, make a disproportionate difference from a small change.
  • Some are about accepting our inability to understand, far less manage, these systems.

Talking about different systems.

  • Some are about managing policymaking systems, and others about social systems (or systems of policy problems), without making a clear connection between both endeavours.

For example, if you use approach 9 to rethink societal cause-and-effect, are you then going to pretend that you can use approach 7 to do something about it? Or, will our group have a difficult discussion about the greater likelihood of 6 (metaphorical policymaking) in the context of 1 (the inability of governments to control the policymaking systems we need to solve the problems raised by 9)?

In that context, the reason that I am sitting in the corner, looking so morose, is that too much collective effort goes into (a) restating, over and over and over again, the potential benefits of systems thinking, leaving almost no time for (b) clarifying systems thinking well enough to move on to these profound differences in thinking. Systems thinking has not even helped us solve these problems with systems thinking.

See also:

Why systems thinkers and data scientists should work together to solve social challenges


Prevention is better than cure, so why don’t we do more of it?

Series: The public policy process.

Paul Cairney, Professor of Politics and Public Policy at the University of Stirling, Scotland. Link to the original text in English.

This post provides a large amount of background for my talk at the Australia and New Zealand School of Government (ANZSOG), entitled ‘Prevention is better than cure, so why don’t we do more of it?’ [in English]. If you read all of it, it is a long read. If not, it is a short read before the long read. Here is the description of the talk:

“Does this sound familiar? A new administration takes office, promising to shift the balance in health and social policy from costly remedies and high-dependency care towards prevention and early intervention. It commits to better policymaking: it says that policies and programmes will be delivered in a joined-up way, devolving responsibility to the local level and focusing on long-term outcomes rather than short-term fixes, and that it will ensure that policy is based on evidence. And then it all becomes too difficult and the cycle begins again, leaving in its wake some exhausted and disillusioned specialists. Why does this happen repeatedly, in different countries and under governments of different persuasions, even with the best will in the world?”

  • From the question, you will see that I am not suggesting that all prevention or early-intervention policies fail. Rather, I use policy theories to provide a general explanation of the significant gap between the (realistic) expectations expressed in prevention strategies and actual outcomes. We can then discuss how to narrow that gap.
  • You will also see the phrase ‘even with the best will in the world’, which I consider key to this talk. Nobody needs me to rehearse the common and usually vague ways of explaining failed prevention policies, including the ‘intractability’ [in English] of policy problems or the ‘pathology’ [in English] of policymaking. Rather, I show that such policies can ‘fail’ even when there is sincere and widespread agreement among the parties on the need to move from reactive to more preventive policy design. I also suggest that the common explanation of failure (low ‘political will’) is often damaging to future chances of success.
  • Let’s begin by defining prevention policy and preventive policymaking.

When governments engage in ‘prevention’, they seek to:

  1. Reform public policy

Prevention policy is really a collection of policies designed to intervene as early as possible in people’s lives to improve their wellbeing and reduce inequalities or demand for acute services. The aim is to move from reactive to preventive public services, intervening early in people’s lives to address a wide range of far-reaching problems (including crime and antisocial behaviour, poor health and unhealthy behaviours, low educational attainment, unemployment, and low employability) before they become too severe.

  2. Reform policymaking

Preventive policymaking describes the ways in which governments reform their practices to support prevention policies, including a commitment to:

  • ‘Join up’ government departments and services to solve ‘intractable problems’ that transcend policy areas.
  • Produce long-term objectives for greater outcomes, giving greater responsibility for service design to local public bodies, stakeholders, ‘communities’, and service users.
  • Reduce short-term targets in favour of long-term outcomes.

  3. Ensure that policy is ‘evidence-based’

Three general reasons why ‘prevention’ policies never seem to succeed.

  1. Policymakers do not know what prevention means

They express a commitment to prevention before defining it fully. When they begin to make sense of prevention, they discover how difficult it is to pursue, and the controversial choices it involves (see also uncertainty versus ambiguity).

  2. They engage in a policymaking system that is too complex to control

They try to share responsibility among several actors and coordinate action to direct policy outcomes. However, they do not have the capacity to design those relationships and control policy outcomes.

They must also show the electorate that they are in control, and they discover how difficult it is to localise and centralise policymaking.

  3. They cannot, and do not want to, produce ‘evidence-based policymaking’

Policymakers seek cognitive shortcuts (and their organisational equivalents) to gather enough information to make ‘good enough’ decisions. When they seek evidence on prevention, they discover that it is patchy, inconclusive, and often contrary to their beliefs, rather than a ‘magic bullet’ to help justify choices.

A lo largo de este proceso, su compromiso con la política pública de prevención puede ser sincero, pero no se materializa. No articulan completamente lo que significa prevención ni aprecian la dimensión de dicha tarea. Cuando intentan ofrecer estrategias de prevención, se enfrentan a varios problemas que por sí solos parecerían desalentadores. Muchos de los problemas que tratan de “prevenir” son “intratables” o difíciles de definir y aparentemente imposibles de resolver, como la pobreza, el desempleo, las viviendas de baja calidad y la falta de ellas, el crimen y las desigualdades en salud y educación. Se enfrentan a elecciones difíciles sobre cuán lejos deberían llegar para cambiar el equilibrio entre el Estado y el mercado, redistribuir la riqueza y los ingresos, distribuir recursos públicos e intervenir en la vida de las personas para cambiar su comportamiento y sus formas de pensar. Su enfoque en el largo plazo se enfrenta a una gran competencia por problemas de políticas públicas cortoplacistas más destacados que los impulsan a mantener servicios públicos “reactivos”. Su deseo puro de “localizar” la formulación de políticas, a menudo cede el paso a la política electoral nacional, en la que los gobiernos centrales se enfrentan a la presión para formular políticas públicas desde “arriba” y ser decisivos. Su búsqueda de políticas “basadas en evidencia” a menudo revela una falta de evidencia sobre qué intervenciones políticas funcionan y la medida en que se pueden “expandir” con éxito.

Un mal diagnostico por parte de los encargados de la formulación de la política pública y actores influyentes hará que los problemas no se resuelvan

  • Si los actores con poder en las políticas públicas hacen la suposición simplista de que un problema es causado por cuestiones que no son vitales para el Estado, darán malos consejos.
  • Si los nuevos formuladores realmente piensan que el problema fue la falta de compromiso y la competencia de sus predecesores, comenzarán con las mismas esperanzas sobre el impacto que pueden tener, solo para desencantarse cuando vean la diferencia entre sus objetivos abstractos y los resultados del mundo real.
  • La mala explicación del éxito limitado contribuye en gran medida a observar (a) un período inicial de entusiasmo y actividad, reemplazado por (b) desencanto e inactividad, y (c) la repetición de este ciclo.

Let's add more detail to these general explanations:

1. What makes prevention so difficult to define?

When viewed as a simple slogan, 'prevention' seems like an intuitively appealing aim. It can generate cross-party consensus, bringing together groups on the 'left', seeking to reduce inequalities, and on the 'right', seeking to reduce economic inactivity and the cost of services.

Such consensus is superficial and illusory. When making a detailed strategy, prevention is open to many interpretations by many policymakers. Imagine the many types of prevention policy and policymaking that we could produce:

     1. What problem are we trying to solve?

Prevention policymaking represents a heroic solution to several crises: major inequalities, underfunded public services, and dysfunctional government.

     2. On what measures should we focus?

On which inequalities should we focus primarily? Wealth, occupation and employment, income, race, ethnicity, gender, sexuality, disability, mental health.

On which measures of inequality? Economic, health, healthy behaviour, education, wellbeing, punishment.

     3. On what solution should we focus?

To reduce poverty and socioeconomic inequalities, improve national quality of life, reduce public service costs, or increase value for money.

     4. Which 'tools' or policy instruments should we use?

Redistributive policies to address 'structural' causes of poverty and inequality?

Or, individual-focused policies to: (a) boost the mental 'resilience' of public service users, (b) oblige, or (c) exhort people to change behaviour.

     5. How do we intervene as early as possible in people's lives?

Primary prevention. Focus on the whole population to stop a problem occurring by investing early and/or modifying the social or physical environment. Akin to whole-population immunisation.

Secondary prevention. Focus on at-risk groups to identify a problem at an early stage to minimise harm.

Tertiary prevention. Focus on affected groups to stop a problem getting worse.

     6. How do we pursue 'evidence based policymaking'? 3 ideal-types (in preparation).

Using randomised control trials and systematic review to identify the best interventions?

Storytelling to share best governance practice?

'Improvement' methods to experiment on a small scale and share best practice?

 

     7. How does evidence gathering connect to long-term policymaking?

Does a national strategy drive long-term outcomes?

Does central government produce agreements with, or targets for, local authorities?

     8. Is preventive policymaking a philosophy or a profound reform process?

How serious are national governments – about localism, service user-driven public services, and joined-up or holistic policymaking – when their elected policymakers are held to account for outcomes?

     9. What is the nature of state intervention?

It may be punitive or supportive. See: How would Lisa Simpson and Monty Burns make progressive social policy?

2. Making 'hard choices': what problems arise when politics meets policymaking?

 

When policymakers move from broad philosophy and language towards specific policies and practices, they find a range of obstacles, including:

The scale of the task becomes overwhelming, and not suited to electoral cycles.

Developing policy and reforming policymaking takes time, and the effect may take a generation to see.

There is competition for policymaking resources, such as attention and money.

Prevention is general, long-term, and low salience. It competes with salient short-term problems that politicians feel compelled to solve first.

Prevention is akin to capital investment with no guarantee of a return. Reductions in funding for 'fire-fighting' or 'frontline' services, to pay for prevention initiatives, are hard to sell. Governments invest in small steps, and investment is vulnerable when money is needed quickly to fund public service crises.

The benefits are difficult to measure and see.

Short-term impacts are hard to measure, long-term impacts are hard to attribute to a single intervention, and prevention does not necessarily save money (or provide 'cashable' savings).

Reactive policies have a more visible impact, such as reducing hospital waiting times or increasing the number of teachers or police officers.

Problems are 'wicked'.

Getting to the 'root causes' of problems is not straightforward; policymakers often have no clear sense of the cause of problems or the effect of their solutions. Few aspects of prevention in social policy resemble disease prevention, in which we know the causes of many diseases, as well as how to screen for and prevent them.

Performance management is not conducive to prevention.

Performance management systems encourage public sector managers to focus on their services' short-term and measurable targets over aims shared with public service delivery partners or the wellbeing of their local populations.

Performance management is about setting priorities when governments have too many targets to meet. When central governments encourage local public bodies both to form long-term partnerships to address inequalities and to meet short-term targets, the latter comes first.

 

Governments face major ethical dilemmas.

Policy choices coexist with normative judgements about the role of the state and personal responsibility, often undermining cross-party agreements.

One aspect of prevention can undermine another.

A cynical view of prevention initiatives is that they represent a quick political fix rather than a meaningful long-term solution:

  • Central governments describe prevention as the solution to public sector costs. At the same time, they delegate responsibility for policymaking and cut the budgets of subnational public bodies.
  • Those public bodies then prioritise their statutory responsibilities according to urgency.

Someone must be held to account.

If everyone is involved in making and shaping policy, it is not clear who can be held responsible for outcomes. This is inconsistent with 'Westminster-style' democratic accountability, in which we know who is responsible and, therefore, whom to blame or credit for performance.

 

     3. Evidence is not a 'magic bullet'

In a series of talks, I identify the reasons why 'evidence-based policymaking' (EBPM) does not describe the policy process well.

In other posts, I also suggest that it is harder for evidence to 'win the day' in broad areas of prevention policy than in more specific fields, such as tobacco control.

In general, a simple rule about EBPM is that there is never a panacea to replace judgement. Politics is about making choices that benefit some while others lose out. You can use evidence to help understand those choices, but not to produce a 'technical' solution.

A further rule with 'wicked' problems is that the evidence is not good enough to generate clarity about the cause of the problem. Or, you simply find things you don't want to know.

Early intervention in 'families policy' seems a good candidate for the latter, for three main reasons:

 

  1. Very few interventions meet the highest evidential standards

There are two main types of relevant 'evidence-based' interventions in this field.

The first is 'family intervention projects' (FIPs). They generally focus on low-income, often lone-parent families at risk of eviction and linked to anti-social behaviour. Such projects provide two forms of intervention:

  • Intensive 24/7 support. Programmes include after-school groups and activities (for children) and skills classes (for parents) and, in some cases, treatment for addiction or depression. Such treatment takes place in dedicated accommodation with strict rules on access and behaviour.
  • A support and training model.

The evidence of success comes from evaluation plus a counterfactual: this intervention is expensive, but it is thought that it would have cost far more money and effort had there been no intervention. In general, there is no randomised control trial (RCT) to establish the cause of better outcomes, or to demonstrate that those outcomes would not have happened without the intervention.

The second is projects transferred from other countries (primarily the US and Australia) on the strength of a reputation for success built on RCT evidence. There is more quantitative evidence of success, but it remains difficult to know whether the project can be transferred effectively, and whether its success can be replicated in another country with very different political drivers, problems, and services.

 

  2. The evidence on 'scaling up' primary prevention is relatively weak

Kenneth Dodge (2009) sums up a general problem:

  • There are few examples of effective projects delivered 'at scale' by specialists.
  • There are major issues around 'fidelity' to the original project when it is scaled up (including the need to oversee an expansion of well-trained practitioners).
  • It is difficult to predict the effect of a programme, which showed promise when applied to one population, on a new and different population.

 

  3. The evidence on secondary early intervention is also weak

This point about different populations with different motivations is demonstrated in a study (published in 2014) by Stephen Scott and colleagues of two Incredible Years interventions to address 'oppositional defiant disorder symptoms and antisocial character traits' in children aged 3–7 (for a broader discussion of such programmes, see Foundations for Life: what works to support parent–child interaction in the early years?, published by the Early Intervention Foundation).

They highlight a classic dilemma in early intervention: the evidence of effectiveness is only clear when children have been referred clinically (an 'indicated approach'), but unclear when children have been identified as high-risk using socioeconomic predictors (a 'selective approach'):

An indicated approach is simpler to administer, since there are fewer children with severe problems, they are easier to identify, and their parents are generally prepared to take part in treatment; however, the problems may already be too entrenched to treat. By contrast, a selective approach addresses less severe cases but, because problems are less established, whole populations must be screened and only some cases will go on to develop severe problems.

For our purposes, this may represent the most inconvenient form of evidence on early intervention: you could intervene early with limited evidence that you will probably succeed, or have a far higher likelihood of success when you intervene later – in other words, when you are running out of time to call it 'early intervention'.

Conclusion: a vague consensus is no substitute for political choice.

Governments begin with the sense that they have found the solution to many problems, only to find that they have to make and defend highly 'political' choices.

For example, consider the UK government's 'creative' use of evidence to make families policy. In short, the government chose to play fast and loose with the evidence, demonising 117,000 families to provide political cover for a redistribution of resources towards family intervention projects.

You could quite reasonably object to this style of politics. However, you would also have to produce a feasible alternative.

For example, the Scottish Government has taken a different approach (perhaps closer to what one would expect in New Zealand), but it still needs to produce and defend a narrative about its choices. The Scottish Government faces almost the same constraints as the UK, and its self-described 'decisive shift' towards prevention is no such thing.

After all, prevention is no different from any other area of public policy, except that it has proved far more complicated and difficult to sustain than most. Prevention makes for excellent rhetoric, but it is not a panacea for policy problems.

 

Further reading:

Prevention

See also:

What do you do when 20% of the population causes 80% of its problems? Possibly nothing.

Early intervention policy, from 'troubled families' to 'named persons': problems with evidence and the framing of problems

 

Translators

Anette Bonifant Cisneros anette.bonifant@york.ac.uk

Juan Guillermo Vieira jgvieiras@unal.edu.co


Filed under Evidence Based Policymaking (EBPM), Políticas Públicas, Prevention policy

Public health policy: assumptions and expectations

Rather misleadingly, this very draft paper is called The Politics of Evidence-based ‘Health in All Policies’. It’s for Integrating Science and Politics for Public Health, convened by Patrick Fafard and Adèle Cassola at the Global Strategy Lab.

The most interesting section, for me, is the attempt to sense check the following list of assumptions/ expectations that I associate with public health studies of public policy. Unless stated otherwise, this list is based on literature reviews and documentary analysis underpinning studies of tobacco policy and prevention policy (Cairney and St Denny, 2020), as well as more impressionistic reflections from peer-reviewing many papers on this topic and attending relevant conferences (usually to speak to practitioners about the politics of EBPM). I am relying primarily on (a) the sense, often described in qualitative research, of a ‘saturation point’ to feel confident that more research will not unearth more categories, rather than (b) counting the frequency of term-use in each category, or (c) network analysis to identify the nature of a self-defined public health profession or community. As such, the focus is on the assumptions that scholars in this field often seem to take for granted, and often do not feel the need to explain. Its purpose is logical and conditional: if these are the assumptions, these are the expectations.

On that basis, I present a common public health narrative of the policy problem, how to understand it, and the processes necessary to address it:

  • Focus on preventing ill health rather than treating it when it becomes too severe.
  • Distinguish between types of prevention: primary (focus on the whole population to stop a problem occurring by investing early and/or modifying the social or physical environment); secondary (focus on at-risk groups to identify a problem at a very early stage to minimise harm); tertiary (focus on affected groups to stop a problem getting worse)
  • Focus on the social determinants of health inequalities, defined by the WHO (2019) as ‘the unfair and avoidable differences in health status’ that are ‘shaped by the distribution of money, power and resources’ and ‘the conditions in which people are born, grow, live, work and age’.
  • Promote ‘upstream’ measures designed to influence the health of the whole population (or health inequalities) rather than ‘downstream’ measures targeting individuals (although we discussed some debate/ confusion about the meaning of upstream).
  • Use scientific evidence to identify the nature of problems and most effective solutions.
  • Define scientific evidence in a particular way, such as in relation to a ‘hierarchy’ in which (a) the systematic review of randomised control trials often represents the gold standard, and (b) systems modelling plays a key role. Or, in fewer cases, challenge that hierarchy energetically.
  • Promote major policymaking reforms, including a focus on holistic or joined-up government, since the responsibility for health improvement goes well beyond health departments.  Prevention (or preventive policymaking) is a classic term, and ‘health in all policies’ (HIAP) is currently a key term.
  • Focus strongly on the role of industry as ‘vested interests’ causing public health problems (the ‘commercial determinants of health’) and, often, the lack of political will to regulate commercial activity.
  • Treat public health and prevention as a form of social protection (new category after PHE). Often, actors describe a moral imperative to intervene (in which case, the opposite argument relates to individual responsibility and opposition to the ‘nanny state’ – see also Cairney et al, 2012 on ‘secular morality’).
  • Use tobacco control as a model for other specific issues (e.g. alcohol use, obesity, salt) and the prevention agenda more generally (Studlar and Cairney, 2019).
  • Focus on identifying policy changes that represent a ‘win-win’ scenario in which all parties benefit from the policy outcome (in terms of their health), rather than identifying political winners and losers from the policy choice itself (new category – Baum et al, 2014).

Such assumptions underpin expectations for the role of government, and provide a frame of reference for assessing the overall direction of policy (such as for ‘prevention’). Please let me know if there is a big missing category, or one of them doesn’t seem quite right.


Filed under Prevention policy, Public health, public policy, tobacco policy

Prevention is better than cure, so why aren’t we doing more of it?

This post provides a generous amount of background for my ANZSOG talk Prevention is better than cure, so why aren’t we doing more of it? If you read all of it, it’s a long read. If not, it’s a short read before the long read. Here is the talk’s description:

‘Does this sound familiar? A new government comes into office, promising to shift the balance in social and health policy from expensive remedial, high dependency care to prevention and early intervention. They commit to better policy-making; they say they will join up policy and program delivery, devolving responsibility to the local level and focusing on long term outcomes rather than short term widgets; and that they will ensure policy is evidence-based.  And then it all gets too hard, and the cycle begins again, leaving some exhausted and disillusioned practitioners in its wake. Why does this happen repeatedly, across different countries and with governments of different persuasions, even with the best will in the world?’ 

  • You’ll see from the question that I am not suggesting that all prevention or early intervention policies fail. Rather, I use policy theories to provide a general explanation for a major gap between the (realistic) expectations expressed in prevention strategies and the actual outcomes. We can then talk about how to close that gap.
  • You’ll also see the phrase ‘even with the best will in the world’, which I think is key to this talk. No-one needs me to rehearse the usually-vague and often-stated ways to explain failed prevention policies, including the ‘wickedness’ of policy problems, or the ‘pathology’ of public policy. Rather, I show that such policies may ‘fail’ even when there is wide and sincere cross-party agreement about the need to shift from reactive to more prevention policy design. I also suggest that the general explanation for failure – low ‘political will’ – is often damaging to the chances for future success.
  • Let’s start by defining prevention policy and policymaking.

When engaged in ‘prevention’, governments seek to:

  1. Reform policy.

Prevention policy is really a collection of policies designed to intervene as early as possible in people’s lives to improve their wellbeing and reduce inequalities and/or demand for acute services. The aim is to move from reactive to preventive public services, intervening earlier in people’s lives to address a wide range of longstanding problems – including crime and anti-social behaviour, ill health and unhealthy behaviour, low educational attainment, unemployment and low employability – before they become too severe.

  2. Reform policymaking.

Preventive policymaking describes the ways in which governments reform their practices to support prevention policy, including a commitment to:

  • ‘join up’ government departments and services to solve ‘wicked problems’ that transcend one area
  • give more responsibility for service design to local public bodies, stakeholders, ‘communities’ and service users
  • produce long term aims for outcomes, and
  • reduce short term performance targets in favour of long term outcomes agreements.

  3. Ensure that policy is ‘evidence based’.

Three general reasons why ‘prevention’ policies never seem to succeed.

  1. Policymakers don’t know what prevention means.

They express a commitment to prevention before defining it fully. When they start to make sense of prevention, they find out how difficult it is to pursue, and how many controversial choices it involves (see also uncertainty versus ambiguity)

  2. They engage in a policymaking system that is too complex to control.

They try to share responsibility with many actors and coordinate action to direct policy outcomes, without the ability to design those relationships and control policy outcomes.

Yet, they also need to demonstrate to the electorate that they are in control, and find out how difficult it is to localise and centralise policy.

  3. They are unable and unwilling to produce ‘evidence based policymaking’.

Policymakers seek cognitive shortcuts (and their organisational equivalents) to gather enough information to make ‘good enough’ decisions. When they seek evidence on prevention, they find that it is patchy, inconclusive, often counter to their beliefs, and not a ‘magic bullet’ to help justify choices.

Throughout this process, their commitment to prevention policy can be sincere but unfulfilled. They do not articulate fully what prevention means or appreciate the scale of their task. When they try to deliver prevention strategies, they face several problems that, on their own, would seem daunting. Many of the problems they seek to ‘prevent’ are ‘wicked’, or difficult to define and seemingly impossible to solve, such as poverty, unemployment, low quality housing and homelessness, crime, and health and education inequalities. They face stark choices on how far they should go to shift the balance between state and market, redistribute wealth and income, distribute public resources, and intervene in people’s lives to change their behaviour and ways of thinking. Their focus on the long term faces major competition from more salient short-term policy issues that prompt them to maintain ‘reactive’ public services. Their often-sincere desire to ‘localise’ policymaking often gives way to national electoral politics, in which central governments face pressure to make policy from the ‘top’ and be decisive. Their pursuit of ‘evidence based’ policymaking often reveals a lack of evidence about which policy interventions work and the extent to which they can be ‘scaled up’ successfully.

These problems will not be overcome if policymakers and influencers misdiagnose them:

  • If policy influencers make the simplistic assumption that this problem is caused by low political will, they will provide bad advice.
  • If new policymakers truly think that the problem was the low commitment and competence of their predecessors, they will begin with the same high hopes about the impact they can make, only to become disenchanted when they see the difference between their abstract aims and real world outcomes.
  • Poor explanation of limited success contributes to the high potential for (a) an initial period of enthusiasm and activity, replaced by (b) disenchantment and inactivity, and (c) for this cycle to be repeated without resolution.

Let’s add more detail to these general explanations:

1. What makes prevention so difficult to define?

When viewed as a simple slogan, ‘prevention’ seems like an intuitively appealing aim. It can generate cross-party consensus, bringing together groups on the ‘left’, seeking to reduce inequalities, and on the ‘right’, seeking to reduce economic inactivity and the cost of services.

Such consensus is superficial and illusory. When making a detailed strategy, prevention is open to many interpretations by many policymakers. Imagine the many types of prevention policy and policymaking that we could produce:

  1. What problem are we trying to solve?

Prevention policymaking represents a heroic solution to several crises: major inequalities, underfunded public services, and dysfunctional government.

  2. On what measures should we focus?

On which inequalities should we focus primarily? Wealth, occupation, income, race, ethnicity, gender, sexuality, disability, mental health.

On which measures of inequality? Economic, health, healthy behaviour, education attainment, wellbeing, punishment.

  3. On what solution should we focus?

To reduce poverty and socioeconomic inequalities, improve national quality of life, reduce public service costs, or increase value for money

  4. Which ‘tools’ or policy instruments should we use?

Redistributive policies to address ‘structural’ causes of poverty and inequality?

Or, individual-focused policies to: (a) boost the mental ‘resilience’ of public service users, (b) oblige, or (c) exhort people to change behaviour.

  5. How do we intervene as early as possible in people’s lives?

Primary prevention. Focus on the whole population to stop a problem occurring by investing early and/or modifying the social or physical environment. Akin to whole-population immunizations.

Secondary prevention. Focus on at-risk groups to identify a problem at a very early stage to minimise harm.

Tertiary prevention. Focus on affected groups to stop a problem getting worse.

  6. How do we pursue ‘evidence based policymaking’? 3 ideal-types

Using randomised control trials and systematic review to identify the best interventions?

Storytelling to share best governance practice?

‘Improvement’ methods to experiment on a small scale and share best practice?

  7. How does evidence gathering connect to long-term policymaking?

Does a national strategy drive long-term outcomes?

Does central government produce agreements with or targets for local authorities?

  8. Is preventive policymaking a philosophy or a profound reform process?

How serious are national governments – about localism, service user-driven public services, and joined up or holistic policymaking – when their elected policymakers are held to account for outcomes?

  8. What is the nature of state intervention?

It may be punitive or supportive. See: How would Lisa Simpson and Monty Burns make progressive social policy?

2. Making ‘hard choices’: what problems arise when politics meets policymaking?

When policymakers move from idiom and broad philosophy towards specific policies and practices, they find a range of obstacles, including:

The scale of the task becomes overwhelming, and not suited to electoral cycles.

Developing policy and reforming policymaking takes time, and the effect may take a generation to see.

There is competition for policymaking resources such as attention and money.

Prevention is general, long-term, and low salience. It competes with salient short-term problems that politicians feel compelled to solve first.

Prevention is akin to capital investment with no guarantee of a return. Reductions in funding for ‘fire-fighting’ or ‘frontline’ services, to pay for prevention initiatives, are hard to sell. Governments invest in small steps, and investment is vulnerable when money is needed quickly to fund public service crises.

The benefits are difficult to measure and see.

Short-term impacts are hard to measure, long-term impacts are hard to attribute to a single intervention, and prevention does not necessarily save money (or provide ‘cashable savings’).

Reactive policies have a more visible impact, such as to reduce hospital waiting times or increase the number of teachers or police officers.

Problems are ‘wicked’.

Getting to the ‘root causes’ of problems is not straightforward; policymakers often have no clear sense of the cause of problems or effect of solutions. Few aspects of prevention in social policy resemble disease prevention, in which we know the cause of many diseases, how to screen for them, and how to prevent them in a population.

Performance management is not conducive to prevention.

Performance management systems encourage public sector managers to focus on their services’ short-term and measurable targets over shared aims with public service partners or the wellbeing of their local populations.

Performance management is about setting priorities when governments have too many aims to fulfil. When central governments encourage local governing bodies to form long-term partnerships to address inequalities and meet short-term targets, the latter come first.

Governments face major ethical dilemmas.

Political choices co-exist with normative judgements concerning the role of the state and personal responsibility, often undermining cross-party agreement.

One aspect of prevention may undermine the other.

A cynical view of prevention initiatives is that they represent a quick political fix rather than a meaningful long-term solution:

  • Central governments describe prevention as the solution to public sector costs while also delegating policymaking responsibility to, and reducing the budgets of, local public bodies.
  • Then, public bodies prioritise their most pressing statutory responsibilities.

Someone must be held to account.

If everybody is involved in making and shaping policy, it becomes unclear who can be held to account over the results. This outcome is inconsistent with Westminster-style democratic accountability in which we know who is responsible and therefore who to praise or blame.

3. ‘The evidence’ is not a ‘magic bullet’

In a series of other talks, I identify the reasons why ‘evidence based policymaking’ (EBPM) does not describe the policy process well.

Elsewhere, I also suggest that it is more difficult for evidence to ‘win the day’ in the broad area of prevention policy compared to the more specific field of tobacco control.

Generally speaking, a good simple rule about EBPM is that there is never a ‘magic bullet’ to take the place of judgement. Politics is about making choices which benefit some while others lose out. You can use evidence to help clarify those choices, but not produce a ‘technical’ solution.

A further rule with ‘wicked’ problems is that the evidence is not good enough even to generate clarity about the cause of the problem. Or, you simply find out things you don’t want to hear.

Early intervention in ‘families policies’ seems to be a good candidate for the latter, for three main reasons:

  1. Very few interventions live up to the highest evidence standards

There are two main types of relevant ‘evidence based’ interventions in this field.

The first are ‘family intervention projects’ (FIPs). They generally focus on low income, often lone parent, families at risk of eviction linked to factors such as antisocial behaviour, and provide two forms of intervention:

  • intensive 24/7 support, including after school clubs for children and parenting skills classes, and treatment for addiction or depression in some cases, in dedicated core accommodation with strict rules on access and behaviour
  • an outreach model of support and training.

The evidence of success comes from evaluation plus a counterfactual: this intervention is expensive, but we think that it would have cost far more money and heartache if we had not intervened. There is generally no randomised control trial (RCT) to establish the cause of improved outcomes, or demonstrate that those outcomes would not have happened without this intervention.

The second are projects imported from other countries (primarily the US and Australia) based on their reputation for success built on RCT evidence. There is more quantitative evidence of success, but it is still difficult to know if the project can be transferred effectively and if its success can be replicated in another country with very different political drivers, problems, and services.

2. The evidence on ‘scaling up’ for primary prevention is relatively weak

Kenneth Dodge (2009) sums up a general problem:

  • there are few examples of taking effective specialist projects ‘to scale’
  • there are major issues around ‘fidelity’ to the original project when you scale up (including the need to oversee a major expansion in well-trained practitioners)
  • it is difficult to predict whether a programme that showed promise when applied to one population will have the same effect in a new and different population.

3. The evidence on secondary early intervention is also weak

This point about different populations with different motivations is demonstrated in a more recent (published 2014) study by Stephen Scott et al of two Incredible Years interventions – to address ‘oppositional defiant disorder symptoms and antisocial personality character traits’ in children aged 3-7 (for a wider discussion of such programmes see the Early Intervention Foundation’s Foundations for life: what works to support parent child interaction in the early years?).

They highlight a classic dilemma in early intervention: the evidence of effectiveness is only clear when children have been clinically referred (‘indicated approach’), but unclear when children have been identified as high risk using socioeconomic predictors (‘selective approach’):

An indicated approach is simpler to administer, as there are fewer children with severe problems, they are easier to identify, and their parents are usually prepared to engage in treatment; however, the problems may already be too entrenched to treat. In contrast, a selective approach targets milder cases, but because problems are less established, whole populations have to be screened and fewer cases will go on to develop serious problems.

For our purposes, this may represent the most inconvenient form of evidence on early intervention: you can intervene early on the back of very limited evidence of likely success, or have a far higher likelihood of success when you intervene later, when you are running out of time to call it ‘early intervention’.

Conclusion: vague consensus is no substitute for political choice

Governments begin with the sense that they have found the solution to many problems, only to find that they have to make and defend highly ‘political’ choices.

For example, see the UK government’s ‘imaginative’ use of evidence to make families policy. In a nutshell, it chose to play fast and loose with evidence, and demonise 117,000 families, to provide political cover for a redistribution of resources to family intervention projects.

We can, with good reason, object to this style of politics. However, we would also have to produce a feasible alternative.

For example, the Scottish Government has taken a different approach (perhaps closer to what one might expect in New Zealand), but it still needs to produce and defend a story about its choices, and it faces almost the same constraints as the UK. Its self-described ‘decisive shift’ to prevention was not a decisive shift to prevention.

Overall, prevention is no different from any other policy area, except that it has proven to be much more complicated and difficult to sustain than most others. Prevention is part of an excellent idiom but not a magic bullet for policy problems.

Further reading:

Prevention

See also

What do you do when 20% of the population causes 80% of its problems? Possibly nothing.

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

Filed under Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy

The UK government’s imaginative use of evidence to make policy

This post describes a new article published in British Politics (Open Access). Please find:

(1) A super-exciting video/audio powerpoint I use for a talk based on the article

https://youtu.be/A7qYj8nRkYg

(2) The audio alone (link)

(3) The powerpoint to download, so that the weblinks work (link) or the ppsx/ presentation file in case you are having a party (link)

(4) A written/ tweeted discussion of the main points

https://twitter.com/CairneyPaul/status/950317933158334464

In retrospect, I think the title was too subtle and clever-clever. I wanted to convey two meanings: imaginative as a euphemism for ridiculous (and often cynical) evidence use, and imaginative to argue that a government has to be imaginative with evidence. The latter has two meanings: imaginative (1) in the presentation and framing of an evidence-informed agenda, and (2) when facing pressure to go beyond the evidence and envisage policy outcomes.

So I describe two cases in which the government’s use of evidence seems cynical, when:

  1. Declaring complete success in turning around the lives of ‘troubled families’
  2. Exploiting vivid neuroscientific images to support ‘early intervention’

Then I describe more difficult cases in which supportive evidence is not clear:

  1. Family intervention project evaluations are of limited value and only tentatively positive
  2. Successful projects like FNP and Incredible Years have limited applicability or ‘scalability’

As scientists, we can shrug our shoulders about the uncertainty, but elected policymakers in government have to do something. So what do they do?

At this point of the article it will look like I have become an apologist for David Cameron’s government. Instead, I’m trying to demonstrate the value of comparing sympathetic/ unsympathetic interpretations and highlight the policy problem from a policymaker’s perspective:

Cairney 2018 British Politics discussion section

I suggest that they use evidence in a mix of ways to: describe an urgent problem, present an image of success and governing competence, and provide cover for more evidence-informed long term action.

The result is the appearance of top-down ‘muscular’ government and ‘a tendency for policy to change as it is implemented, such as when mediated by local authority choices and social workers maintaining a commitment to their professional values when delivering policy’.

I conclude by arguing that ‘evidence-based policy’ and ‘policy-based evidence’ are political slogans with minimal academic value. The binary divide between EBP/ PBE distracts us from more useful categories which show us the trade-offs policymakers have to make when faced with the need to act despite uncertainty.

Cairney British Politics 2018 Table 1

As such, it forms part of a far wider body of work …

https://twitter.com/CairneyPaul/status/950317956189302784

https://twitter.com/CairneyPaul/status/950317958529798144

In both cases, the common theme is that, although (1) the world of top-down central government gets most attention, (2) central governments don’t even know what problem they are trying to solve, far less (3) how to control policymaking and outcomes.

In that wider context, it is worth comparing this talk with the one I gave at the IDS (which, I reckon, is a good primer for – or prequel to – the UK talk):

https://twitter.com/Bloggs74/status/1085874777158500352

https://www.facebook.com/idsuk/videos/364796097654832/

See also:

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

Why doesn’t evidence win the day in policy and policymaking?

(found by searching for early intervention)

See also:

Here’s why there is always an expectations gap in prevention policy

Social investment, prevention and early intervention: a ‘window of opportunity’ for new ideas?

(found by searching for prevention)

Powerpoint for guest lecture: Paul Cairney UK Government Evidence Policy

Filed under Evidence Based Policymaking (EBPM), POLU9UK, Prevention policy, UK politics and policy

Here’s why there is always an expectations gap in prevention policy

Prevention is the most important social policy agenda of our time. Many governments make a sincere commitment to it, backed up by new policy strategies and resources. Yet, they also make limited progress before giving up or changing tack. Then, a new government arrives, producing the same cycle of enthusiasm and despair. This fundamental agenda never seems to get off the ground. We aim to explain this ‘prevention puzzle’, or the continuous gap between policymaker expectations and actual outcomes.

What is prevention policy and policymaking?

When engaged in ‘prevention’, governments seek to:

  1. Reform policy. To move from reactive to preventive public services, intervening earlier in people’s lives to ward off social problems and their costs when they seem avoidable.
  2. Reform policymaking. To (a) ‘join up’ government departments and services to solve ‘wicked problems’ that transcend one area, (b) give more responsibility for service design to local public bodies, stakeholders, ‘communities’ and service users, and (c) produce long term aims for outcomes, and reduce short term performance targets.
  3. Ensure that policy is ‘evidence based’.

Three reasons why they never seem to succeed

We use well established policy theories/ studies to explain the prevention puzzle.

  1. They don’t know what prevention means. They express a commitment to something before defining it. When they start to make sense of it, they find out how difficult it is to pursue, and how many controversial choices it involves.
  2. They engage in a policy process that is too complex to control. They try to share responsibility with many actors and coordinate action to direct policy outcomes, without the ability to design those relationships and control policy outcomes. Yet, they need to demonstrate to the electorate that they are in control. When they make sense of policymaking, they find out how difficult it is to localise and centralise.
  3. They are unable and unwilling to produce ‘evidence based policymaking’. Policymakers seek ‘rational’ and ‘irrational’ shortcuts to gather enough information to make ‘good enough’ decisions. When they seek evidence on preventing problems before they arise, they find that it is patchy, inconclusive, often counter to their beliefs, and unable to provide a ‘magic bullet’ to help make and justify choices.

Who knows what happens when they address these problems at the same time?

We draw on empirical and comparative UK and devolved government analysis to show in detail how policymaking differs according to the (a) type of government, (b) issue, and (c) era in which they operate.

Although it is reasonable to expect policymaking to be very different in, for example, the UK versus Scottish, or Labour versus Conservative governments, and in eras of boom versus austerity, a key part of our research is to show that the same basic ‘prevention puzzle’ exists at all times. You can’t simply solve it with a change of venue or government.

Our book – Why Isn’t Government Policy More Preventive? – is in press (Oxford University Press) and will be out in January 2020, with sample chapters appearing here. Our longer term agenda – via IMAJINE – is to examine how policymakers try to address ‘spatial justice’ and reduce territorial inequalities across Europe partly by pursuing prevention and reforming public services.

Filed under Evidence Based Policymaking (EBPM), Prevention policy, public policy, UK politics and policy

Evidence based policymaking: 7 key themes

I looked back at my blog posts on the politics of ‘evidence based policymaking’ and found that I wrote quite a lot (particularly from 2016). Here is a list based on 7 key themes.

1. Use psychological insights to influence the use of evidence

My most current concern. The same basic theme is that (a) people (including policymakers) are ‘cognitive misers’, seeking ‘rational’ and ‘irrational’ shortcuts to gather information for action, so you won’t get far if you (b) bombard them with information, or (c) call them idiots.

Three ways to communicate more effectively with policymakers (shows how to use psychological insights to promote evidence in policymaking)

Using psychological insights in politics: can we do it without calling our opponents mental, hysterical, or stupid? (yes)

The Psychology of Evidence Based Policymaking: Who Will Speak For the Evidence if it Doesn’t Speak for Itself? (older paper, linking studies of psychology with studies of EBPM)

Older posts on the same theme:

Is there any hope for evidence in emotional debates and chaotic government? (yes)

We are in danger of repeating the same mistakes if we bemoan low attention to ‘facts’

These complaints about ignoring science seem biased and naïve – and too easy to dismiss

How can we close the ‘cultural’ gap between the policymakers and scientists who ‘just don’t get it’?

2. How to use policy process insights to influence the use of evidence

I try to simplify key insights about the policy process to show how to use evidence in it. One key message is to give up on the idea of an orderly policy process described by the policy cycle model. What should you do if a far more complicated process exists?

Why don’t policymakers listen to your evidence?

The Politics of Evidence Based Policymaking: 3 messages (3 ways to say that you should engage with the policy process that exists, not a mythical process that will never exist)

Three habits of successful policy entrepreneurs (shows how entrepreneurs are influential in politics)

Why doesn’t evidence win the day in policy and policymaking? and What does it take to turn scientific evidence into policy? Lessons for illegal drugs from tobacco and There is no blueprint for evidence-based policy, so what do you do? (3 posts describing the conditions that must be met for evidence to ‘win the day’)

Writing for Impact: what you need to know, and 5 ways to know it (explains how our knowledge of the policy process helps communicate to policymakers)

How can political actors take into account the limitations of evidence-based policy-making? 5 key points (presentation to European Parliament-European University Institute ‘Policy Roundtable’ 2016)

Evidence Based Policy Making: 5 things you need to know and do (presentation to Open Society Foundations New York 2016)

What 10 questions should we put to evidence for policy experts? (part of a series of videos produced by the European Commission)

3. How to combine principles on ‘good evidence’, ‘good governance’, and ‘good practice’

My argument here is that EBPM is about deciding at the same time what is: (1) good evidence, and (2) a good way to make and deliver policy. If you just focus on one at a time – or consider one while ignoring the other – you cannot produce a defendable way to promote evidence-informed policy delivery.

Kathryn Oliver and I have just published an article on the relationship between evidence and policy (summary of and link to our article on this very topic)

We all want ‘evidence based policy making’ but how do we do it? (presentation to the Scottish Government in 2016)

The ‘Scottish Approach to Policy Making’: Implications for Public Service Delivery

The politics of evidence-based best practice: 4 messages

The politics of implementing evidence-based policies

Policy Concepts in 1000 Words: the intersection between evidence and policy transfer

Key issues in evidence-based policymaking: comparability, control, and centralisation

The politics of evidence and randomised control trials: the symbolic importance of family nurse partnerships

What Works (in a complex policymaking system)?

How Far Should You Go to Make Sure a Policy is Delivered?

4. Face up to your need to make profound choices to pursue EBPM

These posts have arisen largely from my attendance at academic-practitioner conferences on evidence and policy. Many participants tell the same story about the primacy of scientific evidence challenged by post-truth politics and emotional policymakers. I don’t find this argument convincing or useful. So, in many posts, I challenge these participants to think about more pragmatic ways to sum up and do something effective about their predicament.

Political science improves our understanding of evidence-based policymaking, but does it produce better advice? (shows how our knowledge of policymaking clarifies dilemmas about engagement)

The role of ‘standards for evidence’ in ‘evidence informed policymaking’ (argues that a strict adherence to scientific principles may help you become a good researcher but not an effective policy influencer)

How far should you go to secure academic ‘impact’ in policymaking? From ‘honest brokers’ to ‘research purists’ and Machiavellian manipulators (you have to make profound ethical and strategic choices when seeking to maximise the use of evidence in policy)

Principles of science advice to government: key problems and feasible solutions (calling yourself an ‘honest broker’ while complaining about ‘post-truth politics’ is a cop out)

What sciences count in government science advice? (political science, obvs)

I know my audience, but does my other audience know I know my audience? (compares the often profoundly different ways in which scientists and political scientists understand and evaluate EBPM – this matters because, for example, we rarely discuss power in scientist-led debates)

Is Evidence-Based Policymaking the same as good policymaking? (no)

Idealism versus pragmatism in politics and policymaking: … evidence-based policymaking (how to decide between idealism and pragmatism when engaging in politics)

Realistic ‘realist’ reviews: why do you need them and what might they look like? (if you privilege impact you need to build policy relevance into systematic reviews)

‘Co-producing’ comparative policy research: how far should we go to secure policy impact? (describes ways to build evidence advocacy into research design)

The Politics of Evidence (review of – and link to – Justin Parkhurst’s book on the ‘good governance’ of evidence production and use)

5. For students and researchers wanting to read/ hear more

These posts are relatively theory-heavy, linking quite clearly to the academic study of public policy. Hopefully they provide a simple way into the policy literature which can, at times, be dense and jargony.

‘Evidence-based Policymaking’ and the Study of Public Policy

Policy Concepts in 1000 Words: ‘Evidence Based Policymaking’

Practical Lessons from Policy Theories (series of posts on the policy process, offering potential lessons for advocates of evidence use in policy)

Writing a policy paper and blog post 

12 things to know about studying public policy

Can you want evidence based policymaking if you don’t really know what it is? (defines each word in EBPM)

Can you separate the facts from your beliefs when making policy? (no, very no)

Policy Concepts in 1000 Words: Success and Failure (Evaluation) (using evidence to evaluate policy is inevitably political)

Policy Concepts in 1000 Words: Policy Transfer and Learning (so is learning from the experience of others)

Four obstacles to evidence based policymaking (EBPM)

What is ‘Complex Government’ and what can we do about it? (read about it)

How Can Policy Theory Have an Impact on Policy Making? (on translating policy theories into useful advice)

The role of evidence in UK policymaking after Brexit (argues that many challenges/ opportunities for evidence advocates will not change after Brexit)

Why is there more tobacco control policy than alcohol control policy in the UK? (it’s not just because there is more evidence of harm)

Evidence Based Policy Making: If You Want to Inject More Science into Policymaking You Need to Know the Science of Policymaking and The politics of evidence-based policymaking: focus on ambiguity as much as uncertainty and Revisiting the main ‘barriers’ between evidence and policy: focus on ambiguity, not uncertainty and The barriers to evidence based policymaking in environmental policy (early versions of what became the chapters of the book)

6. Using storytelling to promote evidence use

This is increasingly a big interest for me. Storytelling is key to the effective conduct and communication of scientific research. Let’s not pretend we’re objective people just stating the facts (which is the least convincing story of all). So far, so good, except to say that the evidence on the impact of stories (for policy change advocacy) is limited. The major complication is that (a) the story you want to tell and have people hear interacts with (b) the story that your audience members tell themselves.

Combine Good Evidence and Emotional Stories to Change the World

Storytelling for Policy Change: promise and problems

Is politics and policymaking about sharing evidence and facts or telling good stories? Two very silly examples from #SP16

7. The major difficulties in using evidence for policy to reduce inequalities

These posts show how policymakers think about how to combine (a) often-patchy evidence with (b) their beliefs and (c) an electoral imperative to produce policies on inequalities, prevention, and early intervention. I suggest that it’s better to understand and engage with this process than complain about policy-based-evidence from the side-lines. If you do the latter, policymakers will ignore you.

The UK government’s imaginative use of evidence to make policy 

What do you do when 20% of the population causes 80% of its problems? Possibly nothing.

The theory and practice of evidence-based policy transfer: can we learn how to reduce territorial inequalities?

We need better descriptions than ‘evidence-based policy’ and ‘policy-based evidence’: the case of UK government ‘troubled families’ policy

How can you tell the difference between policy-based-evidence and evidence-based-policymaking?

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

Key issues in evidence-based policymaking: comparability, control, and centralisation

The politics of evidence and randomised control trials: the symbolic importance of family nurse partnerships

Two myths about the politics of inequality in Scotland

Social investment, prevention and early intervention: a ‘window of opportunity’ for new ideas?

A ‘decisive shift to prevention’: how do we turn an idea into evidence based policy?

Can the Scottish Government pursue ‘prevention policy’ without independence?

Note: these issues are discussed in similar ways in many countries. One example that caught my eye today:

https://twitter.com/LisaC_Research/status/900182047221661696

All of this discussion can be found under the EBPM category: https://paulcairney.wordpress.com/category/evidence-based-policymaking-ebpm/

See also the special issue on maximizing the use of evidence in policy

Filed under agenda setting, Evidence Based Policymaking (EBPM), Prevention policy, public policy, Storytelling, UK politics and policy