Category Archives: UK politics and policy

COVID-19 policy in the UK: Table 2: Summary of SAGE minutes, January-June 2020

This post is part 8 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

The table is too big to reproduce here, so you have the following options:

Table 2 in PDF

Table 2 as a word document

Or, if you prefer not to read the posts individually:

The whole thing in PDF

The whole thing as a Word document

The full list of SAGE posts:

COVID-19 policy in the UK: yes, the UK Government did ‘follow the science’

Did the UK Government ‘follow the science’? Reflections on SAGE meetings

The role of SAGE and science advice to government

The overall narrative underpinning SAGE advice and UK government policy

SAGE meetings from January-June 2020

SAGE Theme 1. The language of intervention

SAGE Theme 2. Limited capacity for testing, forecasting, and challenging assumptions

SAGE Theme 3. Communicating to the public

COVID-19 policy in the UK: Table 2: Summary of SAGE minutes, January-June 2020

3 Comments

Filed under COVID-19, Evidence Based Policymaking (EBPM), Prevention policy, Public health, UK politics and policy

COVID-19 policy in the UK: SAGE Theme 3. Communicating to the public

This post is part 7 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

SAGE’s emphasis on uncertainty and limited knowledge extended to the evidence on how to influence behaviour via communication:

‘there is limited evidence on the best phrasing of messages, the barriers and stressors that people will encounter when trying to follow guidance, the attitudes of the public to the interventions, or the best strategies to promote adherence in the long-term’ (SPI-B Meeting paper 3.3.20: 2)

Early on, SAGE minutes continually described the potential problems of communicating risk and encouraging behavioural change through communication (in other words, reflecting low expectations for the types of quarantine measures associated with China and South Korea).

  • It sought ‘behavioural science input on public communication’ and ‘agreed on the importance of behavioural science informing policy – and on the importance of public trust in HMG’s approach’ (28.1.20: 2).
  • It worried about how the public might interpret ‘case fatality rate’, given the different ways to describe and interpret frequencies and risks (4.2.20: 3).
  • It stated that ‘Epidemiological terms need to be made clearer in the planning documents to avoid ambiguity’ (11.2.20: 3).
  • Its extensive discussion of behavioural science includes: there will be public scepticism and inaction until the first deaths are confirmed; the main aim is to motivate people by relating behavioural change to their lives; messaging should stress ‘personal responsibility and responsibility to others’ and be clear on which measures are effective; and ‘National messaging should be clear and definitive: if such messaging is presented as both precautionary and sufficient, it will reduce the likelihood of the public adopting further unnecessary or contradictory behaviours’ (13.2.20: 2-3).
  • Banning large public events could signal the need to change behaviour more generally, but evidence for its likely impact is unavailable (SPI-M-O, 11.2.20: 1).

Generally speaking, the assumption underpinning communication is that behavioural change will come largely from communication (encouragement and exhortation) rather than imposition. Hence, for example, the SPI-B (25.2.20: 2) recommendation on limiting the ‘risk of public disorder’:

  • ‘Provide clear and transparent reasons for different strategies: The public need to understand the purpose of the Government’s policy, why the UK approach differs to other countries and how resources are being allocated. SPI-B agreed that government should prioritise messaging that explains clearly why certain actions are being taken, ahead of messaging designed solely for reassuring the public.
  • This should also set clear expectations on how the response will develop, e.g. ensuring the public understands what they can expect as the outbreak evolves and what will happen when large numbers of people present at hospitals. The use of early messaging will help, as a) individuals are likely to be more receptive to messages before an issue becomes controversial and b) it will promote a sense the Government is following a plan.
  • Promote a sense of collectivism: All messaging should reinforce a sense of community, that “we are all in this together.” This will avoid increasing tensions between different groups (including between responding agencies and the public); promote social norms around behaviours; and lead to self-policing within communities around important behaviours’.

The underpinning assumption is that the government should treat people as ‘rational actors’: explain risk and how to reduce it, support existing measures by the public to socially distance, be transparent, explain if the UK is doing things differently to other countries, and recognise that these measures are easier for some than others (13.3.20: 3).

In that context, SPI-B Meeting paper 22.3.20 describes how to enable social distancing with reference to the ‘behaviour change wheel’ (Michie et al, 2011): ‘There are nine broad ways of achieving behaviour change: Education, Persuasion, Incentivisation, Coercion, Enablement, Training, Restriction, Environmental restructuring, and Modelling’ and many could reinforce each other (22.3.20: 1). The paper comments on current policy in relation to 5 elements:

  1. Education – clarify guidance (generally, and for shielding), e.g. through interactive website, tailored to many audiences
  2. Persuasion – increase perceived threat among ‘those who are complacent, using hard-hitting emotional messaging’ while providing clarity and positive messaging (tailored to your audience’s motivation) on what action to take (22.3.20: 1-2).
  3. Incentivisation – emphasise social approval as a reward for behaviour change
  4. Coercion – ‘Consideration should be given to enacting legislation, with community involvement, to compel key social distancing measures’ (combined with encouraging ‘social disapproval but with a strong caveat around unwanted negative consequences’) (22.3.20: 2)
  5. Enablement – ensure that people have alternative access to social contact, food, and other resources, particularly those feeling the unequal impact of lockdown (such as vulnerable people shielding, aided by community support).

Apparently, section 3 of SPI-B’s meeting paper (1.4.20b: 2) had been redacted because it was critical of a UK Government ‘Framework’ containing 4 new proposals for greater compliance: ‘17) increasing the financial penalties imposed; 18) introducing self-validation for movements; 19) reducing exercise and/or shopping; 20) reducing non-home working’. On 17, it suggests that (e.g.) fining someone for exercising more than 1km from their home could contribute to lower support for the policy overall. On 17-19, it suggests that most people are already complying, so there is no evidence to support more targeted measures. It is more positive about 20, since it could reduce non-home working (especially if financially supported). Generally, it suggests that ministers should ‘also consider the role of rewards and facilitations in improving adherence’ and use organisational changes, such as staggered work hours and new uses of space, rather than simply focusing on individuals.

Communication after the lockdown

SAGE suggests that communication problems are more complicated during the release of lockdown measures (in other words, without the ability to present the relatively-low-ambiguity message ‘stay at home’). Examples (mostly from SPI-B and its contributors) include:

  • Address potential confusion, causing false concern or reassurance, regarding antigen and antibody tests (meeting papers 1.4.20c: 3; 13.4.20b: 1-4; 22.4.20b: 1-5; 29.4.20a: 1-4)
  • When notifying people about the need to self-isolate, address the trade-offs between symptom versus positive test based notifications (meeting paper 29.4.20a: 1-4; 5.5.20: 1-8)
  • If you are worried about public ‘disorder’, focus on clear, effective, tailored communication, using local influencers, appealing to sympathetic groups (like NHS staff), and co-producing messages between the police and public (in other words, police via consent, and do not exacerbate grievances) (meeting papers 19.4.20: 1-4; 21.4.20: 1-3; 4.5.20: 1-11)
  • Be wary of lockdowns specific to very small areas, which undermine the ‘all in it together’ message (REDACTED and Clifford Stott, no date: 1). If you must do it, clarify precisely who is affected and what they should do, support the people most vulnerable and impacted (e.g. financially), and redesign physical spaces (meeting paper SPI-B 22.4.20a)
  • When reopening schools (fully or partly), communication is key to managing the inevitably complex and unpredictable behavioural consequences (so, for example, work with parents, teachers, and other stakeholders to co-produce clear guidance) (29.4.20d: 1-10)
  • On the introduction of Alert Levels, as part of the Joint Biosecurity Centre work on local outbreaks (described in meeting paper 20.5.20a: 1-9): build public trust and understanding regarding JBC alert levels, and relate them very clearly to expected behaviour (SAGE 28.5.20). Each Alert Level should relate clearly to a required response in that area, and ‘public communications on Alert Levels needs many trusted messengers giving the same advice, many times’ (meeting paper 27.5.20b: 3).
  • On transmission between social networks, ‘Communicate two key principles: 1. People whose work involves large numbers of contacts with different people should avoid close, prolonged, indoor contact with anyone as far as possible … 2. People with different workplace networks should avoid meeting or sharing the same spaces’ (meeting paper 27.5.20b: 1).
  • On outbreaks in ‘forgotten institutional settings’ (including prisons, homeless hostels, migrant dormitories, and long-stay mental health settings): address the unusually low levels of trust in (or awareness of) government messaging among so-called ‘hard to reach groups’ (meeting paper 28.5.20a: 1).

See also:

SPI-M (Meeting paper 17.3.20b: 4) list of how to describe probabilities. This is more important than it looks, since there is a potentially major gap between the public and advisory group understanding of words like ‘probably’ (compare with the CIA’s Words of Estimative Probability).

[Image: SAGE language of probability, meeting paper 17.3.20b, p4]
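As an illustrative sketch, a probability ‘yardstick’ of this kind can be encoded as a lookup from numeric ranges to estimative phrases. The bands below follow the spirit of the UK PHIA probability yardstick and are an assumption for illustration, not SAGE’s exact table:

```python
# Illustrative mapping of probability words to numeric bands, in the spirit
# of the SAGE/PHIA 'probability yardstick'. The exact bands are assumptions
# for illustration, not the table in meeting paper 17.3.20b.

YARDSTICK = [
    ((0.00, 0.05), "remote chance"),
    ((0.05, 0.20), "highly unlikely"),
    ((0.20, 0.40), "unlikely"),
    ((0.40, 0.55), "realistic possibility"),
    ((0.55, 0.80), "likely / probable"),
    ((0.80, 0.95), "highly likely"),
    ((0.95, 1.00), "almost certain"),
]

def describe(p: float) -> str:
    """Return the estimative-probability phrase for a probability p in [0, 1]."""
    for (lo, hi), phrase in YARDSTICK:
        if lo <= p < hi or (p == 1.0 and hi == 1.0):
            return phrase
    raise ValueError("p must be between 0 and 1")

print(describe(0.6))   # likely / probable
```

The point of fixing the bands is precisely the gap the post describes: without a shared table, one reader may hear ‘probably’ as 55% and another as 90%.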


COVID-19 policy in the UK: SAGE Theme 2. Limited capacity for testing, forecasting, and challenging assumptions

This post is part 6 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

Limited testing

Oral evidence to the Health and Social Care committee highlights the now-well-documented limits to UK testing capacity and PPE stocks (see also NERVTAG on PPE). SAGE does not discuss testing capacity much in the beginning, although on 10.3.20 it lists as an action point: ‘Plans for how PHE can move from 1,000 serology tests to 10,000 tests per week’, and by 16.3.20 it describes the urgent need to scale up testing – perhaps with commercial involvement and testing at home (if accuracy can be ensured) – and to secure sufficient data to track the epidemic well enough to inform operational decisions. From April, it highlights the need for a ‘national testing strategy’ to cover NHS patients, staff, an epidemiological survey, and the community (2.4.20), and the need for far more testing is a feature of almost every meeting from then on.

Limited contact tracing

Initially, SAGE describes a quite-low contact tracing capacity: ‘Currently, PHE can cope with five new cases a week (requiring isolation of 800 contacts). Modelling suggests this capacity could be increased to 50 new cases a week (8,000 contact isolations)’ (18.2.20: 1).

Previously, it had noted that the point would come when transmission was too high to make contact tracing worthwhile, particularly since many (e.g. asymptomatic) cases may already have been missed (20.2.20: 2) and the necessary testing capacity was not in place (16.4.20): ‘PHE to work with SPI-M to develop criteria for when contact tracing is no longer worthwhile. This should include consideration of any limiting factors on testing and alternative methods of identifying epidemic evolution and characteristics’ (11.2.20: 3; see also Testing and contact tracing).

It returned to the feasibility question after the lockdown, with:

  • SPI-M (meeting paper 4.20d: 1-3) estimating that effective contact tracing (80% of non-household cases, within 2 days) could reduce R by 30-60%, provided that many people could be quarantined, multiple times; and,
  • SPI-B (meeting paper 4.20a: 1-3) advising on the need to clarify to people how it would work and what they should do, redesign physical spaces, and conduct new qualitative research and stakeholder engagement to ‘help us to understand more clearly the specific drivers, enablers and barriers for new behavioural recommendations’ to address an unprecedented problem in the UK (22.4.20a: 2). SPI-B also describes the trade-offs between app-informed systems (notification based on symptoms would suit people seeking to be precautionary, but could reduce compliance among people who believe the risk to be low) (see meeting papers 29.4.20: 3 and 5.5.20: 1-8)
  • SAGE noting ongoing work on clusters and super-spreading events, which necessitate cluster-based contact tracing (11.6.20: 3)
  • A more general message that contact tracing will be overwhelmed if lockdown measures are released too soon, raising R well above 1 and causing incidence to rise too quickly (e.g. 14.5.20)

Limited capacity to generate the information necessary for forecasting

This type of discussion exemplifies a general and continuous focus on the lack of data to inform advice:

‘24. Real-time forecasting models rely on deriving information on the epidemic from surveillance. If transmission is established in the UK there will necessarily be a delay before sufficiently accurate forecasts in the UK are available. 25. Decisions being made on whether to modify or lift non-pharmaceutical interventions require accurate understanding of the state of the epidemic. Large-scale serological data would be ideal, especially combined with direct monitoring of contact behaviour. 26. Preliminary forecasts and accurate estimates of epidemiological parameters will likely be available in the order of weeks and not days following widespread outbreaks in the UK (or a similar country). While some estimates may be available before this time their accuracy will be much more limited. 27. The UK hospitalisation rate and CFR will be very important for operational planning and will be estimated over a similar timeframe. They may take longer depending on the availability of data’ (Meeting paper 2.3.20: 3-4).

A limited capacity to reach a relatively cautious consensus?

These limitations to information contributed to the gap between SAGE’s estimates of UK transmission (such as its comparison with Italy) and the UK’s actual, much faster rate of transmission:

‘the UK likely has thousands of cases – as many as 5,000 to 10,000 – which are geographically spread nationally … The UK is considered to be 4-5 weeks behind Italy but on a similar curve (6-8 weeks behind if interventions are applied)’ (10.3.20: 1)

‘Based on limited available evidence, SAGE considers that the UK is 2 to 4 weeks behind Italy in terms of the epidemic curve’ (18.3.20: 1)

In fact, the UK was under 2 weeks behind Italy on 10th March, suggesting that its lockdown measures were put in place too late.

At the heart of this estimate was the under-estimated doubling time of infection (‘the time it takes for the number of cases to double in size’, Meeting paper 3.2.20a):

  • although described as 3-4 days (28.1.20: 1) then 4-6 days (Meeting paper 2.3.20) based on Wuhan, and 3-5 days based on Hubei (Meeting paper 3.2.20a),
  • SAGE estimates ‘every 5-6 days’ (16.3.20: 1) and states that ‘Assuming a doubling time of around 5-7 days continues to be reasonable’ (18.3.20: 1).
  • Not until meeting 18 does SAGE estimate the doubling time (of ICU patients) at 3-4 days (23.3.20). By meeting 19, it describes the doubling time in hospitals as 3.3 days (26.3.20: 1).
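The practical stakes of these estimates can be sketched with simple exponential growth (a minimal illustration, not a SAGE model): over a fortnight, a 3-day doubling time produces vastly more cases than a 5-7 day one.

```python
# Minimal sketch of why the assumed doubling time matters so much:
# cases grow as initial * 2**(days / doubling_time).

def cases_after(initial: float, days: float, doubling_time: float) -> float:
    """Cases after `days`, starting from `initial`, under exponential growth."""
    return initial * 2 ** (days / doubling_time)

# Starting from 1,000 cases, compare two weeks of growth under
# 3-, 5- and 7-day doubling times (the estimates discussed above):
for td in (3, 5, 7):
    print(td, round(cases_after(1000, 14, td)))
```

With a 3-day doubling time the epidemic is roughly 25 times larger after two weeks, versus roughly 4 times larger under a 7-day assumption, which is why the choice of estimate carried such weight for the timing of intervention.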

Kit Yates suggests that (a) the UK exhibited a 3-day doubling time during this period (Huffington Post), and (b) many members of SAGE and SPI-M would have preferred to model on the assumption of 3 days:

Having spoken to some of the modellers on SPI-M, not all of them were missing this. Many of the groups had fitted models to data and come up with shorter and more realistic doubling times, maybe around the 3-day mark, but their estimates never found consensus within the group, so some members of SPI-M have communicated their concerns to me that some of the modelling groups had more influence over the consensus decision than others, which meant that some opinions or estimates which might have been valid, didn’t get heard, and consequently weren’t passed on up the line to SAGE, and then further towards the government, so an over-reliance on certain models or modelling groups might have been costly in this situation (interview, Kit Yates, More or Less, 10.6.20: 4m47s-5m27s)

Yates then suggests that the most listened-to model – led by Neil Ferguson, published 16.3.20 – estimates a doubling time of 5 days, based on early data from Wuhan, using an estimate of R of 2.4 (and a generation time of 6.5 days), ‘which we now know to be way too low’ when we look at the UK data:

‘If they had just plotted the early trajectory of the epidemics against the current UK data at that point, they would have seen [by 14.3.20] that their model was starting to underestimate the number of cases and then the number of deaths which were occurring in the UK’ (interview, Kit Yates, More or Less, 10.6.20: 7m2s-7m15s)
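The arithmetic behind Yates’ point can be sketched with a simple fixed-generation-interval approximation (an illustrative assumption, not the Imperial model): R and the doubling time Td are linked by R = 2**(Tg/Td), where Tg is the generation time.

```python
import math

# Illustrative approximation linking R, generation time (Tg) and
# doubling time (Td), assuming a fixed generation interval:
#   R = 2**(Tg / Td)   <=>   Td = Tg * ln(2) / ln(R)

def doubling_time(R: float, generation_time: float) -> float:
    """Doubling time implied by R under a fixed generation interval."""
    return generation_time * math.log(2) / math.log(R)

def implied_R(doubling: float, generation_time: float) -> float:
    """R implied by an observed doubling time under the same approximation."""
    return 2 ** (generation_time / doubling)

print(round(doubling_time(2.4, 6.5), 1))  # ~5.1 days, consistent with the model's 5-day estimate
print(round(implied_R(3.0, 6.5), 1))      # ~4.5: the R implied by a 3-day doubling time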

Yates’ account highlights not only

  1. the effect of uncertainty and limited capacity to generate more information, but also
  2. the wider effect of path dependence, in which the (a) written and unwritten rules and norms of organisations, and (b) enduring ways of thinking (in individuals and groups, and political systems) place limits on new action. These limits are often necessary and beneficial, and often unnecessary and harmful.

Compare with Vallance’s oral evidence to the Health and Social Care committee (17.3.20: q96):

‘If you thought SAGE and the way SAGE works was a cosy consensus of agreeing scientists, you would be very mistaken. It is a lively, robust discussion, with multiple inputs. We do not try to get everybody saying exactly the same thing’.


COVID-19 policy in the UK: SAGE Theme 1. The language of intervention

This post is part 5 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

There is often a clear distinction between a strategy designed to (a) eliminate a virus/ the spread of disease quickly, and (b) manage the spread of infection over the long term (see The overall narrative).

However, the language of virus management is generally confusing. We need to be careful when interpreting the language used in these minutes, and in other sources such as oral evidence to House of Commons committees, particularly when comparing the language used at the beginning (when people were also unsure what to call SARS-CoV-2 and COVID-19) to present-day debates.

For example, in January, it is tempting to contrast ‘slow down the spread of the outbreak domestically’ (28.1.20: 2) with a strategy towards ‘extinction’, but the proposed actions may be the same even if the expectations of impact are different. Some people interpret these differences as indicative of a profoundly different approach (delay versus eradicate); others describe them as merely semantic.

By February, SAGE’s expectation is of an inevitable epidemic and inability to contain COVID-19, prompting it to describe the inevitable series of stages:

‘Priorities will shift during a potential outbreak from containment and isolation on to delay and, finally, to case management … When there is sustained transmission in the UK, contact tracing will no longer be useful’ (18.2.20: 1; its discussion on 20.2.20: 2 also concludes that ‘individual cases could already have been missed – including individuals advised that they are not infectious’).

Mitigation versus suppression

On the face of it, it looks like there is a major difference in the ways in which (a) the Imperial College COVID-19 Response Team and (b) SAGE describe possible policy responses. The Imperial paper makes a distinction between mitigation and suppression:

  1. Its ‘mitigation strategy scenarios’ highlight the relative effects of partly-voluntary measures on mortality and demand for ‘critical care beds’ in hospitals: (voluntary) ‘case isolation in the home’ (people with symptoms stay at home for 7 days), ‘voluntary home quarantine’ (all members of the household stay at home for 14 days if one member has symptoms), (government enforced) ‘social distancing of those over 70’ or ‘social distancing of entire population’ (while still going to work, school or University), and closure of most schools and universities. It omits ‘stopping mass gatherings’ because ‘the contact-time at such events is relatively small compared to the time spent at home, in schools or workplaces and in other community locations such as bars and restaurants’ (2020a: 8). Assuming 70-75% compliance, it describes the combination of ‘case isolation, home quarantine and social distancing of those aged over 70’ as the most impactful, but predicts that ‘mitigation is unlikely to be a viable option without overwhelming healthcare systems’ (2020a: 8-10). These measures would only ‘reduce peak critical care demand by two-thirds and halve the number of deaths’ (to approximately 250,000).
  2. Its ‘suppression strategy scenarios’ describe what it would take to reduce the rate of infection (R) from the estimated 2.0-2.6 to 1 or below (in other words, the game-changing point at which one person would infect no more than one other person) and reduce ‘critical care requirements’ to manageable levels. It predicts that a combination of four options – ‘case isolation’, ‘social distancing of the entire population’ (the measure with the largest impact), ‘household quarantine’ and ‘school and university closure’ – would reduce critical care demand from its peak ‘approximately 3 weeks after the interventions are introduced’, and contribute to a range of 5,600-48,000 deaths over two years (depending on the current R and the ‘trigger’ for action in relation to the number of occupied critical care beds) (2020a: 13-14).
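The core arithmetic of suppression can be sketched as follows (a minimal illustration, assuming transmission falls in proportion to the reduction in contacts, which is a simplification of the Imperial modelling):

```python
# Illustrative sketch: fraction by which transmission must fall to push
# R down to a target, assuming new_R = R * (1 - reduction).
# This is a simplification, not the Imperial Response Team's model.

def required_reduction(R: float, target_R: float = 1.0) -> float:
    """Fractional reduction in transmission needed to reach target_R."""
    return 1 - target_R / R

# For the estimated pre-intervention range of R = 2.0 to 2.6:
for R in (2.0, 2.6):
    print(R, round(required_reduction(R), 2))
```

On this simple reading, an R of 2.0-2.6 requires cutting transmission by roughly 50-62% just to reach R = 1, which is why the suppression scenarios combine several measures rather than relying on any one.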

In comparison, the SAGE meeting paper (26.2.20b: 1-3), produced 2-3 weeks earlier, pretty much assumes away the possible distinction between mitigation versus suppression measures (which Vallance has described as semantic rather than substantive – scroll down to The distinction between mitigation and suppression measures). In other words, it assumes ‘high levels of compliance over long periods of time’ (26.2.20b: 1). As such, we can interpret SAGE’s discussion as (a) requiring high levels of compliance for these measures to work (the equivalent of Imperial’s description of suppression), while (b) not describing how to use (more or less voluntary versus impositional) government policy to secure compliance. In comparison, Imperial equates suppression with the relatively-short-term measures associated with China and South Korea (while noting uncertainty about how to maintain such measures until a vaccine is produced).

One reason for SAGE to assume compliance in its scenario building is to focus on the contribution of each measure, generally taking place over 13 weeks, to delaying the peak of infection (while stating that ‘It will likely not be feasible to provide estimates of the effectiveness of individual control measures, just the overall effectiveness of them all’, 26.2.20b: 1), while taking into account their behavioural implications (26.2.20b: 2-3).

  • School closures could contribute to a 3-week delay, especially if combined with FE/ HE closures (but with an unequal impact on ‘Those in lower socio-economic groups … more reliant on free school meals or unable to rearrange work to provide childcare’).
  • Home isolation (65% of symptomatic cases stay at home for 7 days) could contribute to a 2-3 week delay (and is the ‘Easiest measure to explain and justify to the public’).
  • ‘Voluntary household quarantine’ (all members of the household isolate for 14 days) would have a similar effect – assuming 50% compliance – but with far more implications for behavioural public policy:

‘Resistance & non-compliance will be greater if impacts of this policy are inequitable. For those on low incomes, loss of income means inability to pay for food, heating, lighting, internet. This can be addressed by guaranteeing supplies during quarantine periods.

Variable compliance, due to variable capacity to comply, may lead to dissatisfaction.

Ensuring supplies flow to households is essential. A desire to help among the wider community (e.g. taking on chores, delivering supplies) could be encouraged and scaffolded to support quarantined households.

There is a risk of stigma, so ‘voluntary quarantine’ should be portrayed as an act of altruistic civic duty’.

  • ‘Social distancing’ (‘enacted early’), in which people restrict themselves to essential activity (work and school), could produce a 3-5 week delay (and is likely to be supported in relation to mass leisure events, albeit less so when work activities involve a lot of contact).

[Note that it is not until May that it addresses this issue of feasibility directly (and, even then, it does not distinguish between technical and political feasibility): ‘It was noted that a useful addition to control measures SAGE considers (in addition to scientific uncertainty) would be the feasibility of monitoring/ enforcement’ (7.5.20: 3).]

As theme 2 suggests, there is a growing recognition that these measures should have been introduced by early March (such as via the Coronavirus Act 2020, not passed until 25.3.20), and likely would have been if the UK government and SAGE had had more information (or interpreted their information in a different way). However, by mid-March, SAGE expresses a mixture of (a) growing urgency and (b) the need to stick to the plan, to reduce the peak and avoid a second peak of infection. On 13th March, it states:

‘There are no strong scientific grounds to hasten or delay implementation of either household isolation or social distancing of the elderly or the vulnerable in order to manage the epidemiological curve compared to previous advice. However, there will be some minor gains from going early and potentially useful reinforcement of the importance of taking personal action if symptomatic. Household isolation is modelled to have the biggest effect of the three interventions currently planned, but with some risks. SAGE therefore thinks there is scientific evidence to support household isolation being implemented as soon as practically possible’ (13.3.20: 1)

‘SAGE further agreed that one purpose of behavioural and social interventions is to enable the NHS to meet demand and therefore reduce indirect mortality and morbidity. There is a risk that current proposed measures (individual and household isolation and social distancing) will not reduce demand enough: they may need to be coupled with more intensive actions to enable the NHS to cope, whether regionally or nationally’ (13.3.20: 2)

On 16th March, it states:

‘On the basis of accumulating data, including on NHS critical care capacity, the advice from SAGE has changed regarding the speed of implementation of additional interventions. SAGE advises that there is clear evidence to support additional social distancing measures be introduced as soon as possible’ (16.3.20: 1)

Overall, we can conclude two things about the language of intervention:

  1. There is now a clear difference between the ways in which SAGE and its critics describe policy: to manage an inevitably long-term epidemic, versus to try to eliminate it within national borders.
  2. There is a less clear difference between terms such as suppress and mitigate, largely because SAGE focused primarily on a comparison of different measures (and their combination) rather than the question of compliance.

See also: There is no ‘herd immunity strategy’, which argues that this focus on each intervention was lost in radio and TV interviews with Vallance.


COVID-19 policy in the UK: SAGE meetings from January-June 2020

This post is part 4 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

SAGE began a series of extraordinary meetings from 22nd January 2020. The first was described as ‘precautionary’ (22.1.20: 1) and includes updates from NERVTAG which met from 13th January. Its minutes state that ‘SAGE is unable to say at this stage whether it might be required to reconvene’ (22.1.20: 2). The second meeting notes that SAGE will meet regularly (e.g. 2-3 times per week in February) and coordinate all relevant science advice to inform domestic policy, including from NERVTAG and SPI-M (Scientific Pandemic Influenza Group on Modelling) which became a ‘formal sub-group of SAGE for the duration of this outbreak’ (SPI-M-O) (28.1.20: 1). It also convened an additional Scientific Pandemic Influenza subgroup (SPI-B) in February. I summarise these developments by month, but you can see that, by March, it is worth summarising each meeting. The main theme is uncertainty.

January 2020

The first meeting highlights immense uncertainty. Its description of WN-CoV (Wuhan Coronavirus), and statements such as ‘There is evidence of person-to-person transmission. It is unknown whether transmission is sustainable’, sum up the profound lack of information on what is to come (22.1.20: 1-2). It notes high uncertainty on how to identify cases, rates of infection, infectiousness in the absence of symptoms, and which previous experience (such as MERS) offers the most useful guidance. Only 6 days later, it estimates an R of 2-3, a doubling rate of 3-4 days, an incubation period of around 5 days, a 14-day window of infectivity, varied symptoms such as coughing and fever, and a respiratory transmission route (different from SARS and MERS) (28.1.20: 1). These estimates remain fairly constant from then on, albeit qualified with reference to uncertainty (e.g. about asymptomatic transmission), some key outliers (e.g. the duration of illness in one case was 41 days – 4.2.20: 1), and some new estimates (e.g. of a 6-day ‘serial interval’, or ‘time between successive cases in a chain of transmission’, 11.2.20: 1). By now, it is preparing a response: modelling a ‘reasonable worst case scenario’ (RWC) based on the assumption of an R of 2.5 and no known treatment or vaccine, considering how to slow the spread, and considering how behavioural insights can be used to encourage self-isolation.
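As an illustrative sketch (not from SAGE's papers), these early estimates can be cross-checked with simple exponential-growth arithmetic. The function names and the use of the ~6-day serial interval as one 'generation' of transmission are my own simplifying assumptions:

```python
import math

def cases_after(initial_cases, days, doubling_time):
    """Project case numbers under simple exponential growth."""
    return initial_cases * 2 ** (days / doubling_time)

def doubling_time_from_r(r, serial_interval):
    """Rough doubling time implied by R, assuming one 'generation'
    of transmission per serial interval (a simplification)."""
    return serial_interval * math.log(2) / math.log(r)

# With a 3-day doubling time, 100 cases become 1,600 in 12 days.
print(round(cases_after(100, 12, 3)))           # 1600

# R = 2.5 and a ~6-day serial interval imply a doubling time of
# roughly 4.5 days, broadly consistent with SAGE's 3-4 day estimate.
print(round(doubling_time_from_r(2.5, 6), 1))   # 4.5
```

The point of the sketch is how sensitive projections are to the doubling time: at 3 days, cases multiply sixteen-fold in under a fortnight.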

February 2020

SAGE began to focus on what measures might delay or reduce the impact of the epidemic. It described travel restrictions from China as low value, since a 95% reduction would have to be draconian to achieve and would only secure a one-month delay, which might be better achieved with other measures (3.2.20: 1-2). It, and supporting papers, suggested that the evidence was so limited that they could draw ‘no meaningful conclusions … as to whether it is possible to achieve a delay of a month’ by using one or a combination of these measures: international travel restrictions, domestic travel restrictions, quarantining people coming from infected areas, closing schools, closing FE/HE, cancelling large public events, contact tracing, voluntary home isolation, facemasks, and hand washing. Further, some could undermine each other (e.g. school closures impact on older people or people in self-isolation) and have major societal or opportunity costs (SPI-M-O, 3.2.20b: 1-4). For example, the ‘SPI-M-O: Consensus view on public gatherings’ (11.2.20: 1) notes the aim to reduce the duration and closeness of (particularly indoor) contact. Large outdoor gatherings are no worse than small ones, and stopping large events could prompt people to go to pubs instead (which would be worse).

Throughout February, the minutes emphasize high uncertainty:

  • whether there will be an epidemic outside of China (4.2.20: 2)
  • whether it spreads through ‘air conditioning systems’ (4.2.20: 3)
  • the spread from, and impact on, children and therefore the impact of closing schools (4.2.20: 3; discussed in a separate paper by SPI-M-O, 10.2.20c: 1-2)
  • ‘SAGE heard that NERVTAG advises that there is limited to no evidence of the benefits of the general public wearing facemasks as a preventative measure’ (while ‘symptomatic people should be encouraged to wear a surgical face mask, providing that it can be tolerated’) (4.2.20: 3)

At the same time, its meeting papers emphasized a delay in accurate figures during an initial outbreak: ‘Preliminary forecasts and accurate estimates of epidemiological parameters will likely be available in the order of weeks and not days following widespread outbreaks in the UK’ (SPI-M-O, 3.2.20a: 3).

This problem proved to be crucial to the timing of government intervention. A key learning point will be the disconnect between the following statement and the subsequent realisation (3-4 weeks later) that the lockdown measures from mid-to-late March came too late to prevent an unanticipated number of excess deaths:

‘SAGE advises that surveillance measures, which commenced this week, will provide actionable data to inform HMG efforts to contain and mitigate spread of Covid-19 … PHE’s surveillance approach provides sufficient sensitivity to detect an outbreak in its early stages. This should provide evidence of an epidemic around 9-11 weeks before its peak … increasing surveillance coverage beyond the current approach would not significantly improve our understanding of incidence’ (25.2.20: 1)

It also seems clear from the minutes and papers that SAGE highlighted a reasonable worst case scenario on 26.2.20. It was as worrying as the Imperial College COVID-19 Response Team report dated 16.3.20 that allegedly changed the UK Government’s mind on the 16th March. Meeting paper 26.2.20a described the assumption of an 80% infection attack rate and 50% clinical attack rate (i.e. 50% of the UK population would experience symptoms), which underpins the assumption of 3.6 million requiring hospital care of at least 8 days (11% of symptomatic), and 541,200 requiring ventilation (1.65% of symptomatic) for 16 days. While it lists excess deaths as unknown, its 1% infection mortality rate suggests 524,800 deaths. This RWC replaces a previous projection (in Meeting paper 10.2.20a: 1-3, based on pandemic flu assumptions) of 820,000 excess deaths (27.2.20: 1).
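The RWC arithmetic in Meeting paper 26.2.20a can be reproduced directly from these percentages. The paper's published totals imply a UK population figure of roughly 65.6 million; that figure is inferred here rather than stated in the minutes, so treat it as an assumption:

```python
# Reproducing the RWC arithmetic in Meeting paper 26.2.20a.
# The ~65.6 million population figure is inferred from the published
# totals, not stated in the minutes -- treat it as an assumption.
population = 65_600_000

infected = population * 0.80          # 80% infection attack rate
symptomatic = population * 0.50       # 50% clinical attack rate
hospitalised = symptomatic * 0.11     # 11% of symptomatic, 8+ days in hospital
ventilated = symptomatic * 0.0165     # 1.65% of symptomatic, 16 days of ventilation
deaths = infected * 0.01              # 1% infection mortality rate

print(f"{hospitalised:,.0f}")  # 3,608,000 -- the paper's '3.6 million'
print(f"{ventilated:,.0f}")    # 541,200
print(f"{deaths:,.0f}")        # 524,800
```

Note that the mortality figure applies the 1% rate to all infections (80% of the population), not just the symptomatic half, which is why it exceeds 1% of clinical cases.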

As such, the more important difference could come from SAGE’s discussion of ‘non-pharmaceutical interventions (NPIs)’ if it recommends ‘mitigation’ while the Imperial team recommends ‘suppression’. However, the language to describe each approach is too unclear to tell (see Theme 1. The language of intervention; also note that NPIs were often described from March as ‘behavioural and social interventions’ following an SPI-B recommendation, Meeting paper 3.2.20: 1, but the language of NPI seems to have stuck).

March 2020

In March, SAGE focused initially (Meetings 12-14) on preparing for the peak of infection on the assumption that it had time to transition towards a series of isolation and social distancing measures that would be sustainable (and therefore unlikely to contribute to a second peak if lifted too soon). Early meetings and meeting papers express caution about the limited evidence for interventions and the potential for their unintended consequences. This approach began to change somewhat from mid-March (Meeting 15), and accelerate from Meetings 16-18, when it became clear that incidence and virus transmission were much larger than expected, before a new phase began from Meeting 19 (after the UK lockdown was announced on the 23rd).

Meeting 12 (3.3.20) describes preparations to gather and consolidate information on the epidemic and the likely relative effect of each intervention, while its meeting papers emphasise:

  • ‘It is highly likely that there is sustained transmission of COVID-19 in the UK at present’, and a peak of infection ‘might be expected approximately 3-5 months after the establishment of widespread sustained transmission’ (SPI-M Meeting paper 2.3.20: 1)
  • the need to prepare the public while giving ‘clear and transparent reasons for different strategies’ and reducing ambiguity whenever giving guidance (SPI-B Meeting paper 3.2.20: 1-2)
  • The need to combine different measures (e.g. school closure, self-isolation, household isolation, isolating over-65s) at the right time; ‘implementing a subset of measures would be ideal. Whilst this would have a more moderate impact it would be much less likely to result in a second wave’ (Meeting paper 4.3.20a: 3).

Meeting 13 (5.3.20) describes staying in the ‘containment’ phase (which, I think, means isolating people with positive tests at home or in hospital), and introducing: a 12-week period of individual and household isolation measures in 1-2 weeks, on the assumption of 50% compliance; and a longer period of shielding over-65s 2 weeks later. It describes ‘no evidence to suggest that banning very large gatherings would reduce transmission’, while closing bars and restaurants ‘would have an effect, but would be very difficult to implement’, and ‘school closures would have smaller effects on the epidemic curve than other options’ (5.3.20: 1). Its SPI-B Meeting paper (4.3.20b) expresses caution about limited evidence and reliance on expert opinion, while identifying:

  • potential displacement problems (e.g. school closures prompt people to congregate elsewhere, or be looked after by vulnerable older people, while parents lose the chance to work)
  • the visibility of groups not complying
  • the unequal impact on poorer and single parent families of school closure and loss of school meals, lost income, lower internet access, and isolation
  • how to reduce discontent about only isolating at-risk groups (the view that ‘explaining that members of the community are building some immunity will make this acceptable’ is not unanimous) (4.3.20b: 2).

Meeting 14 (10.3.20) states that the UK may have 5,000-10,000 cases and ‘10-14 weeks from the epidemic peak if no mitigations are introduced’ (10.3.20: 2). It restates the focus on isolation first, followed by additional measures in April, and emphasizes the need to transition to measures that are acceptable and sustainable for the long term:

‘SAGE agreed that a balance needs to be struck between interventions that theoretically have significant impacts and interventions which the public can feasibly and safely adopt in sufficient numbers over long periods’ … ‘the public will face considerable challenges in seeking to comply with these measures, (e.g. poorer households, those relying on grandparents for childcare)’ (10.3.20: 2)

Meeting 15 (13.3.20: 1) describes an update to its data, suggesting ‘more cases in the UK than SAGE previously expected at this point, and we may therefore be further ahead on the epidemic curve, but the UK remains on broadly the same epidemic trajectory and time to peak’. It states that ‘household isolation and social distancing of the elderly and vulnerable should be implemented soon, provided they can be done well and equitably’, noting that there are ‘no strong scientific grounds’ to accelerate key measures but ‘there will be some minor gains from going early and potentially useful reinforcement of the importance of taking personal action if symptomatic’ (13.3.20: 1) and ‘more intensive actions’ will be required to maintain NHS capacity (13.3.20: 2).

*******

On the 16th March, the UK Prime Minister Boris Johnson describes an ‘emergency’ (one week before declaring a ‘national emergency’ and UK-wide lockdown)

*******

Meeting 16 (16.3.20) describes the possibility that there are 5,000-10,000 new cases in the UK (‘there is great uncertainty on the estimate’), doubling every 5-6 days. Therefore, to stay within NHS capacity, ‘the advice from SAGE has changed regarding the speed of implementation of additional interventions. SAGE advises that there is clear evidence to support additional social distancing measures be introduced as soon as possible’ (16.3.20: 1). SPI-M Meeting paper (16.3.20: 1) describes:

‘a combination of case isolation, household isolation and social distancing of vulnerable groups is very unlikely to prevent critical care facilities being overwhelmed … it is unclear whether or not the addition of general social distancing measures to case isolation, household isolation and social distancing of vulnerable groups would curtail the epidemic by reducing the reproduction number to less than 1 … the addition of both general social distancing and school closures to case isolation, household isolation and social distancing of vulnerable groups would be likely to control the epidemic when kept in place for a long period. SPI-M-O agreed that this strategy should be followed as soon as practical’

Meeting 17 (18.3.20) marks a major acceleration of plans, and a de-emphasis of the low-certainty/ beware-the-unintended-consequences approach of previous meetings (on the assumption that the UK was now 2-4 weeks behind Italy). It recommends school closures as soon as possible (and it, along with SPI-M Meeting paper 17.3.20b, now downplays the likely displacement effect). It focuses particularly on London, as the place with the largest initial numbers:

‘Measures with the strongest support, in terms of effect, were closure of a) schools, b) places of leisure (restaurants, bars, entertainment and indoor public spaces) and c) indoor workplaces. … Transport measures such as restricting public transport, taxis and private hire facilities would have minimal impact on reducing transmission’ (18.3.20: 2)

Meeting 18 (23.3.20) states that the R is higher than expected (2.6-2.8), requiring ‘high rates of compliance for social distancing’ to get it below 1 and stay under NHS capacity (23.3.20: 1). There is an urgent need for more community testing/ surveillance (and to address the global shortage of test supplies). In the meantime, it needs a ‘clear rationale for prioritising testing for patients and health workers’ (the latter ‘should take priority’) (23.3.20: 3). Closing UK borders ‘would have a negligible effect on spread’ (23.3.20: 2).

*******

The lockdown. On the 23rd March 2020, the UK Prime Minister Boris Johnson declared: ‘From this evening I must give the British people a very simple instruction – you must stay at home’. He announced measures to help limit the impact of coronavirus, including police powers to support public health, such as to disperse gatherings of more than two people (unless they live together), close events and shops, and limit outdoor exercise to once per day (at a distance of two metres from others).

*******

Meeting 19 (26.3.20) follows the lockdown. SAGE describes its priorities if the R goes below 1 and NHS capacity remains under 100%: ‘monitoring, maintenance and release’ (based on higher testing); public messaging on mass testing and varying interventions; understanding nosocomial transmission and immunology; clinical trials (avoiding ‘hasty decisions’ on new drug treatment in the absence of good data); and ‘how to minimise potential harms from the interventions, including those arising from postponement of normal services, mental ill health and reduced ability to exercise. It needs to consider in particular health impacts on poorer people’ (26.3.20: 1-2). The optimistic scenario is 10,000 deaths from the first wave (SPI-M-O Meeting paper 25.3.20: 4).

Meeting 20 (29.3.20) confirms the RWC and optimistic scenarios (Meeting paper 25.3.20), but notes the need for a ‘clearer narrative, clarifying areas subject to uncertainty and sensitivities’ and to clarify that scenarios (with different assumptions on, for example, the R, which should be explained more) are not predictions.

Meeting 21 (31.3.20) seeks to establish SAGE ‘scientific priorities’ (e.g. long term health impacts of COVID-19, including the socioeconomic impact on health (including mental health), community testing, and international work (‘comorbidities such as malaria and malnutrition’)) (31.3.20: 1-2). The NHS is to set up an interdisciplinary group (including science and engineering) to ‘understand and tackle nosocomial transmission’ in the context of its growth and the urgent need to define and track it. SAGE is to focus on testing requirements, not operational issues. It notes the need to identify a single source of information on deaths.

April 2020

The meetings in April highlight five recurring themes.

First, it stresses that it will not know the impact of lockdown measures for some time, that it is too soon to understand the impact of releasing them, and there is high risk of failure: ‘There is a danger that lifting measures too early could cause a second wave of exponential epidemic growth – requiring measures to be re-imposed’ (2.4.20: 1; see also 14.4.20: 1-2). This problem remains even if a reliable testing and contact tracing system is in place, and if there are environmental improvements to reduce transmission (by keeping people apart).

Second, it notes signals from multiple sources (including CO-CIN and the RCGP) on the higher risk of major illness and death among black people, the ongoing investigation of higher risk to ‘BAME’ health workers (16.4.20), and further (high priority) work on ‘ethnicity, deprivation, and mortality’ (21.4.20: 1) (see also: Race, ethnicity, and the social determinants of health).

Third, it highlights the need for a ‘national testing strategy’ to cover NHS patients, staff, an epidemiological survey, and the community (2.4.20). The need for far more testing is a feature of almost every meeting (see also The need to ramp up testing).

Fourth, SAGE describes the need for more short and long-term research, identifying nosocomial infection as a short term priority, and long term priorities in areas such as the long term health impacts of COVID-19 (including socioeconomic impacts on physical and mental health), community testing, and international work (31.3.20: 1-2).

Finally, it reflects shifting advice on the precautionary use of face masks. Previously, advisory bodies emphasized limited evidence of a clear benefit to the wearer, and worried that public mask use would reduce the supply to healthcare professionals and generate a false sense of security (compare with this Greenhalgh et al article on the precautionary principle, the subsequent debate, and work by the Royal Society). Even by April: ‘NERVTAG concluded that the increased use of masks would have minimal effect’ on general population infection (7.4.20: 1), while the WHO described limited evidence that facemasks are beneficial for community use (9.4.20). Still, general face mask use could have a small positive effect, particularly in ‘enclosed environments with poor ventilation, and around vulnerable people’ (14.4.20: 2), and ‘on balance, there is enough evidence to support recommendation of community use of cloth face masks, for short periods in enclosed spaces where social distancing is not possible’ (partly because people can be infectious with no symptoms), as long as people know that it is no substitute for social distancing and handwashing (21.4.20).

May 2020

In May, SAGE continues to discuss high uncertainty on relaxing lockdown measures, the details of testing systems, and the need for research.

Generally, it advises that relaxations should not happen before there is more understanding of transmission in hospitals and care homes, and ‘until effective outbreak surveillance and test and trace systems are up and running’ (14.5.20). It advises specifically ‘against reopening personal care services, as they typically rely on highly connected workers who may accelerate transmission’ (5.5.20: 3) and warns against the too-quick introduction of social bubbles. Relaxation runs the risk of diminishing public adherence to social distancing, and of overwhelming any contact tracing system put in place:

‘SAGE participants reaffirmed their recent advice that numbers of Covid-19 cases remain high (around 10,000 cases per day with wide confidence intervals); that R is 0.7-0.9 and could be very close to 1 in places across the UK; and that there is very little room for manoeuvre especially before a test, trace and isolate system is up and running effectively. It is not yet possible to assess the effect of the first set of changes which were made on easing restrictions to lockdown’ (28.5.20: 3).

It recommends extensive testing in hospitals and care homes (12.5.20: 3) and ‘remains of the view that a monitoring and test, trace & isolate system needs to be put in place’ (12.5.20: 1).

June 2020

In June, SAGE identifies the importance of clusters of infection (super-spreading events) and of a contact tracing system that focuses on clusters (rather than simply individuals) (11.6.20: 3). It reaffirms the value of a 2-metre distance rule. It also notes that the research on immunology remains unclear, which makes immunity passports a bad idea (4.6.20).

It describes the result of multiple meeting papers on the unequal impact of COVID-19:

‘There is an increased risk from Covid-19 to BAME groups, which should be urgently investigated through social science research and biomedical research, and mitigated by policy makers’ … ‘SAGE also noted the importance of involving BAME groups in framing research questions, participating in research projects, sharing findings and implementing recommendations’ (4.6.20: 1-3)

See also: Race, ethnicity, and the social determinants of health


Filed under COVID-19, Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, UK politics and policy

COVID-19 policy in the UK: The overall narrative underpinning SAGE advice and UK government policy

This post is part 3 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

I discuss the UK government’s definition of the COVID-19 policy problem in some other posts (1. in a now-dated post on early developments, and 2. in relation to oral evidence to the Health and Social Care committee). It includes the following elements:

  • We need to use a suppression strategy to reduce infection enough to avoid overwhelming health service capacity, and shield the people most vulnerable to major illness or death caused by COVID-19, to minimize deaths during at least one peak of infection.
  • We need to maintain suppression for a period of time that is difficult to predict, subject to compliance levels that are difficult to predict and monitor.
  • We need to avoid panicking the public in the lead up to suppression, avoid too-draconian enforcement, and maintain wide public trust in the government.
  • We need to avoid (a) excessive and (b) insufficient suppression measures, either of which could contribute to a second wave of the epidemic of the same magnitude as the first.
  • We need to transition safely from suppression measures to foster economic activity, find safe ways for people to return to work and education, and reinstate the full use of NHS capacity for non-COVID-19 illness.
  • In the absence of a vaccine, this strategy will likely involve social distancing and (voluntary) track-and-trace measures to isolate people with COVID-19.

This understanding in the UK, informed strongly by SAGE, also informs the ways in which SAGE (a) deals with uncertainty, and (b) describes the likely impact of each stage of action.

Manage suppression during the first peak to avoid a second peak

Most importantly, it stresses continuously the need to avoid excessive suppressive measures on the first peak that would contribute to a second peak [my emphasis added]:

  • ‘Any combination of [non-pharmaceutical] measures would slow but not halt an epidemic’ (25.2.20: 1).
  • ‘Mitigations can be expected to change the shape of the epidemic curve or the timing of a first or second peak, but are not likely to reduce the overall number of total infections’. Therefore, identify whose priorities matter (such as NHS England) on the assumption that, ‘The optimal shape of the epidemic curve will differ according to sectoral or organisational priorities’ (27.2.20: 2).
  • ‘A combination of these measures [school closures, household isolation, social distancing] is expected to have a greater impact: implementing a subset of measures would be ideal. Whilst this would have a more moderate impact it would be much less likely to result in a second wave. In comparison combining stringent social distancing measures, school closures and quarantining cases, as a long-term policy, may have a similar impact to that seen in Hong Kong or Singapore, but this could result in a large second epidemic wave once the measures were lifted’ (Meeting paper 4.3.20a: 3).
  • ‘SAGE was unanimous that measures seeking to completely suppress spread of Covid-19 will cause a second peak. SAGE advises that it is a near certainty that countries such as China, where heavy suppression is underway, will experience a second peak once measures are relaxed’ (also: ‘It was noted that Singapore had had an effective “contain phase” but that now new cases had appeared’) (13.3.20: 2)
  • Its visual of each possible peak of infection emphasises the risk of a second peak (Meeting paper 4.3.20: 2).

[SAGE image of first and second peaks of infection, Meeting paper 4.3.20]

  • ‘The objective is to avoid critical cases exceeding NHS intensive care and other respiratory support bed capacity’ … SAGE ‘advice on interventions should be based on what the NHS needs’ (16.3.20: 1)
  • The fewer cases that happen as a result of the policies enacted, the larger subsequent waves are expected to be when policies are lifted (SPI-M-O Meeting paper 25.3.20: 1)
  • ‘There is a danger that lifting measures too early could cause a second wave of exponential epidemic growth – requiring measures to be re-imposed’ (2.4.20: 1)

Avoid the unintended consequences of epidemic suppression

This understanding intersects with (c) an emphasis on the loss of benefits caused by certain interventions (such as school closures).

  • SPI-B (Meeting paper 4.3.20b: 1-4) expresses reluctance to close schools, partly to avoid the unintended consequences, including: displacement problems (e.g. school closures prompt children to be looked after by vulnerable older people, or parents to lose the chance to work); and, the unequal impact on poorer and single parent families (loss of school meals, lost income, lower internet access, exacerbating isolation and mental ill health). It then states that: ‘The importance of schools during a crisis should not be overlooked. This includes: Acting as a source of emotional support for children; Providing education (e.g. on hand hygiene) which is conveyed back to families; Provision of social service (e.g. free school meals, monitoring wellbeing); Acting as a point of leadership and communication within communities’ (4.3.20b: 4).
  • ‘Long periods of social isolation may have significant risks for vulnerable people … SAGE agreed that a balance needs to be struck between interventions that theoretically have significant impacts and interventions which the public can feasibly and safely adopt in sufficient numbers over long periods. Input from behavioural scientists is essential to policy development of cocooning measures, to increase public practicability and likelihood of compliance … the public will face considerable challenges in seeking to comply with these measures, (e.g. poorer households, those relying on grandparents for childcare)’ (10.3.20: 2).
  • After the lockdown (23.3.20), SAGE describes a priority regarding: ‘how to minimise potential harms from the interventions, including those arising from postponement of normal services, mental ill health and reduced ability to exercise. It needs to consider in particular health impacts on poorer people’ (26.3.20: 1-2).

Exhort and encourage, rather than impose

It also intersects with (d) a primary focus on exhortation and encouragement rather than the imposition of behavioural change (Table 1), largely based on the belief that the UK government would be unwilling or unable to enforce behavioural change in ways associated with China. In that context, the government’s willingness and ability to enforce social distancing and business closure from the 23rd March is striking.

Examples include:

  • when recommending ‘individual home isolation (symptomatic individuals to stay at home for 14 days) and whole family isolation (fellow household members of symptomatic individuals to stay at home for 14 days after last family member becomes unwell)’, it assumes a 50% compliance rate, and notes that closing bars and restaurants ‘would have an effect, but would be very difficult to implement’ (5.3.20: 1).

See also: oral evidence to the Health and Social Care committee, which suggests that the UK government and SAGE’s problem definition contrasts with approaches in countries such as South Korea (described by Kim et al, and Kim).

It also contrasts with the approach described by several of the UK’s (expert) critics, including Professor Devi Sridhar (Professor of Global Public Health), who is critical of SAGE specifically, and more generally of the UK government’s rejection of an ‘elimination’ strategy.

Table 1 sets out one way to describe the distinction between these approaches:

  • The UK government is addressing a chronic problem, being cautious about policy change without supportive evidence, identifying trigger points to new approaches (based on incidence), and assuming initially that the approach is based largely on exhortation.
  • One alternative is to pursue elimination aggressively, adopting a precautionary principle before there is supportive evidence of a major problem and the effectiveness of solutions, backed by measures such as contact tracing and quarantine, and assuming that the imposition of behaviour should be a continuous expectation.

One approach highlights the lack of evidence to support major policy change, and therefore gives primacy to the status quo. The other is more preventive, giving primacy to the precautionary principle until there is more clarity or certainty on the available evidence.

Table 1

In that context, note (in Table 2) how frequently the SAGE minutes state that there is limited evidence to support policy change, and that an epidemic is inevitable (in other words, elimination without a vaccine is near-impossible). Both statements tend to support a UK government policy that was, until mid-March, based on reluctance to enforce a profound lockdown to impose social distancing.

As the next post describes, the chronology of Table 2 is instructive, since it demonstrates a degree of path dependence based on initial uncertainty and hesitancy. This approach was understandable at first (particularly when connected to an argument about reducing the peak of infection then avoiding a second wave), before being so heavily criticised only two months later.


Filed under COVID-19, Evidence Based Policymaking (EBPM), Prevention policy, Public health, UK politics and policy

COVID-19 policy in the UK: The role of SAGE and science advice to government

This post is part 2 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

The issue of science advice to government, and the role of SAGE in particular, became unusually high profile in the UK, particularly in relation to four factors:

  1. Ministers described ‘following the science’ to project a certain form of authority and control.
  2. The SAGE minutes and papers – including a record of SAGE members and attendees – were initially unpublished, in line with the previous convention of government to publish after, rather than during, a crisis.

‘SAGE is keen to make the modelling and other inputs underpinning its advice available to the public and fellow scientists’ (13.3.20: 1)

When it agrees to publish SAGE papers/ documents, it stresses: ‘It is important to demonstrate the uncertainties scientists have faced, how understanding of Covid-19 has developed over time, and the science behind the advice at each stage’ (16.3.20: 2)

‘SAGE discussed plans to release the academic models underpinning SAGE and SPI-M discussions and judgements. Modellers agreed that code would become public but emphasised that the effort to do this immediately would distract from other analyses. It was agreed that code should become public as soon as practical, and SPI-M would return to SAGE with a proposal on how this would be achieved. ACTION: SPI-M to advise on how to make public the source code for academic models, working with relevant partners’ (18.3.20: 2).

SAGE welcomes releasing names of SAGE participants (if willing) and notes role of Ian Boyd as ‘independent challenge function’ (28.4.20: 1)

SAGE also describes the need for a better system to allow SAGE participants to function effectively and with proper support (given the immense pressure/ strain on their time and mental health) (7.5.20: 1)

  3. There were growing concerns that ministers would blame their advisers for poor choices (compare Freedman and Snowdon) or at least use science advice as ‘an insurance policy’, and
  4. There was some debate about the appropriateness of Dominic Cummings (Prime Minister Boris Johnson’s special adviser) attending some meetings.

Therefore, its official description reflects its initial role plus a degree of clarification on the role of science advice mechanisms during the COVID-19 pandemic. The SAGE webpage on the gov.uk site describes its role as:

‘provides scientific and technical advice to support government decision makers during emergencies … SAGE is responsible for ensuring that timely and coordinated scientific advice is made available to decision makers to support UK cross-government decisions in the Cabinet Office Briefing Room (COBR). The advice provided by SAGE does not represent official government policy’.

Its more detailed explainer describes:

‘SAGE’s role is to provide unified scientific advice on all the key issues, based on the body of scientific evidence presented by its expert participants. This includes everything from latest knowledge of the virus to modelling the disease course, understanding the clinical picture, and effects of and compliance with interventions. This advice together with a descriptor of uncertainties is then passed onto government ministers. The advice is used by Ministers to allow them to make decisions and inform the government’s response to the COVID-19 outbreak …

The government, naturally, also considers a range of other evidence including economic, social, and broader environmental factors when making its decisions…

SAGE is comprised of leading lights in their representative fields from across the worlds of academia and practice. They do not operate under government instruction and expert participation changes for each meeting, based on the expertise needed to address the crisis the country is faced with …

SAGE is also attended by official representatives from relevant parts of government. There are roughly 20 such officials involved in each meeting and they do not frequently contribute to discussions, but can play an important role in highlighting considerations such as key questions or concerns for policymakers that science needs to help answer or understanding Civil Service structures. They may also ask for clarification on a scientific point’ (emphasis added by yours truly).

Note that the number of participants can be around 60, which makes SAGE more like an assembly, with presentations and a modest amount of discussion, than a decision-making body (the Zoom meeting on 4.6.20 lists 76 participants). Even a Cabinet meeting involves about 20 people, and that is too many for coherent discussion and action (hence separate, smaller committees).

Further, each set of now-published minutes contains an ‘addendum’ to clarify its operation. For example, its first minutes in 2020 seek to clarify the role of participants. Note that the participants change somewhat at each meeting (see the full list of members/ attendees), and some names are redacted. Dominic Cummings’ name only appears (I think) on 5.3.20, 14.4.20, and two meetings on 1.5.20 (although, as Freedman notes, ‘his colleague Ben Warner was a more regular presence’).

SAGE minutes 1 addendum 22.1.20

More importantly, the minutes from late February begin to distinguish between three types of potential science advice:

  1. to describe the size of the problem (e.g. surveillance of cases and trends, estimating a reasonable worst case scenario)
  2. to estimate the relative impact of many possible interventions (e.g. restrictions on travel, school closures, self-isolation, household quarantine, and social distancing measures)
  3. to recommend the level and timing of state action to achieve compliance in relation to those interventions.

SAGE focused primarily on roles 1 and 2, arguing against role 3 on the basis that state intervention is a political choice to be taken by ministers. Ministers are responsible for weighing up the potential public health benefits of each measure in relation to their social and economic costs (see also: The relationship between science, science advice, and policy).

Example 1: setting boundaries between advice and strategy

  • ‘It is a political decision to consider whether it is preferable to enact stricter measures at first, lifting them gradually as required, or to start with fewer measures and add further measures if required. Surveillance data streams will allow real-time monitoring of epidemic growth rates and thus allow approximate evaluation of the impact of whatever package of interventions is implemented’ (Meeting paper 26.2.20b: 1)

This example highlights a limitation in performing role 2 to inform 3: SAGE would not be able to compare the relative impact of measures without knowing their level of imposition and its impact on compliance. Further, the way in which it addressed this problem is crucial to our interpretation and evaluation of the timing and substance of the UK government’s response.

In short, it simultaneously assumed away and maintained attention to this problem by stating:

  • ‘The measures outlined below assume high levels of compliance over long periods of time. This may be unachievable in the UK population’ (26.2.20b: 1).
  • ‘advice on interventions should be based on what the NHS needs and what modelling of those interventions suggests, not on the (limited) evidence on whether the public will comply with the interventions in sufficient numbers and over time’ (16.3.20: 1)

The assumption of high compliance reduces the need for SAGE to make distinctions between terms such as mitigation versus suppression (see also: Confusion about the language of intervention and stages of intervention). However, it contributes to confusion within wider debates on UK action (see Theme 1. The language of intervention).

Example 2: setting boundaries between advice and value judgements

  • ‘SAGE has not provided a recommendation of which interventions, or package of interventions, that Government may choose to apply. Any decision must consider the impacts these interventions may have on society, on individuals, the workforce and businesses, and the operation of Government and public services’ (Meeting paper 4.3.20a: 1).

To all intents and purposes, SAGE is noting that governments need to make value-based choices to:

  1. Weigh up the costs and benefits of any action (as described by Layard et al, with reference to wellbeing measures and the assumed price of a life), and
  2. Decide whose wellbeing, and lives, matter the most (because any action or inaction will have unequal consequences across a population).

In other words, policy analysis is one part evidence and one part value judgement. Both elements are contested in different ways, and different questions inform political choices (e.g. whose knowledge counts versus whose wellbeing counts?).

[see also:

  • ‘Determining a tolerable level of risk from imported cases requires consideration of a number of non-science factors and is a policy question’ (28.4.20: 3)
  • ‘SAGE reemphasises that its own focus should always be on providing clear scientific advice to government and the principles behind that advice’ (7.5.20: 1)]

Future reflections

Any future inquiry will be heavily contested, since policy learning and evaluation are political acts (and the best way to gather and use evidence during a pandemic is itself disputed). Still, hopefully, it will promote reflection on how, in practice, governments and advisory bodies negotiate the blurry boundary between scientific advice and political choice when they are so interdependent and rely so heavily on judgement in the face of ambiguity and uncertainty (or ‘radical uncertainty’). I discuss this issue in the next post, which highlights the ways in which UK ministers relied on SAGE (and advisers) to define the policy problem.


6 Comments

Filed under COVID-19, Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, UK politics and policy

COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

SAGE explainer

SAGE is the Scientific Advisory Group for Emergencies. The text up there comes from the UK Government description. SAGE is the main venue to coordinate science advice to the UK government on COVID-19, including from NERVTAG (the New and Emerging Respiratory Virus Threats Advisory Group, reporting to PHE) and from the sub-groups on modelling (SPI-M, the Scientific Pandemic Influenza Group on Modelling) and behaviour (SPI-B), which supply meeting papers to SAGE.

I have summarized SAGE’s minutes (41 meetings, from 22 January to 11 June) and meeting/background papers (125 papers, estimated range 1-51 pages, median 4, not peer-reviewed, often produced a day after a request) in a ridiculously long table. This thing is huge (40 pages and 20,000 words). It is the sequoia table. It is the humongous fungus. Even Joey Chestnut could not eat this table in one go. To make your SAGE meal more palatable, here is a series of blog posts that situate these minutes and papers in their wider context. This initial post is unusually long, so I’ve put in a photo to break it up a bit.

Did the UK government ‘follow the science’?

I use the overarching question Did the UK Government ‘follow the science’? initially for the clickbait. I reckon that, like a previous favourite (people have ‘had enough of experts’), ‘following the science’ is a phrase used more frequently by commentators than by its original users. It is easy to google and find some valuable commentaries with that hook (Devlin & Boseley, Siddique, Ahuja, Stevens, Flinders, Walker, FT; see also Vallance), but also to find ministers using a wider range of messages with more subtle verbs and metaphors:

  • ‘We will take the right steps at the right time, guided by the science’ (Prime Minister Boris Johnson, 3.20)
  • ‘We will be guided by the science’ (Health Secretary Matt Hancock, 2.20)
  • ‘At all stages, we have been guided by the science, and we will do the right thing at the right time’ (Johnson, 3.20)
  • ‘The plan is driven by the science and guided by the expert recommendations of the 4 UK Chief Medical Officers and the Scientific Advisory Group for Emergencies’ (Hancock, 3.20)
  • ‘The plan does not set out what the government will do, it sets out the steps we could take at the right time along the basis of the scientific advice’ (Johnson, 3.20).

Still, ministers are clearly using ‘the science’ as a rhetorical device, and the phrase raises many questions or objections, including:

  1. There is no such thing as ‘the science’.

Rather, there are many studies described as scientific (generally with reference to a narrow range of accepted methods), and many people described as scientists (with reference to their qualifications and expertise). The same can be said for the rhetorical phrase ‘the evidence’ and the political slogan ‘evidence based policymaking’ (which often comes with its notionally opposite political slogan ‘policy based evidence’). In both cases, a reference to ‘the science’ or ‘the evidence’ often signals one or both of:

  • a particular, restrictive, way to describe evidence that lives up to a professional quality standard created by some disciplines (e.g. based on a hierarchy of evidence, in which the systematic review of randomized control trials is often at the top)
  • an attempt by policymakers to project their own governing competence, relative certainty, control, and authority, with reference to another source of authority

2. Ministers often mean ‘following our scientists’

[Image: PM press conference with Patrick Vallance and Chris Whitty, 12.3.20]

When Johnson (12.3.20) describes being ‘guided by the science’, he is accompanied by Professor Patrick Vallance (Government Chief Scientific Adviser) and Professor Chris Whitty (the UK government’s Chief Medical Adviser). Hancock (3.3.20) describes being ‘guided by the expert recommendations of the 4 UK Chief Medical Officers and the Scientific Advisory Group for Emergencies’.

In other words, following ‘the science’ means ‘following the advice of our scientific advisors’, via mechanisms such as SAGE.

As the SAGE minutes and meeting papers show, government scientists and SAGE participants necessarily tell a partial story about the relevant evidence from a particular perspective (note: this is not a criticism of SAGE; it is a truism). Other interpreters of evidence, and sources of advice, are available.

Therefore, the phrase ‘guided by the science’ is, in practice, a way to:

  • narrow the search for information (and pay selective attention to it)
  • close down, or set the terms of, debate
  • associate policy with particular advisors or advisory bodies, often to give ministerial choices more authority, and often as ‘an insurance policy’ to take the heat off ministers.
3. What exactly is ‘the science’ guiding?

Let’s make a simple distinction between two types of science-guided action. Scientists provide evidence and advice on:

  1. the scale and urgency of a potential policy problem, such as describing and estimating the incidence and transmission of coronavirus
  2. the likely impact of a range of policy interventions, such as contact tracing, self-isolation, and regulations to oblige social distancing

In both cases, let’s also distinguish between science advice to reduce uncertainty and ambiguity:

  • Uncertainty describes a lack of knowledge or a worrying lack of confidence in one’s knowledge.
  • Ambiguity describes the ability to entertain more than one interpretation of a policy problem.

Put both together to produce a wide range of possibilities for policy ‘guided by the science’, from (a) simply providing facts to help reduce uncertainty on the incidence of coronavirus (minimal), to (b) providing information and advice on how to define and try to solve the policy problem (maximal).

If so, note that being guided by science does not determine more or less policy change. Ministers can use scientific uncertainty to defend limited action, or use evidence selectively to propose rapid change. In either case, they can argue – sincerely – that they are guided by science. Therefore, critically analyzing the phraseology of ministers is only a useful first step. Next, we need to identify the extent to which scientific advisors and advisory bodies, such as SAGE, guided ministers.

The role of SAGE: advice on evidence versus advice on strategy and values

In that context, the next post examines the role of SAGE.

It shows that, although science advice to government is necessarily political, the coronavirus has heightened attention to science and advice, and you can see the (subtle and not so subtle) ways in which SAGE members and its secretariat are dealing with this unusually high level of politicization. SAGE has responded by clarifying its role, and trying to set boundaries between:

  • Advice versus strategy
  • Advice versus value judgements

These aims are understandable, but difficult to achieve in theory (the fact/value distinction is impossible to maintain) and in practice (policymakers may not go along with the distinction anyway). I argue that this approach also had some unintended consequences, which should prompt further reflection on facts-versus-values science advice during crises.

The ways in which UK ministers followed SAGE advice

With these caveats in mind, my reading of this material is that UK government policy was largely consistent with SAGE evidence and advice in the following ways:

  1. Defining the policy problem

This post (and a post on oral evidence to the Health and Social Care Committee) identifies the consistency of the overall narrative underpinning SAGE advice and UK government policy. It can be summed up as follows (although the post provides a more expansive discussion):

  1. coronavirus represents a long term problem with no immediate solution (such as a vaccine) and minimal prospect of extinction/ eradication
  2. use policy measures – on isolation and social distancing – to flatten the first peak of infection and avoid overwhelming health service capacity
  3. don’t impose or relax measures too quickly (which will cause a second peak of infection)
  4. reflect on the balance between (a) the positive impact of lockdown (on the incidence and rate of transmission), (b) the negative impact of lockdown (on freedom, physical and mental health, and the immediate economic consequences).

While the SAGE minutes suggest a general reluctance to comment too much on point 4, government discussions were underpinned by points 1-3. For me, this context is the most important. It provides a lens through which to understand all of SAGE’s advice: how it shapes, and is shaped by, UK government policy.

  2. The timing and substance of interventions before lockdown, maintenance of lockdown for several months, and gradual release of lockdown measures

This post presents a long chronological story of SAGE minutes and papers, divided by month (and, in March, by each meeting). Note the unusually high levels of uncertainty from the beginning. The lack of solid evidence, available to SAGE at each stage, can only be appreciated fully if you read the minutes from 1 to 41. Or, you know, take my word for it.

In January, SAGE discussed uncertainty about human-to-human transmission and associated coronavirus strongly with Wuhan in China (albeit while developing initially good estimates of R, the doubling rate, the incubation period, the window of infectivity, and symptoms). In February, it had more data on transmission but described high uncertainty about what measures might delay or reduce the impact of the epidemic. In March, it focused on preparing for the peak of infection on the assumption that it had time to transition gradually towards a series of isolation and social distancing measures. This approach began to change from mid-March, when it became clear that the number of people infected, and the rate of transmission, was much larger and faster than expected.

In other words, the Prime Minister’s declarations – of emergency on 16.3.20 and of lockdown on 23.3.20 – did not lag behind SAGE advice (and it would not be outrageous to argue that it went ahead of it).

It is more difficult to describe the consistency between UK government policy and SAGE advice in relation to the relaxation of lockdown measures.

SAGE’s minutes and meeting papers describe very low certainty about what will happen after the release of lockdown. The papers do not hide this unusually high level of uncertainty, and they use models (built on assumptions) to generate scenarios rather than to predict what will happen. In this sense, ‘following the science’ could relate to (a) a level of buy-in for this kind of approach, and (b) making choices when scientific groups cannot offer much (if any) advice on what to do or what will happen. Reopening schools is a key example, since SPI-M and SPI-B focused intensely on the issue, but their conclusions could not underpin a specific UK government choice.

There are two ways to interpret what happened next.

First, there will always be a mild gap between hesitant SAGE advice and ministerial action. SAGE advice tends to be based on the amount and quality of evidence to support a change, which meant it was hesitant to recommend (a) a full lockdown and (b) a release from lockdown. Just as UK government policy seemed to go ahead of the evidence to enter lockdown on 23 March, so too did it seem to go ahead of the cautious approach to relaxing it.

Second, UK ministers are currently going too far ahead of the evidence. SPI-M papers state repeatedly that a too-quick release of measures will cause R to rise above 1 (some papers describe R reaching 1.7; some graphs model values up to 3).
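
To illustrate why crossing the R = 1 threshold carries so much weight in these papers, here is a rough sketch (not from SAGE; it assumes simple exponential growth and a hypothetical 5-day generation interval) of how the epidemic doubling time shortens as R rises:

```python
import math

def doubling_time(r: float, generation_interval_days: float) -> float:
    """Approximate days for case numbers to double when R > 1,
    assuming exponential growth with a fixed generation interval."""
    if r <= 1:
        raise ValueError("the epidemic does not grow when R <= 1")
    return generation_interval_days * math.log(2) / math.log(r)

# Values in this range appear in the SPI-M discussion (1.7 in some
# papers; up to 3 in some graphs); the 5-day interval is assumed here.
for r in (1.1, 1.7, 3.0):
    print(f"R = {r}: cases double roughly every "
          f"{doubling_time(r, 5):.1f} days")
```

With these assumed values, R = 1.1 implies doubling in roughly five weeks, while R = 3 implies doubling in roughly three days, which is one way to see why the papers treat even a modest overshoot of R = 1 as consequential.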

  3. The use of behavioural insights to inform and communicate policy

In March, you can find a lot of external debate about the appropriate role for ‘behavioural science’ and ‘behavioural public policy’ (BPP) (in other words, using insights from psychology to inform policy). Part of the initial problem related to the lack of transparency of the UK government, which prompted concerns that ministers were basing choices on limited evidence (see Hahn et al, Devlin, Mills). Oliver also describes initial confusion about the role of BPP when David Halpern became mildly famous for describing the concept of ‘herd immunity’ rather than sticking to psychology.

External concern focused primarily on the argument that the UK government (and many other governments) used the idea of ‘behavioural fatigue’ to justify delayed or gradual lockdown measures. In other words, if you do it too quickly and for too long, people will tire of it and break the rules.

Yet, this argument about fatigue is not a feature of the SAGE minutes and SPI-B papers (indeed, Oliver wonders if the phrase came from Whitty, based on his experience of people tiring of taking medication).

Rather, the papers tend to emphasise:

  • There is high uncertainty about behavioural change in key scenarios, and this reference to uncertainty should inform any choice on what to do next.
  • The need for effective and continuous communication with citizens, emphasizing transparency, honesty, clarity, and respect, to maintain high trust in government and promote a sense of community action (‘we are all in this together’).

John and Stoker argue that ‘much of behavioural science lends itself to’ a ‘top-down approach because its underlying thinking is that people tend to be limited in cognitive terms, and that a paternalistic expert-led government needs to save them from themselves’. Yet, my overall impression of the SPI-B (and related) work is that (a) although SPI-B is often asked to play that role, to address how to maximize adherence to interventions (such as social distancing), (b) its participants try to encourage the more deliberative or collaborative mechanisms favoured by John and Stoker (particularly when describing how to reopen schools and redesign work spaces). If so, my hunch is that they would not be as confident that UK ministers were taking their advice consistently (for example, throughout table 2, have a look at the need to provide a consistent narrative on two different propositions: we are all in this together, but the impact of each action/inaction will be profoundly unequal).

Expanded themes in SAGE minutes

Throughout this period, I think that one – often implicit – theme is that members of SAGE focused quite heavily on what seemed politically feasible to suggest to ministers, and for ministers to suggest to the public (while also describing technical feasibility, i.e. will it work as intended if implemented?). Generally, SAGE seemed to anticipate policymaker concern about, and unintended public reactions to, a shift towards more social regulation. For example:

‘Interventions should seek to contain, delay and reduce the peak incidence of cases, in that order. Consideration of what is publicly perceived to work is essential in any decisions’ (25.2.20: 1)

Put differently, it seemed to operate within the general confines of what might work in a UK-style liberal democracy characterised by relatively low social regulation. This approach is already a feature of The overall narrative underpinning SAGE advice and UK government policy, and the remaining posts highlight key themes that arise in that context.

They include the themes explored in the remaining posts: the language of intervention; limited capacity for testing, forecasting, and challenging assumptions; and communicating to the public.

Delaying the inevitable

All of these shorter posts delay your reading of a ridiculously long table summarizing each meeting’s discussion and advice/ action points (Table 2, which also includes a way to chase up the referencing in the blog posts: dates alone refer to SAGE minutes; multiple meeting papers are listed as a, b, c if they have the same date stamp rather than same authors).


Further reading

It is part of a wider project, in which you can also read about:

  • The early minutes from NERVTAG (the New and Emerging Respiratory Virus Threats Advisory Group)
  • Oral evidence to House of Commons committees, beginning with Health and Social Care

I hope to get through all of this material (and equivalent material in the devolved governments) somehow, but also to find time to live, love, eat, and watch TV, so please bear with me if you want to know what happened but don’t want to do all of the reading to find out.

If you would rather just read all of this discussion in one document:

The whole thing in PDF

Table 2 in PDF

The whole thing as a Word document

Table 2 as a Word document

If you would like some other analyses, compare with:

  • Freedman (7.6.20) ‘Where the science went wrong. Sage minutes show that scientific caution, rather than a strategy of “herd immunity”, drove the UK’s slow response to the Covid-19 pandemic’. Concludes that ‘as the epidemic took hold the government was largely following Sage’s advice’, and that the government should have challenged key parts of that advice (to ensure an earlier lockdown).
  • More or Less (1.7.20) ‘Why Did the UK Have Such a Bad Covid-19 Epidemic?’. Relates the delays in ministerial action to inaccurate scientific estimates of the doubling time of infection (discussed further in Theme 2).
  • Both Freedman and More or Less focus on the mishandling of care home safety, exacerbated by transfers from hospital without proper testing.
  • Snowdon (28.5.20) ‘The lockdown’s founding myth. We’ve forgotten that the Imperial model didn’t even call for a full lockdown’. Challenges the argument that ministers dragged their feet while scientists were advising quick and extensive interventions (an argument he associates with Calvert et al (23.5.20) ‘22 days of dither and delay on coronavirus that cost thousands of British lives’). Rather, ministers were following SAGE advice, and the lockdown in Italy had a far bigger impact on ministers (since it changed what seemed politically feasible).
  • Greg Clark MP (chair of the House of Commons Science and Technology Committee) Between science and policy – Scrutinising the role of SAGE in providing scientific advice to government

8 Comments

Filed under COVID-19, Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, UK politics and policy

The coronavirus and evidence-informed policy analysis (short version)

The coronavirus feels like a new policy problem that requires new policy analysis. The analysis should be informed by (a) good evidence, translated into (b) good policy. However, don’t be fooled into thinking that either of those things is straightforward. There are simple-looking steps to go from defining a problem to making a recommendation, but this simplicity masks the profoundly political process that must take place. Each step in analysis involves political choices to prioritise some problems and solutions over others, and therefore prioritise some people’s lives at the expense of others.

The very-long version of this post takes us through those steps in the UK, and situates them in a wider political and policymaking context. This post is shorter, and only scratches the surface of analysis.

5 steps to policy analysis

  1. Define the problem.

Perhaps we can sum it up as: (a) the impact of this virus and illness will be a level of death and illness that could overwhelm the population and exceed the capacity of public services, so (b) we need to contain the virus enough to make sure it spreads in the right way at the right time, so (c) we need to encourage and make people change their behaviour (primarily via hygiene and social distancing). However, there are many ways to frame this problem to emphasise the importance of some populations over others, and some impacts over others.

  2. Identify technically and politically feasible solutions.

Solutions are not really solutions: they are policy instruments that address one aspect of the problem, including taxation and spending, delivering public services, funding research, giving advice to the population, and regulating or encouraging changes to social behaviour. Each new instrument contributes to an existing mix, with unpredictable and unintended consequences. Some instruments seem technically feasible (they will work as intended if implemented), but will not be adopted unless politically feasible (enough people support their introduction). Or vice versa. This dual requirement rules out a lot of responses.

  3. Use values and goals to compare solutions.

Typical judgements combine: (a) broad descriptions of values such as efficiency, fairness, freedom, security, and human dignity, (b) instrumental goals, such as sustainable policymaking (can we do it, and for how long?), and political feasibility (will people agree to it, and will it make me more or less popular or trusted?), and (c) the process to make choices, such as the extent to which a policy process involves citizens or stakeholders (alongside experts) in deliberation. They combine to help policymakers come to high profile choices (such as the balance between individual freedom and state coercion), and low profile but profound choices (to influence the level of public service capacity, and level of state intervention, and therefore who and how many people will die).

  4. Predict the outcome of each feasible solution.

It is difficult to envisage a way for the UK Government to publicise all of the thinking behind its choices (Step 3) and predictions (Step 4) in a way that would encourage effective public deliberation. People often call for the UK Government to publicise its expert advice and operational logic, but I am not sure how they would separate it from their normative logic about who should live or die, or provide a frank account without unintended consequences for public trust or anxiety. If so, one aspect of government policy is to keep some choices implicit and avoid a lot of debate on trade-offs. Another is to make choices continuously without knowing what their impact will be (the most likely scenario right now).

  5. Make a choice, or recommendation to your client.

Your recommendation or choice would build on these four steps. Define the problem with one framing at the expense of the others. Romanticise some people and not others. Decide how to support some people, and coerce or punish others. Prioritise the lives of some people in the knowledge that others will suffer or die. Do it despite your lack of expertise and profoundly limited knowledge and information. Learn from experts, but don’t assume that only scientific experts have relevant knowledge (decolonise; coproduce). Recommend choices that, if damaging, could take decades to fix after you’ve gone. Consider if a policymaker is willing and able to act on your advice, and if your proposed action will work as intended. Consider if a government is willing and able to bear the economic and political costs. Protect your client’s popularity, and trust in your client, at the same time as protecting lives. Consider if your advice would change if the problem seemed to change. If you are writing your analysis, maybe keep it down to one sheet of paper (in other words, fewer words than in this post up to this point).

Policy analysis is not as simple as these steps suggest, and further analysis of the wider policymaking environment helps describe two profound limitations to simple analytical thought and action.

  1. Policymakers must ignore almost all evidence

The amount of policy relevant information is infinite, and capacity is finite. So, individuals and governments need ways to filter out almost all of it. Individuals combine cognition and emotion to help them make choices efficiently, and governments have equivalent rules to prioritise only some information. They include: define a problem and a feasible response, seek information that is available, understandable, and actionable, and identify credible sources of information and advice. In that context, the vague idea of trusting or not trusting experts is nonsense, and the larger post highlights the many flawed ways in which all people decide whose expertise counts.

  2. They do not control the policy process.

Policymakers engage in a messy and unpredictable world in which no single ‘centre’ has the power to turn a policy recommendation into an outcome.

  • There are many policymakers and influencers spread across a political system. For example, consider the extent to which each government department, devolved governments, and public and private organisations are making their own choices that help or hinder the UK government approach.
  • Most choices in government are made in ‘subsystems’, with their own rules and networks, over which ministers have limited knowledge and influence.
  • The social and economic context, and events, are largely out of their control.

The take-home messages (if you accept this line of thinking)

  1. The coronavirus is an extreme example of a general situation: policymakers will always have very limited knowledge of policy problems and limited control over their policymaking environment. They make choices to frame problems narrowly enough to seem solvable, rule out most solutions as not feasible, make value judgements to try to help some people more than others, try to predict the results, and respond when the results do not match their hopes or expectations.
  2. This is not a message of doom and despair. Rather, it encourages us to think about how to influence government, and hold policymakers to account, in a thoughtful and systematic way that does not mislead the public or exacerbate the problem we are seeing. No one is helping their government solve the problem by saying stupid shit on the internet (OK, that last bit was a message of despair).

 

Further reading:

The longer report sets out these arguments in much more detail, with some links to further thoughts and developments.

This series of ‘750 words’ posts summarises key texts in policy analysis and tries to situate policy analysis in a wider political and policymaking context. Note the focus on whose knowledge counts, which is not yet a big feature of this crisis.

These series of 500 words and 1000 words posts (with podcasts) summarise concepts and theories in policy studies.

This page on evidence-based policymaking (EBPM) uses those insights to demonstrate why EBPM is a political slogan rather than a realistic expectation.

These recorded talks relate those insights to common questions asked by researchers: why do policymakers seem to ignore my evidence, and what can I do about it? I’m happy to record more (such as on the topic you just read about) but not entirely sure who would want to hear what.


Filed under 750 word policy analysis, agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, POLU9UK, Prevention policy, Psychology Based Policy Studies, Public health, public policy, Social change, UK politics and policy

The coronavirus and evidence-informed policy analysis (long version)

This is the long version. It is long. Too long to call a blog post. Let’s call it a ‘living document’ that I update and amend as new developments arise (then start turning into a more organised paper). In most cases, I am adding tweets, so the date of the update is embedded. If I add a new section, I will add a date. If you seek specific topics (like ‘herd immunity’), it might be worth doing a search. The short version is shorter.

The coronavirus feels like a new policy problem. Governments already have policies for public health crises, but the level of uncertainty about the spread and impact of this virus seems to be taking it to a new level of policy, media, and public attention. The UK Government’s Prime Minister calls it ‘the worst public health crisis for a generation’.

As such, there is no shortage of opinions on what to do, but there is a shortage of well-considered opinions, producing little consensus. Many people are rushing to judgement and expressing remarkably firm opinions about the best solutions, but their contributions add up to contradictory evaluations, in which:

  • the government is doing precisely the right thing or the completely wrong thing,
  • we should listen to this expert saying one thing or another expert saying the opposite.

Lots of otherwise-sensible people are doing what they bemoan in politicians: rushing to judgement, largely accepting or sharing evidence only if it reinforces that judgement, and/or using their interpretation of any new development to settle scores with their opponents.

Yet, anyone who feels, without uncertainty, that they have the best definition of, and solution to, this problem is a fool. If people are also sharing bad information and advice, they are dangerous fools. Further, as Professor Medley puts it (in the video below), ‘anyone who tells you they know what’s going to happen over the next six months is lying’.

In that context, how can we make sense of public policy to address the coronavirus in a more systematic way?

Studies of policy analysis and policymaking do not solve a policy problem, but they at least give us a language to think it through.

  1. Let’s focus on the UK as an example, and use common steps in policy analysis, to help us think through the problem and how to try to manage it.
  • In each step, note how quickly it is possible to be overwhelmed by uncertainty and ambiguity, even when the issue seems so simple at first.
  • Note how difficult it is to move from Step 1, and to separate Step 1 from the others. It is difficult to define the problem without relating it to the solution (or to the ways in which we will evaluate each solution).
  2. Let’s relate that analysis to research on policymaking, to understand the wider context in which people pay attention to, and try to address, important problems that are largely out of their control.

Throughout, note that I am describing a thought process as simply as I can, not a full examination of relevant evidence. I am highlighting the problems that people face when ‘diagnosing’ policy problems, not trying to diagnose them myself. To do so, I draw initially on common advice from the key policy analysis texts (summaries of the texts that policy analysis students are most likely to read), which simplify the process a little too much. Still, the thought process that they encourage took me hours alone (spread over three days) to produce no real conclusion. Policymakers and advisers, in the thick of this problem, do not have that luxury of time or uncertainty.

See also: Boris Johnson’s address to the nation in full (23.3.20) and press conference transcripts

https://twitter.com/BorisJohnson/status/1246358936585986048

https://twitter.com/BorisJohnson/status/1243496858095411200

https://twitter.com/R_S_P_H/status/1242833029728477188

Step 1 Define the problem

Common advice in policy analysis texts:

  • Provide a diagnosis of a policy problem, using rhetoric and eye-catching data to generate attention.
  • Identify its severity, urgency, cause, and our ability to solve it. Don’t define the wrong problem, such as by oversimplifying.
  • Problem definition is a political act of framing, as part of a narrative to evaluate the nature, cause, size, and urgency of an issue.
  • Define the nature of a policy problem, and the role of government in solving it, while engaging with many stakeholders.
  • ‘Diagnose the undesirable condition’ and frame it as ‘a market or government failure (or maybe both)’.

Coronavirus as a physical problem is not the same as a coronavirus policy problem. To define the physical problem is to identify the nature, spread, and impact of a virus and illness on individuals and populations. To define a policy problem, we identify the physical problem and relate it (implicitly or explicitly) to what we think a government can, and should, do about it. Put more provocatively, it is only a policy problem if policymakers are willing and able to offer some kind of solution.

This point may seem semantic, but it raises a profound question about the capacity of any government to solve a problem like an epidemic, or for governments to cooperate to solve a pandemic. It is easy for an outsider to exhort a government to ‘do something!’ (or ‘ACT NOW!’) and express certainty about what would happen. However, policymakers inside government:

  1. Do not enjoy the same confidence that they know what is happening, or that their actions will have their intended consequences, and
  2. Will think twice about trying to regulate social behaviour under those circumstances, especially when they
  3. Know that any action or inaction will benefit some and punish others.

For example, can a government make people wash their hands? Or, if it restricts gatherings at large events, can it stop people gathering somewhere else, with worse impact? If it closes a school, can it stop children from going to their grandparents to be looked after until it reopens? There are 101 similar questions and, in each case, I reckon the answer is no. Maybe government action has some of the desired impact; maybe not. If you agree, then the question might be: what would it really take to force people to change their behaviour?

See also: Coronavirus has not suspended politics – it has revealed the nature of power (David Runciman)

The answer is: often too much for a government to consider (in a liberal democracy), particularly if policymakers are informed that it will not have the desired impact.

https://twitter.com/AdamJKucharski/status/1238152492178976769

If so, the UK government’s definition of the policy problem will incorporate this implicit question: what can we do if we can influence, but not determine (or even predict well) how people behave?

Uncertainty about the coronavirus plus uncertainty about policy impact

Now, add that general uncertainty about the impact of government to this specific uncertainty about the likely nature and spread of the coronavirus:

https://www.youtube.com/watch?time_continue=350&v=blkDulsgh3Q&feature=emb_logo

A summary of this video suggests:

  • There will be an epidemic (a profound spread to many people in a short space of time), then the problem will be endemic (a long-term, regular feature of life) (see also UK policy on coronavirus COVID-19 assumes that the virus is here to stay).
  • In the absence of a vaccine, the only way to produce ‘herd immunity’ is for most people to be infected and recover

[Note: there is much debate on whether ‘herd immunity’ is or is not government policy. Much of it relates to interpretation, based on levels of trust/distrust in the UK Government, its Prime Minister, and the Prime Minister’s special adviser. I discuss this point below under ‘trial and error policymaking’. See also Who can you trust during the coronavirus crisis? ]

  • The ideal spread involves all well people sharing the virus first, while all vulnerable people (e.g. older, and/or with existing health problems that affect their immune systems) are protected in one isolated space, but it won’t happen like that; so, we are trying to minimise damage in the real world.
  • We mainly track the spread via deaths, with data showing a major spike appearing one month later, so the problem may only seem real to most people when it is too late to change behaviour

https://twitter.com/ChrisGiles_/status/1247458186300456960

https://twitter.com/d_spiegel/status/1248157520943857665

https://twitter.com/d_spiegel/status/1247824140645683205

https://twitter.com/EmergMedDr/status/1250039068890726400

See also: Coronavirus: Government expert defends not closing UK schools (BBC, Sir Patrick Vallance 13th March 2020)

https://twitter.com/DrSamSims/status/1247445729439895555

  • The choice in theory is between a rapid epidemic with a high peak, or a slowed-down epidemic over a longer period, but ‘anyone who tells you they know what’s going to happen over the next six months is lying’.
  • Maybe this epidemic will be so memorable as to shift social behaviour, but so much depends on trying to predict (badly) if individuals will actually change (see also Spiegelhalter on communicating risk).

None of this account tells policymakers what to do, but at least it helps them clarify three key aspects of their policy problem:

  1. The impact of this virus and illness could overwhelm the population, to the extent that it causes mass deaths, causes a level of illness that exceeds the capacity of health services to treat, and contributes to an unpredictable amount of social and economic damage.
  2. We need to contain the virus enough to make sure it (a) spreads at the right speed and/or (b) peaks at the right time. The right speed seems to be: a level that allows most people to recover alone, while the most vulnerable are treated well in healthcare settings that have enough capacity. The right time seems to be the part of the year with the lowest demand on health services (e.g. summer is better than winter). In other words, (a) reduce the size of the peak by ‘flattening the curve’, and/or (b) find the right time of year to address the peak, while (c) anticipating more than one peak.

My impression is that the most frequently-expressed aim is (a) …

https://twitter.com/STVNews/status/1238468179036459008

https://twitter.com/DHSCgovuk/status/1238540941717356548

… while the UK Government’s Deputy Chief Medical Officer also seems to be describing (b):

  3. We need to encourage or coerce people to change their behaviour, to look after themselves (e.g. by handwashing) and forsake their individual preferences for the sake of public health (e.g. by self-isolating or avoiding vulnerable people). Perhaps we can foster social trust and empathy to encourage responsible individual action. Perhaps people will only protect others if obliged to do so (compare Stone; Ostrom; game theory).

See also: From across the Ditch: How Australia has to decide on the least worst option for COVID-19 (Prof Tony Blakely on three bad options: (1) the likelihood of ‘elimination’ of the virus before vaccination is low; (2) an 18-month lock-down will help ‘flatten the curve’; (3) ‘to prepare meticulously for allowing the pandemic to wash through society over a period of six or so months. To tool up the production of masks and medical supplies. To learn as quickly as possible which treatments of people sick with COVID-19 saves lives. To work out our strategies for protection of the elderly and those with a chronic condition (for whom the mortality from COVID-19 is much higher)’).

https://twitter.com/luciadambruoso/status/1246361265909444608

https://twitter.com/anandMenon1/status/1246712962519310337

From uncertainty to ambiguity

If you are still with me, I reckon you would have worded those aims slightly differently, right? There is some ambiguity about these broad intentions, partly because there is some uncertainty, and partly because policymakers need to set rather vague intentions to generate the highest possible support for them. However, vagueness is not our friend during a crisis involving such high anxiety. Further, vague aims only delay the inevitable choices that people need to make to turn a complex, multi-faceted problem into something simple enough to describe and manage. The problem may be complex, but our attention focuses on only a small number of its aspects, at the expense of the rest. Examples that have arisen so far include choices to accentuate:

  1. The health of the whole population or people who would be affected disproportionately by the illness.
  • For example, the difference in emphasis affects the health advice for the relatively vulnerable (and the balance between exhortation and reassurance)

https://twitter.com/colinrtalbot/status/1238227267471527937?s=09

https://twitter.com/hacscot/status/1240588827829436416?s=09

https://twitter.com/lisatrigg/status/1249670660802187266

 

  2. Inequalities in relation to health, socio-economic status (e.g. income, gender, race, ethnicity), or the wider economy.
  • For example, restrictive measures may reduce the risk of harm to some, but increase the burden on people with no savings or reliable sources of income.
  • For example, some people are hoarding large quantities of home and medical supplies that (a) other people cannot afford, and (b) some people cannot access, despite having higher need.
  • For example, social distancing will limit the spread of the virus (see the nascent evidence), but also produce highly unequal forms of social isolation that increase the risk of domestic abuse (possibly exacerbated by school closures) and undermine wellbeing. Or, there will be major policy changes, such as to the rules to detain people under mental health legislation, regarding abortion, or in relation to asylum (note: some of these tweets are from the US, partly because I’m seeing more attention to race – and the consequence of systematic racism on the socioeconomic inequalities so important to COVID-19 mortality – than in the UK).

See also: COVID-19: how the UK’s economic model contributes towards a mismanagement of the crisis (Carolina Alves and Farwa Sial 30.3.20),

Economic downturn and wider NHS disruption likely to hit health hard – especially health of most vulnerable (Institute for Fiscal Studies 9.4.20),

Don’t be fooled: Britain’s coronavirus bailout will make the rich richer still (Christine Berry 13.4.20)

https://twitter.com/closethepaygap/status/1244579870392422400

https://twitter.com/heyDejan/status/1238944695260233728?s=09

https://twitter.com/TimothyNoah1/status/1240375741809938433

https://twitter.com/politicshome/status/1249236632009691136?s=09

 

https://twitter.com/NPR/status/1246837779474120705?s=09

https://twitter.com/povertyscholar/status/1246487621230092294

https://twitter.com/Yamiche/status/1248028548998344708

https://twitter.com/MalindaSmith/status/1247281226274107392

https://twitter.com/Jas_Athwal/status/1248875273568878592?s=09

https://twitter.com/GKBhambra/status/1248874500764073989

https://twitter.com/sunny_hundal/status/1247454112762990592

https://twitter.com/olivernmoody/status/1248260326140805125

https://twitter.com/boodleoops/status/1246717497308577792

https://twitter.com/MarioLuisSmall/status/1239879542094925825

https://twitter.com/kevinstoneUWE/status/1240000285046640645?s=09

https://twitter.com/colinimckay/status/1240721797731045378?s=09

https://twitter.com/heytherehurley/status/1242113416103432195

https://twitter.com/stellacreasy/status/1244022413865648128

https://twitter.com/NIOgov/status/1246482663738871811

https://twitter.com/refugeecouncil/status/1243842703680471040

https://twitter.com/libertyhq/status/1248173788598013953

https://twitter.com/TheLancet/status/1246039259880054784

https://twitter.com/profhrs/status/1247572112061222914

https://twitter.com/HumzaYousaf/status/1248262165657722885

  • For example, governments cannot ignore the impact of their actions on the economy, however much they emphasise mortality, health, and wellbeing. Most high-profile emphasis was initially on the fate of large and small businesses, and people with mortgages, but a long period of crisis will tip the balance from low income to unsustainable poverty (even prompting Iain Duncan Smith to propose policy change), and why favour people who can afford a mortgage over people scraping the money together for rent?
  3. A need for more communication and exhortation, or for direct action to change behaviour.
  4. The short term (do everything possible now) or long term (manage behaviour over many months).
  5. How to maintain trust in the UK government when (a) people are more or less inclined to trust the current governing party, and general trust may be quite low, and (b) so many other governments are acting differently from the UK.

https://twitter.com/DrSophieHarman/status/1238893265782530059

https://twitter.com/Sander_vdLinden/status/1242168652180475906?s=09

https://twitter.com/policyatkings/status/1248318259029516289

  • For example, note the visible presence of the Prime Minister, but also his unusually high deference to unelected experts such as (a) UK Government senior scientists providing direct advice to ministers and the public, and (b) scientists drawing on limited information to model behaviour and produce realistic scenarios (we can return to the idea of ‘evidence-based policymaking’ later). This approach is not uncommon during epidemics and pandemics (Liam Donaldson was then the UK Government’s Chief Medical Officer):

https://twitter.com/AndyBurnhamGM/status/1239153510903619584

  • For example, note how often people are second guessing and criticising the UK Government position (and questioning the motives of Conservative ministers).

See also: Coronavirus: meet the scientists who are now household names

  6. How policy in relation to the coronavirus relates to other priorities (e.g. Brexit, Scottish independence, trade, education, culture)

  7. Who caused, or who is exacerbating, the problem? The answers to such questions help determine which populations are most subject to policy intervention.

  • For example, people often try to lay blame for viruses on certain populations, based on their nationality, race, ethnicity, sexuality, or behaviour (e.g. with HIV).
  • For example, the (a) association between the coronavirus and China and Chinese people (e.g. restrict travel to/ from China; e.g. exacerbate racism), initially overshadowed (b) the general role of international travellers (e.g. place more general restrictions on behaviour), and (c) other ways to describe who might be responsible for exacerbating a crisis.

See also: ‘Othering the Virus‘ by Marius Meinhof

Under ‘normal’ policymaking circumstances, we would expect policymakers to resolve this ambiguity by exercising power to set the agenda and make choices that close off debate. Attention rises at first, a choice is made, and attention tends to move on to something else. With the coronavirus, attention to many different aspects of the problem has been lurching remarkably quickly. The definition of the policy problem often seems to be changing daily or hourly, and more quickly than the physical problem. It will also change many more times, particularly when attention to each personal story of illness or death prompts people to question government policy every hour. If the policy problem keeps changing in these ways, how could a government solve it?

Step 2 Identify technically and politically feasible solutions

Common advice in policy analysis texts:

  • Identify the relevant and feasible policy solutions that your audience/ client might consider.
  • Explain potential solutions in sufficient detail to predict the costs and benefits of each ‘alternative’ (including current policy).
  • Provide ‘plausible’ predictions about the future effects of current/ alternative policies.
  • Identify many possible solutions, then select the ‘most promising’ for further analysis.
  • Identify how governments have addressed comparable problems, and a previous policy’s impact.

Policy ‘solutions’ are better described as ‘tools’ or ‘instruments’, largely because (a) it is rare to expect them to solve a problem, and (b) governments use many instruments (in different ways, at different times) to make policy, including:

  1. Public expenditure (e.g. to boost spending for emergency care, crisis services, medical equipment)
  2. Economic incentives and disincentives (e.g. to reduce the cost of business or borrowing, or tax unhealthy products)
  3. Linking spending to entitlement or behaviour (e.g. social security benefits conditional on working or seeking work, perhaps with the rules modified during crises)
  4. Formal regulations versus voluntary agreements (e.g. making organisations close, or encouraging them to close)
  5. Public services: universal or targeted, free or with charges, delivered directly or via non-governmental organisations
  6. Legal sanctions (e.g. criminalising reckless behaviour)
  7. Public education or advertising (e.g. as paid adverts or via media and social media)
  8. Funding scientific research, and organisations to advise on policy
  9. Establishing or reforming policymaking units or departments
  10. Behavioural instruments, to ‘nudge’ behaviour (seemingly a big feature in the UK, such as on how to encourage handwashing).

As a result, what we call ‘policy’ is really a complex mix of instruments adopted by one or more governments. A truism in policy studies is that it is difficult to define or identify exactly what policy is because (a) each new instrument adds to a pile of existing measures (with often-unpredictable consequences), and (b) many instruments designed for individual sectors tend, in practice, to intersect in ways that we cannot always anticipate. When you think through any government response to the coronavirus, note how every measure is connected to many others.

Further, it is a truism in public policy that there is a gap between technical and political feasibility: the things that we think will be most likely to work as intended if implemented are often the things that would receive the least support or most opposition. For example:

  1. Redistributing income and wealth to reduce socio-economic inequalities (e.g. to allay fears about the impact of current events on low-income and poverty) seems to be less politically feasible than distributing public services to deal with the consequences of health inequalities.
  2. Providing information and exhortation seems more politically feasible than the direct regulation of behaviour. Indeed, compared to many other countries, the UK Government seems reluctant to introduce ‘quarantine’ style measures to restrict behaviour.

Under ‘normal’ circumstances, governments may be using these distinctions as simple heuristics to help them make modest policy changes while remaining sufficiently popular (or at least looking competent). If so, they are adding or modifying policy instruments during individual ‘windows of opportunity’ for specific action, or perhaps contributing to the sense of incremental change towards an ambitious goal.

Right now, we may be pushing the boundaries of what seems possible, since crises – and the need to address public anxiety – tend to change what seems politically feasible. However, many options that seem politically feasible may not be possible (e.g. to buy a lot of extra medical/ technology capacity quickly), or may not work as intended (e.g. to restrict the movement of people). Think of technical and political feasibility as necessary but insufficient on their own, which is a requirement that rules out a lot of responses.

https://twitter.com/CairneyPaul/status/1244970044351791104

https://twitter.com/ChrisCEOHopson/status/1249617980859744256?s=09

Step 3 Use value-based criteria and political goals to compare solutions

Common advice in policy analysis texts:

  • Typical value judgements relate to efficiency, equity and fairness, the trade-off between individual freedom and collective action, and the extent to which a policy process involves citizens in deliberation.
  • Normative assessments are based on values such as ‘equality, efficiency, security, democracy, enlightenment’ and beliefs about the preferable balance between state, communal, and market/ individual solutions
  • ‘Specify the objectives to be attained in addressing the problem and the criteria to evaluate the attainment of these objectives as well as the satisfaction of other key considerations (e.g., equity, cost, equity, feasibility)’.
  • ‘Effectiveness, efficiency, fairness, and administrative efficiency’ are common.
  • Identify (a) the values to prioritise, such as ‘efficiency’, ‘equity’, and ‘human dignity’, and (b) ‘instrumental goals’, such as ‘sustainable public finance or political feasibility’, to generate support for solutions.
  • Instrumental questions may include: Will this intervention produce the intended outcomes? Is it easy to get agreement and maintain support? Will it make me popular, or diminish trust in me even further?

Step 3 is the most simple-looking but difficult task. Remember that it is a political, not technical, process. It is also a political process that most people would like to avoid doing (at least publicly) because it involves making explicit the ways in which we prioritise some people over others. Public policy is the choice to help some people and punish or refuse to help others (and includes the choice to do nothing).

Policy analysis texts describe a relatively simple procedure of identifying criteria and producing a table (with a solution in each row, and criteria in each column) to compare the trade-offs between each solution. However, these criteria are notoriously difficult to define, and people resolve that problem by exercising power to decide what each term means, and whose interests should be served when they resolve trade-offs. For example, see Stone on whose needs come first, who benefits from each definition of fairness, and how technical-looking processes such as ‘cost benefit analysis’ mask political choices.

Right now, the most obvious and visible trade-off, accentuated in the UK, is between individual freedom and collective action, or the balance between state, communal, and market/ individual solutions. In comparison with many countries (China and Italy in particular), the UK Government seems to be favouring individual action over state quarantine measures. However, most trade-offs are difficult to categorise:

  1. What should be the balance between efforts to minimise the deaths of some (generally in older populations) and maximise the wellbeing of others? This is partly about human dignity during crisis, how we treat different people fairly, and the balance of freedom and coercion.
  2. How much should a government spend to keep people alive using intensive care or expensive medicines, when the money could be spent improving the lives of far more people? This is partly about human dignity, the relative efficiency of policy measures, and fairness.

If you are like me, you don’t really want to answer such questions (indeed, even writing them looks callous). If so, one way to resolve them is to elect policymakers to make such choices on our behalf (perhaps aided by experts in moral philosophy, or with access to deliberative forums). To endure, this unusually high level of deference to elected ministers requires some kind of reciprocal act:

https://twitter.com/devisridhar/status/1240648925998178304

See also: We must all do everything in our power to protect lives (UK Secretary of State for Health and Social Care)

Still, I doubt that governments are making reportable daily choices with reference to a clear and explicit view of what the trade-offs and priorities should be, because their choices are about who will die, and their ability to predict outcomes is limited.

See also: Media experts despair at Boris Johnson’s coronavirus campaign (Sonia Sodha)

Step 4 Predict the outcome of each feasible solution.

Common advice in policy analysis texts:

  • Focus on the outcomes that key actors care about (such as value for money), and quantify and visualise your predictions if possible. Compare the pros and cons of each solution, such as how much of a bad service policymakers will accept to cut costs.
  • ‘Assess the outcomes of the policy options in light of the criteria and weigh trade-offs between the advantages and disadvantages of the options’.
  • Estimate the cost of a new policy, in comparison with current policy, and in relation to factors such as savings to society or benefits to certain populations. Use your criteria and projections to compare each alternative in relation to their likely costs and benefits.
  • Explain potential solutions in sufficient detail to predict the costs and benefits of each ‘alternative’ (including current policy).
  • Short deadlines dictate that you use ‘logic and theory, rather than systematic empirical evidence’ to make predictions efficiently.
  • Monitoring is crucial because it is difficult to predict policy success, and unintended consequences are inevitable. Try to measure the outcomes of your solution, while noting that evaluations are contested.
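The advice to ‘estimate the cost of a new policy, in comparison with current policy’ amounts to computing incremental net benefit against a baseline. A hedged sketch, with entirely invented figures:

```python
# Compare each alternative against continuing current policy (the baseline),
# as the policy analysis texts advise. All figures are invented placeholders.

BASELINE = {"cost": 100.0, "benefit": 120.0}   # current policy, arbitrary units

ALTERNATIVES = {
    "option A": {"cost": 150.0, "benefit": 200.0},
    "option B": {"cost": 90.0,  "benefit": 110.0},
}

def incremental_net_benefit(policy):
    """Net benefit of a policy relative to keeping the current one."""
    extra_benefit = policy["benefit"] - BASELINE["benefit"]
    extra_cost = policy["cost"] - BASELINE["cost"]
    return extra_benefit - extra_cost

for name, policy in ALTERNATIVES.items():
    print(f"{name}: incremental net benefit = {incremental_net_benefit(policy):+.1f}")
```

The hard part is not the arithmetic but deciding what counts as a ‘benefit’, and to whom, which returns us to the value judgements of Step 3.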

It is difficult to envisage a way for the UK Government to publicise the thinking behind its choices (Step 3) and predictions (Step 4) in a way that would encourage effective public deliberation, rather than a highly technical debate between a small number of academics:

Further, people often call for the UK Government to publicise its expert advice and operational logic, but I am not sure how they would separate it from its normative logic, or provide a frank account without unintended consequences for public trust or anxiety. If so, government policy involves (a) keeping some choices implicit to avoid extensive debate on trade-offs, and (b) making general statements about choices when their impact is unknown.

Step 5 Make a recommendation to your client

Common advice in policy analysis texts:

  • Examine your case through the eyes of a policymaker. Keep it simple and concise.
  • Make a preliminary recommendation to inform an iterative process, drawing feedback from clients and stakeholder groups
  • Client-oriented advisors identify the beliefs of policymakers and tailor accordingly.
  • ‘Unless your client asks you not to do so, you should explicitly recommend one policy’

I now invite you to make a recommendation (step 5) based on our discussion so far (steps 1-4). Define the problem with one framing at the expense of the others. Romanticise some people and not others. Decide how to support some people, and coerce or punish others. Prioritise the lives of some people in the knowledge that others will suffer or die. Do it despite your lack of expertise and profoundly limited knowledge and information. Learn from experts, but don’t assume that only scientific experts have relevant knowledge (decolonise; coproduce). Recommend choices that, if damaging, could take decades to fix after you’ve gone. Consider if a policymaker is willing and able to act on your advice, and if your proposed action will work as intended. Consider if a government is willing and able to bear the economic and political costs. Protect your client’s popularity, and trust in your client, at the same time as protecting lives. Consider if your advice would change if the problem seemed to change. If you are writing your analysis, maybe keep it down to one sheet of paper (and certainly far fewer words than in this post). Better you than me.

Please now watch this video before I suggest that things are not so simple.

Would that policy analysis were so simple

Imagine writing policy analysis in an imaginary world, in which there is a single powerful ‘rational’ policymaker at the heart of government, making policy via an orderly series of stages.

[Image: policy cycle and cycle spirograph, 18.2.20]

Your audience would be easy to identify at each stage, your analysis would be relatively simple, and you would not need to worry about what happens after you make a recommendation for policy change (since the selection of a solution would lead to implementation).  You could adopt a simple 5 step policy analysis method, use widely-used tools such as cost-benefit analysis to compare solutions, and know where the results would feed into the policy process.

Studies of policy analysts describe how unrealistic this expectation tends to be (Radin, Brans, Thissen).

[Image: table for coronavirus]

For example, many policymakers, analysts, influencers, and experts are spread across political systems, engaging with 101 policy problems simultaneously, so it is not even clear how everyone fits together and interacts in what we call (for the sake of simplicity) ‘the policy process’.

Instead, we can describe real world policymaking with reference to two factors.

The wider policymaking environment: 1. Limiting the use of evidence

First, policymakers face ‘bounded rationality’, in which they only have the ability to pay attention to a tiny proportion of available facts, are unable to separate those facts from their values (since we use our beliefs to evaluate the meaning of facts), struggle to make clear and consistent choices, and do not know what impact they will have. The consequences can include:

  • Limited attention, and lurches of attention. Policymakers can only pay attention to a tiny proportion of their responsibilities, and policymaking organizations struggle to process all policy-relevant information. They prioritize some issues and information and ignore the rest.
  • Power and ideas. Some ways of understanding and describing the world dominate policy debate, helping some actors and marginalizing others.
  • Beliefs and coalitions. Policymakers see the world through the lens of their beliefs. They engage in politics to turn their beliefs into policy, form coalitions with people who share them, and compete with coalitions who don’t.
  • Dealing with complexity. They engage in ‘trial-and-error strategies’ to deal with uncertain and dynamic environments (see the new section on trial-and-error at the end).
  • Framing and narratives. Policy audiences are vulnerable to manipulation when they rely on other actors to help them understand the world. People tell simple stories to persuade their audience to see a policy problem and its solution in a particular way.
  • The social construction of populations. Policymakers draw on quick emotional judgements, and social stereotypes, to propose benefits to some target populations and punishments for others.
  • Rules and norms. Institutions are the formal rules and informal understandings that represent a way to narrow information searches efficiently to make choices quickly.
  • Learning. Policy learning is a political process in which actors engage selectively with information, not a rational search for truth.

Evidence-based or expert-informed policymaking

Put simply, policymakers cannot oversee a simple process of ‘evidence-based policymaking’. Rather, to all intents and purposes:

  1. They need to find ways to ignore most evidence so that they can focus disproportionately on some. Otherwise, they will be unable to focus well enough to make choices. The cognitive and organisational shortcuts, described above, help them do it almost instantly.
  2. They also use their experience to help them decide – often very quickly – what evidence is policy-relevant under the circumstances. Relevance can include:
  • How it relates to the policy problem as they define it (Step 1).
  • If it relates to a feasible solution (Step 2).
  • If it is timely, available, understandable, and actionable.
  • If it seems credible, such as from groups representing wider populations, or from people they trust.
  3. They use a specific shortcut: relying on expertise.

However, the vague idea of trusting or not trusting experts is a nonsense, largely because it is virtually impossible to set a clear boundary between relevant and irrelevant experts, or to find a broad consensus on exactly what is happening and what to do. Instead, in political systems, we define the policy problem or find other ways to identify the most relevant expertise and exclude other sources of knowledge.

In the UK Government’s case, it appears to be relying primarily on expertise from its own general scientific advisers, medical and public health advisers, and – perhaps more controversially – advisers on behavioural public policy.

[Image: box 7.1]

Right now, it is difficult to tell exactly how and why it relies on each expert (at least when the expert is not in a clearly defined role, in which case it would be irresponsible not to consider their advice). Further, there are regular calls on Twitter for ministers to be more open about their decisions.

See also: Coronavirus: do governments ever truly listen to ‘the science’?

However, don’t underestimate the problems of identifying why we make choices, then justifying one expert or another (while avoiding pointless arguments), or prioritising one form of advice over another. Look, for example, at the kind of short-cuts that intelligent people use, which seem sensible enough, but would receive much more intense scrutiny if presented in this way by governments:

  • Sophisticated speculation by experts in a particular field, shared widely (look at the RTs), but questioned by other experts in another field:
  • Experts in one field trusting certain experts in another field based on personal or professional interaction:
  • Experts in one field not trusting a government’s approach based on its use of one (of many) sources of advice:
  • Experts representing a community of experts, criticising another expert (Prof John Ashton), for misrepresenting the amount of expert scepticism of government experts (yes, I am trying to confuse you):
  • Expert debate on how well policymakers are making policy based on expert advice
  • Finding quite-sensible ways to trust certain experts over others, such as because they can be held to account in some way (and may be relatively worried about saying any old shit on the internet):

There are many more examples in which the shortcut to expertise is fine, but not particularly better than another shortcut (and likely to include a disproportionately high number of white men with STEM backgrounds).

Update: of course, they are better than the ‘volume trumps expertise’ approach:

See also:

Further, in each case, we may be receiving this expert advice via many other people, and by the time it gets to us the meaning is lost or reversed (or there is some really sophisticated expert analysis of something rumoured – not demonstrated – to be true):

For what it’s worth, I tend to favour experts who:

(a) establish the boundaries of their knowledge, (b) admit to high uncertainty about the overall problem:

(c) (in this case) make it clear that they are working on scenarios, not simple prediction

(d) examine critically the too-simple ideas that float around, such as the idea that the UK Government should emulate ‘what works’ somewhere else

(e) situate their own position (in Prof Sridhar’s case, for mass testing) within a broader debate

See also:

See also: Prof Sir John Bell (4.3.20) on why an accurate antibody test is at least one month away and these exchanges on the problems with test ‘accuracy’:

(f) use their expertise on governance to highlight problems with thoughtless criticism

However, note that most of these experts are from a very narrow social background, and from very narrow scientific fields (first in modelling, then likely in testing), despite the policy problem being largely about (a) who, and how many people, a government should try to save, and (b) how far a government should go to change behaviour to do it (Update 2.4.20: I wrote that paragraph before adding so many people to the list). It is understandable to defer in this way during a crisis, but it also contributes to a form of ‘depoliticisation’ that masks profound choices that benefit some people and leave others vulnerable to harm.

See also: COVID-19: a living systematic map of the evidence

See also: To what extent does evidence support decision making during infectious disease outbreaks? A scoping literature review

See also: Covid-19: why is the UK government ignoring WHO’s advice? (British Medical Journal editorial)

See also: Coronavirus: just 2,000 NHS frontline workers tested so far

See also: ‘What’s important is social distancing’ coronavirus testing ‘is a side issue’, says Deputy Chief Medical Officer [Professor Jonathan Van-Tam talks about the important distinction between a currently available test to see if someone has contracted the virus (an antigen test) and a forthcoming test to see if someone has had and recovered from COVID-19 (an antibody test)]. The full interview is here (please feel free to ignore the editorialising of the uploader):

See also: Why is Germany able to test for coronavirus so much more than the UK? (which focuses mostly on Germany’s innovation, and partly on the UK’s (Public Health England’s) focus on making sure its test is reliable, in the context of ‘coronavirus tests produced at great speed which have later proven to be inaccurate’, such as one with a below-30% accuracy rate, which is worse than not testing at all). Compare with The Coronavirus Hit Germany And The UK Just Days Apart But The Countries Have Responded Differently. Here’s How and the opinion piece ‘A public inquiry into the UK’s coronavirus response would find a litany of failures’.

See also: Rights and responsibilities in the Coronavirus pandemic

See also: UK police warned against ‘overreach’ in use of virus lockdown powers (although note that there is no UK police force and that Scotland has its own legal system) and Coronavirus: extra police powers risk undermining public trust (Alex Oaten and Chris Allen)

See also (Calderwood resigned as CMO that night):

See also: Social Licensing of Privacy-Encroaching Policies to Address the COVID-19 Pandemic (U.K.) (research on public opinion)

The wider policymaking environment: 2. Limited control

Second, policymakers engage in a messy and unpredictable world in which no single ‘centre’ has the power to turn a policy recommendation into an outcome. I normally use the following figure to think through the nature of a complex and unwieldy policymaking environment of which no ‘centre’ of government has full knowledge or control.

[Image: the policy process, 25.10.18]

It helps us identify (further) the ways in which we can reject the idea that the UK Prime Minister and colleagues can fully understand and solve policy problems:

Actors. The environment contains many policymakers and influencers spread across many levels and types of government (‘venues’).

For example, consider how many key decisions that (a) have been made by organisations not in the UK central government, and (b) are more or less consistent with its advice, including:

  • Devolved governments announcing their own healthcare and public health responses (although the level of UK coordination seems more significant than the level of autonomy).
  • Public sector employers initiating or encouraging at-home working (and many Universities moving quickly from in-person to online teaching)
  • Private organisations cancelling cultural and sporting events.

Context and events. Policy solutions relate to socioeconomic context and events which can be impossible to ignore and out of the control of policymakers. The coronavirus, and its impact on so many aspects on population health and wellbeing, is an extreme example of this problem.

Networks, Institutions, and Ideas. Policymakers and influencers operate in subsystems (specialist parts of political systems). They form networks or coalitions built on the exchange of resources, or facilitated by trust underpinned by shared beliefs or previous cooperation.

Many different parts of government have practices driven by their own formal and informal rules. Formal rules are often written down or known widely. Informal rules are the unwritten norms and practices that are difficult to understand, and may not even be understood in the same way by participants.

Political actors relate their analysis to shared understandings of the world – how it is, and how it should be – which are often so established as to be taken for granted. These dominant frames of reference establish the boundaries of the political feasibility of policy solutions. Such insights suggest that most policy decisions are considered, made, and delivered in the name of – but not in the full knowledge of – government ministers.

Trial and error policymaking in complex policymaking systems (17.3.20)

There are many ways to conceptualise this policymaking environment, but few theories provide specific advice on what to do, or how to engage effectively in it. One notable exception is the general advice that comes from complexity theory, including:

  • Law-like behaviour is difficult to identify – so a policy that was successful in one context may not have the same effect in another.
  • Policymaking systems are difficult to control; policy makers should not be surprised when their policy interventions do not have the desired effect.
  • Policy makers in the UK have been too driven by the idea of order, maintaining rigid hierarchies and producing top-down, centrally driven policy strategies.  An attachment to performance indicators, to monitor and control local actors, may simply result in policy failure and demoralised policymakers.
  • Policymaking systems or their environments change quickly. Therefore, organisations must adapt quickly and not rely on a single policy strategy.

On this basis, there is a tendency in the literature to encourage the delegation of decision-making to local actors:

  1. Rely less on central government driven targets, in favour of giving local organisations more freedom to learn from their experience and adapt to their rapidly-changing environment.
  2. To deal with uncertainty and change, encourage trial-and-error projects, or pilots, that can provide lessons, or be adopted or rejected, relatively quickly.
  3. Encourage better ways to deal with alleged failure by treating ‘errors’ as sources of learning (rather than a means to punish organisations) or setting more realistic parameters for success/ failure (although see this example and this comment).
  4. Encourage a greater understanding, within the public sector, of the implications of complex systems and terms such as ‘emergence’ or ‘feedback loops’.

In other words, this literature, when applied to policymaking, tends to encourage a movement from centrally driven targets and performance indicators towards a more flexible understanding of rules and targets by local actors who are more able to understand and adapt to rapidly-changing local circumstances.

[See also: Complex systems and systems thinking]

Now, just imagine the UK Government taking that advice right now. I think it is fair to say that it would be condemned continuously (even more so than right now). Maybe that is because it is the wrong way to make policy in times of crisis. Maybe it is because too few people are willing and able to accept that the role of a small group of people at the centre of government is necessarily limited, and that effective policymaking requires trial-and-error rather than a single, fixed, grand strategy communicated to the public. The former framing highlights policy that changes with new information and perspectives; the latter highlights errors of judgement, incompetence, and U-turns. In either case, the advice is changing as estimates of the coronavirus’ impact change:

I think this tension, in the way that we understand UK government, helps explain some of the criticism that it faces when changing its advice to reflect changes in its data or advice. This criticism becomes intense when people also question the competence or motives of ministers (and even people reporting the news) more generally, leading to criticism that ranges from mild to outrageous:

For me, this casual reference to a government policy to ‘cull the herd of the weak’ is outrageous, but you can find much worse on Twitter. It reflects wider debate on whether ‘herd immunity’ is or is not government policy. Much of it relates to interpretation of government statements, based on levels of trust/distrust in the UK Government, its Prime Minister and Secretaries of State, and the Prime Minister’s special adviser.

However, I think that some of it is also about:

1. Wilful misinterpretation (particularly on Twitter). For example, in the early development and communication of policy, Boris Johnson was accused (in an irresponsibly misleading way) of advocating for herd immunity rather than restrictive measures.

See: Here is the transcript of what Boris Johnson said on This Morning about the new coronavirus (Full Fact)

[Image: Full Fact on coronavirus]

Below is one of the most misleading videos of its type. Look at how it cuts each segment into a narrative not provided by ministers or their advisors (see also this stinker):

See also:

2. The accentuation of a message not being emphasised by government spokespeople.

See for example this interview, described by Sky News (13.3.20) as: The government’s chief scientific adviser Sir Patrick Vallance has told Sky News that about 60% of people will need to become infected with coronavirus in order for the UK to enjoy “herd immunity”. You might be forgiven for thinking that he was on Sky extolling the virtues of a strategy to that end (and expressing sincere concerns on that basis). This was certainly the write-up in respected papers like the FT (UK’s chief scientific adviser defends ‘herd immunity’ strategy for coronavirus). Yet, he was saying nothing of the sort. Rather, when prompted, he discussed herd immunity in relation to the belief that COVID-19 will endure long enough to become as common as seasonal flu.
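For context on where a figure like 60% comes from: it is consistent with the standard epidemiological threshold for herd immunity, 1 − 1/R0, assuming a basic reproduction number (R0) of around 2.5. That R0 value is an illustrative assumption consistent with early-2020 estimates, not a figure taken from SAGE or Vallance, and the true value was (and remains) uncertain.

```python
# A plausible derivation of the ~60% figure: the standard herd-immunity
# threshold is 1 - 1/R0. The R0 values below are illustrative assumptions.

def herd_immunity_threshold(r0):
    """Fraction of the population that must be immune to stop sustained spread."""
    return 1.0 - 1.0 / r0

for r0 in (2.0, 2.5, 3.0):
    print(f"R0 = {r0}: threshold = {herd_immunity_threshold(r0):.0%}")
```

A small change in the assumed R0 moves the threshold substantially, which is one reason such headline percentages deserve the hedging that the interviews themselves contained.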

The same goes for Vallance’s interview on the same day (13.3.20) during Radio 4’s Today programme (transcribed by the Spectator, which credits Vallance as the author, and gives it the headline ‘How herd immunity can help fight coronavirus’, as if it is his main message). The Today Programme also tweeted only 30 seconds to single out that brief exchange:

Yet, clearly his overall message – in this and other interviews – was that some interventions (e.g. staying at home; self-isolating with symptoms) would have bigger effects than others (e.g. school closures; prohibiting mass gatherings) during the ‘flattening of the peak’ strategy (‘What we don’t want is everybody to end up getting it in a short period of time so that we swamp and overwhelm NHS services’). Rather than describing ‘herd immunity’ as a strategy, he is really describing how to deal with its inevitability (‘Well, I think that we will end up with a number of people getting it’).

See also: British government wants UK to acquire coronavirus ‘herd immunity’, writes Robert Peston (12.3.20) and live debates (and reports grasping at straws) on whether or not ‘herd immunity’ was the goal of the UK government:

See also: Why weren’t we ready? (Harry Lambert) which is a good exemplar of the ‘U turn’ argument, and compare with the evidence to the Health and Social Care Committee (CMO Whitty, DCMO Harries) that it describes.

A more careful forensic analysis (such as this one) will try to relate each government choice to the ways in which key advisory bodies (such as the New and Emerging Respiratory Virus Threats Advisory Group, NERVTAG) received and described evidence on the current nature of the problem:

See also: Special Report: Johnson listened to his scientists about coronavirus – but they were slow to sound the alarm (Reuters)

Some aspects may also be clearer when there is systematic qualitative interview data on which to draw. Right now, there are bits and pieces of interviews sandwiched between whopping great editorial discussions (e.g. FT Alphaville Imperial’s Neil Ferguson: “We don’t have a clear exit strategy”; compare with the more useful Let’s flatten the coronavirus confusion curve) or confused accounts by people speaking to someone who has spoken to someone else (e.g. Buzzfeed Even The US Is Doing More Coronavirus Tests Than The UK. Here Are The Reasons Why).

See also: other rabbit holes are available

[OK, that proved to be a big departure from the trial-and-error discussion. Here we are, back again]

In some cases, maybe people are making the argument that trial-and-error is the best way to respond quickly, and adapt quickly, in a crisis, but that the UK Government’s version is not what, say, the WHO thinks of as a good kind of adaptive response. It is not possible to tell, at least from the general ways in which they justify acting quickly.

See also the BBC’s provocative question (which I expect to be replaced soon):

Compare with:

The take home messages

  1. The coronavirus is an extreme example of a general situation: policymakers will always have very limited knowledge of policy problems and limited control over their policymaking environment. They make choices to frame problems narrowly enough to seem solvable, rule out most solutions as not feasible, make value judgements to try to help some people more than others, try to predict the results, and respond when the results do not match their hopes or expectations.
  2. This is not a message of doom and despair. Rather, it encourages us to think about how to influence government, and hold policymakers to account, in a thoughtful and systematic way that does not mislead the public or exacerbate the problem we are seeing.

Further reading, until I can think of a better conclusion:

This series of ‘750 words’ posts summarises key texts in policy analysis and tries to situate policy analysis in a wider political and policymaking context. Note the focus on whose knowledge counts, which is not yet a big feature of this crisis.

These series of 500 words and 1000 words posts (with podcasts) summarise concepts and theories in policy studies.

This page on evidence-based policymaking (EBPM) uses those insights to demonstrate why EBPM is  a political slogan rather than a realistic expectation.

These recorded talks relate those insights to common questions asked by researchers: why do policymakers seem to ignore my evidence, and what can I do about it? I’m happy to record more (such as on the topic you just read about) but not entirely sure who would want to hear what.

See also: Advisers, Governments and why blunders happen? (Colin Talbot)

See also: Why we might disagree about … Covid-19 (Ruth Dixon and Christopher Hood)

See also: Pandemic Science and Politics (Daniel Sarewitz)

See also: We knew this would happen. So why weren’t we ready? (Steve Bloomfield)

See also: Europe’s coronavirus lockdown measures compared (Politico)


7 Comments

Filed under 750 word policy analysis, agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, POLU9UK, Prevention policy, Psychology Based Policy Studies, Public health, public policy, Social change, UK politics and policy

Can A Government Really Take Control Of Public Policy?

This post first appeared on the MIHE blog to help sell my book.

During elections, many future leaders give the impression that they will take control of public policy. They promise major policy change and give little indication that anything might stand in their way.

This image has been a major feature of Donald Trump’s rhetoric on his US Presidency. It has also been a feature of campaigns for the UK withdrawal from the European Union (‘Brexit’) to allow its leaders to take back control of policy and policymaking. According to this narrative, Brexit would allow (a) the UK government to make profound changes to immigration and spending, and (b) Parliament and the public to hold the UK government directly to account, in contrast to a distant EU policy process less subject to direct British scrutiny.

Such promises are built on the false image of a single ‘centre’ of government, in which a small number of elected policymakers take responsibility for policy outcomes. This way of thinking is rejected continuously in the modern literature. Instead, policymaking is ‘multi-centric’: responsibility for policy outcomes is spread across many levels and types of government (‘centres’), and shared with organisations outside of government, to the extent that it is not possible simply to know who is in charge and whom to blame. This arrangement helps explain why leaders promise major policy change but most outcomes represent a minor departure from the status quo.

Some studies of politics relate this arrangement to the choice to share power across many centres. In the US, a written constitution ensures power sharing across different branches (executive, legislative, judicial) and between federal and state or local jurisdictions. In the UK, central government has long shared power with EU, devolved, and local policymaking organisations.

However, policy theories show that most aspects of multi-centric governance are necessary. The public policy literature provides many ways to describe such policy processes, but two are particularly useful.

The first approach is to explain the diffusion of power with reference to an enduring logic of policymaking, as follows:

  • The size and scope of the state is so large that it is always in danger of becoming unmanageable. Policymakers manage complexity by breaking the state’s component parts into policy sectors and sub-sectors, with power spread across many parts of government.
  • Elected policymakers can only pay attention to a tiny proportion of issues for which they are responsible. They pay attention to a small number and ignore the rest. They delegate policymaking responsibility to other actors such as bureaucrats, often at low levels of government.
  • At this level of government and specialisation, bureaucrats rely on specialist organisations for information and advice. Those organisations trade that information/advice and other resources for access to, and influence within, the government.
  • Most public policy is conducted primarily through small and specialist ‘policy communities’ that process issues at a level of government not particularly visible to the public, and with minimal senior policymaker involvement.

This description suggests that senior elected politicians are less important than people think, their impact on policy is questionable, and elections may not provide major changes in policy. Most decisions are taken in their name but without their intervention.

A second, more general, approach is to show that elected politicians deal with such limitations by combining cognition and emotion to make choices quickly. Although this allows them to be decisive, their choices occur within a policymaking environment over which governments have limited control. Government bureaucracies only have the coordinative capacity to direct policy outcomes in a small number of high-priority areas. In most other cases, policymaking is spread across many venues, each with its own rules, networks, ways of seeing the world, and ways of responding to socio-economic factors and events.

In that context, we should always be sceptical when election candidates and referendum campaigners (or, in many cases, leaders of authoritarian governments) make sweeping promises about political leadership and government control.

A more sophisticated knowledge of policy processes allows us to identify the limits to the actions of elected policymakers, and develop a healthier sense of pragmatism about the likely impact of government policy. The question of our age is not: how can governments take back control? Rather, it is: how can we hold policymakers to account in a complex system over which they have limited knowledge and even less control?


Filed under public policy, UK politics and policy

Beware the well-intentioned advice of unusually successful academics

This post – by Dr Kathryn Oliver and me – originally appeared on the LSE Impact Blog. I have replaced the picture of a thumb up with a cat hanging in there. 

Many academics want to see their research have an impact on policy and practice, and there is a lot of advice on how to seek it. It can be helpful to take advice from experienced and successful people. However, is this always the best advice? Guidance based on best practice and success stories, in particular, often reflects unequal access to policymakers, institutional support, and the credibility attached to certain personal characteristics.

To take stock of the vast amount of advice being offered to academics, we decided to compare it with the more systematic analyses available in the peer-reviewed literature on the ‘barriers’ between evidence and policy, and in policy studies. This allowed us to situate this advice in a wider context, see whether it was generalisable across settings and career stages, and think through the inconsistencies and dilemmas which underlie these suggestions.

The advice: Top tips on influencing policy

The key themes and individual recommendations we identified from the 86 most-relevant publications are:

  1. Do high quality research: Use well-established research designs, methods, or metrics.
  2. Make your research relevant and readable: Provide easily-understandable, clear, relevant and high-quality research. Aim for the general reader. Produce good stories based on emotional appeals or humour.
  3. Understand the policymaking context. Note the busy and constrained lives of policy actors. Maximise established ways to engage, such as in advisory committees. Be pragmatic, accepting that research rarely translates directly into policy.
  4. Be ‘accessible’ to policymakers. This may involve discussing topics beyond your narrow expertise. Be humble, courteous, professional, and recognise the limits to your skills.
  5. Decide if you want to be an ‘issue advocate’. Decide whether to simply explain the evidence, remain an ‘honest broker’, or recommend specific policy options. Negative consequences may include peer criticism, being seen as an academic lightweight, being used to add legitimacy to a policy position, and burnout.
  6. Build relationships (and ground rules) with policymakers: Relationship-building requires investment and skills, but working collaboratively is often necessary. Academics could identify policy actors to provide insights into policy problems, act as champions for their research, and identify the most helpful policy actors.
  7. Be ‘entrepreneurial’ or find someone who is. Be a daring, persuasive scientist, comfortable in policy environments and available when needed. Or, seek brokers to act on your behalf.
  8. Reflect continuously: should you engage, do you want to, and is it working? Academics may enjoy the work or are passionate about the issue. Even so, keep track of when and how you have had impact, and revise your practices continuously.

[Image: ‘hang in there, baby’ cat poster]

Inconsistencies and dilemmas

This advice tends not to address wider issues. For example, there is no consensus over what counts as good evidence for policy, or therefore how best to communicate good evidence. We know little about how to gain the wide range of skills that researchers and policymakers need to act collectively, including to: produce evidence syntheses, manage expert communities, ‘co-produce’ research and policy with a wide range of stakeholders, and be prepared to offer policy recommendations as well as scientific advice. Further, a one-size-fits-all model won’t help researchers navigate a policymaking environment where different venues have different cultures and networks. Researchers therefore need to decide what policy engagement is for—to frame problems or simply measure them according to an existing frame—and how far researchers should go to be useful and influential. If academics need to go ‘all in’ to secure meaningful impact, we need to reflect on the extent to which they have the resources and support to do so. This means navigating profound dilemmas:

Source: The dos and don’ts of influencing policy: a systematic review of advice to academics

 

Can academics try to influence policy? The financial costs of seeking impact are prohibitive for junior or untenured researchers, while women and people of colour may be more subject to personal abuse. Such factors undermine the diversity of voices available.

How should academics influence policy? Many of these new required skills – such as storytelling – are not a routine part of academic training, and may be looked down on by our colleagues.  

What is the purpose of academics’ engagement in policymaking? To go beyond tokenistic and instrumental engagement is to build genuine rapport with policymakers, which may require us to co-produce knowledge and cede some control over the research process. It involves a fundamentally different way of doing public engagement: one with no clear aim in mind other than to listen and learn, with the potential to transform research practices and outputs.

Where is the evidence that this advice helps us improve impact?

The existing advice offered to academics on how to create impact is – although often well-meaning – not based on systematic research or comprehensive analysis of empirical evidence. Few advice-givers draw clearly on key literatures on policymaking or evidence use. This leads to significant misunderstandings, which can have potentially costly repercussions for research, researchers and policy.  These limitations matter, as they lead to advice which fails to address core dilemmas for academics—whether to engage, how to engage, and why—which have profound implications for how scientists and universities should respond to the calls for increased impact.

Most tips focus on individual experience, whereas engagement between research and policy is driven by systemic factors. Many of the tips may be sensible and effective, but often only within particular settings. The advice is likely to be useful mostly to a relatively similar group of people who are confident and comfortable in policy environments, and have access and credibility within policy arenas. Thus, the current advice and structures may help reproduce and reinforce existing power dynamics and an underrepresentation of people who do not fit a very narrow mould.

The overall result may be that each generation of scientists has to fight the same battles, and learn the same lessons over again. Our best response as a profession is to interrogate current advice, shape and frame it, and to help us all to find ways to navigate the complex practical, political, moral and ethical challenges associated with being researchers today. The ‘how to’ literature can help, but only if authors are cognisant of their wider role in society and complex policymaking systems.

This blog post is based on the authors’ co-written articles, The dos and don’ts of influencing policy: a systematic review of advice to academics, published in Palgrave Communications, and ‘How should academics engage in policymaking to achieve impact?’  published in Political Studies Review 

About the authors

Kathryn Oliver is Associate Professor of Sociology and Public Health, London School of Hygiene and Tropical Medicine (@oliver_kathryn). Her interest is in how knowledge is produced, mobilized and used in policy and practice, and how this affects the practice of research. She co-runs the research collaborative Transforming Evidence with Annette Boaz (https://transformure.wordpress.com), and her writings can be found here: https://kathrynoliver.wordpress.com

Paul Cairney is Professor of Politics and Public Policy, University of Stirling, UK (@Cairneypaul).  His research interests are in comparative public policy and policy theories, which he uses to explain the use of evidence in policy and policymaking, in one book (The Politics of Evidence-Based Policy Making, 2016), several articles, and many, many blog posts: https://paulcairney.wordpress.com/ebpm/

See also:

  1. Adam Wellstead, Paul Cairney, and Kathryn Oliver (2018) ‘Reducing ambiguity to close the science-policy gap’, Policy Design and Practice, 1, 2, 115-25 PDF
  2. Paul Cairney and Kathryn Oliver (2017) ‘Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy?’ Health Research Policy and Systems (HARPS), DOI: 10.1186/s12961-017-0192-x PDF AM
  3. Paul Cairney, Kathryn Oliver, and Adam Wellstead (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review, 76, 3, 399–402 DOI:10.1111/puar.12555 PDF


Filed under Evidence Based Policymaking (EBPM), Folksy wisdom, public policy, Storytelling, UK politics and policy

Can Westminster take back control after Brexit?

All going well, this discussion will be in a box in Chapter 8 of Understanding Public Policy 2nd ed.

“The ‘Brexit’ referendum was dominated by a narrative of taking back control of policy and policy making. Control of policy would allow the UK government to make profound changes to immigration and spending. Control of policymaking would allow Parliament and the public to hold the UK government directly to account, in contrast to a more complex and distant EU policy process less subject to direct British scrutiny.

Such high level political debate is built on the false image of a small number of elected policymakers – and the Prime Minister in particular – responsible for the outcomes of the policy process.

There is a strange disconnect between the ways in which election campaigners and elected policymakers describe UK policymaking. Ministers have mostly given up the language of control; modern manifestos no longer make claims – such as to secure ‘full employment’ or eradicate health inequalities – that suggest they control the economy or can solve problems by providing public services. Yet, much Brexit rhetoric suggests that a vote to leave the EU will put control back in the hands of ministers to solve major problems.

The main problem with the latter way of thinking is that it is rejected continuously in the modern literature on policymaking. Policymaking is multi-centric: responsibility for outcomes is spread across many levels and types of government, to the extent that it is not possible to simply know who is in charge and to blame.

Some multi-level governance (MLG) relates to the choice to share power with EU, devolved, and local policymaking organisations.

However, most MLG is necessary because ministers do not have the cognitive or coordinative capacity to control policy outcomes.

They can only pay attention to a tiny proportion of their responsibilities, and have to delegate the rest. Most decisions are taken in their name but without their intervention. They occur within a policymaking environment over which ministers have limited knowledge and control.

The problem with using Brexit as a lens through which to understand British politics is that it emphasises the choice to no longer spread power across a political system, without acknowledging the necessity of doing so.

Our understanding of the future of UK policy and policymaking is incomplete without a focus on the concepts and evidence that help us understand why UK ministers must accept their limitations and act accordingly.

Yet, clearly the Westminster model archetype remains important even if it does not exist (Duggett, 2009). Policy studies have successfully challenged its image of central control, but the model’s importance resides in its rhetorical power in wider politics, when people maintain a simple argument during general election and referendum debates: we know who is – or should be – in charge. This perspective has a profound effect on the ways in which policymakers defend their actions, and political actors compete for votes, even when it is ridiculously misleading (Rhodes, 2013; Bevir, 2013)”

See also Policy Concepts in 1000 Words: the Westminster Model and Multi-level Governance


Filed under POLU9UK, public policy, UK politics and policy

Managing expectations about the use of evidence in policy

Notes for the #transformURE event hosted by Nuffield, 25th September 2018

I like to think that I can talk with authority on two topics that, much like a bottle of Pepsi and a pack of Mentos, you should generally keep separate:

  1. When talking at events on the use of evidence in policy, I say that you need to understand the nature of policy and policymaking to understand the role of evidence in it.
  2. When talking with students, we begin with the classic questions ‘what is policy?’ and ‘what is the policy process’, and I declare that we don’t know the answer. We define policy to show the problems with all definitions of policy, and we discuss many models and theories that only capture one part of the process. There is no ‘general theory’ of policymaking.

The problem, when you put together those statements, is that you need to understand the role of evidence within a policy process that we don’t really understand.

It’s an OK conclusion if you just want to declare that the world is complicated, but not if you seek ways to change it or operate more effectively within it.

Put less gloomily:

  • We have ways to understand key parts of the policy process. They are not ready-made to help us understand evidence use, but we can use them intelligently.
  • Most policy theories exist to explain policy dynamics, not to help us adapt effectively to them, but we can derive general lessons with often-profound implications.

Put even less gloomily, it is not too difficult to extract/ synthesise key insights from policy theories, explain their relevance, and use them to inform discussions about how to promote your preferred form of evidence use.

The only remaining problem is that, although the resultant advice looks quite straightforward, it is far easier said than done. The proposed actions are more akin to the Labours of Hercules than [PAC: insert reference to something easier].

They include:

  1. Find out where the ‘action’ is, so that you can find the right audience for your evidence. Why? There are many policymakers and influencers spread across many levels and types of government.
  2. Learn and follow the ‘rules of the game’. Why? Each policymaking venue has its own rules of engagement and evidence gathering, and the rules are often informal and unwritten.
  3. Gain access to ‘policy networks’. Why? Most policy is processed at a low level of government, beyond the public spotlight, between relatively small groups of policymakers and influencers. They build up trust as they work together, learning who is reliable and authoritative, and converging on how to use evidence to understand the nature and solution to policy problems.
  4. Learn the language. Why? Each venue has its own language to reflect dominant ideas, beliefs, or ways to understand a policy problem. In some arenas, there is a strong respect for a ‘hierarchy’ of evidence. In others, the key reference point may be value for money. In some cases, the language reflects the closing-off of some policy solutions (such as redistributing resources from one activity to another).
  5. Exploit windows of opportunity. Why? Events, and changes in socioeconomic conditions, often prompt shifts of attention to policy issues. ‘Policy entrepreneurs’ lie in wait for the right time to exploit a shift in the motive and opportunity of a policymaker to pay attention to and try to solve a problem.

So far so good, until you consider the effort it would take to achieve any of these things: you may need to devote the best part of your career to these tasks with no guarantee of success.

Put more positively, it is better to be equipped with these insights, and to appreciate the limits to our actions, than to think we can use top tips to achieve ‘research impact’ in a more straightforward way.

Kathryn Oliver and I describe these ‘how to’ tips in this post and, in this article in Political Studies Review, use a wider focus on policymaking environments to produce a more realistic sense of what individual researchers – and research-producing organisations – could achieve.

There is some sensible-enough advice out there for individuals – produce good evidence, communicate it well, form relationships with policymakers, be available, and so on – but I would exercise caution when it begins to recommend being ‘entrepreneurial’. The opportunities to be entrepreneurial are not shared equally, most entrepreneurs fail, and we can likely better explain their success with reference to their environment than their skill.

[Image: ‘hang in there, baby’ cat poster]


Filed under agenda setting, Evidence Based Policymaking (EBPM), public policy, UK politics and policy

The UK government’s imaginative use of evidence to make policy

This post describes a new article published in British Politics (Open Access). Please find:

(1) A super-exciting video/audio powerpoint I use for a talk based on the article

https://youtu.be/A7qYj8nRkYg

(2) The audio alone (link)

(3) The powerpoint to download, so that the weblinks work (link) or the ppsx/ presentation file in case you are having a party (link)

(4) A written/ tweeted discussion of the main points

https://twitter.com/CairneyPaul/status/950317933158334464

In retrospect, I think the title was too subtle and clever-clever. I wanted to convey two meanings: ‘imaginative’ as a euphemism for ridiculous (and often cynical) evidence use, and the argument that a government has to be imaginative with evidence. The latter itself has two meanings: imaginative (1) in the presentation and framing of an evidence-informed agenda, and (2) when facing pressure to go beyond the evidence and envisage policy outcomes.

So I describe two cases in which the government’s use of evidence seems cynical:

  1. Declaring complete success in turning around the lives of ‘troubled families’
  2. Exploiting vivid neuroscientific images to support ‘early intervention’

Then I describe more difficult cases in which supportive evidence is not clear:

  1. Family intervention project evaluations are of limited value and only tentatively positive
  2. Successful projects like FNP and Incredible Years have limited applicability or ‘scalability’

As scientists, we can shrug our shoulders about the uncertainty, but elected policymakers in government have to do something. So what do they do?

At this point of the article it will look like I have become an apologist for David Cameron’s government. Instead, I’m trying to demonstrate the value of comparing sympathetic/ unsympathetic interpretations and highlight the policy problem from a policymaker’s perspective:

Cairney 2018 British Politics discussion section

I suggest that they use evidence in a mix of ways to: describe an urgent problem, present an image of success and governing competence, and provide cover for more evidence-informed long term action.

The result is the appearance of top-down ‘muscular’ government and ‘a tendency for policy to change as it is implemented, such as when mediated by local authority choices and social workers maintaining a commitment to their professional values when delivering policy’.

I conclude by arguing that ‘evidence-based policy’ and ‘policy-based evidence’ are political slogans with minimal academic value. The binary divide between EBP/ PBE distracts us from more useful categories which show us the trade-offs policymakers have to make when faced with the need to act despite uncertainty.

Cairney British Politics 2018 Table 1

As such, it forms part of a far wider body of work …

https://twitter.com/CairneyPaul/status/950317956189302784

https://twitter.com/CairneyPaul/status/950317958529798144

In both cases, the common theme is that, although (1) the world of top-down central government gets most attention, (2) central governments don’t even know what problem they are trying to solve, far less (3) how to control policymaking and outcomes.

In that wider context, it is worth comparing this talk with the one I gave at the IDS (which, I reckon is a good primer for – or prequel to – the UK talk):

https://twitter.com/Bloggs74/status/1085874777158500352

https://www.facebook.com/idsuk/videos/364796097654832/

See also:

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

Why doesn’t evidence win the day in policy and policymaking?

(found by searching for early intervention)

See also:

Here’s why there is always an expectations gap in prevention policy

Social investment, prevention and early intervention: a ‘window of opportunity’ for new ideas?

(found by searching for prevention)

Powerpoint for guest lecture: Paul Cairney UK Government Evidence Policy


Filed under Evidence Based Policymaking (EBPM), POLU9UK, Prevention policy, UK politics and policy

A 5-step strategy to make evidence count

[Image: 5 steps]

Let’s imagine a heroic researcher, producing the best evidence and fearlessly ‘speaking truth to power’. Then, let’s place this person in four scenarios, each of which combines a discussion of evidence, policy, and politics in different ways.

  1. Imagine your hero presents to HM Treasury an evidence-based report concluding that a unitary UK state would be far more efficient than a union state guaranteeing Scottish devolution. The evidence is top quality and the reasoning is sound, but the research question is ridiculous. The result of political deliberation and electoral choice suggests that your hero is asking a research question that does not deserve to be funded in the current political climate. Your hero is a clown.
  2. Imagine your hero presents to the Department of Health a report based on the systematic review of multiple randomised control trials. It recommends that you roll out an almost-identical early years or public health intervention across the whole country. We need high ‘fidelity’ to the model to ensure the correct ‘dosage’ and to measure its effect scientifically. The evidence is of the highest quality, but the research question is not quite right. The government has decided to devolve this responsibility to local public bodies and/ or encourage the co-production of public service design by local public bodies, communities, and service users. So, to focus narrowly on fidelity would be to ignore political choices (perhaps backed by different evidence) about how best to govern. If you don’t know the politics involved, you will ask the wrong questions or provide evidence with unclear relevance. Your hero is either a fool, naïve to the dynamics of governance, or a villain willing to ignore governance principles.        
  3. Imagine two fundamentally different – but equally heroic – professions with their own ideas about evidence. One favours a hierarchy of evidence in which RCTs and their systematic review sit at the top, and service user and practitioner feedback sits near the bottom. The other rejects this hierarchy completely, identifying the unique, complex relationship between practitioner and service user, which requires high discretion to make choices in situations that will differ each time. Trying to resolve a debate between them with reference to ‘the evidence’ makes no sense. This is a conflict between two heroes with opposing beliefs and preferences that can only be resolved through compromise or political choice. This is, oh I don’t know, Batman v Superman, saved by Wonder Woman.
  4. Imagine you want the evidence on hydraulic fracturing for shale oil and gas. We know that ‘the evidence’ follows the question: how much can we extract? How much revenue will it produce? Is it safe, from an engineering point of view? Is it safe, from a public health point of view? What will be its impact on climate change? What proportion of the public supports it? What proportion of the electorate supports it? Who will win and lose from the decision? It would be naïve to think that there is some kind of neutral way to produce an evidence-based analysis of such issues. The commissioning and integration of evidence has to be political. To pretend otherwise is a political strategy. Your hero may be another person’s villain.

Now, let’s use these scenarios to produce a 5-step way to ‘make evidence count’.

Step 1. Respect the positive role of politics

A narrow focus on making the supply of evidence count, via ‘evidence-based policymaking’, will always be dispiriting because it ignores politics or treats political choice as an inconvenience. If we:

  • begin with a focus on why we need political systems to make authoritative choices between conflicting preferences, and take governance principles seriously, we can
  • identify the demand for evidence in that context, then be more strategic and pragmatic about making evidence count, and
  • be less dispirited about the outcome.

In other words, think about the positive and necessary role of democratic politics before bemoaning post-truth politics and policy-based-evidence-making.

Step 2. Reject simple models of evidence-based policymaking

Policy is not made in a cycle containing a linear series of separate stages, and we won’t ‘make evidence count’ by using that image to inform our practices.

[Image: the policy cycle]

You might not want to give up the cycle image because it presents a simple account of how you should make policy. It suggests that we elect policymakers and then: identify their aims, identify policies to achieve those aims, select a policy measure, ensure that the selection is legitimised by the population or its legislature, identify the necessary resources, implement, and then evaluate. Or: policymakers aided by expert policy analysts make and legitimise choices, skilful public servants carry them out, and policy analysts assess the results using evidence.

One compromise is to keep the cycle then show how messy it is in practice:

https://twitter.com/r0bdavies/status/879239843011862528

However, there comes a point when there is too much mess, and the image no longer helps you explain (a) to the public what you are doing, or (b) to providers of evidence how they should engage in political systems. By this point, simple messages from more complicated policy theories may be more useful.

Or, we may no longer want a cycle to symbolise a single source of policymaking authority. In a multi-level system, with many ‘centres’ possessing their own sources of legitimate authority, a single and simple policy cycle seems too artificial to be useful.

Step 3. Tell a simple story about your evidence

People are ‘cognitive misers’ seeking ‘rational’ and ‘irrational’ shortcuts to gather information for action, so you won’t get far if you bombard them with too much evidence. Policymakers already have too much evidence and they seek ways to reduce their cognitive load, relying on: (a) trusted sources of concise evidence relevant to their aims, and (b) their own experience, gut instinct, beliefs, and emotions.

The implication of both shortcuts is that we need to tell simple and persuasive stories about the substance and implications of the evidence we present. To say that ‘the evidence does not speak for itself’ may seem trite, but I’ve met too many people who assume naively that it will somehow ‘win the day’. In contrast, civil servants know that the evidence-informed advice they give to ministers needs to relate to the story that government ministers tell to the public.

[Image: how to be heard]

Step 4.  Tailor your story to many audiences

In a complex or multi-level environment, one story to one audience (such as a minister) is not enough. If there are many key sources of policymaking authority – including public bodies with high autonomy, organisations and practitioners with the discretion to deliver services, and service users involved in designing services – there are many stories being told about what we should be doing and why. We may convince one audience and alienate (or fail to inspire) another with the same story.

Step 5. Clarify and address key dilemmas with political choice, not evidence

Let me give you one example of the dilemmas that must arise when you combine evidence and politics to produce policy: how do you produce a model of ‘evidence-based best practice’ which combines evidence and governance principles in a consistent way? Here are 3 ideal-type models which answer the question in very different ways:

Table 1 Three ideal types EBBP

The table helps us think through the tensions between models, built on very different principles of good evidence and governance.

In practice, you may want to combine different elements, perhaps while arguing that the loss of consistency is lower than the gain from flexibility. Or, the dynamics of political systems limit such choice or prompt ad hoc and inconsistent choices.

I built a lot of this analysis on the experiences of the Scottish Government, which juggles all three models, including a key focus on improvement method in its Early Years Collaborative.

However, Kathryn Oliver and I show that the UK government faces the same basic dilemma and addresses it in similar ways.

The example freshest in my mind is Sure Start. Its rationale was built on RCT evidence and systematic review. However, its roll-out was built more on local flexibility and service design than insistence on fidelity to a model. More recently, the Troubled Families programme initially set the policy agenda and criteria for inclusion, but increasingly invites local public bodies to select the most appropriate interventions, aided by the Early Intervention Foundation which reviews the evidence but does not insist on one-best-way. Emily St Denny and I explore these issues further in our forthcoming book on prevention policy, an exemplar case study of a field in which it is difficult to know how to ‘make evidence count’.

If you prefer a 3-step take home message:

  1. I think we use phrases like ‘impact’ and ‘make evidence count’ to reflect a vague and general worry about a decline in respect for evidence and experts. Certainly, when I go to large conferences of scientists, they usually tell a story about ‘post-truth’ politics.
  2. Usually, these stories do not acknowledge the difference between two different explanations for an evidence-policy gap: (a) pathological policymaking and corrupt politicians, versus (b) complex policymaking and politicians having to make choices despite uncertainty.
  3. To produce evidence with ‘impact’, and know how to ‘make evidence count’, we need to understand the policy process and the demand for evidence within it.

*Background. This is a post for my talk at the Government Economic Service and Government Social Research Service Annual Training Conference (15th September 2017). This year’s theme is ‘Impact and Future-Proofing: Making Evidence Count’. My brief is to discuss evidence use in the Scottish Government, but it faces the same basic question as the UK Government: how do you combine principles of evidence quality and governance principles? In other words, if you were in a position to design an (a) evidence-gathering system and (b) a political system, you’d soon find major points of tension between them. Resolving those tensions involves political choice, not more evidence. Of course, you are not in a position to design both systems, so the more complicated question is: how do you satisfy principles of evidence and governance in a complex policy process, often driven by policymaker psychology, over which you have little control?  Here are 7 different ‘answers’.

PowerPoint: Paul Cairney @ GES GSRS 2017

1 Comment

Filed under Evidence Based Policymaking (EBPM), public policy, Scottish politics, UK politics and policy

Here’s why there is always an expectations gap in prevention policy

Prevention is the most important social policy agenda of our time. Many governments make a sincere commitment to it, backed up by new policy strategies and resources. Yet, they also make limited progress before giving up or changing tack. Then, a new government arrives, producing the same cycle of enthusiasm and despair. This fundamental agenda never seems to get off the ground. We aim to explain this ‘prevention puzzle’, or the continuous gap between policymaker expectations and actual outcomes.

What is prevention policy and policymaking?

When engaged in ‘prevention’, governments seek to:

  1. Reform policy. To move from reactive to preventive public services, intervening earlier in people’s lives to ward off social problems and their costs when they seem avoidable.
  2. Reform policymaking. To (a) ‘join up’ government departments and services to solve ‘wicked problems’ that transcend one area, (b) give more responsibility for service design to local public bodies, stakeholders, ‘communities’ and service users, and (c) produce long term aims for outcomes, and reduce short term performance targets.
  3. Ensure that policy is ‘evidence based’.

Three reasons why they never seem to succeed

We use well-established policy theories and studies to explain the prevention puzzle.

  1. They don’t know what prevention means. They express a commitment to something before defining it. When they start to make sense of it, they find out how difficult it is to pursue, and how many controversial choices it involves.
  2. They engage in a policy process that is too complex to control. They try to share responsibility with many actors and coordinate action to direct policy outcomes, without the ability to design those relationships and control policy outcomes. Yet, they need to demonstrate to the electorate that they are in control. When they make sense of policymaking, they find out how difficult it is to localise and centralise.
  3. They are unable and unwilling to produce ‘evidence based policymaking’. Policymakers seek ‘rational’ and ‘irrational’ shortcuts to gather enough information to make ‘good enough’ decisions. When they seek evidence on preventing problems before they arise, they find that it is patchy, inconclusive, often counter to their beliefs, and unable to provide a ‘magic bullet’ to help make and justify choices.

Who knows what happens when they address these problems at the same time?

We draw on empirical and comparative UK and devolved government analysis to show in detail how policymaking differs according to the (a) type of government, (b) issue, and (c) era in which they operate.

Although it is reasonable to expect policymaking to be very different in, for example, the UK versus Scottish, or Labour versus Conservative governments, and in eras of boom versus austerity, a key part of our research is to show that the same basic ‘prevention puzzle’ exists at all times. You can’t simply solve it with a change of venue or government.

Our book – Why Isn’t Government Policy More Preventive? – is in press (Oxford University Press) and will be out in January 2020, with sample chapters appearing here. Our longer term agenda – via IMAJINE – is to examine how policymakers try to address ‘spatial justice’ and reduce territorial inequalities across Europe partly by pursuing prevention and reforming public services.

 

1 Comment

Filed under Evidence Based Policymaking (EBPM), Prevention policy, public policy, UK politics and policy

Evidence based policymaking: 7 key themes

7 themes of EBPM

I looked back at my blog posts on the politics of ‘evidence based policymaking’ and found that I wrote quite a lot (particularly from 2016). Here is a list based on 7 key themes.

1. Use psychological insights to influence the use of evidence

My current main concern. The same basic theme is that (a) people (including policymakers) are ‘cognitive misers’ seeking ‘rational’ and ‘irrational’ shortcuts to gather information for action, so you won’t get far if you (b) bombard them with information, or (c) call them idiots.

Three ways to communicate more effectively with policymakers (shows how to use psychological insights to promote evidence in policymaking)

Using psychological insights in politics: can we do it without calling our opponents mental, hysterical, or stupid? (yes)

The Psychology of Evidence Based Policymaking: Who Will Speak For the Evidence if it Doesn’t Speak for Itself? (older paper, linking studies of psychology with studies of EBPM)

Older posts on the same theme:

Is there any hope for evidence in emotional debates and chaotic government? (yes)

We are in danger of repeating the same mistakes if we bemoan low attention to ‘facts’

These complaints about ignoring science seem biased and naïve – and too easy to dismiss

How can we close the ‘cultural’ gap between the policymakers and scientists who ‘just don’t get it’?

2. How to use policy process insights to influence the use of evidence

I try to simplify key insights about the policy process to show how to use evidence in it. One key message is to give up on the idea of an orderly policy process described by the policy cycle model. What should you do if a far more complicated process exists?

Why don’t policymakers listen to your evidence?

The Politics of Evidence Based Policymaking: 3 messages (3 ways to say that you should engage with the policy process that exists, not a mythical process that will never exist)

Three habits of successful policy entrepreneurs (shows how entrepreneurs are influential in politics)

Why doesn’t evidence win the day in policy and policymaking? and What does it take to turn scientific evidence into policy? Lessons for illegal drugs from tobacco and There is no blueprint for evidence-based policy, so what do you do? (3 posts describing the conditions that must be met for evidence to ‘win the day’)

Writing for Impact: what you need to know, and 5 ways to know it (explains how our knowledge of the policy process helps communicate to policymakers)

How can political actors take into account the limitations of evidence-based policy-making? 5 key points (presentation to European Parliament-European University Institute ‘Policy Roundtable’ 2016)

Evidence Based Policy Making: 5 things you need to know and do (presentation to Open Society Foundations New York 2016)

What 10 questions should we put to evidence for policy experts? (part of a series of videos produced by the European Commission)

3. How to combine principles on ‘good evidence’, ‘good governance’, and ‘good practice’

My argument here is that EBPM is about deciding at the same time what is: (1) good evidence, and (2) a good way to make and deliver policy. If you focus on one while ignoring the other, you cannot produce a defensible way to promote evidence-informed policy delivery.

Kathryn Oliver and I have just published an article on the relationship between evidence and policy (summary of and link to our article on this very topic)

We all want ‘evidence based policy making’ but how do we do it? (presentation to the Scottish Government in 2016)

The ‘Scottish Approach to Policy Making’: Implications for Public Service Delivery

The politics of evidence-based best practice: 4 messages

The politics of implementing evidence-based policies

Policy Concepts in 1000 Words: the intersection between evidence and policy transfer

Key issues in evidence-based policymaking: comparability, control, and centralisation

The politics of evidence and randomised control trials: the symbolic importance of family nurse partnerships

What Works (in a complex policymaking system)?

How Far Should You Go to Make Sure a Policy is Delivered?

4. Face up to your need to make profound choices to pursue EBPM

These posts have arisen largely from my attendance at academic-practitioner conferences on evidence and policy. Many participants tell the same story about the primacy of scientific evidence challenged by post-truth politics and emotional policymakers. I don’t find this argument convincing or useful. So, in many posts, I challenge these participants to think about more pragmatic ways to sum up and do something effective about their predicament.

Political science improves our understanding of evidence-based policymaking, but does it produce better advice? (shows how our knowledge of policymaking clarifies dilemmas about engagement)

The role of ‘standards for evidence’ in ‘evidence informed policymaking’ (argues that a strict adherence to scientific principles may help you become a good researcher but not an effective policy influencer)

How far should you go to secure academic ‘impact’ in policymaking? From ‘honest brokers’ to ‘research purists’ and Machiavellian manipulators (you have to make profound ethical and strategic choices when seeking to maximise the use of evidence in policy)

Principles of science advice to government: key problems and feasible solutions (calling yourself an ‘honest broker’ while complaining about ‘post-truth politics’ is a cop out)

What sciences count in government science advice? (political science, obvs)

I know my audience, but does my other audience know I know my audience? (compares the often profoundly different ways in which scientists and political scientists understand and evaluate EBPM – this matters because, for example, we rarely discuss power in scientist-led debates)

Is Evidence-Based Policymaking the same as good policymaking? (no)

Idealism versus pragmatism in politics and policymaking: … evidence-based policymaking (how to decide between idealism and pragmatism when engaging in politics)

Realistic ‘realist’ reviews: why do you need them and what might they look like? (if you privilege impact you need to build policy relevance into systematic reviews)

‘Co-producing’ comparative policy research: how far should we go to secure policy impact? (describes ways to build evidence advocacy into research design)

The Politics of Evidence (review of – and link to – Justin Parkhurst’s book on the ‘good governance’ of evidence production and use)


5. For students and researchers wanting to read/ hear more

These posts are relatively theory-heavy, linking quite clearly to the academic study of public policy. Hopefully they provide a simple way into the policy literature which can, at times, be dense and jargony.

‘Evidence-based Policymaking’ and the Study of Public Policy

Policy Concepts in 1000 Words: ‘Evidence Based Policymaking’

Practical Lessons from Policy Theories (series of posts on the policy process, offering potential lessons for advocates of evidence use in policy)

Writing a policy paper and blog post 

12 things to know about studying public policy

Can you want evidence based policymaking if you don’t really know what it is? (defines each word in EBPM)

Can you separate the facts from your beliefs when making policy? (no, very no)

Policy Concepts in 1000 Words: Success and Failure (Evaluation) (using evidence to evaluate policy is inevitably political)

Policy Concepts in 1000 Words: Policy Transfer and Learning (so is learning from the experience of others)

Four obstacles to evidence based policymaking (EBPM)

What is ‘Complex Government’ and what can we do about it? (read about it)

How Can Policy Theory Have an Impact on Policy Making? (on translating policy theories into useful advice)

The role of evidence in UK policymaking after Brexit (argues that many challenges/ opportunities for evidence advocates will not change after Brexit)

Why is there more tobacco control policy than alcohol control policy in the UK? (it’s not just because there is more evidence of harm)

Evidence Based Policy Making: If You Want to Inject More Science into Policymaking You Need to Know the Science of Policymaking and The politics of evidence-based policymaking: focus on ambiguity as much as uncertainty and Revisiting the main ‘barriers’ between evidence and policy: focus on ambiguity, not uncertainty and The barriers to evidence based policymaking in environmental policy (early versions of what became the chapters of the book)

6. Using storytelling to promote evidence use

This is increasingly a big interest for me. Storytelling is key to the effective conduct and communication of scientific research. Let’s not pretend we’re objective people just stating the facts (which is the least convincing story of all). So far, so good, except to say that the evidence on the impact of stories (for policy change advocacy) is limited. The major complication is that (a) the story you want to tell and have people hear interacts with (b) the story that your audience members tell themselves.

Combine Good Evidence and Emotional Stories to Change the World

Storytelling for Policy Change: promise and problems

Is politics and policymaking about sharing evidence and facts or telling good stories? Two very silly examples from #SP16

7. The major difficulties in using evidence for policy to reduce inequalities

These posts show how policymakers think about how to combine (a) often-patchy evidence with (b) their beliefs and (c) an electoral imperative to produce policies on inequalities, prevention, and early intervention. I suggest that it’s better to understand and engage with this process than complain about policy-based-evidence from the side-lines. If you do the latter, policymakers will ignore you.

The UK government’s imaginative use of evidence to make policy 

What do you do when 20% of the population causes 80% of its problems? Possibly nothing.

The theory and practice of evidence-based policy transfer: can we learn how to reduce territorial inequalities?

We need better descriptions than ‘evidence-based policy’ and ‘policy-based evidence’: the case of UK government ‘troubled families’ policy

How can you tell the difference between policy-based-evidence and evidence-based-policymaking?

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

Key issues in evidence-based policymaking: comparability, control, and centralisation

The politics of evidence and randomised control trials: the symbolic importance of family nurse partnerships

Two myths about the politics of inequality in Scotland

Social investment, prevention and early intervention: a ‘window of opportunity’ for new ideas?

A ‘decisive shift to prevention’: how do we turn an idea into evidence based policy?

Can the Scottish Government pursue ‘prevention policy’ without independence?

Note: these issues are discussed in similar ways in many countries. One example that caught my eye today:

https://twitter.com/LisaC_Research/status/900182047221661696

 

All of this discussion can be found under the EBPM category: https://paulcairney.wordpress.com/category/evidence-based-policymaking-ebpm/

See also the special issue on maximizing the use of evidence in policy


3 Comments

Filed under agenda setting, Evidence Based Policymaking (EBPM), Prevention policy, public policy, Storytelling, UK politics and policy

The role of evidence in UK policymaking after Brexit

https://twitter.com/PalCommsOA/status/877828512023015425

We are launching a series of papers on evidence and policy in Palgrave Communications. Of course, we used Brexit as a hook, to tap into current attention to instability and major policy change. However, many of the issues we discuss are timeless and about surprising levels of stability and continuity in policy processes, despite periods of upheaval.

In my day, academics would build their careers on being annoying, and sometimes usefully annoying. This would involve developing counterintuitive insights, identifying gaps in analysis, and challenging a ‘common wisdom’ in political studies. Although not exactly common wisdom, the idea of ‘post truth’ politics, a reduction in respect for ‘experts’, and a belief that Brexit is a policymaking game-changer, are great candidates for some annoyingly contrary analysis.

In policy studies, many of us argue that things like elections, changes of government, and even constitutional changes are far less important than commonly portrayed. In media and social media accounts, we find hyperbole about the destabilising impact of the latest events. In policy studies, we often stress stability and continuity. My favourite old example concerns the debates from the 1970s about electoral reform. While some were arguing that first-past-the-post was a disastrous electoral system, since it produced swings of government, instability, and incoherent policy change, Richardson and Jordan would point out surprisingly high levels of stability and continuity.


In part, this is because the state is huge, policymakers can only pay attention to a tiny part of it, and therefore most of it is processed at a low level of government, out of the public spotlight.


These insights still have profound relevance today, for two key reasons.

  1. The role of experts is more important than you think

This larger process provides far more opportunities for experts than we’d associate with ‘tip of the iceberg’ politics.

Some issues are salient. They command the interest of elected politicians, and those politicians often have firm beliefs that limit the ‘impact’ of any evidence that does not support their beliefs.

However, most issues are not salient. They command minimal interest, they are processed by other policymakers, and those policymakers are looking for information and advice from reliable experts.

Indeed, a lot of policy studies highlight the privileged status of certain experts, at the expense of most members of the public (which is a useful corrective to the story, associated with Brexit, that the public is too emotionally driven, too sceptical of experts, and too much in charge of the future of constitutional change).

So, Brexit will change the role of experts, but expect that change to relate to the venue in which they engage, and the networks of which they are a part, more than the practices of policymakers. Much policymaking is akin to an open door to government for people with useful information and a reputation for being reliable in their dealings with policymakers.

  2. Provide less evidence for more impact

If the problem is that policymakers can only pay attention to a tiny proportion of their responsibilities, the solution is not to bombard them with a huge amount of evidence. Instead, assume that they seek ways to ignore almost all information while still managing to make choices. The trick may be to provide just enough information to prompt demand for more, not oversupply evidence on the assumption that you have only one chance for influence.

With Richard Kwiatkowski, I draw on policy and psychology studies to help us understand how to supply evidence to anyone using ‘rational’ and ‘irrational’ ways to limit their attention, information processing, and thought before making decisions.

Our working assumption is that policymakers need to gather information quickly and effectively, so they develop heuristics to allow them to make what they believe to be good choices. Their solutions often seem to be driven more by their emotions than a ‘rational’ analysis of the evidence, partly because we hold them to a standard that no human can reach. If so, and if they have high confidence in their heuristics, they will dismiss our criticism as biased and naïve. Under those circumstances, restating the need for ‘evidence-based policymaking’ is futile, and naively ‘speaking truth to power’ counterproductive.

Instead, try out these strategies:

  1. Develop ways to respond positively to ‘irrational’ policymaking

Instead of automatically bemoaning the irrationality of policymakers, let’s marvel at the heuristics they develop to make quick decisions despite uncertainty. Then, let’s think about how to respond pragmatically, to pursue the kinds of evidence informed policymaking that is realistic in a complex and constantly changing policymaking environment.

  2. Tailor framing strategies to policymaker cognition

The usual advice is to minimise the cognitive burden of your presentation, and use strategies tailored to the ways in which people pay attention to, and remember information.

The less usual advice includes:

  • If policymakers are combining cognitive and emotive processes, combine facts with emotional appeals.
  • If policymakers are making quick choices based on their values and simple moral judgements, tell simple stories with a hero and a clear moral.
  • If policymakers are reflecting a ‘group emotion’, based on their membership of a coalition with firmly-held beliefs, frame new evidence to be consistent with the ‘lens’ through which actors in those coalitions understand the world.
  3. Identify the right time to influence individuals and processes

Understand what it means to find the right time to exploit ‘windows of opportunity’.

‘Timing’ can refer to the right time to influence an individual, which involves how open they are to, say, new arguments and evidence.

Or, timing refers to a ‘window of opportunity’ when political conditions are aligned. I discuss the latter in a separate paper on effective ‘policy entrepreneurs’.

  4. Adapt to real-world organisations rather than waiting for an orderly process to appear

Politicians may appear confident in their policies and in command of the facts and details, but they are often (a) vulnerable, and therefore defensive or closed to challenging information, and/or (b) ill-equipped for organisational politics, or unable to change the rules of their organisations.

So, develop pragmatic strategies: form relationships in networks, coalitions, or organisations first, then supply challenging information second. To challenge without establishing trust may be counterproductive.

  5. Recognise that the biases we ascribe to policymakers are present in ourselves and our own groups.

Identifying only the biases in our competitors may help mask academic/ scientific examples of group-think, and it may be counterproductive to use euphemistic terms like ‘low information’ to describe actors whose views we do not respect. This is a particular problem for scholars if they assume that most people do not live up to their own imagined standards of high-information-led action (often described as a ‘deficit model’ of engagement).

It may be more effective to recognise that: (a) people’s beliefs are honestly held, and policymakers believe that their role is to serve a cause greater than themselves; and (b) a fundamental aspect of evolutionary psychology is that people need to get on with each other, so showing simple respect – or going further, to ‘mirror’ that person’s non-verbal signals – can be useful even if it looks facile.

This leaves open the ethical question of how far we should go to identify our own biases, to accept the need to work with people whose ways of thinking we do not share, and to secure their trust without lying about our own beliefs.

At the very least, we do not suggest these 5 strategies as a way to manipulate people for personal gain. They are better seen as ways to use psychology to communicate well. They are also likely to remain important to policy engagement regardless of Brexit. Venues may change quickly, but the ways in which people process information and make choices may not.

 

2 Comments

Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, UK politics and policy

‘Westminster and the devolved institutions’

These are some short answers to some general questions that will likely arise in my oral evidence (22 May, 1.15pm) to the Constitutional and Legislative Affairs Committee inquiry called A stronger voice for Wales: engaging with Westminster and the devolved institutions.

Could you outline your area of research expertise?

I use theories of public policy to understand policymaking, focusing on particular areas such as the UK (and Scotland in particular), issues such as tobacco policy, and themes such as ‘the politics of evidence-based policymaking’ and policy learning or transfer.

Could you elaborate on the “Scottish approach” to policymaking?

There are several related terms, including the:

  1. ‘Scottish policy style’, which academics use to describe two policymaking reputations – (i) for consulting well with stakeholders while making policy, and (ii) for trusting public bodies to deliver policy.
  2. ‘Scottish model of policymaking’, described by former Permanent Secretary Sir John Elvidge, stressing the benefits of reducing departmental silos and having a scale of policymaking conducive to cooperation (and the negotiation of common aims) between central government and the public sector.
  3. ‘Scottish Approach to Policymaking’ (described by former Permanent Secretary Sir Peter Housden), stressing key principles about how to describe the relationship between research/ policy delivery (‘improvement method’), communities and service users (an ‘assets based’, not ‘deficit focused’ approach), and central government/ public bodies/ stakeholders in policymaking and delivery (‘co-production’).

Each term describes a reputation or aspiration for policymaking, and you’ll tend to find in my published work (click the ‘PDF’ links) a healthy scepticism about the ability of any government to live up to these aims.

Also note that the Scottish style (as with discussions of Welsh policymaking) tends to be praised in comparison with an unflattering description of UK government policymaking.

In relation to your comments around “size or scale” of Scottish Government, would similar traits be observed in policy-making in Wales and Northern Ireland, or indeed in other small political systems?

Yes. In fact, we have included a comparison with Wales in previous studies of ‘territorial policy communities’ (both have the ‘usual story of everybody knowing everybody else’) and the potential benefits of more consensual approaches to delivery (both display ‘less evidence of a fragmentation of service delivery organisations or the same unintended consequences associated with the pursuit of a top-down policy style’).

These size and scale issues have pros and cons. Small networks can allow for the development of trust between key people, and for policy coordination to be done more personally, with less reliance on distant-looking regulations. Small government capacity can also prompt over-reliance on some groups in policy development, which, on occasion, can lead to optimistic plans (when doing interviews in Wales in 2006, the example I remember was homelessness policy). Smallness might also prompt overly romantic expectations about the ability of closer cooperation, on a smaller scale, to resolve policy conflict. Yet, we also know that people often have very fixed beliefs and strong views, and that politics is about making ‘hard choices’ to resolve conflict.

Could you explain the importance of personal relationships to policy-making and implementation?

I think they relate largely to psychology in general, and the specific potential effects of the familiarity and trust that comes with regular personal interaction. Of course, one should not go too far, to assume that personal relationships are necessarily good or less competitive. For example, imagine a room containing some people representing the Welsh Government and all the University Vice Chancellors. Sometimes, it will aid collective policymaking. Sometimes, the VCs would rather hold bilateral discussions to help them compete with the others.

To what extent are territorial policy communities too “cosy” with their respective Governments?

You’ll find in many discussions a reference to ‘the usual suspects’ and the idea of ‘capture’, to describe the assertion that close contact leads to favouritism on both sides. It is helpful to note that any policymaking system will have winners and losers. You can take this for granted in larger and more openly competitive systems, but have to look harder in smaller venues. We would need to avoid telling the same romantic story about Welsh consensus politics and, instead, design ‘standard operating procedures’ to gather many diverse sources of evidence and opinion routinely.

Could you expand on the extent to which key UK policies impact on devolved policies?

Compared to many countries, the devolved UK governments have more separate arrangements. For example, ‘health policy’ is far more devolved than in, say, Japan (in which multiple levels make policy for hospitals).

Yet, there are always overlaps in relation to economic issues (the UK is largely responsible for devolved budgets, taxation, immigration, etc.), shared responsibilities in cross-cutting issues (such as fuel poverty), and the ‘spillover’ effects of UK policies.

The classic case of spillovers in Wales is higher education/ tuition fees policy, partly because so many staff and students live within commuting distance of the Wales/ England border. Each Welsh policy has been in response to, or with a close eye on, policy for England. There was also the case of NHS policy in the mid-2000s, where Welsh government attempts to think more holistically about healthcare/ public health were undermined somewhat by unflattering comparisons of England/ Wales NHS waiting times. In Scotland, these issues are significant, even if less pronounced.

To what extent is the multi-level nature of policy-making downplayed?

I’d say that it is not sufficiently apparent in any election campaign at any level. People don’t seem to know (and/ or care) about the divisions of responsibilities across levels of government, which makes it almost impossible to hold particular governments to account for particular policy decisions. It’s often not fair to hold certain governments to account for policy outcomes (since they are the result of policies at many levels, and often out of the control of policymakers) but we can at least encourage some clarity about their choices.

Could you expand on the “intergovernmental issues” you refer to in a recent article? Do you have any examples and how these were resolved?

I’d encourage you to speak with my Centre on Constitutional Change colleagues on this topic, since (for example) Professors Nicola McEwen and Michael Keating may have more recent knowledge and examples.

In general, I’d say that IGR issues have traditionally been resolved rather informally, and behind closed doors, particularly but not exclusively when both governments were led by the same party. Formal dispute resolution is far less common in the UK than in most comparator countries. Within the UK, the Scottish Government has not faced the same problem as the Welsh Government, which has faced far more Supreme Court challenges in relation to its competence to pass legislation in devolved areas. Yet, in the past, we have seen similar early-devolution examples of ‘fudged’ decisions, including on ‘free personal care’ in Scotland (it gained far more in the ‘write-off’ of council house debt than it lost in personal care benefits) and EU structural funds in Wales (when the UK initially refused to pass on money from the EU, then magically gave the Welsh Government the same amount another way).

Is there any evidence of devolved Governments and the UK Government learning from one another in terms of policy?

Not as much as you might think (or hope). When we last wrote about this in 2012, we found that the UK government was generally uninterested in learning from devolved policy (not surprising) and there was very little Scottish-Welsh learning (more surprising), beyond isolated examples like the Children’s Commissioner (and, at a push, prescription charging and smoking policy). I recently saw a powerpoint presentation showing very few private telephone calls between Scotland and Wales, so perhaps it’s not so surprising!

In general, we’d expect most policy learning or transfer to happen when at least one government is motivated by a sense of closeness to the other, which can relate to geography, but also ideological closeness or a sense that governments are trying to solve similar problems in similar ways. Yet, the Scottish and Welsh governments often face quite different initial conditions relating to their legislative powers, integration with UK policy, and starting points (for example, they have very different education systems). So, we should not assume that they have a routine desire to learn from each other, or that there would be a clear payoff.

What is the likely impact of the UK’s withdrawal from the EU on policy-making in the devolved nations?

I have no idea! The Scottish Government wants to use the event to prompt greater devolution in some areas (such as immigration) and secure the devolution of Europeanised issues (such as agriculture, fishing, and environmental policy).

We should see the practical effect of reduced multi-level policymaking in key areas (even though each government will inherit policies from their EU days) and there are some high profile areas in which things may have been different outside the EU. For example, the Scottish Government would have faced fewer obstacles to enacting its minimum unit price on alcohol (which relates partly to EU rules on the effect of pricing on the ability of firms from other EU countries to compete for market share).

We should also see some ‘stakeholder’ realignment, since interest groups tend to focus their attention on the venues they think are most important. It will be interesting to see the effects on particular groups, since only the larger groups (or the best connected) are able to maintain effective contacts with many levels of government.

What is your view on Whitehall departments’ understanding of devolution in Wales and Scotland?

The usual story is that: (a) London-based policy people tend to know very little about policy in Edinburgh or Cardiff (the same is said of UK interest groups with devolved arms), (b) devolved-facing UK government units tend to have heroically small numbers of staff, and (c) there are few ‘standard operating procedures’ to ensure that devolved governments are routinely consulted on relevant UK policies. I can’t think of an academic text that tells a different story about the UK-devolved relationship.

That said, it’s difficult to argue that policymakers in Brussels know a great deal about Wales either, and the Cardiff-London train ticket is cheaper if you want to go somewhere to complain about being ignored.

How would you assess the success of stakeholder influence in policy making? What does this say about the effectiveness of stakeholder engagement?

I’d describe winners and losers. Perhaps we might point to a general sense of more open or consensual policymaking in the devolved venues, but we should also analyse such assumptions critically. In any system, you’ll find a similar logic of consulting with the usual suspects, often because they have the resources to lobby, the power to deliver policy, or the professional knowledge or experience most relevant to policy. In any system, you’ll struggle to measure stakeholder influence. If describing the benefits of more devolved policymaking, I’d find democratic/principled arguments (about more tailored representation) more convincing than ‘evidence-based’ ones.

Do you have any views about whether powers over, for example, agriculture should go to London or to the devolved nations?

No. I’ll take my views on all constitutional matters to the grave.


Filed under Scottish politics, UK politics and policy