
COVID-19 policy in the UK: SAGE Theme 1. The language of intervention

This post is part 5 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

There is often a clear distinction between a strategy designed to (a) eliminate a virus/ the spread of disease quickly, and (b) manage the spread of infection over the long term (see The overall narrative).

However, generally, the language of virus management is confusing. We need to be careful when interpreting the language used in these minutes, and other sources such as oral evidence to House of Commons committees, particularly when comparing the language used at the beginning (when people were also unsure what to call SARS-CoV-2 and COVID-19) with present-day debates.

For example, in January, it is tempting to contrast ‘slow down the spread of the outbreak domestically’ (28.1.20: 2) with a strategy towards ‘extinction’, but the proposed actions may be the same even if the expectations of impact are different. Some people interpret these differences as indicative of a profoundly different approach (delay versus eradicate); others dismiss the differences as merely semantic.

By February, SAGE expects an inevitable epidemic that cannot be contained, prompting it to describe a series of stages:

‘Priorities will shift during a potential outbreak from containment and isolation on to delay and, finally, to case management … When there is sustained transmission in the UK, contact tracing will no longer be useful’ (18.2.20: 1; its discussion on 20.2.20: 2 also concludes that ‘individual cases could already have been missed – including individuals advised that they are not infectious’).

Mitigation versus suppression

On the face of it, it looks like there is a major difference in the ways in which (a) the Imperial College COVID-19 Response Team and (b) SAGE describe possible policy responses. The Imperial paper makes a distinction between mitigation and suppression:

  1. Its ‘mitigation strategy scenarios’ highlight the relative effects of partly-voluntary measures on mortality and demand for ‘critical care beds’ in hospitals: (voluntary) ‘case isolation in the home’ (people with symptoms stay at home for 7 days), ‘voluntary home quarantine’ (all members of the household stay at home for 14 days if one member has symptoms), (government enforced) ‘social distancing of those over 70’ or ‘social distancing of entire population’ (while still going to work, school or University), and closure of most schools and universities. It omits ‘stopping mass gatherings’ because ‘the contact-time at such events is relatively small compared to the time spent at home, in schools or workplaces and in other community locations such as bars and restaurants’ (2020a: 8). Assuming 70-75% compliance, it describes the combination of ‘case isolation, home quarantine and social distancing of those aged over 70’ as the most impactful, but predicts that ‘mitigation is unlikely to be a viable option without overwhelming healthcare systems’ (2020a: 8-10). These measures would only ‘reduce peak critical care demand by two-thirds and halve the number of deaths’ (to approximately 250,000).
  2. Its ‘suppression strategy scenarios’ describe what it would take to reduce the reproduction number (R) from the estimated 2.0-2.6 to 1 or below (in other words, the game-changing point at which one person would infect no more than one other person, so the epidemic stops growing – a simple illustration follows this list) and reduce ‘critical care requirements’ to manageable levels. It predicts that a combination of four options – ‘case isolation’, ‘social distancing of the entire population’ (the measure with the largest impact), ‘household quarantine’ and ‘school and university closure’ – would reduce critical care demand from its peak ‘approximately 3 weeks after the interventions are introduced’, and contribute to a range of 5,600-48,000 deaths over two years (depending on the current R and the ‘trigger’ for action in relation to the number of occupied critical care beds) (2020a: 13-14).
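To illustrate why R = 1 is the game-changing threshold, here is a minimal sketch (my illustration with round numbers, not the Imperial model): with R above 1 each generation of infections is larger than the last, while with R below 1 the outbreak shrinks towards zero.

```python
# Minimal illustration (not the Imperial College model) of the R = 1 threshold:
# each generation of infections is, on average, R times the size of the previous one.
def generations(R, initial_cases=100, n=6):
    cases, sizes = float(initial_cases), []
    for _ in range(n):
        sizes.append(round(cases))
        cases *= R
    return sizes

print("R = 2.4:", generations(2.4))  # [100, 240, 576, 1382, 3318, 7963] -- growth
print("R = 0.9:", generations(0.9))  # [100, 90, 81, 73, 66, 59] -- decline
```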

In comparison, the SAGE meeting paper (26.2.20b: 1-3), produced 2-3 weeks earlier, pretty much assumes away the possible distinction between mitigation and suppression measures (which Vallance has described as semantic rather than substantive – scroll down to The distinction between mitigation and suppression measures). In other words, it assumes ‘high levels of compliance over long periods of time’ (26.2.20b: 1). As such, we can interpret SAGE’s discussion as (a) requiring high levels of compliance for these measures to work (the equivalent of Imperial’s description of suppression), while (b) not describing how to use (more or less voluntary versus impositional) government policy to secure compliance. In comparison, Imperial equates suppression with the relatively-short-term measures associated with China and South Korea (while noting uncertainty about how to maintain such measures until a vaccine is produced).

One reason for SAGE to assume compliance in its scenario building is to focus on the contribution of each measure (each generally lasting 13 weeks) to delaying the peak of infection, while taking into account their behavioural implications (26.2.20b: 2-3). The paper also states that ‘It will likely not be feasible to provide estimates of the effectiveness of individual control measures, just the overall effectiveness of them all’ (26.2.20b: 1).

  • School closures could contribute to a 3-week delay, especially if combined with FE/ HE closures (but with an unequal impact on ‘Those in lower socio-economic groups … more reliant on free school meals or unable to rearrange work to provide childcare’).
  • Home isolation (65% of symptomatic cases stay at home for 7 days) could contribute to a 2-3 week delay (and is the ‘Easiest measure to explain and justify to the public’).
  • ‘Voluntary household quarantine’ (all members of the household isolate for 14 days) would have a similar effect – assuming 50% compliance – but with far more implications for behavioural public policy:

‘Resistance & non-compliance will be greater if impacts of this policy are inequitable. For those on low incomes, loss of income means inability to pay for food, heating, lighting, internet. This can be addressed by guaranteeing supplies during quarantine periods.

Variable compliance, due to variable capacity to comply, may lead to dissatisfaction.

Ensuring supplies flow to households is essential. A desire to help among the wider community (e.g. taking on chores, delivering supplies) could be encouraged and scaffolded to support quarantined households.

There is a risk of stigma, so ‘voluntary quarantine’ should be portrayed as an act of altruistic civic duty’.

  • ‘Social distancing’ (‘enacted early’), in which people restrict themselves to essential activity (work and school), could produce a 3-5 week delay (and is likely to be supported in relation to mass leisure events, albeit less so when work activities involve a lot of contact).

[Note that it is not until May that SAGE addresses this issue of feasibility directly (and, even then, it does not distinguish between technical and political feasibility): ‘It was noted that a useful addition to control measures SAGE considers (in addition to scientific uncertainty) would be the feasibility of monitoring/ enforcement’ (7.5.20: 3).]

As theme 2 suggests, there is a growing recognition that these measures should have been introduced by early March (for example via the Coronavirus Act 2020, which was not passed until 25.3.20), and likely would have been if the UK government and SAGE had had more information (or interpreted that information in a different way). However, by mid-March, SAGE expresses a mixture of (a) growing urgency and (b) the need to stick to the plan, to reduce the peak and avoid a second peak of infection. On 13th March, it states:

‘There are no strong scientific grounds to hasten or delay implementation of either household isolation or social distancing of the elderly or the vulnerable in order to manage the epidemiological curve compared to previous advice. However, there will be some minor gains from going early and potentially useful reinforcement of the importance of taking personal action if symptomatic. Household isolation is modelled to have the biggest effect of the three interventions currently planned, but with some risks. SAGE therefore thinks there is scientific evidence to support household isolation being implemented as soon as practically possible’ (13.3.20: 1)

‘SAGE further agreed that one purpose of behavioural and social interventions is to enable the NHS to meet demand and therefore reduce indirect mortality and morbidity. There is a risk that current proposed measures (individual and household isolation and social distancing) will not reduce demand enough: they may need to be coupled with more intensive actions to enable the NHS to cope, whether regionally or nationally’ (13.3.20: 2)

On 16th March, it states:

‘On the basis of accumulating data, including on NHS critical care capacity, the advice from SAGE has changed regarding the speed of implementation of additional interventions. SAGE advises that there is clear evidence to support additional social distancing measures be introduced as soon as possible’ (16.3.20: 1)

Overall, we can conclude two things about the language of intervention:

  1. There is now a clear difference between the ways in which SAGE and its critics describe policy: to manage an inevitably long-term epidemic, versus to try to eliminate it within national borders.
  2. There is a less clear difference between terms such as suppress and mitigate, largely because SAGE focused primarily on a comparison of different measures (and their combination) rather than the question of compliance.

See also: There is no ‘herd immunity strategy’, which argues that this focus on each intervention was lost in radio and TV interviews with Vallance.

The full list of SAGE posts:

COVID-19 policy in the UK: yes, the UK Government did ‘follow the science’

Did the UK Government ‘follow the science’? Reflections on SAGE meetings

The role of SAGE and science advice to government

The overall narrative underpinning SAGE advice and UK government policy

SAGE meetings from January-June 2020

SAGE Theme 1. The language of intervention

SAGE Theme 2. Limited capacity for testing, forecasting, and challenging assumptions

SAGE Theme 3. Communicating to the public

COVID-19 policy in the UK: Table 2: Summary of SAGE minutes, January-June 2020


COVID-19 policy in the UK: SAGE meetings from January-June 2020

This post is part 4 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

SAGE began a series of extraordinary meetings from 22nd January 2020. The first was described as ‘precautionary’ (22.1.20: 1) and includes updates from NERVTAG, which met from 13th January. Its minutes state that ‘SAGE is unable to say at this stage whether it might be required to reconvene’ (22.1.20: 2). The second meeting notes that SAGE will meet regularly (e.g. 2-3 times per week in February) and coordinate all relevant science advice to inform domestic policy, including from NERVTAG and SPI-M (Scientific Pandemic Influenza Group on Modelling), which became a ‘formal sub-group of SAGE for the duration of this outbreak’ (SPI-M-O) (28.1.20: 1). It also convened an additional Scientific Pandemic Influenza subgroup on behaviours (SPI-B) in February. I summarise these developments by month, but you can see that, by March, it is worth summarising each meeting. The main theme is uncertainty.

January 2020

The first meeting highlights immense uncertainty. Its description of WN-CoV (Wuhan Coronavirus), and statements such as ‘There is evidence of person-to-person transmission. It is unknown whether transmission is sustainable’, sum up the profound lack of information on what is to come (22.1.20: 1-2). It notes high uncertainty on how to identify cases, rates of infection, infectiousness in the absence of symptoms, and which previous experience (such as MERS) offers the most useful guidance. Only 6 days later, it estimates an R of 2-3, a doubling time of 3-4 days, an incubation period of around 5 days, a 14-day window of infectivity, varied symptoms such as coughing and fever, and a respiratory transmission route (different from SARS and MERS) (28.1.20: 1). These estimates remain fairly constant from then on, albeit qualified with reference to uncertainty (e.g. about asymptomatic transmission), some key outliers (e.g. the duration of illness in one case was 41 days – 4.2.20: 1), and some new estimates (e.g. of a 6-day ‘serial interval’, or ‘time between successive cases in a chain of transmission’, 11.2.20: 1). By now, it is preparing a response: modelling a ‘reasonable worst case scenario’ (RWC) based on the assumption of an R of 2.5 and no known treatment or vaccine, considering how to slow the spread, and considering how behavioural insights can be used to encourage self-isolation.
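To make the arithmetic behind these early estimates concrete, here is a minimal sketch (my illustration, not SAGE’s or SPI-M’s model) that relates R and the later 6-day serial interval estimate to an implied doubling time, assuming simple exponential growth; the figures it produces are broadly consistent with the early estimate of doubling every 3-4 days.

```python
import math

# Hedged illustration (not SAGE's model): under simple exponential growth,
# an epidemic with reproduction number R and serial interval T (days between
# successive cases in a chain of transmission) doubles roughly every
# T * ln(2) / ln(R) days.
def doubling_time(R, serial_interval_days=6):
    return serial_interval_days * math.log(2) / math.log(R)

for R in (2.0, 2.5, 3.0):
    print(f"R = {R}: doubling time of about {doubling_time(R):.1f} days")
# R = 2.0: ~6.0 days; R = 2.5: ~4.5 days; R = 3.0: ~3.8 days --
# in the same ballpark as the estimate of doubling every 3-4 days.
```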

February 2020

SAGE began to focus on what measures might delay or reduce the impact of the epidemic. It described travel restrictions from China as low value, since a 95% reduction in travel would be draconian to achieve and would only secure a one-month delay, which might be better achieved with other measures (3.2.20: 1-2). It, and supporting papers, suggested that the evidence was so limited that they could draw ‘no meaningful conclusions … as to whether it is possible to achieve a delay of a month’ by using one or a combination of these measures: international travel restrictions, domestic travel restrictions, quarantining people coming from infected areas, closing schools, closing FE/ HE, cancelling large public events, contact tracing, voluntary home isolation, facemasks, and hand washing. Further, some could undermine each other (e.g. the impact of school closures on older people or people in self-isolation) and have major societal or opportunity costs (SPI-M-O, 3.2.20b: 1-4). For example, the ‘SPI-M-O: Consensus view on public gatherings’ (11.2.20: 1) notes the aim to reduce the duration and closeness of (particularly indoor) contact. Large outdoor gatherings are no worse than small ones, and stopping large events could prompt people to go to pubs instead (which would be worse).

Throughout February, the minutes emphasize high uncertainty:

  • whether there will be an epidemic outside of China (4.2.20: 2)
  • whether it spreads through ‘air conditioning systems’ (4.2.20: 3)
  • the spread from, and impact on, children and therefore the impact of closing schools (4.2.20: 3; discussed in a separate paper by SPI-M-O, 10.2.20c: 1-2)
  • ‘SAGE heard that NERVTAG advises that there is limited to no evidence of the benefits of the general public wearing facemasks as a preventative measure’ (while ‘symptomatic people should be encouraged to wear a surgical face mask, providing that it can be tolerated’) (4.2.20: 3)

At the same time, its meeting papers emphasized a delay in accurate figures during an initial outbreak: ‘Preliminary forecasts and accurate estimates of epidemiological parameters will likely be available in the order of weeks and not days following widespread outbreaks in the UK’ (SPI-M-O, 3.2.20a: 3).

This problem proved to be crucial to the timing of government intervention. A key learning point will be the disconnect between the following statement and the subsequent realisation (3-4 weeks later) that the lockdown measures from mid-to-late March came too late to prevent an unanticipated number of excess deaths:

‘SAGE advises that surveillance measures, which commenced this week, will provide actionable data to inform HMG efforts to contain and mitigate spread of Covid-19 … PHE’s surveillance approach provides sufficient sensitivity to detect an outbreak in its early stages. This should provide evidence of an epidemic around 9-11 weeks before its peak … increasing surveillance coverage beyond the current approach would not significantly improve our understanding of incidence’ (25.2.20: 1)

It also seems clear from the minutes and papers that SAGE highlighted a reasonable worst case scenario on 26.2.20. It was as worrying as the Imperial College COVID-19 Response Team report dated 16.3.20 that allegedly changed the UK Government’s mind on the 16th March. Meeting paper 26.2.20a described the assumption of an 80% infection attack rate and 50% clinical attack rate (i.e. 50% of the UK population would experience symptoms), which underpins the assumption of 3.6 million requiring hospital care of at least 8 days (11% of symptomatic), and 541,200 requiring ventilation (1.65% of symptomatic) for 16 days. While it lists excess deaths as unknown, its 1% infection mortality rate suggests 524,800 deaths. This RWC replaces a previous projection (in Meeting paper 10.2.20a: 1-3, based on pandemic flu assumptions) of 820,000 excess deaths (27.2.20: 1).
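As a rough check on this arithmetic, the sketch below reproduces the headline figures from the stated percentages. The UK population figure (about 65.6 million) is my assumption, back-calculated from the reported totals rather than a number quoted in the meeting paper.

```python
# Hedged back-of-envelope check of the 26.2.20 reasonable worst case figures.
# The population figure is an assumption inferred from the reported totals,
# not a number taken from the meeting paper itself.
population = 65_600_000

infected    = population * 0.80     # 80% infection attack rate
symptomatic = population * 0.50     # 50% clinical attack rate
hospital    = symptomatic * 0.11    # 11% of symptomatic need hospital care
ventilated  = symptomatic * 0.0165  # 1.65% of symptomatic need ventilation
deaths      = infected * 0.01       # 1% infection mortality rate

print(f"hospitalised: {hospital:,.0f}")    # ~3,608,000
print(f"ventilated:   {ventilated:,.0f}")  # 541,200
print(f"deaths:       {deaths:,.0f}")      # 524,800
```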

As such, the more important difference could come from SAGE’s discussion of ‘non-pharmaceutical interventions (NPIs)’ if it recommends ‘mitigation’ while the Imperial team recommends ‘suppression’. However, the language to describe each approach is too unclear to tell (see Theme 1. The language of intervention; also note that NPIs were often described from March as ‘behavioural and social interventions’ following an SPI-B recommendation, Meeting paper 3.2.20: 1, but the language of NPI seems to have stuck).

March 2020

In March, SAGE focused initially (Meetings 12-14) on preparing for the peak of infection on the assumption that it had time to transition towards a series of isolation and social distancing measures that would be sustainable (and therefore unlikely to be lifted too soon and contribute to a second peak). Early meetings and meeting papers express caution about the limited evidence for intervention and the potential for unintended consequences. This approach began to change somewhat from mid-March (Meeting 15), and accelerated from Meetings 16-18, when it became clear that incidence and virus transmission were much higher than expected, before a new phase began from Meeting 19 (after the UK lockdown was announced on the 23rd).

Meeting 12 (3.3.20) describes preparations to gather and consolidate information on the epidemic and the likely relative effect of each intervention, while its meeting papers emphasise:

  • ‘It is highly likely that there is sustained transmission of COVID-19 in the UK at present’, and a peak of infection ‘might be expected approximately 3-5 months after the establishment of widespread sustained transmission’ (SPI-M Meeting paper 2.3.20: 1)
  • the need to prepare the public while giving ‘clear and transparent reasons for different strategies’ and reducing ambiguity whenever giving guidance (SPI-B Meeting paper 3.2.20: 1-2)
  • The need to combine different measures (e.g. school closure, self-isolation, household isolation, isolating over-65s) at the right time; ‘implementing a subset of measures would be ideal. Whilst this would have a more moderate impact it would be much less likely to result in a second wave’ (Meeting paper 4.3.20a: 3).

Meeting 13 (5.3.20) describes staying in the ‘containment’ phase (which, I think, means isolating people with positive tests at home or in hospital), and introducing: a 12-week period of individual and household isolation measures in 1-2 weeks, on the assumption of 50% compliance; and a longer period of shielding over-65s 2 weeks later. It describes ‘no evidence to suggest that banning very large gatherings would reduce transmission’, while closing bars and restaurants ‘would have an effect, but would be very difficult to implement’, and ‘school closures would have smaller effects on the epidemic curve than other options’ (5.3.20: 1). Its SPI-B Meeting paper (4.3.20b) expresses caution about limited evidence and reliance on expert opinion, while identifying:

  • potential displacement problems (e.g. school closures prompt people to congregate elsewhere, or children to be looked after by vulnerable older people, while parents lose the chance to work)
  • the visibility of groups not complying
  • the unequal impact on poorer and single parent families of school closure and loss of school meals, lost income, lower internet access, and isolation
  • how to reduce discontent about only isolating at-risk groups (the view that ‘explaining that members of the community are building some immunity will make this acceptable’ is not unanimous) (4.3.20b: 2).

Meeting 14 (10.3.20) states that the UK may have 5,000-10,000 cases and be ‘10-14 weeks from the epidemic peak if no mitigations are introduced’ (10.3.20: 2). It restates the focus on isolation first, followed by additional measures in April, and emphasizes the need to transition to measures that are acceptable and sustainable for the long term:

‘SAGE agreed that a balance needs to be struck between interventions that theoretically have significant impacts and interventions which the public can feasibly and safely adopt in sufficient numbers over long periods’ … ‘the public will face considerable challenges in seeking to comply with these measures, (e.g. poorer households, those relying on grandparents for childcare)’ (10.3.20: 2)

Meeting 15 (13.3.20: 1) describes an update to its data, suggesting ‘more cases in the UK than SAGE previously expected at this point, and we may therefore be further ahead on the epidemic curve, but the UK remains on broadly the same epidemic trajectory and time to peak’. It states that ‘household isolation and social distancing of the elderly and vulnerable should be implemented soon, provided they can be done well and equitably’, noting that there are ‘no strong scientific grounds’ to accelerate key measures but ‘there will be some minor gains from going early and potentially useful reinforcement of the importance of taking personal action if symptomatic’ (13.3.20: 1) and ‘more intensive actions’ will be required to maintain NHS capacity (13.3.20: 2).

*******

On the 16th March, the UK Prime Minister Boris Johnson describes an ‘emergency’ (one week before declaring a ‘national emergency’ and UK-wide lockdown).

*******

Meeting 16 (16.3.20) describes the possibility that there are 5,000-10,000 new cases in the UK (‘there is great uncertainty on the estimate’), doubling every 5-6 days. Therefore, to stay within NHS capacity, ‘the advice from SAGE has changed regarding the speed of implementation of additional interventions. SAGE advises that there is clear evidence to support additional social distancing measures be introduced as soon as possible’ (16.3.20: 1). SPI-M Meeting paper (16.3.20: 1) describes:

‘a combination of case isolation, household isolation and social distancing of vulnerable groups is very unlikely to prevent critical care facilities being overwhelmed … it is unclear whether or not the addition of general social distancing measures to case isolation, household isolation and social distancing of vulnerable groups would curtail the epidemic by reducing the reproduction number to less than 1 … the addition of both general social distancing and school closures to case isolation, household isolation and social distancing of vulnerable groups would be likely to control the epidemic when kept in place for a long period. SPI-M-O agreed that this strategy should be followed as soon as practical’

Meeting 17 (18.3.20) marks a major acceleration of plans, and a de-emphasis of the low-certainty/ beware-the-unintended-consequences approach of previous meetings (on the assumption that the UK was now 2-4 weeks behind Italy). It recommends school closures as soon as possible (and it, along with SPI-M Meeting paper 17.3.20b, now downplays the likely displacement effect). It focuses particularly on London, as the place with the largest initial numbers:

‘Measures with the strongest support, in terms of effect, were closure of a) schools, b) places of leisure (restaurants, bars, entertainment and indoor public spaces) and c) indoor workplaces. … Transport measures such as restricting public transport, taxis and private hire facilities would have minimal impact on reducing transmission’ (18.3.20: 2)

Meeting 18 (23.3.20) states that the R is higher than expected (2.6-2.8), requiring ‘high rates of compliance for social distancing’ to get it below 1 and stay under NHS capacity (23.3.20: 1). There is an urgent need for more community testing/ surveillance (and to address the global shortage of test supplies). In the meantime, it needs a ‘clear rationale for prioritising testing for patients and health workers’ (the latter ‘should take priority’) (23.3.20: 3). Closing UK borders ‘would have a negligible effect on spread’ (23.3.20: 2).
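A minimal sketch of the arithmetic behind that concern (my illustration, not a calculation in the minutes): the proportional reduction in transmission needed to push R below 1 is 1 − 1/R, so a higher starting estimate of R leaves much less room for partial compliance.

```python
# Hedged illustration (not from the SAGE minutes): why a higher-than-expected
# R demands higher compliance. To push R below 1, transmission must fall by
# at least a fraction 1 - 1/R.
for R in (2.4, 2.6, 2.8):
    required_reduction = 1 - 1 / R
    print(f"R = {R}: transmission must fall by at least {required_reduction:.0%}")
# R = 2.4: 58%; R = 2.6: 62%; R = 2.8: 64%
```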

*******

The lockdown. On the 23rd March 2020, the UK Prime Minister Boris Johnson declared: ‘From this evening I must give the British people a very simple instruction – you must stay at home’. He announced measures to help limit the impact of coronavirus, including police powers to support public health, such as to disperse gatherings of more than two people (unless they live together), close events and shops, and limit outdoor exercise to once per day (at a distance of two metres from others).

*******

Meeting 19 (26.3.20) follows the lockdown. SAGE describes its priorities if the R goes below 1 and NHS capacity remains under 100%: ‘monitoring, maintenance and release’ (based on higher testing); public messaging on mass testing and varying interventions; understanding nosocomial transmission and immunology; clinical trials (avoiding ‘hasty decisions’ on new drug treatments in the absence of good data); and ‘how to minimise potential harms from the interventions, including those arising from postponement of normal services, mental ill health and reduced ability to exercise. It needs to consider in particular health impacts on poorer people’ (26.3.20: 1-2). The optimistic scenario is 10,000 deaths from the first wave (SPI-M-O Meeting paper 25.3.20: 4).

Meeting 20 confirms the RWC and optimistic scenarios (Meeting paper 25.3.20), but notes the need for a ‘clearer narrative, clarifying areas subject to uncertainty and sensitivities’ and to clarify that scenarios (with different assumptions on, for example, the R, which should be explained more) are not predictions (29.3.20).

Meeting 21 (31.3.20) seeks to establish SAGE ‘scientific priorities’ (e.g. the long-term health impacts of COVID-19, including the socioeconomic impact on health (and mental health), community testing, and international work on ‘comorbidities such as malaria and malnutrition’) (31.3.20: 1-2). The NHS is to set up an interdisciplinary group (including science and engineering) to ‘understand and tackle nosocomial transmission’ in the context of its growth and the urgent need to define/ track it (31.3.20: 1-2). SAGE is to focus on testing requirements, not operational issues. It notes the need to identify a single source of information on deaths.

April 2020

The meetings in April highlight five recurring themes.

First, SAGE stresses that it will not know the impact of lockdown measures for some time, that it is too soon to understand the impact of releasing them, and that there is a high risk of failure: ‘There is a danger that lifting measures too early could cause a second wave of exponential epidemic growth – requiring measures to be re-imposed’ (2.4.20: 1; see also 14.4.20: 1-2). This problem remains even if a reliable testing and contact tracing system is in place, and if there are environmental improvements to reduce transmission (by keeping people apart).

Second, it notes signals from multiple sources (including CO-CIN and the RCGP) on the higher risk of major illness and death among black people, the ongoing investigation of higher risk to ‘BAME’ health workers (16.4.20), and further (high priority) work on ‘ethnicity, deprivation, and mortality’ (21.4.20: 1) (see also: Race, ethnicity, and the social determinants of health).

Third, it highlights the need for a ‘national testing strategy’ to cover NHS patients, staff, an epidemiological survey, and the community (2.4.20). The need for far more testing is a feature of almost every meeting (see also The need to ramp up testing).

Fourth, SAGE describes the need for more short- and long-term research, identifying nosocomial infection as a short-term priority, and longer-term priorities in areas such as the lasting health impacts of COVID-19 (including socioeconomic impacts on physical and mental health), community testing, and international work (31.3.20: 1-2).

Finally, it reflects shifting advice on the precautionary use of face masks. Previously, advisory bodies emphasized the limited evidence of a clear benefit to the wearer, and worried that public mask use would reduce the supply to healthcare professionals and generate a false sense of security (compare with this Greenhalgh et al article on the precautionary principle, the subsequent debate, and work by the Royal Society). Even by April: ‘NERVTAG concluded that the increased use of masks would have minimal effect’ on general population infection (7.4.20: 1), while the WHO described limited evidence that facemasks are beneficial for community use (9.4.20). Still, general face mask use could have a small positive effect, particularly in ‘enclosed environments with poor ventilation, and around vulnerable people’ (14.4.20: 2) and, ‘on balance, there is enough evidence to support recommendation of community use of cloth face masks, for short periods in enclosed spaces where social distancing is not possible’ (partly because people can be infectious with no symptoms), as long as people know that it is no substitute for social distancing and handwashing (21.4.20).

May 2020

In May, SAGE continues to discuss high uncertainty on relaxing lockdown measures, the details of testing systems, and the need for research.

Generally, it advises that relaxations should not happen before there is more understanding of transmission in hospitals and care homes, and ‘until effective outbreak surveillance and test and trace systems are up and running’ (14.5.20). It advises specifically ‘against reopening personal care services, as they typically rely on highly connected workers who may accelerate transmission’ (5.5.20: 3) and warns against the too-quick introduction of social bubbles. Relaxation runs the risk of diminishing public adherence to social distancing, and of overwhelming any contact tracing system put in place:

‘SAGE participants reaffirmed their recent advice that numbers of Covid-19 cases remain high (around 10,000 cases per day with wide confidence intervals); that R is 0.7-0.9 and could be very close to 1 in places across the UK; and that there is very little room for manoeuvre especially before a test, trace and isolate system is up and running effectively. It is not yet possible to assess the effect of the first set of changes which were made on easing restrictions to lockdown’ (28.5.20: 3).

It recommends extensive testing in hospitals and care homes (12.5.20: 3) and ‘remains of the view that a monitoring and test, trace & isolate system needs to be put in place’ (12.5.20: 1).

June 2020

In June, SAGE identifies the importance of clusters of infection (super-spreading events), and of a contact tracing system that focuses on clusters (rather than simply individuals) (11.6.20: 3). It reaffirms the value of a 2-metre distance rule. It also notes that the research on immunology remains unclear, which makes immunity passports a bad idea (4.6.20).

It describes the results of multiple meeting papers on the unequal impact of COVID-19:

‘There is an increased risk from Covid-19 to BAME groups, which should be urgently investigated through social science research and biomedical research, and mitigated by policy makers’ … ‘SAGE also noted the importance of involving BAME groups in framing research questions, participating in research projects, sharing findings and implementing recommendations’ (4.6.20: 1-3)

See also: Race, ethnicity, and the social determinants of health

The full list of SAGE posts:

COVID-19 policy in the UK: yes, the UK Government did ‘follow the science’

Did the UK Government ‘follow the science’? Reflections on SAGE meetings

The role of SAGE and science advice to government

The overall narrative underpinning SAGE advice and UK government policy

SAGE meetings from January-June 2020

SAGE Theme 1. The language of intervention

SAGE Theme 2. Limited capacity for testing, forecasting, and challenging assumptions

SAGE Theme 3. Communicating to the public

COVID-19 policy in the UK: Table 2: Summary of SAGE minutes, January-June 2020


COVID-19 policy in the UK: The role of SAGE and science advice to government

This post is part 2 of COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

The issue of science advice to government, and the role of SAGE in particular, became unusually high profile in the UK, particularly in relation to four factors:

  1. Ministers described ‘following the science’ to project a certain form of authority and control.
  2. The SAGE minutes and papers – including a record of SAGE members and attendees – were initially unpublished, in line with the previous convention of government to publish after, rather than during, a crisis.

‘SAGE is keen to make the modelling and other inputs underpinning its advice available to the public and fellow scientists’ (13.3.20: 1)

When it agrees to publish SAGE papers/ documents, it stresses: ‘It is important to demonstrate the uncertainties scientists have faced, how understanding of Covid-19 has developed over time, and the science behind the advice at each stage’ (16.3.20: 2)

‘SAGE discussed plans to release the academic models underpinning SAGE and SPI-M discussions and judgements. Modellers agreed that code would become public but emphasised that the effort to do this immediately would distract from other analyses. It was agreed that code should become public as soon as practical, and SPI-M would return to SAGE with a proposal on how this would be achieved. ACTION: SPI-M to advise on how to make public the source code for academic models, working with relevant partners’ (18.3.20: 2).

SAGE welcomes releasing the names of SAGE participants (if willing) and notes the role of Ian Boyd as an ‘independent challenge function’ (28.4.20: 1)

SAGE also describes the need for a better system to allow SAGE participants to function effectively and with proper support (given the immense pressure/ strain on their time and mental health) (7.5.20: 1)

  3. There were growing concerns that ministers would blame their advisers for poor choices (compare Freedman and Snowdon) or at least use science advice as ‘an insurance policy’, and
  4. There was some debate about the appropriateness of Dominic Cummings (Prime Minister Boris Johnson’s special adviser) attending some meetings.

Therefore, its official description reflects its initial role plus a degree of clarification on the role of science advice mechanisms during the COVID-19 pandemic. The SAGE webpage on the gov.uk site describes its role as:

‘provides scientific and technical advice to support government decision makers during emergencies … SAGE is responsible for ensuring that timely and coordinated scientific advice is made available to decision makers to support UK cross-government decisions in the Cabinet Office Briefing Room (COBR). The advice provided by SAGE does not represent official government policy’.

Its more detailed explainer describes:

‘SAGE’s role is to provide unified scientific advice on all the key issues, based on the body of scientific evidence presented by its expert participants. This includes everything from latest knowledge of the virus to modelling the disease course, understanding the clinical picture, and effects of and compliance with interventions. This advice together with a descriptor of uncertainties is then passed onto government ministers. The advice is used by Ministers to allow them to make decisions and inform the government’s response to the COVID-19 outbreak …

The government, naturally, also considers a range of other evidence including economic, social, and broader environmental factors when making its decisions…

SAGE is comprised of leading lights in their representative fields from across the worlds of academia and practice. They do not operate under government instruction and expert participation changes for each meeting, based on the expertise needed to address the crisis the country is faced with …

SAGE is also attended by official representatives from relevant parts of government. There are roughly 20 such officials involved in each meeting and they do not frequently contribute to discussions, but can play an important role in highlighting considerations such as key questions or concerns for policymakers that science needs to help answer or understanding Civil Service structures. They may also ask for clarification on a scientific point’ (emphasis added by yours truly).

Note that the number of participants can be around 60 people, which makes SAGE more like an assembly (with presentations and a modest amount of discussion) than a decision-making body (the Zoom meeting on 4.6.20 lists 76 participants). Even a Cabinet meeting involves around 20 people, and that is too many for coherent discussion/ action (hence separate, smaller committees).

Further, each set of now-published minutes contains an ‘addendum’ to clarify its operation. For example, its first minutes in 2020 seek to clarify the role of participants. Note that the participants change somewhat at each meeting (see the full list of members/ attendees), and some names are redacted. Dominic Cummings’ name only appears (I think) on 5.3.20, 14.4.20, and two meetings on 1.5.20 (although, as Freedman notes, ‘his colleague Ben Warner was a more regular presence’).

[Image: SAGE minutes 1 addendum, 22.1.20]

More importantly, the minutes from late February begin to distinguish between three types of potential science advice:

  1. to describe the size of the problem (e.g. surveillance of cases and trends, estimating a reasonable worst case scenario)
  2. to estimate the relative impact of many possible interventions (e.g. restrictions on travel, school closures, self-isolation, household quarantine, and social distancing measures)
  3. to recommend the level and timing of state action to achieve compliance in relation to those interventions.

SAGE focused primarily on roles 1 and 2, arguing against role 3 on the basis that state intervention is a political choice to be taken by ministers. Ministers are responsible for weighing up the potential public health benefits of each measure in relation to their social and economic costs (see also: The relationship between science, science advice, and policy).

Example 1: setting boundaries between advice and strategy

  • ‘It is a political decision to consider whether it is preferable to enact stricter measures at first, lifting them gradually as required, or to start with fewer measures and add further measures if required. Surveillance data streams will allow real-time monitoring of epidemic growth rates and thus allow approximate evaluation of the impact of whatever package of interventions is implemented’ (Meeting paper 26.2.20b: 1)

This example highlights a limitation in performing role 2 to inform 3: SAGE would not be able to compare the relative impact of measures without knowing their level of imposition and its impact on compliance. Further, the way in which it addressed this problem is crucial to our interpretation and evaluation of the timing and substance of the UK government’s response.

In short, it simultaneously assumed away and maintained attention to this problem by stating:

  • ‘The measures outlined below assume high levels of compliance over long periods of time. This may be unachievable in the UK population’ (26.2.20b: 1).
  • ‘advice on interventions should be based on what the NHS needs and what modelling of those interventions suggests, not on the (limited) evidence on whether the public will comply with the interventions in sufficient numbers and over time’ (16.3.20: 1)

The assumption of high compliance reduces the need for SAGE to make distinctions between terms such as mitigation versus suppression (see also: Confusion about the language of intervention and stages of intervention). However, it contributes to confusion within wider debates on UK action (see Theme 1. The language of intervention).

Example 2: setting boundaries between advice and value judgements

  • ‘SAGE has not provided a recommendation of which interventions, or package of interventions, that Government may choose to apply. Any decision must consider the impacts these interventions may have on society, on individuals, the workforce and businesses, and the operation of Government and public services’ (Meeting paper 4.3.20a: 1).

To all intents and purposes, SAGE is noting that governments need to make value-based choices to:

  1. Weigh up the costs and benefits of any action (as described by Layard et al, with reference to wellbeing measures and the assumed price of a life), and
  2. Decide whose wellbeing, and lives, matter the most (because any action or inaction will have unequal consequences across a population).

In other words, policy analysis is one part evidence and one part value judgement. Both elements are contested in different ways, and different questions inform political choices (e.g. whose knowledge counts versus whose wellbeing counts?).

[see also:

  • ‘Determining a tolerable level of risk from imported cases requires consideration of a number of non-science factors and is a policy question’ (28.4.20: 3)
  • ‘SAGE reemphasises that its own focus should always be on providing clear scientific advice to government and the principles behind that advice’ (7.5.20: 1)]

Future reflections

Any future inquiry will be heavily contested, since policy learning and evaluation are political acts (and the best way to gather and use evidence during a pandemic is itself highly contested). Still, hopefully, it will promote reflection on how, in practice, governments and advisory bodies negotiate the blurry boundary between scientific advice and political choice when they are so interdependent and rely so heavily on judgement in the face of ambiguity and uncertainty (or ‘radical uncertainty’). I discuss this issue in the next post, which highlights the ways in which UK ministers relied on SAGE (and advisers) to define the policy problem.

The full list of SAGE posts:

COVID-19 policy in the UK: yes, the UK Government did ‘follow the science’

Did the UK Government ‘follow the science’? Reflections on SAGE meetings

The role of SAGE and science advice to government

The overall narrative underpinning SAGE advice and UK government policy

SAGE meetings from January-June 2020

SAGE Theme 1. The language of intervention

SAGE Theme 2. Limited capacity for testing, forecasting, and challenging assumptions

SAGE Theme 3. Communicating to the public

COVID-19 policy in the UK: Table 2: Summary of SAGE minutes, January-June 2020


COVID-19 policy in the UK: Did the UK Government ‘follow the science’? Reflections on SAGE meetings

[Image: SAGE explainer, from the UK Government description]

SAGE is the Scientific Advisory Group for Emergencies. The text up there comes from the UK Government description. SAGE is the main venue to coordinate science advice to the UK government on COVID-19, including from NERVTAG (the New and Emerging Respiratory Virus Threats Advisory Group, reporting to PHE) and from the sub-groups on modelling (SPI-M, the Scientific Pandemic Influenza Group on Modelling) and behaviour (SPI-B), which supply meeting papers to SAGE.

I have summarized SAGE’s minutes (41 meetings, from 22 January to 11 June) and meeting/ background papers (125 papers, estimated range 1-51 pages, median 4, not-peer-reviewed, often produced a day after a request) in a ridiculously long table. This thing is huge (40 pages and 20000 words). It is the sequoia table. It is the humongous fungus. Even Joey Chestnut could not eat this table in one go. To make your SAGE meal more palatable, here is a series of blog posts that situate these minutes and papers in their wider context. This initial post is unusually long, so I’ve put in a photo to break it up a bit.

Did the UK government ‘follow the science’?

I use the overarching question Did the UK Government ‘follow the science’? initially for the clickbait. I reckon that, like a previous favourite (people have ‘had enough of experts’), ‘following the science’ is a phrase used more frequently by commentators than by the people who originally used it. It is easy to google and find some valuable commentaries with that hook (Devlin & Boseley, Siddique, Ahuja, Stevens), but also to find ministers using a wider range of messages with more subtle verbs and metaphors:

  • ‘We will take the right steps at the right time, guided by the science’ (Prime Minister Boris Johnson, 3.20)
  • ‘We will be guided by the science’ (Health Secretary Matt Hancock, 2.20)
  • ‘At all stages, we have been guided by the science, and we will do the right thing at the right time’ (Johnson, 3.20)
  • ‘The plan is driven by the science and guided by the expert recommendations of the 4 UK Chief Medical Officers and the Scientific Advisory Group for Emergencies’ (Hancock, 3.20)
  • ‘The plan does not set out what the government will do, it sets out the steps we could take at the right time along the basis of the scientific advice’ (Johnson, 3.20).

Still, clearly they are using ‘the science’ as a rhetorical device, and it raises many questions or objections, including:

  1. There is no such thing as ‘the science’.

Rather, there are many studies described as scientific (generally with reference to a narrow range of accepted methods), and many people described as scientists (with reference to their qualifications and expertise). The same can be said for the rhetorical phrase ‘the evidence’ and the political slogan ‘evidence based policymaking’ (which often comes with its notionally opposite political slogan ‘policy based evidence’). In both cases, a reference to ‘the science’ or ‘the evidence’ often signals one or both of:

  • a particular, restrictive, way to describe evidence that lives up to a professional quality standard created by some disciplines (e.g. based on a hierarchy of evidence, in which the systematic review of randomized control trials is often at the top)
  • an attempt by policymakers to project their own governing competence, relative certainty, control, and authority, with reference to another source of authority

2. Ministers often mean ‘following our scientists’

[Image: PM press conference with Vallance and Whitty, 12.3.20]

When Johnson (12.3.20) describes being ‘guided by the science’, he is accompanied by Professor Patrick Vallance (Government Chief Scientific Adviser) and Professor Chris Whitty (the UK government’s Chief Medical Adviser). Hancock (3.3.20) describes being ‘guided by the expert recommendations of the 4 UK Chief Medical Officers and the Scientific Advisory Group for Emergencies’.

In other words, following ‘the science’ means ‘following the advice of our scientific advisors’, via mechanisms such as SAGE.

As the SAGE minutes and meeting papers show, government scientists and SAGE participants necessarily tell a partial story about the relevant evidence from a particular perspective (note: this is not a criticism of SAGE; it is a truism). Other interpreters of evidence, and sources of advice, are available.

Therefore, the phrase ‘guided by the science’ is, in practice, a way to:

  • narrow the search for information (and pay selective attention to it)
  • close down, or set the terms of, debate
  • associate policy with particular advisors or advisory bodies, often to give ministerial choices more authority, and often as ‘an insurance policy’ to take the heat off ministers.
3. What exactly is ‘the science’ guiding?

Let’s make a simple distinction between two types of science-guided action. Scientists provide evidence and advice on:

  1. the scale and urgency of a potential policy problem, such as describing and estimating the incidence and transmission of coronavirus
  2. the likely impact of a range of policy interventions, such as contact tracing, self-isolation, and regulations to oblige social distancing

In both cases, let’s also distinguish between science advice to reduce uncertainty and ambiguity:

  • Uncertainty describes a lack of knowledge or a worrying lack of confidence in one’s knowledge.
  • Ambiguity describes the ability to entertain more than one interpretation of a policy problem.

Putting both together produces a wide range of possibilities for policy ‘guided by the science’, from (a) simply providing facts to help reduce uncertainty about the incidence of coronavirus (minimal), to (b) providing information and advice on how to define and try to solve the policy problem (maximal).

If so, note that being guided by science does not signal more or less policy change. Ministers can use scientific uncertainty to defend limited action, or use evidence selectively to propose rapid change. In either case, a government can argue – sincerely – that it is guided by science. Therefore, analyzing critically the phraseology of ministers is only a useful first step. Next, we need to identify the extent to which scientific advisors and advisory bodies, such as SAGE, guided ministers.

The role of SAGE: advice on evidence versus advice on strategy and values

In that context, the next post examines the role of SAGE.

It shows that, although science advice to government is necessarily political, the coronavirus has heightened attention to science and advice, and you can see the (subtle and not-so-subtle) ways in which SAGE members and its secretariat are dealing with its unusually high level of politicization. SAGE has responded by clarifying its role, and trying to set boundaries between:

  • Advice versus strategy
  • Advice versus value judgements

These aims are understandable, but difficult to achieve in theory (the fact/value distinction is impossible) and in practice (policymakers may not go along with the distinction anyway). I argue that this boundary-setting also had some unintended consequences, which should prompt further reflection on facts-versus-values science advice during crises.

The ways in which UK ministers followed SAGE advice

With these caveats in mind, my reading of this material is that UK government policy was largely consistent with SAGE evidence and advice in the following ways:

  1. Defining the policy problem

This post (and a post on oral evidence to the Health and Social Care Committee) identifies the consistency of the overall narrative underpinning SAGE advice and UK government policy. It can be summed up as follows (although the post provides a more expansive discussion):

  1. coronavirus represents a long term problem with no immediate solution (such as a vaccine) and minimal prospect of extinction/ eradication
  2. use policy measures – on isolation and social distancing – to flatten the first peak of infection and avoid overwhelming health service capacity
  3. don’t impose or relax measures too quickly (which will cause a second peak of infection)
  4. reflect on the balance between (a) the positive impact of lockdown (on the incidence and rate of transmission) and (b) the negative impact of lockdown (on freedom, physical and mental health, and the immediate economic consequences).

While SAGE minutes suggest a general reluctance to comment too much on point 4, government discussions were underpinned by points 1-3. For me, this context is the most important. It provides a lens through which to understand all of SAGE advice: how it shapes, and is shaped by, UK government policy.

  2. The timing and substance of interventions before lockdown, maintenance of lockdown for several months, and gradual release of lockdown measures

This post presents a long chronological story of SAGE minutes and papers, divided by month (and, in March, by each meeting). Note the unusually high levels of uncertainty from the beginning. The lack of solid evidence, available to SAGE at each stage, can only be appreciated fully if you read the minutes from 1 to 41. Or, you know, take my word for it.

In January, SAGE discusses uncertainty about human-to-human transmission and associates coronavirus strongly with Wuhan in China (albeit while developing initially-good estimates of R, doubling time, incubation period, window of infectivity, and symptoms). In February, it had more data on transmission but described high uncertainty on what measures might delay or reduce the impact of the epidemic. In March, it focused on preparing for the peak of infection on the assumption that it had time to transition gradually towards a series of isolation and social distancing measures. This approach began to change from mid-March, when it became clear that the number of people infected was much larger, and the rate of transmission much faster, than expected.

In other words, the Prime Minister’s declarations – of emergency on 16.3.20 and of lockdown on 23.3.20 – did not lag behind SAGE advice (and it would not be outrageous to argue that it went ahead of it).

It is more difficult to describe the consistency between UK government policy & SAGE advice in relation to the relaxation of lockdown measures.

SAGE’s minutes and meeting papers describe very low certainty about what will happen after the release of lockdown. Their models do not hide this unusually high level of uncertainty, and they use models (built on assumptions) to generate scenarios rather than to estimate what will happen. In this sense, ‘following the science’ could relate to (a) a level of buy-in for this kind of approach, and (b) making choices when scientific groups cannot offer much (if any) advice on what to do or what will happen. Reopening schools is a key example, since SPI-M and SPI-B focused intensely on the issue, but their conclusions could not underpin a specific UK government choice.

There are two ways to interpret what happened next.

First, there will always be a mild gap between hesitant SAGE advice and ministerial action. SAGE advice tends to be based on the amount and quality of evidence to support a change, which meant it was hesitant to recommend (a) a full lockdown and (b) a release from lockdown. Just as UK government policy seemed to go ahead of the evidence to enter lockdown on the 23rd March, so too does it seem to go ahead of the cautious approach to relaxing it.

Second, UK ministers are currently going too far ahead of the evidence. SPI-M papers state repeatedly that a too-quick release of measures will cause the R to go above 1 (some papers describe it reaching 1.7; some graphs model it up to 3).

  3. The use of behavioural insights to inform and communicate policy

In March, you can find a lot of external debate about the appropriate role for ‘behavioural science’ and ‘behavioural public policy’ (BPP) (in other words, using insights from psychology to inform policy). Part of the initial problem related to the lack of transparency of the UK government, which prompted concerns that ministers were basing choices on limited evidence (see Hahn et al, Devlin, Mills). Oliver also describes initial confusion about the role of BPP when David Halpern became mildly famous for describing the concept of ‘herd immunity’ rather than sticking to psychology.

External concern focused primarily on the argument that the UK government (and many other governments) used the idea of ‘behavioural fatigue’ to justify delayed or gradual lockdown measures. In other words, if you do it too quickly and for too long, people will tire of it and break the rules.

Yet, this argument about fatigue is not a feature of the SAGE minutes and SPI-B papers (indeed, Oliver wonders if the phrase came from Whitty, based on his experience of people tiring of taking medication).

Rather, the papers tend to emphasise:

  • There is high uncertainty about behavioural change in key scenarios, and this reference to uncertainty should inform any choice on what to do next.
  • The need for effective and continuous communication with citizens, emphasizing transparency, honesty, clarity, and respect, to maintain high trust in government and promote a sense of community action (‘we are all in this together’).

John and Stoker argue that ‘much of behavioural science lends itself to’ a ‘top-down approach because its underlying thinking is that people tend to be limited in cognitive terms, and that a paternalistic expert-led government needs to save them from themselves’. Yet, my overall impression of the SPI-B (and related) work is that (a) although SPI-B is often asked to play that role, to address how to maximize adherence to interventions (such as social distancing), (b) its participants try to encourage the more deliberative or collaborative mechanisms favoured by John and Stoker (particularly when describing how to reopen schools and redesign work spaces). If so, my hunch is that its participants would not be as confident that UK ministers were taking their advice consistently (for example, throughout Table 2, note the need to provide a consistent narrative on two different propositions: we are all in this together, but the impact of each action/inaction will be profoundly unequal).

Expanded themes in SAGE minutes

Throughout this period, I think that one – often implicit – theme is that members of SAGE focused quite heavily on what seemed politically feasible to suggest to ministers, and for ministers to suggest to the public (while also describing technical feasibility – i.e. will it work as intended if implemented?). Generally, SAGE seemed to anticipate policymaker concern about, and unintended public reactions to, a shift towards more social regulation. For example:

‘Interventions should seek to contain, delay and reduce the peak incidence of cases, in that order. Consideration of what is publicly perceived to work is essential in any decisions’ (25.2.20: 1)

Put differently, it seemed to operate within the general confines of what might work in a UK-style liberal democracy characterised by relatively low social regulation. This approach is already a feature of The overall narrative underpinning SAGE advice and UK government policy, and the remaining posts highlight key themes that arise in that context.

They include how to manage limited capacity for testing, forecasting, and challenging assumptions (Theme 2), and how to communicate to the public (Theme 3).

Delaying the inevitable

All of these shorter posts delay your reading of a ridiculously long table summarizing each meeting’s discussion and advice/action points (Table 2, which also includes a way to chase up the referencing in the blog posts: dates alone refer to SAGE minutes; multiple meeting papers are listed as a, b, c if they share the same date stamp rather than the same authors).

The full list of SAGE posts:

COVID-19 policy in the UK: yes, the UK Government did ‘follow the science’

Did the UK Government ‘follow the science’? Reflections on SAGE meetings

The role of SAGE and science advice to government

The overall narrative underpinning SAGE advice and UK government policy

SAGE meetings from January-June 2020

SAGE Theme 1. The language of intervention

SAGE Theme 2. Limited capacity for testing, forecasting, and challenging assumptions

SAGE Theme 3. Communicating to the public

COVID-19 policy in the UK: Table 2: Summary of SAGE minutes, January-June 2020

Further reading

This series is part of a wider project, in which you can also read about:

  • The early minutes from NERVTAG (the New and Emerging Respiratory Virus Threats Advisory Group)
  • Oral evidence to House of Commons committees, beginning with Health and Social Care

I hope to get through all of this material (and equivalent material in the devolved governments) somehow, but also to find time to live, love, eat, and watch TV, so please bear with me if you want to know what happened but don’t want to do all of the reading to find out.

If you would rather just read all of this discussion in one document:

The whole thing in PDF

Table 2 in PDF

The whole thing as a Word document

Table 2 as a Word document

If you would like some other analyses, compare with:

  • Freedman (7.6.20) ‘Where the science went wrong. Sage minutes show that scientific caution, rather than a strategy of “herd immunity”, drove the UK’s slow response to the Covid-19 pandemic’. Concludes that ‘as the epidemic took hold the government was largely following Sage’s advice’, and that the government should have challenged key parts of that advice (to ensure an earlier lockdown).
  • More or Less (1.7.20) ‘Why Did the UK Have Such a Bad Covid-19 Epidemic?’. Relates the delays in ministerial action to inaccurate scientific estimates of the doubling time of infection (discussed further in Theme 2).
  • Both Freedman and More or Less focus on the mishandling of care home safety, exacerbated by transfers from hospital without proper testing.
  • Snowden (28.5.20) ‘The lockdown’s founding myth. We’ve forgotten that the Imperial model didn’t even call for a full lockdown’. Challenges the argument that ministers dragged their feet while scientists were advising quick and extensive interventions (an argument he associates with Calvert et al (23.5.20) ‘22 days of dither and delay on coronavirus that cost thousands of British lives’). Rather, ministers were following SAGE advice, and the lockdown in Italy had a far bigger impact on ministers (since it changed what seemed politically feasible).


Filed under COVID-19, Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy, UK politics and policy

COVID-19 policy in the UK: yes, the UK Government did ‘follow the science’

In this post, ‘following the science’ describes UK ministers taking the advice of their scientific advisers and SAGE (the Scientific Advisory Group for Emergencies).

If so, were UK ministers ‘guided by the expert recommendations of the 4 UK Chief Medical Officers and the Scientific Advisory Group for Emergencies’?

The short answer is yes.

They followed advice in two profoundly important ways:

  1. Defining coronavirus as a policy problem.

My reading of the SAGE minutes and meeting papers identifies a consistent overall narrative underpinning SAGE advice and UK government policy. It can be summed up as follows:

  1. coronavirus represents a long-term problem with no immediate solution (such as a vaccine) and minimal prospect of extinction/eradication
  2. use policy measures – on isolation and social distancing – to flatten the first peak of infection and avoid overwhelming health service capacity
  3. don’t impose or relax measures too quickly (which risks a second peak of infection)
  4. reflect on the balance between (a) the positive impact of lockdown (on the incidence and rate of transmission) and (b) the negative impact of lockdown (on freedom, physical and mental health, and the immediate economic consequences).

If you examine UK ministerial speeches and SAGE minutes, you will find very similar messages: a coronavirus epidemic is inevitable, we need to ease gradually into suppression measures to avoid a second peak of infection as big as the first, and our focus is exhortation and encouragement over imposition.

  2. The timing and substance of interventions before lockdown

I describe a long chronological story of SAGE minutes and papers. Its main theme is unusually high levels of uncertainty from the beginning. The lack of solid evidence available to SAGE at each stage should not be dismissed.

In January, SAGE discussed uncertainty about human-to-human transmission and associated coronavirus strongly with Wuhan in China. In February, it had more data on transmission but described high uncertainty about which measures might delay or reduce the impact of the epidemic. In March, it focused on preparing for the peak of infection on the assumption that it had time to transition gradually towards a series of isolation and social distancing measures. This approach began to change from mid-March, when it became clear that the number of people infected was much larger, and the rate of transmission much faster, than expected.

Therefore, the Prime Minister’s declarations – of emergency on 16.3.20 and of lockdown on 23.3.20 – did not lag behind SAGE advice. It would not be outrageous to argue that they went ahead of that advice, at least as recorded in SAGE minutes and meeting papers (compare with Freedman, Snowden, More or Less).

The long answer

If you would like the long answer, I can offer you 35,280 words, including a 22,380-word table summarizing the SAGE minutes and meeting papers (meetings 1-41, 22.1.20-11.6.20).

It includes:

The full list of SAGE posts:

COVID-19 policy in the UK: yes, the UK Government did ‘follow the science’

Did the UK Government ‘follow the science’? Reflections on SAGE meetings

The role of SAGE and science advice to government

The overall narrative underpinning SAGE advice and UK government policy

SAGE meetings from January-June 2020

SAGE Theme 1. The language of intervention

SAGE Theme 2. Limited capacity for testing, forecasting, and challenging assumptions

SAGE Theme 3. Communicating to the public

COVID-19 policy in the UK: Table 2: Summary of SAGE minutes, January-June 2020

Further reading

So far, the wider project includes:

  • The early minutes from NERVTAG (the New and Emerging Respiratory Virus Threats Advisory Group)
  • Oral evidence to House of Commons committees, beginning with Health and Social Care

I am also writing a paper based on this post, but don’t hold your breath.


Filed under COVID-19, Evidence Based Policymaking (EBPM), Prevention policy, Public health, public policy

Policy Concepts in 1000 Words: Policy Change

Christopher M. Weible & Paul Cairney

Policy change is a central concern of policy research and practice. Some want to explain it. Some want to achieve it.

Explanation begins with the ‘what is policy?’ question, since we cannot observe something without defining it. However, we soon find that no single definition can capture all forms of policy change, the absence of policy change is often more important than change itself, and important changes can be found in the everyday application of rules and practices related to public policies. Further, studies often focus on changes in public policies without a focus on societal outcomes or effects.

One pragmatic solution is to define public policies as decisions made by policymakers or policymaking venues such as legislatures, executives, regulatory agencies, courts, national and local governments (and, in some countries, citizen-led policy changes).  Focusing on this type of policy change, two major categories of insights unfold:

  1. Patterns of Policy Change: incrementalism, punctuations, and drift

A focus on decisions suggests that most policymaking venues contribute primarily to incremental policy change, or often show little change from year to year but with the occasional punctuation of major policymaking activity.  This pattern reflects a frequent story about governments doing too much or nothing at all. The logic is that policymaking attention is always limited, so a focus on one issue in any policymaking venue requires minimal focus on others.  Then, when attention shifts, we see instances of major policy change as attempts to compensate (or overcompensate) for what was ignored for too long.

An additional focus on institutions highlights factors such as policy drift, which describes slow and small changes to policies, or to aspects of their design, that accumulate over time and can have huge impacts on outcomes and society. These drifts often happen outside the public eye or are dismissed as negative but trivial. For example, rising economic inequality in the US resulted from the slow accumulation of policies – related to labor unions, tax structures, and corporate governance – as well as globalization and labor-saving technologies.

  2. Factors Associated with Policy Change

Many factors help us understand instances of policy change. We can separate them analytically (as below) but, in practice, they occur simultaneously or sequentially, and can reinforce or stifle each other.

Context

Context includes history, biophysical conditions, socio-economic conditions, culture, and basic institutional structures (such as a constitution).  For example, historical and geographic conditions are often viewed as funneling or constraining the type of policy decisions made by a government.

Events 

Policymaking venues are often described as being resistant to change or in a state of equilibrium of competing political forces.  As a result, one common explanation for change is a focusing event or shock.  Events by themselves don’t create policy change. Rather, they present an opportunity for people or coalitions to exploit.   Focusing events might include disasters or crises, tragic incidents, a terrorist attack, disruptive changes in technology, or more routine events such as elections. Events may have tangible qualities, but studies tend to highlight the ways in which people frame events to construct their meaning and implications for policy.

Public Opinion 

The relationship between public opinion and policy change is a difficult one to assess. Some research shows that the preferences of the general public only matter when they coincide with the preferences of the elite or major interest groups. Or, they matter only when the topic is salient and the public is paying attention. Little evidence suggests that public opinion matters when few are paying attention. Others describe public opinion as setting the boundaries within which the government operates.

Learning

Learning is a process of updating understandings of the world in response to signals from the environment.  Learning is a political activity rather than simply a technical exercise in which people learn from teachers. Learning could involve becoming aware of the severity of a policy problem, evaluating outcomes to determine if a government intervention works, and learning to trust an opponent and reach compromise. For example, certain types of rules in a collaborative process can shape the ways in which individuals gain new knowledge and change their views about the scientific evidence informing a problem.

Diffusion of Ideas 

Sometimes governments learn from or transfer policies from other governments. For example, in collections of policymaking venues (such as US state governments or EU member states) it is common for one venue to adopt a policy and prompt this policy to spread across other venues in a process of diffusion.  There are many explanations for diffusion including learning, a response to competition, mimicking, and coercion. In each case, the explanation for policy change comes from an external impetus and an internal context.

Champions and Political Associations

All policy change is driven, to some extent, by individual or group agency.  Key players include public policy champions in the form of policy entrepreneurs or in groups of government and/or non-government entities in the form of coalitions, social movements, epistemic communities, and political parties.  In each case, individuals or organizations mobilize resources, capitalize on opportunities, and apply pressure to formulate and adopt public policies.

 

The presence of these factors does not always lead to policy change, and no single study can capture a full explanation of policy change. Instead, many quantitative studies focus on multiple instances of policy change and are often broad in geographic scope or spans of time, while many case study or qualitative studies focus intensely on a very particular instance of policy change. Both approaches are essential.

See also:

Policy in 500 Words: what is public policy and why does it matter?

Policy in 500 Words: how much does policy change?

Policy Concepts in 1000 Words: Policy change and measurement (podcast download)

Policy Concepts in 1000 Words: how do policy theories describe policy change?

 

 


Filed under 1000 words, public policy

Coronavirus and the ‘social determinants’ of health inequalities: lessons from ‘Health in All Policies’ initiatives

Many public health bodies are responding to the crisis by shifting their attention and resources from (1) a long-term strategic focus on reducing non-communicable diseases (such as heart diseases, cancers, diabetes) to (2) the coronavirus pandemic.

Of course, these two activities are not mutually exclusive, and smoking provides the most high-profile example of short-term and long-term warnings coming together (see Public Health England’s statement that ‘Emerging evidence from China shows smokers with COVID-19 are 14 times more likely to develop severe respiratory disease’).

There are equally important lessons – such as on health equity – from the experiences of longer-term and lower-profile ‘preventive’ public health agendas such as ‘Health in All Policies’ (HIAP).*

What is ‘Health in All Policies’?

HIAP is a broad (and often imprecise) term to describe:

  1. The policy problem. Address the ‘social determinants’ of health, defined by the WHO as ‘the unfair and avoidable differences in health status … shaped by the distribution of money, power and resources [and] the conditions in which people are born, grow, live, work and age’.
  2. The policy solutions. Identify a range of policy instruments, including redistributive measures to reduce economic inequalities, distributive measures to improve public services and the physical environment (including housing), regulations on commercial and individual behaviour, and health promotion via education and learning.
  3. The policy style. An approach to policymaking that encourages meaningful collaboration across multiple levels and types of government, and between governmental and non-governmental actors (partly because most policy solutions to improve health are not in the gift of health departments).
  4. Political commitment and will. High-level political support is crucial to the production of a holistic strategy document and to the dedication of resources to its delivery, partly via specialist organisations and the means to monitor and evaluate progress.

As two distinctive ‘Marmot reviews’ demonstrate, this problem (and its potential solutions) can be described differently in different contexts.

Either way, each of the 4 HIAP elements highlights issues that intersect with the impact of the coronavirus: COVID-19 has a profoundly unequal impact on populations; there will be a complex mix of policy instruments to address it, and many responses will not come from health departments; an effective response requires intersectoral government action and high stakeholder and citizen ownership; and, we should not expect current high levels of public, media, and policymaker attention and commitment to continue indefinitely or help foster health equity (indeed, even well-meaning policy responses may exacerbate health inequalities).

A commitment to health equity, or the reduction of health inequalities

At the heart of HIAP is a commitment to health equity and to reduce health inequalities. In that context, the coronavirus provides a stark example of the impact of health inequalities, since (a) people with underlying health conditions are the most vulnerable to major illness and death, and (b) the spread of underlying health conditions is unequal in relation to factors such as income and race or ethnicity. Further, there are major inequalities in relation to exposure to physical and economic risks.

A focus on the social determinants of health inequalities

A ‘social determinants’ focus helps us to place individual behaviour in a wider systemic context. It is tempting to relate health inequalities primarily to ‘lifestyles’ and individual choices, in relation to healthy eating, exercise, and the avoidance of smoking and alcohol. However, the most profound impacts on population health can come from (a) environments largely outside of an individual’s control (e.g. in relation to threats from others, such as pollution or violence), (b) levels of education and employment, and (c) economic inequality, influencing access to warm and safe housing, high quality water and nutrition, choices on transport, and access to safe and healthy environments.

In that context, the coronavirus provides stark examples of major inequalities in relation to self-isolation and social distancing: some people have access to food, private spaces to self-isolate, and open places to exercise away from others; many people have insufficient access to food, no private space, and few places to go outside (also note the disparity in resources between countries).

The pursuit of intersectoral action

A key aspect of HIAP is to identify the ways in which non-health sectors contribute to health. Classic examples include a focus on the sectors that influence early access to high quality education, improving housing and local environments, reducing vulnerability to crime, and reforming the built environment to foster sustainable public transport and access to healthy air, water, and food.

The response to the coronavirus also appears to be a good advert for the potential for intersectoral governmental action, demonstrating that measures with profound impacts on health and wellbeing are made in non-health sectors, including: treasury departments subsidising business and wages, and funding additional healthcare; transport departments regulating international and domestic travel; social care departments responsible for looking after vulnerable people outside of healthcare settings; and, police forces regulating social behaviour.

However, most (relevant) HIAP studies identify a general lack of effective intersectoral government action, related largely to a tendency towards ‘siloed’ policymaking within each department, exacerbated by ‘turf wars’ between departments (even if they notionally share the same aims) and a tendency for health departments to be low status, particularly in relation to economic departments (also note the frequently used term ‘health imperialism’ to describe scepticism about public health in other sectors).  Some studies highlight the potential benefits of ‘win-win’ strategies to persuade non-health sectors that collaboration on health equity also helps deliver their core business (e.g. Molnar et al 2015), but the wider public administration literature is more likely to identify a history of unsuccessful initiatives with a cumulative demoralising effect (e.g. Carey and Crammond, 2015; Molenveld et al, 2020).  

The pursuit of wider collaboration

HIAP ambitions extend to ‘collaborative’ or ‘co-produced’ forms of governance, in which citizens and stakeholders work with policymakers in health and non-health sectors to define the problem of health inequalities and inform potential solutions. These methods can help policymakers make sense of broad HIAP aims through the eyes of citizens, produce priorities that were not anticipated in a desktop exercise, help non-health sector workers understand their role in reducing health inequalities, and help reinforce the importance of collaborative and respectful ways of working.

An excellent example comes from Corburn et al’s (2014) study of Richmond, California’s statutory measures to encourage HIAP. The study describes ‘coproducing health equity in all policies’ with initial reference to WHO definitions, but then to social justice in relation to income and wealth, which differs markedly according to race and immigration status. It then reports on a series of community discussions to identify key obstacles to health:

“For example, Richmond residents regularly described how, in the same day, they might experience or fear violence, environmental pollution, being evicted from housing, not being able to pay health care bills, discrimination at work or in school, challenges accessing public services, and immigration and customs enforcement (ICE) intimidation … Also emerging from the workshops and health equity discussions was that one of the underlying causes of the multiple stressors experienced in Richmond was structural racism. By structural racism we meant that seemingly neutral policies and practices can function in racist ways by disempowering communities of color and perpetuating unequal historic conditions” (2014: 627-8).

Yet, a tiny proportion of HIAP studies identify this level of collaboration and new knowledge feeding into policy agendas to address health equity.

The cautionary tale: HIAP does not cause health equity

Rather, most of the peer-reviewed academic HIAP literature identifies a major gap between high expectations and low implementation. Most studies identify an urgent and strong impetus for policy action proportionate to the size of the policy problem, and ideas about how a HIAP agenda might be implemented once agreed, but none identifies implementation success in relation to health equity. In fact, the two most-discussed examples – in Finland and South Australia – seem to describe a successful reform of processes, with a negligible impact on equity.

A window of opportunity for what?

It is common in the public health field to try to identify ‘windows of opportunity’ to adopt (a) HIAP in principle, and (b) specific HIAP-friendly policy instruments. It is also common to try to identify the factors that would aid HIAP implementation, and to assume that this success would have a major impact on the social determinants of health inequalities. Yet, the cumulative experience from HIAP studies is that governments can pursue health promotion and intersectoral action without reducing health inequalities.

For me, this is the context for current studies of the unequal impact of the coronavirus across the globe and within each country. In some cases, there are promising discussions of major policymaking reforms, or of using the current crisis as an impetus for social justice as well as crisis response. Yet, the history of the pursuit of HIAP-style reforms should help us reject the simple notion that some people saying the right things will make that happen. Instead, right now, it seems more likely that – in the absence of significantly new action** – the same people and systems that cause inequalities will undermine attempts to reduce them. In other words, health equity will not happen simply because it seems like the right thing to do. Rather, it is a highly contested concept, and many people will use their power to make sure that it does not happen, even if they claim otherwise.

*These are my early thoughts based on work towards a (qualitative) systematic review of the HIAP literature, in partnership with Emily St Denny, Sean Kippin, and Heather Mitchell.

**No, I do not know what that action would be. There is no magic formula to which I can refer.


Filed under COVID-19, Prevention policy, Public health, public policy

Who can you trust during the coronavirus crisis?

By Paul Cairney and Adam Wellstead, based on this paper.

Trust is essential during a crisis. It is necessary for cooperation. Cooperation helps people coordinate action, to reduce the need for imposition. It helps reduce uncertainty in a complex world. It facilitates social order and cohesiveness. In a crisis, almost-instant choices about who to trust or distrust make a difference between life and death.

Put simply, we need to trust: experts to help us understand and address the problem, governments to coordinate policy and make choices about levels of coercion, and each other to cooperate to minimise infection.

Yet, there are three unresolved problems with understanding trust in relation to coronavirus policy.

  1. What does trust really mean?

Trust is one of those words that could mean everything and nothing. We feel like we understand it intuitively, but would also struggle to define it well enough to explain how exactly it works. For example, in social science, there is some agreement on the need to describe individual motivation, social relationships, and some notion of the ‘public good’:

  • the production of trust helps boost the possibility of cooperation, partly by
  • reducing uncertainty (low information about a problem) and ambiguity (low agreement on how to understand it) when making choices, partly by
  • helping you manage the risk of making yourself vulnerable when relying on others, particularly when
  • people demonstrate trustworthiness by developing a reputation for competence, honesty, and/ or reliability, and
  • you combine cognition and emotion to produce a disposition to trust, and
  • social and political rules facilitate this process, from the formal and well-understood rules governing behaviour to the informal rules and norms shaping behaviour.

As such, trust describes your non-trivial belief in the reliability of other people, organisations, or processes. It facilitates the kinds of behaviour that are essential to an effective response to the coronavirus, in which we need to:

  1. Make judgements about the accuracy of information underpinning our choices to change behaviour (such as from scientific agencies).
  2. Assess the credibility of the people with whom we choose to cooperate or take advice (such as more or less trust in each country’s leadership).
  3. Measure the effectiveness of the governments or political systems to which we pledge our loyalty.

Crucially, in most cases, people need to put their trust in actions or outcomes caused by people they do not know, and the explanation for this kind of trust is very different to trusting people you know.

  2. What does trust look like in policymaking?

Think of trust as a mechanism to boost cooperation and coalition formation, help reduce uncertainty, and minimise the ‘transaction costs’ of cooperation (for example, monitoring behaviour, or producing or enforcing contracts). However, uncertainty is remarkably high because the policy process is not easy to understand. We can try to understand the ‘mechanisms’ of trust, to boost cooperation, with reference to these statements about those who trust and those who are trusted:

  1. Individuals need to find ways to make choices about who to trust and distrust.
  2. However, they must act within a complex policymaking environment in which they have minimal knowledge of what will happen and who will make it happen.
  3. To respond effectively, people seek ways to cooperate with others systematically, such as by establishing formal and informal rules.

People seeking to make and influence policy must act despite uncertainty about the probability of success or risk of failure. In a crisis, these judgements happen almost instantly. People generate beliefs about what they want to happen and how their reliance on others can help it happen. This calculation depends on:

  • Another person or organisation’s reputation for being trustworthy, allowing people to increase certainty when they calculate the risk of engagement.
  • The psychology of trust and perceptions of another actor’s motives. To some extent, people gather information and use logic to determine someone’s competence. However, they also use gut feeling or emotion to help them decide to depend on someone else. They may also trust a particular source if the cognitive load is low, such as because (a) the source is familiar (e.g. a well-known politician, a celebrity, or an oft-used source), or (b) the information is not challenging to remember or accept.

If so, facilitators of trust include:

  • People share the same characteristics, such as beliefs, norms, or expectations.
  • Some people have reputations for being reliable, predictable, honest, competent, and/ or relatively selfless.
  • Good experiences of previous behaviour, including repeated interactions that foster rewards and help predict future risk (with face to face contact often described as particularly helpful).
  • People may trust people in a position of authority (or the organisation or office), such as an expert or policymaker (although perhaps the threat of rule enforcement is better understood as a substitute for trust, and in practice it is difficult to spot the difference).

High levels of trust are apparent when effective practices – built on reciprocity, emotional bonds, and/ or positive expectations – become the norms or formalised and written down for all to see and agree. High levels of distrust indicate a need to deter the breach of agreements, by introducing expectations combined with sanctions for not behaving as expected.

  3. Who should you trust?

These concepts do not explain fully why people trust particular people more than others, or help us determine whom to trust during a crisis.

Rather, first, they help us reflect on the ways in which people have been describing their own thought processes (click here, and scroll to ‘Limiting the use of evidence’), such as trusting an expert source because they: (a) have a particular scientific background, (b) have proven to be honest and reliable in the past, (c) represent a wider scientific profession/ community, (d) are part of a systematic policymaking machinery, (e) can be held to account for their actions, (f) are open about the limits to their knowledge, and/or (g) engage critically with information to challenge simplistic rushes to judgement. Overall, note how much trust relates to our minimal knowledge about their research skills, prompting us to rely on an assessment of their character or status to judge their behaviour. In most cases, this is an informal process in which people may not state (or really know) why they trust or distrust someone so readily.

Then, we can reflect on who we trust, and why, and if we should change how we make such calculations during a crisis like the coronavirus. Examples include:

  • A strong identity with a left or right wing cause might prompt us only to trust people from one political party. This thought process may be efficient during elections and debates, but does it work so well during a crisis necessitating such high levels of cross-party cooperation?
  • People may be inclined to ignore advice because they do not trust their government, but maybe (a) high empathy for their vulnerable neighbours, and (b) low certainty about the impact of their actions, should prompt them to trust in government advice unless they have a tangible reason not to (while low empathy helps explain actions such as hoarding).
  • Government policy is based strongly on the extent to which policymakers trust people to do the right thing. Most debates in liberal democracies relate to the idea that (a) people can be trusted, so give advice and keep action voluntary, or cannot be trusted, so make them do the right thing, and that (b) citizens can trust their government. In other words, it must be a reciprocal relationship (see the Tweets in Step 3).

Finally, governments make policy based on limited knowledge and minimal control of the outcomes, and they often respond with trial-and-error strategies. The latter is fine if attention to policy is low and trust in government sufficiently high. However, in countries like the UK and US, each new choice prompts many people to question not only the competence of leaders but also their motivation. This is a worrying development for which everyone should take some responsibility.

See also:

Policy Concepts in 1000 Words: the Institutional Analysis and Development Framework (IAD) and Governing the Commons

The coronavirus and evidence-informed policy analysis (short version)

The coronavirus and evidence-informed policy analysis (long version)

 


Filed under 1000 words, 750 word policy analysis, Public health, public policy

The coronavirus and evidence-informed policy analysis (short version)

The coronavirus feels like a new policy problem that requires new policy analysis. The analysis should be informed by (a) good evidence, translated into (b) good policy. However, don’t be fooled into thinking that either of those things is straightforward. There are simple-looking steps to go from defining a problem to making a recommendation, but this simplicity masks the profoundly political process that must take place. Each step in analysis involves political choices to prioritise some problems and solutions over others, and therefore prioritise some people’s lives at the expense of others.

The very-long version of this post takes us through those steps in the UK, and situates them in a wider political and policymaking context. This post is shorter, and only scratches the surface of analysis.

5 steps to policy analysis

  1. Define the problem.

Perhaps we can sum it up as: (a) the impact of this virus and illness will be a level of death and illness that could overwhelm the population and exceed the capacity of public services, so (b) we need to contain the virus enough to make sure it spreads in the right way at the right time, so (c) we need to encourage and make people change their behaviour (primarily via hygiene and social distancing). However, there are many ways to frame this problem to emphasise the importance of some populations over others, and some impacts over others.

  2. Identify technically and politically feasible solutions.

Solutions are not really solutions: they are policy instruments that address one aspect of the problem, including taxation and spending, delivering public services, funding research, giving advice to the population, and regulating or encouraging changes to social behaviour. Each new instrument contributes to an existing mix, with unpredictable and unintended consequences. Some instruments seem technically feasible (they will work as intended if implemented), but will not be adopted unless politically feasible (enough people support their introduction). Or vice versa. This dual requirement rules out a lot of responses.

  3. Use values and goals to compare solutions.

Typical judgements combine: (a) broad descriptions of values such as efficiency, fairness, freedom, security, and human dignity, (b) instrumental goals, such as sustainable policymaking (can we do it, and for how long?), and political feasibility (will people agree to it, and will it make me more or less popular or trusted?), and (c) the process to make choices, such as the extent to which a policy process involves citizens or stakeholders (alongside experts) in deliberation. They combine to help policymakers come to high profile choices (such as the balance between individual freedom and state coercion), and low profile but profound choices (to influence the level of public service capacity, and level of state intervention, and therefore who and how many people will die).
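To show what this kind of comparison can look like on paper, here is a minimal, purely illustrative sketch (in Python) of the weighted criteria-by-alternatives matrix found in standard policy analysis texts. The options, criteria, weights, and scores are all hypothetical inventions for this example, not recommendations; choosing them is precisely the political act described above.

```python
# Illustrative only: a toy criteria-by-alternatives matrix.
# All options, criteria, weights, and scores are hypothetical.

criteria_weights = {                 # relative importance of each value/goal
    "health protection": 0.4,
    "economic cost": 0.2,
    "individual freedom": 0.2,
    "political feasibility": 0.2,
}

options = {                          # hypothetical scores out of 10
    "voluntary guidance": {"health protection": 4, "economic cost": 8,
                           "individual freedom": 9, "political feasibility": 8},
    "enforced lockdown":  {"health protection": 9, "economic cost": 3,
                           "individual freedom": 2, "political feasibility": 5},
}

for name, scores in options.items():
    weighted = sum(criteria_weights[c] * s for c, s in scores.items())
    print(f"{name}: weighted score = {weighted:.1f}")

# The arithmetic is trivial; the politics lies in choosing the criteria,
# the weights, and the scores.
```

The point of the sketch is not the numbers: change the weights and the ranking flips, which is why the values behind the weights matter more than the calculation.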

  4. Predict the outcome of each feasible solution.

It is difficult to envisage a way for the UK Government to publicise all of the thinking behind its choices (Step 3) and predictions (Step 4) in a way that would encourage effective public deliberation. People often call for the UK Government to publicise its expert advice and operational logic, but I am not sure how they would separate it from their normative logic about who should live or die, or provide a frank account without unintended consequences for public trust or anxiety. If so, one aspect of government policy is to keep some choices implicit and avoid a lot of debate on trade-offs. Another is to make choices continuously without knowing what their impact will be (the most likely scenario right now).

  5. Make a choice, or recommendation to your client.

Your recommendation or choice would build on these four steps. Define the problem with one framing at the expense of the others. Romanticise some people and not others. Decide how to support some people, and coerce or punish others. Prioritise the lives of some people in the knowledge that others will suffer or die. Do it despite your lack of expertise and profoundly limited knowledge and information. Learn from experts, but don’t assume that only scientific experts have relevant knowledge (decolonise; coproduce). Recommend choices that, if damaging, could take decades to fix after you’ve gone. Consider if a policymaker is willing and able to act on your advice, and if your proposed action will work as intended. Consider if a government is willing and able to bear the economic and political costs. Protect your client’s popularity, and trust in your client, at the same time as protecting lives. Consider if your advice would change if the problem seemed to change. If you are writing your analysis, maybe keep it down to one sheet of paper (in other words, fewer words than in this post up to this point).

Policy analysis is not as simple as these steps suggest, and further analysis of the wider policymaking environment helps describe two profound limitations to simple analytical thought and action.

  1. Policymakers must ignore almost all evidence

The amount of policy relevant information is infinite, and capacity is finite. So, individuals and governments need ways to filter out almost all of it. Individuals combine cognition and emotion to help them make choices efficiently, and governments have equivalent rules to prioritise only some information. They include: define a problem and a feasible response, seek information that is available, understandable, and actionable, and identify credible sources of information and advice. In that context, the vague idea of trusting or not trusting experts is nonsense, and the larger post highlights the many flawed ways in which all people decide whose expertise counts.

  2. They do not control the policy process.

Policymakers engage in a messy and unpredictable world in which no single ‘centre’ has the power to turn a policy recommendation into an outcome.

  • There are many policymakers and influencers spread across a political system. For example, consider the extent to which each government department, devolved governments, and public and private organisations are making their own choices that help or hinder the UK government approach.
  • Most choices in government are made in ‘subsystems’, with their own rules and networks, over which ministers have limited knowledge and influence.
  • The social and economic context, and events, are largely out of their control.

The take-home messages (if you accept this line of thinking)

  1. The coronavirus is an extreme example of a general situation: policymakers will always have very limited knowledge of policy problems and control over their policymaking environment. They make choices to frame problems narrowly enough to seem solvable, rule out most solutions as not feasible, make value judgements to try to help some more than others, try to predict the results, and respond when the results do not match their hopes or expectations.
  2. This is not a message of doom and despair. Rather, it encourages us to think about how to influence government, and hold policymakers to account, in a thoughtful and systematic way that does not mislead the public or exacerbate the problem we are seeing. No one is helping their government solve the problem by saying stupid shit on the internet (OK, that last bit was a message of despair).

 

Further reading:

The longer report sets out these arguments in much more detail, with some links to further thoughts and developments.

This series of ‘750 words’ posts summarises key texts in policy analysis and tries to situate policy analysis in a wider political and policymaking context. Note the focus on whose knowledge counts, which is not yet a big feature of this crisis.

These series of 500 words and 1000 words posts (with podcasts) summarise concepts and theories in policy studies.

This page on evidence-based policymaking (EBPM) uses those insights to demonstrate why EBPM is  a political slogan rather than a realistic expectation.

These recorded talks relate those insights to common questions asked by researchers: why do policymakers seem to ignore my evidence, and what can I do about it? I’m happy to record more (such as on the topic you just read about) but not entirely sure who would want to hear what.


Filed under 750 word policy analysis, agenda setting, Evidence Based Policymaking (EBPM), Policy learning and transfer, POLU9UK, Prevention policy, Psychology Based Policy Studies, Public health, public policy, Social change, UK politics and policy

The coronavirus and evidence-informed policy analysis (long version)

This is the long version. It is long. Too long to call a blog post. Let’s call it a ‘living document’ that I update and amend as new developments arise (then start turning into a more organised paper). In most cases, I am adding tweets, so the date of the update is embedded. If I add a new section, I will add a date. If you seek specific topics (like ‘herd immunity’), it might be worth doing a search. The short version is shorter.

The coronavirus feels like a new policy problem. Governments already have policies for public health crises, but the level of uncertainty about the spread and impact of this virus seems to be taking it to a new level of policy, media, and public attention. The UK Government’s Prime Minister calls it ‘the worst public health crisis for a generation’.

As such, there is no shortage of opinions on what to do, but there is a shortage of well-considered opinions, producing little consensus. Many people are rushing to judgement and expressing remarkably firm opinions about the best solutions, but their contributions add up to contradictory evaluations, in which:

  • the government is doing precisely the right thing or the completely wrong thing,
  • we should listen to this expert saying one thing or another expert saying the opposite.

Lots of otherwise-sensible people are doing what they bemoan in politicians: rushing to judgement, largely accepting or sharing evidence only if it reinforces that judgement, and/or using their interpretation of any new development to settle scores with their opponents.

Yet, anyone who feels, without uncertainty, that they have the best definition of, and solution to, this problem is a fool. If people are also sharing bad information and advice, they are dangerous fools. Further, as Professor Madley puts it (in the video below), ‘anyone who tells you they know what’s going to happen over the next six months is lying’.

In that context, how can we make sense of public policy to address the coronavirus in a more systematic way?

Studies of policy analysis and policymaking do not solve a policy problem, but they at least give us a language to think it through.

  1. Let’s focus on the UK as an example, and use common steps in policy analysis, to help us think through the problem and how to try to manage it.
  • In each step, note how quickly it is possible to be overwhelmed by uncertainty and ambiguity, even when the issue seems so simple at first.
  • Note how difficult it is to move from Step 1, and to separate Step 1 from the others. It is difficult to define the problem without relating it to the solution (or to the ways in which we will evaluate each solution).
  2. Let’s relate that analysis to research on policymaking, to understand the wider context in which people pay attention to, and try to address, important problems that are largely out of their control.

Throughout, note that I am describing a thought process as simply as I can, not a full examination of relevant evidence. I am highlighting the problems that people face when ‘diagnosing’ policy problems, not trying to diagnose them myself. To do so, I draw initially on common advice from the key policy analysis texts (summaries of the texts that policy analysis students are most likely to read) that simplify the process a little too much. Still, the thought process that this advice encourages took me hours (spread over three days), working alone, to produce no real conclusion. Policymakers and advisers, in the thick of this problem, do not have that luxury of time or uncertainty.

See also: Boris Johnson’s address to the nation in full (23.3.20) and press conference transcripts

https://twitter.com/BorisJohnson/status/1246358936585986048

https://twitter.com/BorisJohnson/status/1243496858095411200

https://twitter.com/R_S_P_H/status/1242833029728477188

Step 1 Define the problem

Common advice in policy analysis texts:

  • Provide a diagnosis of a policy problem, using rhetoric and eye-catching data to generate attention.
  • Identify its severity, urgency, cause, and our ability to solve it. Don’t define the wrong problem, such as by oversimplifying.
  • Problem definition is a political act of framing, as part of a narrative to evaluate the nature, cause, size, and urgency of an issue.
  • Define the nature of a policy problem, and the role of government in solving it, while engaging with many stakeholders.
  • ‘Diagnose the undesirable condition’ and frame it as ‘a market or government failure (or maybe both)’.

Coronavirus as a physical problem is not the same as a coronavirus policy problem. To define the physical problem is to identify the nature, spread, and impact of a virus and illness on individuals and populations. To define a policy problem, we identify the physical problem and relate it (implicitly or explicitly) to what we think a government can, and should, do about it. Put more provocatively, it is only a policy problem if policymakers are willing and able to offer some kind of solution.

This point may seem semantic, but it raises a profound question about the capacity of any government to solve a problem like an epidemic, or for governments to cooperate to solve a pandemic. It is easy for an outsider to exhort a government to ‘do something!’ (or ‘ACT NOW!’) and express certainty about what would happen. However, policymakers inside government:

  1. Do not enjoy the same confidence that they know what is happening, or that their actions will have their intended consequences, and
  2. Will think twice about trying to regulate social behaviour under those circumstances, especially when they
  3. Know that any action or inaction will benefit some and punish others.

For example, can a government make people wash their hands? Or, if it restricts gatherings at large events, can it stop people gathering somewhere else, with worse impact? If it closes a school, can it stop children from going to their grandparents to be looked after until it reopens? There are 101 similar questions and, in each case, I reckon the answer is no. Maybe government action has some of the desired impact; maybe not. If you agree, then the question might be: what would it really take to force people to change their behaviour?

See also: Coronavirus has not suspended politics – it has revealed the nature of power (David Runciman)

The answer is: often too much for a government to consider (in a liberal democracy), particularly if policymakers are informed that it will not have the desired impact.

https://twitter.com/AdamJKucharski/status/1238152492178976769

If so, the UK government’s definition of the policy problem will incorporate this implicit question: what can we do if we can influence, but not determine (or even predict well) how people behave?

Uncertainty about the coronavirus plus uncertainty about policy impact

Now, add that general uncertainty about the impact of government to this specific uncertainty about the likely nature and spread of the coronavirus:

https://www.youtube.com/watch?time_continue=350&v=blkDulsgh3Q&feature=emb_logo

A summary of this video suggests:

  • There will be an epidemic (a profound spread to many people in a short space of time), then the problem will be endemic (a long-term, regular feature of life) (see also UK policy on coronavirus COVID-19 assumes that the virus is here to stay).
  • In the absence of a vaccine, the only way to produce ‘herd immunity’ is for most people to be infected and recover (a rough threshold calculation follows after this list).

[Note: there is much debate on whether ‘herd immunity’ is or is not government policy. Much of it relates to interpretation, based on levels of trust/distrust in the UK Government, its Prime Minister, and the Prime Minister’s special adviser. I discuss this point below under ‘trial and error policymaking’. See also Who can you trust during the coronavirus crisis? ]

  • The ideal spread involves all well people sharing the virus first, while all vulnerable people (e.g. older, and/or with existing health problems that affect their immune systems) are protected in one isolated space, but it won’t happen like that; so, we are trying to minimise damage in the real world.
  • We mainly track the spread via deaths, with data showing a major spike appearing about one month after the corresponding infections, so the problem may only seem real to most people when it is too late to change behaviour.

https://twitter.com/ChrisGiles_/status/1247458186300456960

https://twitter.com/d_spiegel/status/1248157520943857665

https://twitter.com/d_spiegel/status/1247824140645683205

https://twitter.com/EmergMedDr/status/1250039068890726400

See also: Coronavirus: Government expert defends not closing UK schools (BBC, Sir Patrick Vallance 13th March 2020)

https://twitter.com/DrSamSims/status/1247445729439895555

  • The choice in theory is between a rapid epidemic with a high peak and a slowed-down epidemic over a longer period, but ‘anyone who tells you they know what’s going to happen over the next six months is lying’.
  • Maybe this epidemic will be so memorable as to shift social behaviour, but so much depends on trying to predict (badly) if individuals will actually change (see also Spiegelhalter on communicating risk).
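To make the ‘herd immunity’ point above a little more concrete, here is a rough threshold calculation (my own illustration; the R0 values are assumptions, not figures from the video or from SAGE papers). If each case infects R0 others in a fully susceptible population, transmission declines once the immune share of the population exceeds:

$$H = 1 - \frac{1}{R_0}, \qquad R_0 = 2.5 \Rightarrow H = 0.6, \qquad R_0 = 3 \Rightarrow H \approx 0.67$$

In other words, on these illustrative assumptions, something like 60-70% of the population would need to be infected and recover (or be vaccinated) before the epidemic declines on its own, which helps explain why the interpretation of ‘herd immunity’ generated so much debate (see the note above).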

None of this account tells policymakers what to do, but at least it helps them clarify three key aspects of their policy problem:

  1. The impact of this virus and illness could overwhelm the population, to the extent that it causes mass deaths, causes a level of illness that exceeds the capacity of health services to treat, and contributes to an unpredictable amount of social and economic damage.
  2. We need to contain the virus enough to make sure it (a) spreads at the right speed and/or (b) peaks at the right time. The right speed seems to be: a level that allows most people to recover alone, while the most vulnerable are treated well in healthcare settings that have enough capacity. The right time seems to be the part of the year with the lowest demand on health services (e.g. summer is better than winter). In other words, (a) reduce the size of the peak by ‘flattening the curve’, and/or (b) find the right time of year to address the peak, while (c) anticipating more than one peak. (A minimal simulation sketch after this list illustrates the ‘flattening’ logic.)

My impression is that the most frequently-expressed aim is (a) …

https://twitter.com/STVNews/status/1238468179036459008

https://twitter.com/DHSCgovuk/status/1238540941717356548

… while the UK Government’s Deputy Chief Medical Officer also seems to be describing (b):

  3. We need to encourage or coerce people to change their behaviour, to look after themselves (e.g. by handwashing) and forsake their individual preferences for the sake of public health (e.g. by self-isolating or avoiding vulnerable people). Perhaps we can foster social trust and empathy to encourage responsible individual action. Perhaps people will only protect others if obliged to do so (compare Stone; Ostrom; game theory).
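To illustrate the ‘flatten the curve’ logic in point 2, here is a minimal SIR-style simulation sketch (in Python). The parameter values (an infectious period of about a week, and two assumed transmission rates giving R0 of roughly 2.8 and 1.4) are my own illustrative assumptions, not figures from SAGE or Imperial papers; the only point is that lowering transmission reduces and delays the peak.

```python
# Minimal, illustrative SIR sketch: all parameter values are assumptions.
# It shows why reducing transmission lowers and delays the epidemic peak.

def sir_peak(beta, gamma=1/7, days=365, dt=1.0, i0=1e-4):
    """Run a simple SIR model (shares of population) and return the peak."""
    s, i = 1.0 - i0, i0
    peak_i, peak_day = i, 0.0
    for step in range(int(days / dt)):
        new_inf = beta * s * i * dt      # new infections this step
        new_rec = gamma * i * dt         # recoveries this step
        s, i = s - new_inf, i + new_inf - new_rec
        if i > peak_i:
            peak_i, peak_day = i, step * dt
    return peak_i, peak_day

# Transmission rates chosen so R0 = beta/gamma is roughly 2.8 vs 1.4
for label, beta in [("unmitigated", 0.4), ("with distancing", 0.2)]:
    peak, day = sir_peak(beta)
    print(f"{label}: peak of {peak:.1%} of the population infected, around day {day:.0f}")
```

Run as written, the ‘with distancing’ scenario produces a much lower peak that arrives months later: that is the essence of flattening (and delaying) the curve, rather than preventing infections altogether.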

See also: From across the Ditch: How Australia has to decide on the least worst option for COVID-19 (Prof Tony Blakely on three bad options: (1) the likelihood of ‘elimination’ of the virus before vaccination is low; (2) an 18-month lockdown will help ‘flatten the curve’; (3) ‘to prepare meticulously for allowing the pandemic to wash through society over a period of six or so months. To tool up the production of masks and medical supplies. To learn as quickly as possible which treatments of people sick with COVID-19 saves lives. To work out our strategies for protection of the elderly and those with a chronic condition (for whom the mortality from COVID-19 is much higher)’).

https://twitter.com/luciadambruoso/status/1246361265909444608

https://twitter.com/anandMenon1/status/1246712962519310337

From uncertainty to ambiguity

If you are still with me, I reckon you would have worded those aims slightly differently, right? There is some ambiguity about these broad intentions, partly because there is some uncertainty, and partly because policymakers need to set rather vague intentions to generate the highest possible support for them. However, vagueness is not our friend during a crisis involving such high anxiety. Further, vague intentions only delay the inevitable choices that people need to make to turn a complex multi-faceted problem into something simple enough to describe and manage. The problem may be complex, but our attention focuses only on a small number of aspects, at the expense of the rest. Examples that have arisen so far include whether to accentuate:

  1. The health of the whole population or people who would be affected disproportionately by the illness.
  • For example, the difference in emphasis affects the health advice for the relatively vulnerable (and the balance between exhortation and reassurance)

https://twitter.com/colinrtalbot/status/1238227267471527937?s=09

https://twitter.com/hacscot/status/1240588827829436416?s=09

https://twitter.com/lisatrigg/status/1249670660802187266

 

  2. Inequalities in relation to health, socio-economic status (e.g. income, gender, race, ethnicity), or the wider economy.
  • For example, restrictive measures may reduce the risk of harm to some, but increase the burden on people with no savings or reliable sources of income.
  • For example, some people are hoarding large quantities of home and medical supplies that (a) other people cannot afford, and (b) some people cannot access, despite having higher need.
  • For example, social distancing will limit the spread of the virus (see the nascent evidence), but also produce highly unequal forms of social isolation that increase the risk of domestic abuse (possibly exacerbated by school closures) and undermine wellbeing. Or, there will be major policy changes, such as to the rules to detain people under mental health legislation, regarding abortion, or in relation to asylum (note: some of these tweets are from the US, partly because I’m seeing more attention to race – and the consequence of systematic racism on the socioeconomic inequalities so important to COVID-19 mortality – than in the UK).

See also: COVID-19: how the UK’s economic model contributes towards a mismanagement of the crisis (Carolina Alves and Farwa Sial 30.3.20),

Economic downturn and wider NHS disruption likely to hit health hard – especially health of most vulnerable (Institute for Fiscal Studies 9.4.20),

Don’t be fooled: Britain’s coronavirus bailout will make the rich richer still (Christine Berry 13.4.20)

https://twitter.com/closethepaygap/status/1244579870392422400

https://twitter.com/heyDejan/status/1238944695260233728?s=09

https://twitter.com/TimothyNoah1/status/1240375741809938433

https://twitter.com/politicshome/status/1249236632009691136?s=09

 

https://twitter.com/NPR/status/1246837779474120705?s=09

https://twitter.com/povertyscholar/status/1246487621230092294

https://twitter.com/Yamiche/status/1248028548998344708

https://twitter.com/MalindaSmith/status/1247281226274107392

https://twitter.com/Jas_Athwal/status/1248875273568878592?s=09

https://twitter.com/GKBhambra/status/1248874500764073989


https://twitter.com/sunny_hundal/status/1247454112762990592

https://twitter.com/olivernmoody/status/1248260326140805125

https://twitter.com/boodleoops/status/1246717497308577792

https://twitter.com/MarioLuisSmall/status/1239879542094925825

https://twitter.com/kevinstoneUWE/status/1240000285046640645?s=09

https://twitter.com/colinimckay/status/1240721797731045378?s=09

https://twitter.com/heytherehurley/status/1242113416103432195

https://twitter.com/stellacreasy/status/1244022413865648128

https://twitter.com/NIOgov/status/1246482663738871811

https://twitter.com/refugeecouncil/status/1243842703680471040

https://twitter.com/libertyhq/status/1248173788598013953

https://twitter.com/TheLancet/status/1246039259880054784

https://twitter.com/profhrs/status/1247572112061222914

https://twitter.com/HumzaYousaf/status/1248262165657722885

  • For example, governments cannot ignore the impact of their actions on the economy, however much they emphasise mortality, health, and wellbeing. Most high-profile emphasis was initially on the fate of large and small businesses, and people with mortgages, but a long period of crisis will tip the balance from low income to unsustainable poverty (even prompting Iain Duncan Smith to propose policy change). And why favour people who can afford a mortgage over people scraping together the money for rent?
  3. A need for more communication and exhortation, or for direct action to change behaviour.
  4. The short term (do everything possible now) or long term (manage behaviour over many months).
  5. How to maintain trust in the UK government when (a) people are more or less inclined to trust the current party of government, and general trust may be quite low, and (b) so many other governments are acting differently from the UK.

https://twitter.com/DrSophieHarman/status/1238893265782530059

https://twitter.com/Sander_vdLinden/status/1242168652180475906?s=09

https://twitter.com/policyatkings/status/1248318259029516289

  • For example, note the visible presence of the Prime Minister, but also his unusually high deference to unelected experts such as (a) UK Government senior scientists providing direct advice to ministers and the public, and (b) scientists drawing on limited information to model behaviour and produce realistic scenarios (we can return to the idea of ‘evidence-based policymaking’ later). This approach is not uncommon with epidemics/ pandemics (LD was then the UK Government’s Chief Medical Officer):

https://twitter.com/AndyBurnhamGM/status/1239153510903619584

  • For example, note how often people are second guessing and criticising the UK Government position (and questioning the motives of Conservative ministers).

See also: Coronavirus: meet the scientists who are now household names

  6. How policy in relation to the coronavirus relates to other priorities (e.g. Brexit, Scottish independence, trade, education, culture).

  7. Who caused, or who is exacerbating, the problem? The answers to such questions help determine which populations are most subject to policy intervention.

  • For example, people often try to lay blame for viruses on certain populations, based on their nationality, race, ethnicity, sexuality, or behaviour (e.g. with HIV).
  • For example, the (a) association between the coronavirus and China and Chinese people (e.g. restrict travel to/ from China; e.g. exacerbate racism) initially overshadowed (b) the general role of international travellers (e.g. place more general restrictions on behaviour), and (c) other ways to describe who might be responsible for exacerbating a crisis.

See also: ‘Othering the Virus‘ by Marius Meinhof

Under ‘normal’ policymaking circumstances, we would expect policymakers to resolve this ambiguity by exercising power to set the agenda and make choices that close off debate. Attention rises at first, a choice is made, and attention tends to move on to something else. With the coronavirus, attention to many different aspects of the problem has been lurching remarkably quickly. The definition of the policy problem often seems to be changing daily or hourly, and more quickly than the physical problem. It will also change many more times, particularly when attention to each personal story of illness or death prompts people to question government policy every hour. If the policy problem keeps changing in these ways, how could a government solve it?

Step 2 Identify technically and politically feasible solutions

Common advice in policy analysis texts:

  • Identify the relevant and feasible policy solutions that your audience/ client might consider.
  • Explain potential solutions in sufficient detail to predict the costs and benefits of each ‘alternative’ (including current policy).
  • Provide ‘plausible’ predictions about the future effects of current/ alternative policies.
  • Identify many possible solutions, then select the ‘most promising’ for further analysis.
  • Identify how governments have addressed comparable problems, and a previous policy’s impact.

Policy ‘solutions’ are better described as ‘tools’ or ‘instruments’, largely because (a) it is rare to expect them to solve a problem, and (b) governments use many instruments (in different ways, at different times) to make policy, including:

  1. Public expenditure (e.g. to boost spending for emergency care, crisis services, medical equipment)
  2. Economic incentives and disincentives (e.g. to reduce the cost of business or borrowing, or tax unhealthy products)
  3. Linking spending to entitlement or behaviour (e.g. social security benefits conditional on working or seeking work, perhaps with the rules modified during crises)
  4. Formal regulations versus voluntary agreements (e.g. making organisations close, or encouraging them to close)
  5. Public services: universal or targeted, free or with charges, delivered directly or via non-governmental organisations
  6. Legal sanctions (e.g. criminalising reckless behaviour)
  7. Public education or advertising (e.g. as paid adverts or via media and social media)
  8. Funding scientific research, and organisations to advise on policy
  9. Establishing or reforming policymaking units or departments
  10. Behavioural instruments, to ‘nudge’ behaviour (seemingly a big feature in the UK, such as on how to encourage handwashing).

As a result, what we call ‘policy’ is really a complex mix of instruments adopted by one or more governments. A truism in policy studies is that it is difficult to define or identify exactly what policy is because (a) each new instrument adds to a pile of existing measures (with often-unpredictable consequences), and (b) many instruments designed for individual sectors tend, in practice, to intersect in ways that we cannot always anticipate. When you think through any government response to the coronavirus, note how every measure is connected to many others.

Further, it is a truism in public policy that there is a gap between technical and political feasibility: the things that we think will be most likely to work as intended if implemented are often the things that would receive the least support or most opposition. For example:

  1. Redistributing income and wealth to reduce socio-economic inequalities (e.g. to allay fears about the impact of current events on low-income and poverty) seems to be less politically feasible than distributing public services to deal with the consequences of health inequalities.
  2. Providing information and exhortation seems more politically feasible than the direct regulation of behaviour. Indeed, compared to many other countries, the UK Government seems reluctant to introduce ‘quarantine’ style measures to restrict behaviour.

Under ‘normal’ circumstances, governments may be using these distinctions as simple heuristics to help them make modest policy changes while remaining sufficiently popular (or at least looking competent). If so, they are adding or modifying policy instruments during individual ‘windows of opportunity’ for specific action, or perhaps contributing to the sense of incremental change towards an ambitious goal.

Right now, we may be pushing the boundaries of what seems possible, since crises – and the need to address public anxiety – tend to change what seems politically feasible. However, many options that seem politically feasible may not be possible (e.g. to buy a lot of extra medical/ technology capacity quickly), or may not work as intended (e.g. to restrict the movement of people). Think of technical and political feasibility as each necessary but insufficient on its own, a requirement that rules out a lot of responses.

https://twitter.com/CairneyPaul/status/1244970044351791104

https://twitter.com/ChrisCEOHopson/status/1249617980859744256?s=09

Step 3 Use value-based criteria and political goals to compare solutions

Common advice in policy analysis texts:

  • Typical value judgements relate to efficiency, equity and fairness, the trade-off between individual freedom and collective action, and the extent to which a policy process involves citizens in deliberation.
  • Normative assessments are based on values such as ‘equality, efficiency, security, democracy, enlightenment’ and beliefs about the preferable balance between state, communal, and market/ individual solutions
  • ‘Specify the objectives to be attained in addressing the problem and the criteria to evaluate the attainment of these objectives as well as the satisfaction of other key considerations (e.g., equity, cost, feasibility)’.
  • ‘Effectiveness, efficiency, fairness, and administrative efficiency’ are common.
  • Identify (a) the values to prioritise, such as ‘efficiency’, ‘equity’, and ‘human dignity’, and (b) ‘instrumental goals’, such as ‘sustainable public finance or political feasibility’, to generate support for solutions.
  • Instrumental questions may include: Will this intervention produce the intended outcomes? Is it easy to get agreement and maintain support? Will it make me popular, or diminish trust in me even further?

Step 3 is the simplest-looking but most difficult task. Remember that it is a political, not technical, process. It is also a political process that most people would like to avoid doing (at least publicly) because it involves making explicit the ways in which we prioritise some people over others. Public policy is the choice to help some people and punish or refuse to help others (and includes the choice to do nothing).

Policy analysis texts describe a relatively simple procedure of identifying criteria and producing a table (with a solution in each row, and criteria in each column) to compare the trade-offs between each solution. However, these criteria are notoriously difficult to define, and people resolve that problem by exercising power to decide what each term means, and whose interests should be served when they resolve trade-offs. For example, see Stone on whose needs come first, who benefits from each definition of fairness, and how technical-looking processes such as ‘cost benefit analysis’ mask political choices.
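To make that procedure concrete, here is a minimal sketch of the kind of criteria-by-option table those texts describe. The options, criteria, weights, and scores below are entirely hypothetical, invented only to show the mechanics (they are not drawn from SAGE, the Imperial paper, or any real analysis):

```python
# Hypothetical weighted-criteria comparison of the kind policy analysis texts describe:
# one row per option, one column per criterion. Every number is invented for illustration.
criteria = {"effectiveness": 0.4, "equity": 0.3, "cost": 0.2, "feasibility": 0.1}  # weights sum to 1

options = {
    "exhortation only":     {"effectiveness": 3, "equity": 5, "cost": 9, "feasibility": 9},
    "targeted shielding":   {"effectiveness": 6, "equity": 4, "cost": 6, "feasibility": 7},
    "general restrictions": {"effectiveness": 8, "equity": 3, "cost": 2, "feasibility": 4},
}

def weighted_score(scores, weights):
    """Sum each criterion score (0-10) multiplied by its weight."""
    return sum(scores[c] * w for c, w in weights.items())

for name, scores in options.items():
    print(f"{name:22s} {weighted_score(scores, criteria):.1f}")

# With these weights, 'targeted shielding' edges ahead; re-weight effectiveness to 0.7
# (and the other criteria to 0.1 each) and 'general restrictions' comes top instead.
```

The point of the sketch is not the arithmetic but the politics it hides: whoever chooses the criteria, the weights, and the scores has already decided whose needs come first, before the ‘technical’ comparison begins.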

Right now, the most obvious and visible trade-off, accentuated in the UK, is between individual freedom and collective action, or the balance between state, communal, and market/ individual solutions. In comparison with many countries (and China and Italy in particular), the UK Government seems to be favouring individual action over state quarantine measures. However, most trade-offs are difficult to categorise:

  1. What should be the balance between efforts to minimise the deaths of some (generally in older populations) and maximise the wellbeing of others? This is partly about human dignity during crisis, how we treat different people fairly, and the balance of freedom and coercion.
  2. How much should a government spend to keep people alive using intensive care or expensive medicines, when the money could be spent improving the lives of far more people? This is partly about human dignity, the relative efficiency of policy measures, and fairness.

If you are like me, you don’t really want to answer such questions (indeed, even writing them looks callous). If so, one way to resolve them is to elect policymakers to make such choices on our behalf (perhaps aided by experts in moral philosophy, or with access to deliberative forums). To endure, this unusually high level of deference to elected ministers requires some kind of reciprocal act:

https://twitter.com/devisridhar/status/1240648925998178304

See also: We must all do everything in our power to protect lives (UK Secretary of State for Health and Social Care)

Still, I doubt that governments are making reportable daily choices with reference to a clear and explicit view of what the trade-offs and priorities should be, because their choices are about who will die, and their ability to predict outcomes is limited.

See also: Media experts despair at Boris Johnson’s coronavirus campaign (Sonia Sodha)

Step 4 Predict the outcome of each feasible solution

Common advice in policy analysis texts:

  • Focus on the outcomes that key actors care about (such as value for money), and quantify and visualise your predictions if possible. Compare the pros and cons of each solution, such as how much of a bad service policymakers will accept to cut costs.
  • ‘Assess the outcomes of the policy options in light of the criteria and weigh trade-offs between the advantages and disadvantages of the options’.
  • Estimate the cost of a new policy, in comparison with current policy, and in relation to factors such as savings to society or benefits to certain populations. Use your criteria and projections to compare each alternative in relation to their likely costs and benefits.
  • Explain potential solutions in sufficient detail to predict the costs and benefits of each ‘alternative’ (including current policy).
  • Short deadlines dictate that you use ‘logic and theory, rather than systematic empirical evidence’ to make predictions efficiently.
  • Monitoring is crucial because it is difficult to predict policy success, and unintended consequences are inevitable. Try to measure the outcomes of your solution, while noting that evaluations are contested.

It is difficult to envisage a way for the UK Government to publicise the thinking behind its choices (Step 3) and predictions (Step 4) in a way that would encourage effective public deliberation, rather than a highly technical debate between a small number of academics:

Further, people often call for the UK Government to publicise its expert advice and operational logic, but I am not sure how they would separate it from its normative logic, or provide a frank account without unintended consequences for public trust or anxiety. If so, government policy involves (a) keeping some choices implicit to avoid a lot of debate on trade-offs, and (b) making general statements about choices when it does not know what their impact will be.

Step 5 Make a recommendation to your client

Common advice in policy analysis texts:

  • Examine your case through the eyes of a policymaker. Keep it simple and concise.
  • Make a preliminary recommendation to inform an iterative process, drawing feedback from clients and stakeholder groups
  • Client-oriented advisors identify the beliefs of policymakers and tailor accordingly.
  • ‘Unless your client asks you not to do so, you should explicitly recommend one policy’

I now invite you to make a recommendation (step 5) based on our discussion so far (steps 1-4). Define the problem with one framing at the expense of the others. Romanticise some people and not others. Decide how to support some people, and coerce or punish others. Prioritise the lives of some people in the knowledge that others will suffer or die. Do it despite your lack of expertise and profoundly limited knowledge and information. Learn from experts, but don’t assume that only scientific experts have relevant knowledge (decolonise; coproduce). Recommend choices that, if damaging, could take decades to fix after you’ve gone. Consider if a policymaker is willing and able to act on your advice, and if your proposed action will work as intended. Consider if a government is willing and able to bear the economic and political costs. Protect your client’s popularity, and trust in your client, at the same time as protecting lives. Consider if your advice would change if the problem seemed to change. If you are writing your analysis, maybe keep it down to one sheet of paper (and certainly far fewer words than in this post). Better you than me.

Please now watch this video before I suggest that things are not so simple.

Would that policy analysis were so simple

Imagine writing policy analysis in an imaginary world, in which there is a single powerful ‘rational’ policymaker at the heart of government, making policy via an orderly series of stages.

[Figure: the policy cycle and a policy cycle ‘spirograph’]

Your audience would be easy to identify at each stage, your analysis would be relatively simple, and you would not need to worry about what happens after you make a recommendation for policy change (since the selection of a solution would lead to implementation).  You could adopt a simple 5 step policy analysis method, use widely-used tools such as cost-benefit analysis to compare solutions, and know where the results would feed into the policy process.

Studies of policy analysts describe how unrealistic this expectation tends to be (Radin, Brans, Thissen).

[Table: policy analysis for the coronavirus]

For example, there are many policymakers, analysts, influencers, and experts spread across political systems, and engaging with 101 policy problems simultaneously, which suggests that it is not even clear how everyone fits together and interacts in what we call (for the sake of simplicity) ‘the policy process’.

Instead, we can describe real world policymaking with reference to two factors.

The wider policymaking environment: 1. Limiting the use of evidence

First, policymakers face ‘bounded rationality’, in which they only have the ability to pay attention to a tiny proportion of available facts, are unable to separate those facts from their values (since we use our beliefs to evaluate the meaning of facts), struggle to make clear and consistent choices, and do not know what impact they will have. The consequences can include:

  • Limited attention, and lurches of attention. Policymakers can only pay attention to a tiny proportion of their responsibilities, and policymaking organizations struggle to process all policy-relevant information. They prioritize some issues and information and ignore the rest.
  • Power and ideas. Some ways of understanding and describing the world dominate policy debate, helping some actors and marginalizing others.
  • Beliefs and coalitions. Policymakers see the world through the lens of their beliefs. They engage in politics to turn their beliefs into policy, form coalitions with people who share them, and compete with coalitions who don’t.
  • Dealing with complexity. They engage in ‘trial-and-error strategies’ to deal with uncertain and dynamic environments (see the new section on trial-and-error at the end).
  • Framing and narratives. Policy audiences are vulnerable to manipulation when they rely on other actors to help them understand the world. People tell simple stories to persuade their audience to see a policy problem and its solution in a particular way.
  • The social construction of populations. Policymakers draw on quick emotional judgements, and social stereotypes, to propose benefits to some target populations and punishments for others.
  • Rules and norms. Institutions are the formal rules and informal understandings that represent a way to narrow information searches efficiently to make choices quickly.
  • Learning. Policy learning is a political process in which actors engage selectively with information, not a rational search for truth.

Evidence-based or expert-informed policymaking

Put simply, policymakers cannot oversee a simple process of ‘evidence-based policymaking’. Rather, to all intents and purposes:

  1. They need to find ways to ignore most evidence so that they can focus disproportionately on some. Otherwise, they will be unable to focus well enough to make choices. The cognitive and organisational shortcuts, described above, help them do it almost instantly.
  2. They also use their experience to help them decide – often very quickly – what evidence is policy-relevant under the circumstances. Relevance can include:
  • How it relates to the policy problem as they define it (Step 1).
  • If it relates to a feasible solution (Step 2).
  • If it is timely, available, understandable, and actionable.
  • If it seems credible, such as from groups representing wider populations, or from people they trust.
  3. They use a specific shortcut: relying on expertise.

However, the vague idea of trusting or not trusting experts is a nonsense, largely because it is virtually impossible to set a clear boundary between relevant/irrelevant experts and find a huge consensus on (exactly) what is happening and what to do. Instead, in political systems, we define the policy problem or find other ways to identify the most relevant expertise and exclude other sources of knowledge.

In the UK Government’s case, it appears to be relying primarily on expertise from its own general scientific advisers, medical and public health advisers, and – perhaps more controversially – advisers on behavioural public policy.


Right now, it is difficult to tell exactly how and why it relies on each expert (at least when the expert is not in a clearly defined role, in which case it would be irresponsible not to consider their advice). Further, there are regular calls on Twitter for ministers to be more open about their decisions.

See also: Coronavirus: do governments ever truly listen to ‘the science’?

However, don’t underestimate the problems of identifying why we make choices, then justifying one expert or another (while avoiding pointless arguments), or prioritising one form of advice over another. Look, for example, at the kind of short-cuts that intelligent people use, which seem sensible enough, but would receive much more intense scrutiny if presented in this way by governments:

  • Sophisticated speculation by experts in a particular field, shared widely (look at the RTs), but questioned by other experts in another field:
  • Experts in one field trusting certain experts in another field based on personal or professional interaction:
  • Experts in one field not trusting a government’s approach based on its use of one (of many) sources of advice:
  • Experts representing a community of experts, criticising another expert (Prof John Ashton), for misrepresenting the amount of expert scepticism of government experts (yes, I am trying to confuse you):
  • Expert debate on how well policymakers are making policy based on expert advice
  • Finding quite-sensible ways to trust certain experts over others, such as because they can be held to account in some way (and may be relatively worried about saying any old shit on the internet):

There are many more examples in which the shortcut to expertise is fine, but not particularly better than another shortcut (and likely to include a disproportionately high number of white men with STEM backgrounds).

Update: of course, they are better than the ‘volume trumps expertise’ approach:

See also:

Further, in each case, we may be receiving this expert advice via many other people, and by the time it gets to us the meaning is lost or reversed (or there is some really sophisticated expert analysis of something rumoured – not demonstrated – to be true):

For what it’s worth, I tend to favour experts who:

(a) establish the boundaries of their knowledge, (b) admit to high uncertainty about the overall problem:

(c) (in this case) make it clear that they are working on scenarios, not simple prediction

(d) examine critically the too-simple ideas that float around, such as the idea that the UK Government should emulate ‘what works’ somewhere else

(e) situate their own position (in Prof Sridhar’s case, for mass testing) within a broader debate

See also:

See also: Prof Sir John Bell (4.3.20) on why an accurate antibody test is at least one month away and these exchanges on the problems with test ‘accuracy’:

(f) use their expertise on governance to highlight problems with thoughtless criticism

However, note that most of these experts are from a very narrow social background, and from very narrow scientific fields (first in modelling, then likely in testing), despite the policy problem being largely about (a) who, and how many people, a government should try to save, and (b) how far a government should go to change behaviour to do it (Update 2.4.20: I wrote that paragraph before adding so many people to the list). It is understandable to defer in this way during a crisis, but it also contributes to a form of ‘depoliticisation’ that masks profound choices that benefit some people and leave others vulnerable to harm.

See also: COVID-19: a living systematic map of the evidence

See also: To what extent does evidence support decision making during infectious disease outbreaks? A scoping literature review

See also: Covid-19: why is the UK government ignoring WHO’s advice? (British Medical Journal editorial)

See also: Coronavirus: just 2,000 NHS frontline workers tested so far

See also: ‘What’s important is social distancing’ coronavirus testing ‘is a side issue’, says Deputy Chief Medical Officer [Professor Jonathan Van-Tam talks about the important distinction between a currently available test to see if someone has contracted the virus (an antigen test) and a forthcoming test to see if someone has had and recovered from COVID-19 (an antibody test)]. The full interview is here (please feel free to ignore the editorialising of the uploader):

See also: Why is Germany able to test for coronavirus so much more than the UK? (mostly a focus on Germany’s innovation, and partly on the UK (Public Health England) focus on making sure its test is reliable, in the context of ‘coronavirus tests produced at great speed which have later proven to be inaccurate’, such as one with a below-30% accuracy rate, which is worse than not testing at all). Compare with The Coronavirus Hit Germany And The UK Just Days Apart But The Countries Have Responded Differently. Here’s How, and the opinion piece ‘A public inquiry into the UK’s coronavirus response would find a litany of failures’.

See also: Rights and responsibilities in the Coronavirus pandemic

See also: UK police warned against ‘overreach’ in use of virus lockdown powers (although note that there is no UK police force and that Scotland has its own legal system) and Coronavirus: extra police powers risk undermining public trust (Alex Oaten and Chris Allen)

See also (Calderwood resigned as CMO that night):

See also: Social Licensing of Privacy-Encroaching Policies to Address the COVID-19 Pandemic (U.K.) (research on public opinion)

The wider policymaking environment: 2. Limited control

Second, policymakers engage in a messy and unpredictable world in which no single ‘centre’ has the power to turn a policy recommendation into an outcome. I normally use the following figure to think through the nature of a complex and unwieldy policymaking environment of which no ‘centre’ of government has full knowledge or control.

[Figure: the policymaking environment]

It helps us identify (further) the ways in which we can reject the idea that the UK Prime Minister and colleagues can fully understand and solve policy problems:

Actors. The environment contains many policymakers and influencers spread across many levels and types of government (‘venues’).

For example, consider how many key decisions (a) have been made by organisations outside UK central government, and (b) are more or less consistent with its advice, including:

  • Devolved governments announcing their own healthcare and public health responses (although the level of UK coordination seems more significant than the level of autonomy).
  • Public sector employers initiating or encouraging at-home working (and many Universities moving quickly from in-person to online teaching)
  • Private organisations cancelling cultural and sporting events.

Context and events. Policy solutions relate to socioeconomic context and events which can be impossible to ignore and out of the control of policymakers. The coronavirus, and its impact on so many aspects of population health and wellbeing, is an extreme example of this problem.

Networks, Institutions, and Ideas. Policymakers and influencers operate in subsystems (specialist parts of political systems). They form networks or coalitions built on the exchange of resources or facilitated by trust underpinned by shared beliefs or previous cooperation. Many different parts of government have practices driven by their own formal and informal rules. Formal rules are often written down or known widely. Informal rules are the unwritten rules, norms and practices that are difficult to understand, and may not even be understood in the same way by participants. Political actors relate their analysis to shared understandings of the world – how it is, and how it should be – which are often so established as to be taken for granted. These dominant frames of reference establish the boundaries of the political feasibility of policy solutions.  These kinds of insights suggest that most policy decisions are considered, made, and delivered in the name of – but not in the full knowledge of – government ministers.

Trial and error policymaking in complex policymaking systems (17.3.20)

There are many ways to conceptualise this policymaking environment, but few theories provide specific advice on what to do, or how to engage effectively in it. One notable exception is the general advice that comes from complexity theory, including:

  • Law-like behaviour is difficult to identify – so a policy that was successful in one context may not have the same effect in another.
  • Policymaking systems are difficult to control; policy makers should not be surprised when their policy interventions do not have the desired effect.
  • Policy makers in the UK have been too driven by the idea of order, maintaining rigid hierarchies and producing top-down, centrally driven policy strategies.  An attachment to performance indicators, to monitor and control local actors, may simply result in policy failure and demoralised policymakers.
  • Policymaking systems or their environments change quickly. Therefore, organisations must adapt quickly and not rely on a single policy strategy.

On this basis, there is a tendency in the literature to encourage the delegation of decision-making to local actors:

  1. Rely less on central government driven targets, in favour of giving local organisations more freedom to learn from their experience and adapt to their rapidly-changing environment.
  2. To deal with uncertainty and change, encourage trial-and-error projects, or pilots, that can provide lessons, or be adopted or rejected, relatively quickly.
  3. Encourage better ways to deal with alleged failure by treating ‘errors’ as sources of learning (rather than a means to punish organisations) or setting more realistic parameters for success/ failure (although see this example and this comment).
  4. Encourage a greater understanding, within the public sector, of the implications of complex systems and terms such as ‘emergence’ or ‘feedback loops’.

In other words, this literature, when applied to policymaking, tends to encourage a movement from centrally driven targets and performance indicators towards a more flexible understanding of rules and targets by local actors who are more able to understand and adapt to rapidly-changing local circumstances.

[See also: Complex systems and systems thinking]

Now, just imagine the UK Government taking that advice right now. I think it is fair to say that it would be condemned continuously (even more so than right now). Maybe that is because it is the wrong way to make policy in times of crisis. Maybe it is because too few people are willing and able to accept that the role of a small group of people at the centre of government is necessarily limited, and that effective policymaking requires trial-and-error rather than a single, fixed, grand strategy to be communicated to the public. Seen through the former lens, policy changes with new information and perspective; seen through the latter, the same changes look like errors of judgement, incompetence, and U-turns. In either case, the advice is changing as estimates of the coronavirus’ impact change:

I think this tension, in the way that we understand UK government, helps explain some of the criticism that it faces when changing its advice to reflect changes in its data or advice. This criticism becomes intense when people also question the competence or motives of ministers (and even people reporting the news) more generally, leading to criticism that ranges from mild to outrageous:

For me, this casual reference to a government policy to ‘cull the herd of the weak’ is outrageous, but you can find much worse on Twitter. It reflects wider debate on whether ‘herd immunity’ is or is not government policy. Much of it relates to interpretation of government statements, based on levels of trust/distrust in the UK Government, its Prime Minister and Secretaries of State, and the Prime Minister’s special adviser.

However, I think that some of it is also about:

1. Wilful misinterpretation (particularly on Twitter). For example, in the early development and communication of policy, Boris Johnson was accused (in an irresponsibly misleading way) of advocating for herd immunity rather than restrictive measures.

See: Here is the transcript of what Boris Johnson said on This Morning about the new coronavirus (Full Fact)


Below is one of the most misleading videos of its type. Look at how it cuts each segment into a narrative not provided by ministers or their advisors (see also this stinker):

See also:

2. The accentuation of a message not being emphasised by government spokespeople.

See for example this interview, described by Sky News (13.3.20) as: The government’s chief scientific adviser Sir Patrick Vallance has told Sky News that about 60% of people will need to become infected with coronavirus in order for the UK to enjoy “herd immunity”. You might be forgiven for thinking that he was on Sky extolling the virtues of a strategy to that end (and expressing sincere concerns on that basis). This was certainly the write-up in respected papers like the FT (UK’s chief scientific adviser defends ‘herd immunity’ strategy for coronavirus). Yet, he was saying nothing of the sort. Rather, when prompted, he discussed herd immunity in relation to the belief that COVID-19 will endure long enough to become as common as seasonal flu.
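For context, and purely as my own back-of-the-envelope note rather than anything Vallance specified: the oft-quoted 60% figure is roughly what the standard herd immunity threshold formula gives, assuming a basic reproduction number of about 2.5 (a commonly cited early estimate for SARS-CoV-2):

$$ \text{herd immunity threshold} \approx 1 - \frac{1}{R_0} = 1 - \frac{1}{2.5} = 0.6 $$

In other words, if each case infects around 2.5 others in a fully susceptible population, transmission only falls away once roughly 60% of people are immune.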

The same goes for Vallance’s interview on the same day (13.3.20) during Radio 4’s Today programme (transcribed by the Spectator, which calls Vallance the author, and gives it the headline ‘How “herd immunity” can help fight coronavirus’ as if it is his main message). The Today Programme also tweeted only 30 seconds to single out that brief exchange:

Yet, clearly his overall message – in this and other interviews – was that some interventions (e.g. staying at home; self-isolating with symptoms) would have bigger effects than others (e.g. school closures; prohibiting mass gatherings) during the ‘flattening of the peak’ strategy (‘What we don’t want is everybody to end up getting it in a short period of time so that we swamp and overwhelm NHS services’). Rather than describing ‘herd immunity’ as a strategy, he is really describing how to deal with its inevitability (‘Well, I think that we will end up with a number of people getting it’).

See also: British government wants UK to acquire coronavirus ‘herd immunity’, writes Robert Peston (12.3.20) and live debates (and reports grasping at straws) on whether or not ‘herd immunity’ was the goal of the UK government:

See also: Why weren’t we ready? (Harry Lambert) which is a good exemplar of the ‘U turn’ argument, and compare with the evidence to the Health and Social Care Committee (CMO Whitty, DCMO Harries) that it describes.

A more careful forensic analysis (such as this one) will try to relate each government choice to the ways in which key advisory bodies (such as the New and Emerging Respiratory Virus Threats Advisory Group, NERVTAG) received and described evidence on the current nature of the problem:

See also: Special Report: Johnson listened to his scientists about coronavirus – but they were slow to sound the alarm (Reuters)

Some aspects may also be clearer when there is systematic qualitative interview data on which to draw. Right now, there are bits and pieces of interviews sandwiched between whopping great editorial discussions (e.g. FT Alphaville Imperial’s Neil Ferguson: “We don’t have a clear exit strategy”; compare with the more useful Let’s flatten the coronavirus confusion curve) or confused accounts by people speaking to someone who has spoken to someone else (e.g. Buzzfeed Even The US Is Doing More Coronavirus Tests Than The UK. Here Are The Reasons Why).

See also: other rabbit holes are available

[OK, that proved to be a big departure from the trial-and-error discussion. Here we are, back again]

In some cases, maybe people are making the argument that trial-and-error is the best way to respond quickly, and adapt quickly, in a crisis, but that the UK Government version is not what, say, the WHO thinks of as a good kind of adaptive response. It is not possible to tell, at least from the general ways in which they justify acting quickly.

See also the BBC’s provocative question (which I expect to be replaced soon):

Compare with:

The take home messages

  1. The coronavirus is an extreme example of a general situation: policymakers will always have very limited knowledge of policy problems and control over their policymaking environment. They make choices to frame problems narrowly enough to seem solvable, rule out most solutions as not feasible, make value judgements to try to help some more than others, try to predict the results, and respond when the results do not match their hopes or expectations.
  2. This is not a message of doom and despair. Rather, it encourages us to think about how to influence government, and hold policymakers to account, in a thoughtful and systematic way that does not mislead the public or exacerbate the problem we are seeing.

Further reading, until I can think of a better conclusion:

This series of ‘750 words’ posts summarises key texts in policy analysis and tries to situate policy analysis in a wider political and policymaking context. Note the focus on whose knowledge counts, which is not yet a big feature of this crisis.

These series of 500 words and 1000 words posts (with podcasts) summarise concepts and theories in policy studies.

This page on evidence-based policymaking (EBPM) uses those insights to demonstrate why EBPM is  a political slogan rather than a realistic expectation.

These recorded talks relate those insights to common questions asked by researchers: why do policymakers seem to ignore my evidence, and what can I do about it? I’m happy to record more (such as on the topic you just read about) but not entirely sure who would want to hear what.

See also: Advisers, Governments and why blunders happen? (Colin Talbot)

See also: Why we might disagree about … Covid-19 (Ruth Dixon and Christopher Hood)

See also: Pandemic Science and Politics (Daniel Sarewitz)

See also: We knew this would happen. So why weren’t we ready? (Steve Bloomfield)

See also: Europe’s coronavirus lockdown measures compared (Politico)



Research engagement with government: insights from research on policy analysis and policymaking

Many research funders are interested in supporting researchers as they engage with government to inform policy and practice.

In that context, here is a long read on how (a) policy analysis and (b) policy process research can help funders understand the likely impact of such ‘engagement-with-government’ initiatives.

I draw on four blog post series to identify:

  • The wider policymaking context in which such initiatives take place (see posts in 500 or 1000 words, and on ‘evidence based’ policymaking)
  • The advice given to policy analysts about how to improve their impact with research (in 750 words).

I introduce seven broad themes to help us interpret the effectiveness of specific approaches to engagement (in other words, I am not evaluating specific initiatives directly*).

In each case, I suggest that evaluations of impact initiatives lack meaning without a clear sense of (a) what engagement and impact is for, (b) how far researchers and research organisations are willing to go to support them, and (c) how individual initiatives relate to each other (in a vaguely defined ‘culture’ or ‘ecosystem’ of activity).

Seven key themes

  1. Rather than trying to design a new system, learn from and adapt to the systems and practices that exist
  2. Clarify the role of support directed at individuals, institutions, and systems.
  3. Reduce ambiguity by defining the policy problem and strategy in practice
  4. Tailor research support to ‘multi-centric policymaking’
  5. Clarify the role of researchers when they engage with policymakers
  6. Establish the credibility of research through expertise and/or co-production
  7. Establish clear and realistic expectations for researcher engagement practices

 

1. Rather than trying to design a new system, learn from and adapt to the systems and practices that exist

Studies of ‘evidence based policymaking’ (EBPM), from a researcher perspective, tend to identify (a) the barriers between their evidence and policy, and (b) what a good system of research production and use might look like. These discussions form part of a continuous debate on how research organisations might design better political and research systems to improve the production and use of evidence.

Yet, the power to redesign systems is not in their gift. Instead, they form one part of a policymaking system over which they have incomplete knowledge and minimal control.

This specific problem for researchers is part of a general problem identified in policy studies: do not confuse your requirements of policymaking with its actual dynamics or your ability to influence them.

For example, common 5-step guides to policy analysis appear to correspond to a well-known policy cycle model of policymaking, in which analysts: define the problem, identify potential solutions, choose the criteria to compare and evaluate them, recommend a solution, monitor its effects, and evaluate past policy to inform current policy. However, this model is based on functional requirements, not actual policymaking practices.

Instead, the modern study of policymaking seeks to understand a far messier or complex policymaking environment in which it is not clear (a) who the most relevant policymakers are, (b) how they think about policy problems and the research that may be relevant, and (c) their ability to turn that evidence into policy outcomes.

In that context, the general advice to policy analysts is to be flexible and iterative rather than rationalistic, to learn the ‘art and craft’ of policy analysis through experience, and to consider many possible analytical styles that could emphasise what is good for knowledge, debate, process, or the client.

Action points

  • Focus less on the abstract design of research engagement strategies based on what researchers would like to see.
  • Focus more on tailoring support to researchers in relation to understanding what policymakers do.

In particular, while few academics will see policymakers as their ‘clients’, they would benefit from knowing more about the many different ways in which policymakers and analysts gather and use evidence. Academic-practitioner workshops provide a general forum for such insights, but we need a collection of more specific analyses of the ways in which full-time analysts and civil servants gather evidence to meet policymaker demands.

2. Clarify the role of support directed at individuals, institutions, and systems.

We can think of research support in terms of individuals, institutions, or systems, but the nature of each type of support – and the relationship between them – is not clear.

Studies of academic policy impact and policy analysis share a focus on the successful individuals who are often called policy entrepreneurs. For Mintrom,

  1. they ‘are energetic actors who engage in collaborative efforts in and around government to promote policy innovations’,
  2. their attributes are ‘ambition’, ‘social acuity’, ‘credibility’, ‘sociability’, and ‘tenacity’
  3. their skills are ‘strategic thinking’, ‘team building’, ‘collecting evidence’, ‘making arguments’, ‘engaging multiple audiences’, ‘negotiating’, and ‘networking’
  4. their strategies include ‘problem framing’, ‘using and expanding networks’, ‘working with advocacy coalitions’, ‘leading by example’, and ‘scaling up change processes’.

These descriptions of successful individuals are familiar to readers of the personal accounts of impactful researchers, whose recommendations are summarised by Oliver and Cairney:

(1) Do high quality research; (2) make your research relevant and readable; (3) understand policy processes; (4) be accessible to policymakers: engage routinely, flexibly, and humbly; (5) decide if you want to be an issue advocate or honest broker; (6) build relationships (and ground rules) with policymakers; (7) be ‘entrepreneurial’ or find someone who is; and (8) reflect continuously: should you engage, do you want to, and is it working?

One approach is to learn from, and seek to transfer, their success. However, analyses of entrepreneurship suggest that most fail, and that their success depends more on the nature of (a) their policymaking environments, and (b) the social backgrounds that facilitate their opportunities to engage (reinforcing inequalities in relation to factors such as race and gender, as well as level of seniority, academic discipline, and type of University).

In that context, the idea of institutionalising engagement is attractive. Instead of leaving impact to the individual, set up a system of rules and norms (or a new ‘culture’) to make engagement routine and expected (the range of such initiatives is discussed in more depth in Oliver et al’s report).

However, research on policy analysis and process shows that the ability to set the ‘rules of the game’ is not shared equally, and researchers generally need to adapt to – rather than co-design – the institutions in government with which they engage.

This point informs a tendency to describe research-government interactions in relation to a complex ‘system’ or ‘ecosystem’. This metaphorical language can be useful, to encourage participants not to expect to engage in simple (or even understandable) policy processes, and to encourage ‘systems thinking’. Further, the same systems-language is useful to describe the cross-cutting nature of policy problems, to encourage (a) interdisciplinary research and (b) cross-departmental cooperation in government to solve major policy problems.

However, there are at least 10 different ways to define systems thinking, such as in relation to policy problems and social or policymaking behaviour. Further, many accounts provide contradictory messages:

  • some highlight the potential to find the right ‘leverage’ to make a disproportionate impact from a small reform, while
  • others highlight the inability of governments to understand and control policymaking systems (or for systems to be self-governing).

Action points:

  • Identify how (and why) you would strike a balance between support for individuals, institutions, and systems (and consider the effect of a redistribution of support).
  • Engage in a meaningful discussion of the potential trade-offs between aims, such as to maximise the impact of already successful individuals or support less successful groups.
  • If seeking to encourage the ‘institutionalisation’ of research engagement cultures, clarify the extent to which one organisation can influence overall institutional design.
  • If engaging with a research-policy ‘ecosystem’, clarify what you mean and how researchers can seek to understand and engage in it.

3. Reduce ambiguity by defining the policy problem and strategy in practice

Single strategy documents are useful to identify aims and objectives. However, they do not determine outcomes, partly because (a) they only form one guide to organisational practices, (b) they do not give unambiguous priority to some objectives over others, and (c) many organisational practices and aims are often unwritten. For example, engagement priorities could relate to a mix of:

  • To foster specific government policy aims
  • To foster an image of high UK research expertise and the value of investing in research funders
  • To foster outcomes not identified explicitly by current governments, such as to (a) provide a long-term institutional memory, or (b) pursue a critical and emancipatory role for research, which often leads researchers to oppose government policy.
  • To ensure an equal distribution of opportunities for researchers, in recognition of the effect of impact on careers.

This mix of aims requires organisations to clarify further their objectives when they seek to deliver them in practice, and there will always be potential trade-offs between them. Examples include:

  • The largest opportunity to demonstrate visible and recordable social science research impact seems to be in cooperation with Westminster, but impact on a House of Commons committee is not an effective way to secure direct impact on policy outcomes.
  • Initiatives may appear to succeed on one measure (such as to give researchers the support to develop skills or the space to criticise policy) and fail on another (such as to demonstrate a direct and positive impact on current policy).

Action points:

  • Clarify the objective of each form of engagement support
  • Evaluate current initiatives in relation to those aims (which may not be the aims of the original project)

4. Tailor research support to ‘multi-centric policymaking’

Policy process research describes multi-centric policymaking in which central governments share responsibility with many other policymakers spread across many levels and types of government. To some extent, it results from choice, such as when the UK government shares responsibilities with devolved and local governments. However, it also results from necessity, since policymakers are only able to pay attention to a tiny proportion of their responsibilities, and they engage in a policymaking environment over which they have limited understanding and less control. This environment contains many policymakers and influencers spread across many venues, each with their own institutions, networks, ideas (and ways to frame policy), and responses to socio-economic context and events.

This image of the policy process presents the biggest challenge to identifying where to invest research funding support. On the one hand, it could support researchers to engage with policy actors with a clearly defined formal role, including government ministers (supported by civil servants) and parliamentary committees (supported by clerks). However, the vast majority of policy takes place out of the public spotlight, processed informally at a relatively low level of central government, or by non-departmental public bodies or subnational governments.

Policy analysts or interest groups may deal with this problem by investing their time to identify: which policymaking venues matter, their informal rules and networks, and which ideas (ways of thinking) are in good currency. In many cases, they work behind closed doors, seek compromise, and accept that they will receive minimal public credit for their work. Many such options may seem unattractive to research funders or researchers (when they seek to document impact), since they involve a major investment of time with no expectation of a demonstrable payoff.

Action points:

  • Equip researchers with a working knowledge of policy processes
  • Incorporate policy science insights into impact training (alongside advice from practitioners on the ‘nuts-and-bolts’ of engagement)
  • Identify how to address trade-offs between forms of formal and informal engagement
  • Identify how to address the potential trade-offs between high-visibility-but-low-impact and low-visibility-but-high-impact forms of engagement.

5. Clarify the role of researchers when they engage with policymakers

Policy analysts face profound choices on how to engage ethically, and key questions include:

  1. Is your primary role to serve individual clients or a wider notion of the ‘public good’?
  2. Should you maximise your role as an individual or play your part in a wider profession?
  3. What forms of knowledge and evidence count in policy analysis?
  4. What does it mean to communicate policy analysis responsibly?
  5. Should you provide a clear recommendation or encourage reflection?

 

In the field of policy analysis, we can find a range of responses, from

  • the pragmatic client-oriented analyst focused on a brief designed for them, securing impact in the short term, to the
  • critical researcher willing to question a policymaker’s definition of the problem and choice regarding what knowledge counts and what solutions are feasible, to seek impact over the long term.

Further, these choices necessitate the development of a wide range of skills, and a decision about which of them to invest in. For example, Radin (2019: 48) identifies the value of:

Case study methods, Cost-benefit analysis, Ethical analysis, Evaluation, Futures analysis, Historical analysis, Implementation analysis, Interviewing, Legal analysis, Microeconomics, Negotiation, mediation, Operations research, Organizational analysis, Political feasibility analysis, Public speaking, Small-group facilitation, Specific program knowledge, Statistics, Survey research methods, Systems analysis.

Such a wide range of possible skills may prompt research funders to consider how to prioritise skills training as a whole, and in relation to each discipline or individual.

Action points:

  • Evaluate the effectiveness of current initiatives via the lens of appropriate practices and requisite skills.

6. Establish the credibility of research through expertise and/or co-production

Expectations for ‘evidence based’ or ‘coproduced’ policies are not mutually exclusive, but we should not underestimate the potential for major tensions between their aims and practices, in relation to questions such as:

  • How many people should participate (a small number of experts or large number of stakeholders)?
  • Whose knowledge counts (including research, experiential, practitioner learning)?
  • Who should coordinate the ‘co-production’ of research and policy (such as researchers leading networks to produce knowledge to inform policy, or engaging in government networks)?

For example, when informing public services, researchers will be following different models of evidence-informed governance, from engagement with central governments to roll out a uniform ‘evidence based’ model, to engagement at a local level to encourage storytelling-based learning.

Further, these forms of engagement not only require very different skills but also competing visions of what counts as policy-relevant evidence. Governments tend not to make such tensions explicit, perhaps prompting researchers to navigate them via experience more than training.

Action points:

  • Evaluate initiatives according to clearly-stated expectations for researchers when they engage with stakeholders
  • Clarify if researchers should be responsible for forming or engaging with existing networks to foster co-produced research

7. Establish realistic expectations for researcher engagement practices

Policy analysts and associated organisations appear to expect far more (direct and short-term) impact from their research than academic researchers (more likely to perform a long-term ‘enlightenment’ function). Academic researchers may desire more direct impact in theory, but also be wary of the costs and compromises in practice.

We can clarify these differences by identifying the ways in which common advice to analysts goes beyond common advice to academics (the latter is summarised in 8 categories by Oliver and Cairney above):

  • Gather data efficiently, tailor your solutions to your audience, and tell a good story (Bardach)
  • Address your client’s question, by their chosen deadline, in a clear and concise way that they can understand (and communicate to others) quickly (Weimer and Vining)
  • Client-oriented advisors identify the beliefs of policymakers and anticipate the options worth researching (Mintrom)
  • Identify your client’s resources and motivation, such as how they seek to use your analysis, the format of analysis they favour (make it ‘concise’ and ‘digestible’), their deadline, and their ability to make or influence the policies you might suggest (Meltzer and Schwartz).
  • ‘Advise strategically’, to help a policymaker choose an effective solution within their political context (Thissen and Walker).
  • Focus on producing ‘policy-relevant knowledge’ by adapting to the evidence-demands of policymakers and rejecting a naïve attachment to ‘facts speaking for themselves’ or ‘knowledge for its own sake’ (Dunn).

In that context, it would be reasonable to set high expectations for engagement and impact, but recognise the professional and practical limits to that engagement.

Action points:

  • Identify the relationships between researchers and policymakers that underpin expectations for engagement and impact

 

*I apologise for being a little enigmatic at this stage, and would welcome your thoughts on any aspect of this work-in-development, either in the comment function or via email (p.a.cairney [at] stir.ac.uk).

 

 


Filed under Evidence Based Policymaking (EBPM), public policy

Policy Analysis in 750 Words: policy analysis for marginalized groups in racialized political systems

Note: this post forms one part of the Policy Analysis in 750 words series overview.

For me, this story begins with a tweet by Professor Jamila Michener, about a new essay by Dr Fabienne Doucet, ‘Centering the Margins: (Re)defining Useful Research Evidence Through Critical Perspectives’:

https://twitter.com/povertyscholar/status/1207054211759910912

Research and policy analysis for marginalized groups

Doucet (2019: 1) begins by describing the William T. Grant Foundation’s focus on improving the ‘use of research evidence’ (URE), and the key questions that we should ask when improving URE:

  1. For what purposes do policymakers find evidence useful?

Examples include to: inform a definition of problems and solutions, foster practitioner learning, support an existing political position, or impose programmes backed by evidence (compare with How much impact can you expect from your analysis?).

  2. Who decides what to use, and what is useful?

For example, usefulness could be defined by the researchers providing evidence, the policymakers using it, the stakeholders involved in coproduction, or the people affected by research and policy (compare with Bacchi, Stone and Who should be involved in the process of policy analysis?).

  3. How do critical theories inform these questions? (compare with T. Smith)

First, they remind us that so-called ‘rational’ policy processes have incorporated research evidence to help:

‘maintain power hierarchies and accept social inequity as a given. Indeed, research has been historically and contemporaneously (mis)used to justify a range of social harms from enslavement, colonial conquest, and genocide, to high-stakes testing, disproportionality in child welfare services, and “broken windows” policing’ (Doucet, 2019: 2)

Second, they help us redefine usefulness in relation to:

‘how well research evidence communicates the lived experiences of marginalized groups so that the understanding of the problem and its response is more likely to be impactful to the community in the ways the community itself would want’ (Doucet, 2019: 3)

In that context, potential responses include to:

  1. Recognise the ways in which research and policy combine to reproduce the subordination of social groups.
  • General mechanisms include: the reproduction of the assumptions, norms, and rules that produce a disproportionate impact on social groups (compare with Social Construction and Policy Design).
  • Specific mechanisms include: judging marginalised groups harshly according to ‘Western, educated, industrialized, rich and democratic’ norms (‘WEIRD’).
  2. Reject the idea that scientific research can be seen as objective or neutral (and that researchers are beyond reproach for their role in subordination).
  3. Give proper recognition to ‘experiential knowledge’ and ‘transdisciplinary approaches’ to knowledge production, rather than privileging scientific knowledge.
  4. Commit to social justice, to help ‘eliminate oppressions and to emancipate and empower marginalized groups’, such as by disrupting ‘the policies and practices that disproportionately harm marginalized groups’ (2019: 5-7)
  5. Develop strategies to ‘center race’, ‘democratize’ research production, and ‘leverage’ transdisciplinary methods (including poetry, oral history and narrative, art, and discourse analysis – compare with Lorde) (2019: 10-22)

Policy analysis in a ‘racialized polity’

A key way to understand these processes is to use, and improve, policy theories to explain the dynamics and impacts of a racialized political system. For example, ‘policy feedback theory’ (PFT) draws on elements from historical institutionalism and SCPD to identify the rules, norms, and practices that reinforce subordination.

In particular, Michener’s (2019: 424) ‘Policy Feedback in a Racialized Polity’ develops a ‘racialized feedback framework (RFF)’ to help explain the ‘unrelenting force with which racism and White supremacy have pervaded social, economic, and political institutions in the United States’. Key mechanisms include (2019: 424-6):

  1. ‘Channelling resources’, in which the rules to distribute government resources benefit some social groups and punish others.
  • Examples include: privileging White populations in social security schemes and the design/ provision of education, and punishing Black populations disproportionately in prisons (2019: 428-32).
  • These rules also influence the motivation of social groups to engage in politics to influence policy (some citizens are emboldened, others alienated).
  2. ‘Generating interests’, in which ‘racial stratification’ is a key factor in the power of interest groups (and the balance of power within them).
  3. ‘Shaping interpretive schema’, in which race is a lens through which actors understand, interpret, and seek to solve policy problems.
  4. The ways in which centralization (making policy at the federal level) or decentralization influence policy design.
  • For example, the ‘historical record’ suggests that decentralization is more likely to ‘be a force of inequality than an incubator of power for people of color’ (2019: 433).

Insufficient attention to race and racism: what are the implications for policy analysis?

One potential consequence of this lack of attention to race, and the inequalities caused by racism in policy, is that we place too much faith in the vague idea of ‘pragmatic’ policy analysis.

Throughout the 750 words series, you will see me refer generally to the benefits of pragmatism. In that context, pragmatism relates to the idea that policy analysis consists of ‘art and craft’, in which analysts assess what is politically feasible while taking a low-risk, client-oriented approach.

In the context of this post, however, pragmatism may be read as a euphemism for conservatism and status quo protection.

In other words, other posts in the series warn against too-high expectations for entrepreneurial and systems thinking approaches to major policy change, but they should not be read as an excuse to reject ambitious plans for much-needed changes to policy and policy analysis (compare with Meltzer and Schwartz, who engage with this dilemma in client-oriented advice).

Connections to blog themes

This post connects well to:

 

 


Filed under 750 word policy analysis, Evidence Based Policymaking (EBPM), public policy, Storytelling

Policy Analysis in 750 Words: entrepreneurial policy analysis

This post forms one part of the Policy Analysis in 750 words series overview and connects to ‘Three habits of successful policy entrepreneurs’.

The idea of a ‘policy entrepreneur’ is important to policy studies and policy analysis.

Let’s begin with its positive role in analysis, then use policy studies to help qualify its role within policymaking environments.

The take-home messages are to:

  1. recognise the value of entrepreneurship, and invest in relevant skills and strategies, but
  2. not overstate its spread or likely impact, and
  3. note the unequal access to political resources associated with entrepreneurs.

[Box 11.3, Understanding Public Policy (2nd ed): entrepreneurs]

Entrepreneurship and policy analysis

Mintrom identifies the intersection between policy entrepreneurship and policy analysis, highlighting the benefits of ‘positive thinking’, creativity, deliberation, and leadership.

He expands on these ideas further in So you want to be a policy entrepreneur?:

‘Policy entrepreneurs are energetic actors who engage in collaborative efforts in and around government to promote policy innovations. Given the enormous challenges now facing humanity, the need is great for such actors to step forward and catalyze change processes’ (Mintrom, 2019: 307).

Although many entrepreneurs seem to be exceptional people, Mintrom (2019: 308-20) identifies:

  1. Key attributes to compare
  • ‘ambition’, to invest resources for future reward
  • ‘social acuity’, to help anticipate how others are thinking
  • ‘credibility’, based on authority and a good track record
  • ‘sociability’, to empathise with others and form coalitions or networks
  • ‘tenacity’, to persevere during adversity
  2. The skills that can be learned
  • ‘strategic thinking’, to choose a goal and determine how to reach it
  • ‘team building’, to recognise that policy change is a collective effort, not the responsibility of heroic individuals (compare with Oxfam)
  • ‘collecting evidence’, and using it ‘strategically’ to frame a problem and support a solution
  • ‘making arguments’, using ‘tactical argumentation’ to ‘win others to their cause and build coalitions of supporters’ (2019: 313)
  • ‘engaging multiple audiences’, by tailoring arguments and evidence to their beliefs and interests
  • ‘negotiating’, such as by trading your support in this case for their support in another
  • ‘networking’, particularly when policymaking authority is spread across multiple venues.
  3. The strategies built on these attributes and skills.
  • ‘problem framing’, such as to tell a story of a crisis in need of urgent attention
  • ‘using and expanding networks’, to generate attention and support
  • ‘working with advocacy coalitions’, to mobilise a collection of actors who already share the same beliefs
  • ‘leading by example’, to signal commitment and allay fears about risk
  • ‘scaling up change processes’, using policy innovation in one area to inspire wider adoption.

[Image: excerpt from Mintrom (2019: 308)]

Overall, entrepreneurship is ‘tough work’ requiring ‘courage’, but necessary for policy disruption, by: ‘those who desire to make a difference, who recognize the enormous challenges now facing humanity, and the need for individuals to step forward and catalyze change’ (2019: 320; compare with Luetjens).

Entrepreneurship and policy studies

  1. Most policy actors fail

It is common to relate entrepreneurship to stories of exceptional individuals and invite people to learn from their success. However, the logical conclusion is that success is exceptional and most policy actors will fail.

A focus on key skills takes us away from this reliance on exceptional actors, and ties in with other policy studies-informed advice on how to navigate policymaking environments (see ‘Three habits of successful policy entrepreneurs’, these ANZSOG talks, and box 6.3 below)

[Box 6.3]

However, note the final sentence, which reminds us that it is possible to invest a huge amount of time and effort in entrepreneurial skills without any of that investment paying off.

  2. Even if entrepreneurs succeed, the explanation comes more from their environments than their individual skills

The other side of the entrepreneurship coin is the policymaking environment in which actors operate.

Policy studies of entrepreneurship (such as Kingdon on multiple streams) rely heavily on metaphors of evolution. Entrepreneurs are the actors most equipped to thrive within their environments (see Room).

However, Kingdon uses the additional metaphor of ‘surfers waiting for the big wave’, which suggests that their environments are far more important than them (at least when operating on a US federal scale – see Kingdon’s Multiple Streams Approach).

Entrepreneurs may be more influential at a more local scale, but the evidence of their success (independent of the conditions in which they operate) is not overwhelming. So, self-aware entrepreneurs know when to ‘surf the waves’ or try to move the sea.

  3. The social background of influential actors

Many studies of entrepreneurs highlight the stories of tenacious individuals with limited resources but the burning desire to make a difference.

The alternative story is that political resources are distributed profoundly unequally. Few people have the resources to:

  • run for elected office
  • attend elite Universities, or find other ways to develop the kinds of personal networks that often relate to social background
  • develop the credibility built on a track record in a position of authority (such as in government or science).
  • be in the position to invest resources now, to secure future gains, or
  • be in an influential position to exploit windows of opportunity.

Therefore, when focusing on entrepreneurial policy analysis, we should encourage the development of a suite of useful skills, but not expect equal access to that development or the same payoff from entrepreneurial action.

See also:

Compare these skills with the ones we might associate with ‘systems thinking’

If you want to see me say these depressing things with a big grin:

https://youtu.be/L77Y7wynqXY

 

https://www.facebook.com/idsuk/videos/364796097654832/

 


Filed under 750 word policy analysis, agenda setting, Evidence Based Policymaking (EBPM), public policy, Uncategorized

Policy Analysis in 750 Words: complex systems and systems thinking

This post forms one part of the Policy Analysis in 750 words series overview and connects to previous posts on complexity. The first 750 words tick along nicely, then there is a ‘hang in there, baby’ cat picture to signal where it can all go wrong. I updated this post (22.6.20) to add category 11.

There are a million-and-one ways to describe systems and systems thinking. These terms are incredibly useful, but also at risk of meaning everything and therefore nothing (compare with planning and consultation).

Let’s explore how the distinction between policy studies and policy analysis can help us clarify the meaning of ‘complex systems’ and ‘systems thinking’ in policymaking.

For example, how might we close a potentially large gap between these two stories?

  1. Systems thinking in policy analysis.
  • Avoid the unintended consequences of too-narrow definitions of problems and processes (systems thinking, not simplistic thinking).
  • If we engage in systems thinking effectively, we can understand systems well enough to control, manage, or influence them.
  2. The study of complex policymaking systems.
  • Policy emerges from complex systems in the absence of: (a) central government control and often (b) policymaker awareness.
  • We need to acknowledge these limitations properly, and avoid the mechanistic language of ‘policy levers’, which exaggerates human or government control.

https://twitter.com/apoliticalco/status/1107796576280432640

See also: Systems science and systems thinking for public health: a systematic review of the field

Six meanings of complex systems in policy and policymaking

Let’s begin by trying to clarify the many meanings of ‘complex system’ and relate them to systems thinking storylines.

For example, you will encounter three different meanings of complex system in this series alone, and each meaning presents different implications for systems thinking:

  1. A complex policymaking system

Policy outcomes seem to ‘emerge’ from policymaking systems in the absence of central government control. As such, we should rely less on central government driven targets (in favour of local discretion to adapt to environments), encourage trial-and-error learning, and rethink the ways in which we think about government ‘failure’ (see, for example, Hallsworth on ‘system stewardship’, the OECD on ‘Systemic Thinking for Policy Making‘, and this thread)

  • Systems thinking is about learning and adapting to the limits to policymaker control.

https://twitter.com/CPI_foundation/status/1227211939052445699?s=09

  2. Complex policy problems

Dunn (2017: 73) describes the interdependent nature of problems:

‘Subjectively experienced problems – crime, poverty, unemployment, inflation, energy, pollution, health, security – cannot be decomposed into independent subsets without running the risk of producing an approximately right solution to the wrong problem. A key characteristic of systems of problems is that the whole is greater – that is, qualitatively different – than the simple sum of its parts’ (contrast with Meltzer and Schwartz on creating a ‘boundary’ to make problems seem solvable).

  • Systems thinking is about addressing policy problems holistically.
  3. Complex policy mixes

What we call ‘policy’ is actually a collection of policy instruments. Their overall effect is ‘non-linear’, difficult to predict, and subject to emergent outcomes, rather than cumulative (compare with Lindblom’s hopes for incrementalist change).

This point is crucial to policy analysis: does it involve a rethink of all instruments, or merely add a new instrument to the pile?

  • Systems thinking is about anticipating the disproportionate effect of a new policy instrument.

These three meanings are joined by at least three more (from Munro and Cairney on energy systems):

  4. Socio-technical systems (Geels)

Used to explain the transition from unsustainable to sustainable energy systems.

  • Systems thinking is about identifying the role of new technologies, protected initially in a ‘niche’, and fostered by a supportive ‘social and political environment’.
  5. Socio-ecological systems (Ostrom)

Used to explain how and why policy actors might cooperate to manage finite resources.

  • Systems thinking is about identifying the conditions under which actors develop layers of rules to foster trust and cooperation.
  6. The metaphor of systems

Used by governments – rather loosely – to indicate an awareness of the interconnectedness of things.

  • Systems thinking is about projecting the sense that (a) policy and policymaking is complicated, but (b) governments can still look like they are in control.

Five more meanings of systems thinking

Now, let’s compare these storylines with a small sample of wider conceptions of systems thinking:

  7. The old way of establishing order from chaos

Based on the (now-diminished) faith in science and rational management techniques to control the natural world for human benefit (compare Hughes and Hughes on energy with Checkland on ‘hard’ v ‘soft’ systems approaches, then see What you need as an analyst versus policymaking reality and Radin on the old faith in rationalist governing systems).

  • Systems thinking was about the human ability to turn potential chaos into well-managed systems (such as ‘large technical systems’ to distribute energy)
  8. The new way of accepting complexity but seeking to make an impact

Based on the idea that we can identify ‘leverage points’, or the places that help us ‘intervene in a system’ (see Meadows then compare with Arnold and Wade).

  • Systems thinking is about the human ability to use a small shift in a system to produce profound changes in that system.
  9. A way to rethink cause-and-effect

Based on the idea that current research methods are too narrowly focused on linearity rather than the emergent properties of systems of behaviour (for example, Rutter et al on how to analyse the cumulative effect of public health interventions, and Greenhalgh on responding more effectively to pandemics).

  • Systems thinking is about rethinking the ways in which governments, funders, or professions conduct policy-relevant research on social behaviour.

https://twitter.com/CairneyPaul/status/1278250293843673088

  10. A way of thinking about ourselves

Embrace the limits to human cognition, and accept that all understandings of complex systems are limited.

  • Systems thinking is about developing the ‘wisdom’ and ‘humility’ to accept our limited knowledge of the world.

https://twitter.com/JoBibbyTHF/status/1207586906634104832

11. The performance of systems thinking

Policymakers can use the language of systems thinking, to give the impression that they are thinking and acting differently, but without backing up their words with tangible changes to policy instruments.

[Image: ‘hang in there, baby’ cat poster]

 

How can we clarify systems thinking and use it effectively in policy analysis?

Now, imagine you are in a room of self-styled systems thinkers, and that no-one has yet suggested a brief conversation to establish what you all mean by systems thinking. I reckon you can make a quick visual distinction by seeing who looks optimistic.

I’ll be the morose-looking guy sitting in the corner, waiting to complain about ambiguity, so you would probably be better off sitting next to Luke Craven who still ‘believes in the power of systems thinking’.

If you can imagine some amalgam of these pessimistic/ optimistic positions, perhaps the conversation would go like this:

  1. Reasons to expect some useful collaboration.

Some of these 11 discussions seem to complement each other. For example:

  • We can use 3 and 9 to reject one narrow idea of ‘evidence-based policymaking’, in which the focus is on (a) using experimental methods to establish cause and effect in relation to one policy instrument, without showing (b) the overall impact on policy and outcomes (e.g. compare FNP with more general ‘families’ policy).
  • 1-3 and 10 might be about the need for policy analysts to show humility when seeking to understand and influence complex policy problems, solutions, and policymaking systems.

In other words, you could define systems thinking in relation to the need to rethink the ways in which we understand – and try to address – policy problems. If so, you can stop here and move on to the next post. There is no benefit to completing this post.

  2. Reasons to expect the same old frustrating discussions based on no-one defining terms well enough (collectively) to collaborate effectively (beyond using the same buzzwords).

Although all of these approaches use the language of complex systems and systems thinking, note some profound differences:

Holding on versus letting go.

  • Some are about intervening to take control of systems or, at least, make a disproportionate difference from a small change.
  • Some are about accepting our inability to understand, far less manage, these systems.

Talking about different systems.

  • Some are about managing policymaking systems, and others about social systems (or systems of policy problems), without making a clear connection between both endeavours.

For example, if you use approach 9 to rethink societal cause-and-effect, are you then going to pretend that you can use approach 7 to do something about it? Or, will our group have a difficult discussion about the greater likelihood of 6 (metaphorical policymaking) in the context of 1 (the inability of governments to control the policymaking systems we need to solve the problems raised by 9)?

In that context, the reason that I am sitting in the corner, looking so morose, is that too much collective effort goes into (a) restating, over and over and over again, the potential benefits of systems thinking, leaving almost no time for (b) clarifying systems thinking well enough to move on to these profound differences in thinking. Systems thinking has not even helped us solve these problems with systems thinking.

See also:

Why systems thinkers and data scientists should work together to solve social challenges


Filed under 750 word policy analysis, Evidence Based Policymaking (EBPM), Prevention policy, public policy, UKERC

Policy Analysis in 750 Words: how much impact can you expect from your analysis?

This post forms one part of the Policy Analysis in 750 words series overview.

Throughout this series you may notice three different conceptions about the scope of policy analysis:

  1. ‘Ex ante’ (before the event) policy analysis. Focused primarily on defining a problem, and predicting the effect of solutions, to inform current choice (as described by Meltzer and Schwartz and Thissen and Walker).
  2. ‘Ex post’ (after the event) policy analysis. Focused primarily on monitoring and evaluating that choice, perhaps to inform future choice (as described famously by Weiss).
  3. Some combination of both, to treat policy analysis as a continuous (never-ending) process (as described by Dunn).

As usual, these are not hard-and-fast distinctions, but they help us clarify expectations in relation to different scenarios.

  1. The impact of old-school ex ante policy analysis

Radin provides a valuable historical discussion of policymaking with the following elements:

  • a small number of analysts, generally inside government (such as senior bureaucrats, scientific experts, and, in particular, economists),
  • giving technical or factual advice,
  • about policy formulation,
  • to policymakers at the heart of government,
  • on the assumption that policy problems would be solved via analysis and action.

This kind of image signals an expectation for high impact: policy analysts face low competition, enjoy a clearly defined and powerful audience, and their analysis is expected to feed directly into choice.

Radin goes on to describe a much different, modern policy environment: more competition, more analysts spread across and outside government, with a less obvious audience, and – even if there is a client – high uncertainty about where the analysis fits into the bigger picture.

Yet, the impetus to seek high and direct impact remains.

This combination of shifting conditions but unshifting hopes/ expectations helps explain a lot of the pragmatic forms of policy analysis you will see in this series, including:

  • Keep it catchy, gather data efficiently, tailor your solutions to your audience, and tell a good story (Bardach)
  • Speak with an audience in mind, highlight a well-defined problem and purpose, project authority, use the right form of communication, and focus on clarity, precision, conciseness, and credibility (C. Smith)
  • Address your client’s question, by their chosen deadline, in a clear and concise way that they can understand (and communicate to others) quickly (Weimer and Vining)
  • Client-oriented advisors identify the beliefs of policymakers and anticipate the options worth researching (Mintrom)
  • Identify your client’s resources and motivation, such as how they seek to use your analysis, the format of analysis they favour (make it ‘concise’ and ‘digestible’), their deadline, and their ability to make or influence the policies you might suggest (Meltzer and Schwartz).
  • ‘Advise strategically’, to help a policymaker choose an effective solution within their political context (Thissen and Walker).
  • Focus on producing ‘policy-relevant knowledge’ by adapting to the evidence-demands of policymakers and rejecting a naïve attachment to ‘facts speaking for themselves’ or ‘knowledge for its own sake’ (Dunn).
  2. The impact of research and policy evaluation

Many of these recommendations are familiar to scientists and researchers, but generally in the context of far lower expectations about their likely impact, particularly if those expectations are informed by policy studies (compare Oliver & Cairney with Cairney & Oliver).

In that context, Weiss’ work is a key reference point. It gives us a menu of ways in which policymakers might use policy evaluation (and research evidence more widely):

  • to inform solutions to a problem identified by policymakers
  • as one of many sources of information used by policymakers, alongside ‘stakeholder’ advice and professional and service user experience
  • as a resource used selectively by politicians, with entrenched positions, to bolster their case
  • as a tool of government, to show it is acting (by setting up a scientific study), or to measure how well policy is working
  • as a source of ‘enlightenment’, shaping how people think over the long term (compare with this discussion of ‘evidence based policy’ versus ‘policy based evidence’).

In other words, researchers may have a role, but they struggle (a) to navigate the politics of policy analysis, (b) to find the right time to act, and (c) to secure attention, in competition with many other policy actors.

  3. The potential for a form of continuous impact

Dunn suggests that the idea of ‘ex ante’ policy analysis is misleading, since policymaking is continuous, and evaluations of past choices inform current choices. Think of each policy analysis step as ‘interdependent’, in which new knowledge to inform one step also informs the other four. For example, routine monitoring helps identify compliance with regulations, whether resources and services reach ‘target groups’, whether money is spent correctly, and whether we can make a causal link between the policy solutions and outcomes. Such monitoring is often better seen as background information with intermittent impact.

Key conclusions to bear in mind

  1. The demand for information from policy analysts may be disproportionately high when policymakers pay attention to a problem, and disproportionately low when they feel that they have addressed it.
  2. Common advice for policy analysts and researchers often looks very similar: keep it concise, tailor it to your audience, make evidence ‘policy relevant’, and give advice (don’t sit on the fence). However, unless researchers are prepared to act quickly, gather data efficiently (not comprehensively), and meet a tight brief for a client, they are not really in the impact business described by most policy analysis texts.
  3. A lot of routine, continuous, impact tends to occur out of the public spotlight, based on rules and expectations that most policy actors take for granted.

Further reading

See the Policy Analysis in 750 words series overview to continue reading on policy analysis.

See the ‘evidence-based policymaking’ page to continue reading on research impact.


Bristol powerpoint: Paul Cairney Bristol EBPM January 2020


Filed under 750 word policy analysis, Evidence Based Policymaking (EBPM), Policy learning and transfer, public policy

Policy Analysis in 750 Words: what you need as an analyst versus policymaking reality

This post forms one part of the Policy Analysis in 750 words series overview. Note for the eagle eyed: you are not about to experience déjà vu. I’m just using the same introduction.

When describing ‘the policy sciences’, Lasswell distinguishes between:

  1. ‘knowledge of the policy process’, to foster policy studies (the analysis of policy)
  2. ‘knowledge in the process’, to foster policy analysis (analysis for policy)

The lines between each approach are blurry, and each element makes less sense without the other. However, the distinction is crucial to help us overcome the major confusion associated with this question:

Does policymaking proceed through a series of stages?

The short answer is no.

The longer answer is that you can find about 40 blog posts (of 500 and 1000 words) which compare (a) a stage-based model called the policy cycle, and (b) the many, many policy concepts and theories that describe a far messier collection of policy processes.

[Image: the policy cycle]

In a nutshell, most policy theorists reject this image because it oversimplifies a complex policymaking system. The image provides a great way to introduce policy studies, and serves a political purpose, but it does more harm than good:

  1. Descriptively, it is profoundly inaccurate (unless you imagine thousands of policy cycles interacting with each other to produce less orderly behaviour and less predictable outputs).
  2. Prescriptively, it gives you rotten advice about the nature of your policymaking task (for more on these points, see this chapter, article, article, and series).

Why does the stages/ policy cycle image persist? Two relevant explanations

 

  1. It arose from a misunderstanding in policy studies

In another nutshell, Chris Weible and I argue (in a secret paper) that the stages approach represents a good idea gone wrong:

  • If you trace it back to its origins, you will find Lasswell’s description of decision functions: intelligence, recommendation, prescription, invocation, application, appraisal and termination.
  • These functions correspond reasonably well to a policy cycle’s stages: agenda setting, formulation, legitimation, implementation, evaluation, and maintenance, succession or termination.
  • However, Lasswell was imagining functional requirements, while the cycle seems to describe actual stages.

In other words, if you take Lasswell’s list of what policy analysts/ policymakers need to do, multiply it by the number of actors (spread across many organisations or venues) trying to do it, then you get the multi-centric policy processes described by modern theories. If, instead, you strip all that activity down into a single cycle, you get the wrong idea.

  2. It is a functional requirement of policy analysis

This description should seem familiar, because the classic policy analysis texts appear to describe a similar series of required steps, such as:

  1. define the problem
  2. identify potential solutions
  3. choose the criteria to compare them
  4. evaluate them in relation to their predicted outcomes
  5. recommend a solution
  6. monitor its effects
  7. evaluate past policy to inform current policy.

However, these texts also provide a heavy dose of caution about your ability to perform these steps (compare Bardach, Dunn, Meltzer and Schwartz, Mintrom, Thissen and Walker, Weimer and Vining)
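
For illustration only (this sketch is mine, not drawn from any of the texts above), steps 3-5 are sometimes reduced to a weighted scoring exercise; the options, criteria, weights, and scores below are entirely hypothetical:

```python
# Purely illustrative sketch of steps 3-5: score invented options against
# weighted criteria and recommend the highest-scoring option.
# Options, criteria, weights, and scores are all hypothetical.

criteria_weights = {"effectiveness": 0.4, "cost": 0.3, "political feasibility": 0.3}

# Scores from 1 (poor) to 5 (good) on each criterion.
option_scores = {
    "Option A: regulation": {"effectiveness": 4, "cost": 3, "political feasibility": 2},
    "Option B: subsidy": {"effectiveness": 3, "cost": 2, "political feasibility": 4},
    "Option C: information campaign": {"effectiveness": 2, "cost": 5, "political feasibility": 5},
}

def weighted_score(scores, weights):
    """Weighted sum of one option's criterion scores."""
    return sum(scores[criterion] * weight for criterion, weight in weights.items())

# Rank options from highest to lowest weighted score.
ranked = sorted(option_scores.items(),
                key=lambda item: weighted_score(item[1], criteria_weights),
                reverse=True)

for name, scores in ranked:
    print(f"{name}: {weighted_score(scores, criteria_weights):.2f}")

print(f"Recommendation (on these invented numbers): {ranked[0][0]}")
```

The caution above applies directly: the choice of criteria, weights, and scores smuggles political judgements into an apparently technical calculation.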

In addition, studies of policy analysis in action suggest that:

  • an individual analyst’s need for simple steps, to turn policymaking complexity into useful heuristics and pragmatic strategies,

should not be confused with

  • an accurate description of actual policy processes and outcomes.

What you need versus what you can expect

Overall, this discussion of policy studies and policy analysis reminds us of a major difference between:

  1. Functional requirements. What you need from policymaking systems, to (a) manage your task (the 5-8 step policy analysis) and (b) understand and engage in policy processes (the simple policy cycle).
  2. Actual processes and outcomes. What policy concepts and theories tell us about bounded rationality (which limit the comprehensiveness of your analysis) and policymaking complexity (which undermines your understanding and engagement in policy processes).

Of course, I am not about to provide you with a solution to these problems.

Still, this discussion should help you worry a little bit less about the circular arguments you will find in key texts: here are some simple policy analysis steps, but policymaking is not as ‘rational’ as the steps suggest, but (unless you can think of an alternative) there is still value in the steps, and so on.

See also:

The New Policy Sciences


Filed under 750 word policy analysis, agenda setting, public policy

Policy Analysis in 750 Words: Defining policy problems and choosing solutions

This post forms one part of the Policy Analysis in 750 words series overview.

When describing ‘the policy sciences’, Lasswell distinguishes between:

  1. ‘knowledge of the policy process’, to foster policy studies (the analysis of policy)
  2. ‘knowledge in the process’, to foster policy analysis (analysis for policy)

The idea is that both elements are analytically separable but mutually informative: policy analysis is crucial to solving real policy problems, policy studies inform the feasibility of analysis, the study of policy analysts informs policy studies, and so on.

Both elements focus on similar questions – such as What is policy? – and explore their descriptive (what do policy actors do?) and prescriptive (what should they do?) implications.

  1. What is the policy problem?

Policy studies tend to describe problem definition in relation to framing, narrative, social construction, power, and agenda setting.

Actors exercise power to generate attention for their preferred interpretation, and minimise attention to alternative frames (to help foster or undermine policy change, or translate their beliefs into policy).

Policy studies incorporate insights from psychology to understand (a) how policymakers might combine cognition and emotion to understand problems, and therefore (b) how to communicate effectively when presenting policy analysis.

Policy studies focus on the power to reduce ambiguity rather than simply the provision of information to reduce uncertainty. In other words, the power to decide whose interpretation of policy problems counts, and therefore to decide what information is policy-relevant.

This (unequal) competition takes place within a policy process over which no actor has full knowledge or control.

The classic 5-8 step policy analysis texts focus on how to define policy problems well, but they vary somewhat in their definition of doing it well (see also C.Smith):

  • Bardach recommends using rhetoric and eye-catching data to generate attention
  • Weimer and Vining and Mintrom recommend beginning with your client’s ‘diagnosis’, placing it in a wider perspective to help analyse it critically, and asking yourself how else you might define it (see also Bacchi, Stone)
  • Meltzer and Schwartz and Dunn identify additional ways to contextualise your client’s definition, such as by generating a timeline to help ‘map’ causation or using ‘problem-structuring methods’ to compare definitions and avoid making too many assumptions on a problem’s cause.
  • Thissen and Walker compare ‘rational’ and ‘argumentative’ approaches, treating problem definition as something to be measured scientifically or established rhetorically (see also Riker).

These approaches compare with more critical accounts that emphasise the role of power and politics to determine whose knowledge is relevant (L.T.Smith) and whose problem definition counts (Bacchi, Stone). Indeed, Bacchi and Stone provide a crucial bridge between policy analysis and policy studies by reflecting on what policy analysts do and why.

  2. What is the policy solution?

In policy studies, it is common to identify counterintuitive or confusing aspects of policy processes, including:

  • Few studies suggest that policy responses actually solve problems (and many highlight their potential to exacerbate them). Rather, ‘policy solutions’ is shorthand for proposed or alleged solutions.
  • Problem definition often sets the agenda for the production of ‘solutions’, but note the phrase solutions chasing problems (when actors have their ‘pet’ solutions ready, and they seek opportunities to promote them).

Policy studies: problem definition informs the feasibility and success of solutions

Generally speaking, to define the problem is to influence assessments of the feasibility of solutions:

  • Technical feasibility. Will they work as intended, given the alleged severity and cause of the problem?
  • Political feasibility. Will they receive sufficient support, given the ways in which key policy actors weigh up the costs and benefits of action?

Policy studies highlight the inextricable connection between technical and political feasibility. Put simply, (a) a ‘technocratic’ choice about the ‘optimality’ of a solution is useless without considering who will support its adoption, and (b) some types of solution will always be a hard sell, no matter their alleged effectiveness (Box 2.3 below).

In that context, policy studies ask: what types of policy tools or instruments are actually used, and how does their use contribute to policy change? Measures include the size, substance, speed, and direction of policy change.

[Box 2.3, Understanding Public Policy (2nd ed)]

In turn, problem definition informs: the ways in which actors will frame any evaluation of policy success, and the policy-relevance of the evidence to evaluate solutions. Simple examples include:

  • If you define tobacco in relation to: (a) its economic benefits, or (b) a global public health epidemic, evaluations relate to (a) export and taxation revenues, or (b) reductions in smoking in the population.
  • If you define ‘fracking’ in relation to: (a) seeking more benefits than costs, or (b) minimising environmental damage and climate change, evaluations relate to (a) factors such as revenue and effective regulation, or simply (b) how little it takes place.

Policy analysis: recognising and pushing boundaries

Policy analysis texts tend to accommodate these insights when giving advice:

  • Bardach recommends identifying solutions that your audience might consider, perhaps providing a range of options on a notional spectrum of acceptability.
  • Smith highlights the value of ‘precedent’, or relating potential solutions to previous strategies.
  • Weimer and Vining identify the importance of ‘a professional mind-set’ that may be more important than perfecting ‘technical skills’
  • Mintrom notes that some solutions are easier to sell than others
  • Meltzer and Schwartz describe the benefits of making a preliminary recommendation to inform an iterative process, drawing feedback from clients and stakeholder groups
  • Dunn warns against too-narrow forms of ‘evidence based’ analysis which undermine a researcher’s ability to adapt well to the evidence-demands of policymakers
  • Thissen and Walker relate solution feasibility to a wide range of policy analysis ‘styles’

Still, note the difference in emphasis.

Policy analysis education/ training may be about developing the technical skills to widen definitions and apply many criteria to compare solutions.

Policy studies suggest that problem definition and the search for solutions take place in an environment where many actors apply a much narrower lens and are not interested in debates on many possibilities (particularly if they begin with a solution).

I have exaggerated this distinction between each element, but it is worth considering the repeated interaction between them in practice: politics and policymaking provide boundaries for policy analysis, analysis could change those boundaries, and policy studies help us reflect on the impact of analysts.

I’ll take a quick break, then discuss how this conclusion relates to the idea of ‘entrepreneurial’ policy analysis.

Further reading

Understanding Public Policy (2020: 28) describes the difference between governments paying for and actually using the ‘tools of policy formulation’. To explore this point, see ‘The use and non-use of policy appraisal tools in public policy making‘ and The Tools of Policy Formulation.

[Excerpt from Understanding Public Policy (2020: 28): policy tools]


Filed under 750 word policy analysis, agenda setting, Evidence Based Policymaking (EBPM), public policy

Policy Analysis in 750 Words: What can you realistically expect policymakers to do?

This post forms one part of the Policy Analysis in 750 words series overview.

One aim of this series is to combine insights from policy research (1000, 500) and policy analysis texts.

In this case, modern theories of the policy process help you identify your audience and their capacity to follow your advice. This simple insight may have a profound impact on the advice you give.

Policy analysis for an ideal-type world

For our purposes, an ideal-type is an abstract idea, which highlights hypothetical features of the world, to compare with ‘real world’ descriptions. It need not be an ideal to which we aspire. For example, comprehensive rationality describes the ideal type, and bounded rationality describes the ‘real world’ limitations to the ways in which humans and organisations process information.

 

Imagine writing policy analysis in the ideal-type world of a single powerful ‘comprehensively rational’ policymaker at the heart of government, making policy via an orderly policy cycle.

Your audience would be easy to identify, your analysis would be relatively simple, and you would not need to worry about what happens after you make a recommendation for policy change.

You could adopt a simple 5-8 step policy analysis method, use widely-used tools such as cost-benefit analysis to compare solutions, and know where the results would feed into the policy process.
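
For illustration only (a sketch of mine, not a real analysis), comparing solutions in that ideal-type world might be as simple as a discounted cost-benefit calculation; the options, cash flows, and discount rate below are invented:

```python
# Purely illustrative sketch of ideal-type ex ante analysis: compare two
# invented policy options by discounted net benefit (net present value).
# All figures (cash flows, discount rate) are hypothetical.

def present_value(annual_net_flows, discount_rate=0.035):
    """Discount a list of annual net flows (year 0 first) to present value."""
    return sum(flow / (1 + discount_rate) ** year
               for year, flow in enumerate(annual_net_flows))

# Hypothetical options: annual (benefits minus costs) over five years, in £m.
options = {
    "Option A: expand the existing programme": [-10, 2, 4, 5, 5],
    "Option B: launch a new national scheme": [-25, 5, 9, 10, 10],
}

results = {name: present_value(flows) for name, flows in options.items()}

for name, npv in sorted(results.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: net present value = £{npv:.1f}m")

# In the ideal-type world, the analyst simply recommends the option with the
# highest net present value and assumes it feeds directly into a single choice.
best = max(results, key=results.get)
print(f"Recommendation (on these invented numbers): {best}")
```

Even this toy example hints at the politics to come: raise the discount rate or revise the invented benefit figures and the ‘best’ option flips.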

I have perhaps over-egged this ideal-type pudding, but I think a lot of traditional policy analyses tapped into this basic idea and focused more on the science of analysis than the political and policymaking context in which it takes place (see Radin and Brans, Geva-May, and Howlett).

Policy analysis for the real world

Then imagine a far messier and less predictable world in which the nature of the policy issue is highly contested, responsibility for policy is unclear, and no single ‘centre’ has the power to turn a recommendation into an outcome.

This image is a key feature of policy process theories, which describe:

  • Many policymakers and influencers spread across many levels and types of government (as the venues in which authoritative choice takes place). Consequently, it is not a straightforward task to identify and know your audience, particularly if the problem you seek to solve requires a combination of policy instruments controlled by different actors.
  • Each venue resembles an institution driven by formal and informal rules. Formal rules are written-down or widely-known. Informal rules are unwritten, difficult to understand, and may not even be understood in the same way by participants. Consequently, it is difficult to know if your solution will be a good fit with the standard operating procedures of organisations (and therefore if it is politically feasible or too challenging).
  • Policymakers and influencers operate in ‘subsystems’, forming networks built on resources such as trust or coalitions based on shared beliefs. Effective policy analysis may require you to engage with – or become part of – such networks, to allow you to understand the unwritten rules of the game and encourage your audience to trust the messenger. In some cases, the rules relate to your willingness to accept current losses for future gains, to accept the limited impact of your analysis now in the hope of acceptance at the next opportunity.
  • Actors relate their analysis to shared understandings of the world – how it is, and how it should be – which are often so well-established as to be taken for granted. Common terms include paradigms, hegemons, core beliefs, and monopolies of understandings. These dominant frames of reference give meaning to your policy solution. They prompt you to couch your solutions in terms of, for example, a strong attachment to evidence-based cases in public health, value for money in treasury departments, or with regard to core principles such as liberalism or socialism in different political systems.
  • Your solutions relate to socioeconomic context and the events that seem (a) impossible to ignore and (b) out of the control of policymakers. Such factors include a political system’s geography, demography, social attitudes, and economy, while events can be routine elections or unexpected crises.

What would you recommend under these conditions? Rethinking 5-step analysis

There is a large gap between policymakers’ (a) formal responsibilities versus (b) actual control of policy processes and outcomes. Even the most sophisticated ‘evidence based’ analysis of a policy problem will fall flat if uninformed by such analyses of the policy process. Further, the terms of your cost-benefit analysis will be highly contested (at least until there is agreement on what the problem is, and how you would measure the success of a solution).

Modern policy analysis texts try to incorporate such insights from policy theories while maintaining a focus on 5-8 steps. For example:

  • Meltzer and Schwartz contrast their ‘flexible’ and ‘iterative’ approach with a too- rigid ‘rationalistic approach’.
  • Bardach and Dunn emphasise the value of political pragmatism and the ‘art and craft’ of policy analysis.
  • Weimer and Vining invest 200 pages in economic analyses of markets and government, often highlighting a gap between (a) our ability to model and predict economic and social behaviour, and (b) what actually happens when governments intervene.
  • Mintrom invites you to see yourself as a policy entrepreneur, to highlight the value of ‘positive thinking’, creativity, deliberation, and leadership, and perhaps seek ‘windows of opportunity’ to encourage new solutions. Alternatively, a general awareness of the unpredictability of events can prompt you to be modest in your claims, since the policymaking environment may be more important (than your solution) to outcomes.
  • Thissen and Walker focus more on a range of possible roles than a rigid 5-step process.

Beyond 5-step policy analysis

  1. Compare these pragmatic, client-orientated, and communicative models with the questioning, storytelling, and decolonizing approaches by Bacchi, Stone, and L.T. Smith.
  • The latter encourage us to examine more closely the politics of policy processes, including the importance of framing, narrative, and the social construction of target populations to problem definition and policy design.
  • Without this wider perspective, we are focusing on policy analysis as a process rather than considering the political context in which analysts use it.
  2. Additional posts on entrepreneurs and ‘systems thinking’ [to be added] encourage us to reflect on the limits to policy analysis in multi-centric policymaking systems.

 

 


Filed under 750 word policy analysis, agenda setting, Evidence Based Policymaking (EBPM), public policy

Policy Analysis in 750 Words: Reflecting on your role as a policy analyst

This post forms one part of the Policy Analysis in 750 words series overview.

One aim of this series is to combine insights from policy research (1000, 500) and policy analysis texts.

If we take key insights from policy theories seriously, we can use them to identify (a) the constraints to policy analytical capacity, and (b) the ways in which analysts might address them. I use the idea of policy analyst archetypes to compare a variety of possible responses.

Key constraints to policy analytical capacity

Terms like ‘bounded rationality’ highlight major limits on the ability of humans and organisations to process information.

Terms like policymaking ‘context’, ‘environments’, and multi-centric policymaking suggest that the policy process is beyond the limits of policymaker understanding and control.

  • Policy actors need to find ways to act, with incomplete information about the problem they seek to solve and the likely impact of their ‘solution’.
  • They gather information to help reduce uncertainty, but problem definition is really about exercising power to reduce ambiguity: to select one way to interpret a problem (at the expense of most others), and therefore limit the relevance and feasibility of solutions.
  • This context informs how actors might use the tools of policy analysis. Key texts in this series highlight the use of tools to establish technical feasibility (will it work as intended?), but policymakers also select tools for their political feasibility (who will support or oppose this measure?).

[Box 2.3, Understanding Public Policy (2nd ed)]

How might policy analysts address these constraints ethically?

Most policy analysis texts (in this series) consider the role of professional ethics and values during the production of policy analysis. However, they also point out that there is not a clearly defined profession and associated code of conduct (e.g. see Adachi). In that context, let us begin with some questions about the purpose of policy analysis and your potential role:

  1. Is your primary role to serve individual clients or some notion of the ‘public good’?
  2. Should you maximise your role as an individual or play your part in a wider profession?
  3. What is the balance between the potential benefits of individual ‘entrepreneurship’ and collective ‘co-productive’ processes?
  4. Which policy analysis techniques should you prioritise?
  5. What forms of knowledge and evidence count in policy analysis?
  6. What does it mean to communicate policy analysis responsibly?
  7. Should you provide a clear recommendation or encourage reflection?

 

Policy analysis archetypes: pragmatists, entrepreneurs, manipulators, storytellers, and decolonisers

In that context, I have created a story of policy analysis archetypes to identify the elements that each text emphasises.

The pragmatic policy analyst

  • Bardach provides the classic simple, workable, 8-step system to present policy analysis to policymakers while subject to time and resource-pressed political conditions.
  • Dunn also uses Wildavsky’s famous phrase ‘art and craft’ to suggest that scientific and ‘rational’ methods can only take us so far.

The professional, client-oriented policy analyst

  • Weimer and Vining provide a similar 7-step client-focused system, but with a greater focus on professional development and economic techniques (such as cost-benefit analysis) to emphasise a particular form of professional analyst.
  • Meltzer and Schwartz also focus on advice to clients, but with a greater emphasis on a wide variety of methods or techniques (including service design) to encourage the co-design of policy analysis with clients.

The communicative policy analyst

  • C. Smith focuses on how to write and communicate policy analysis to clients in a political context.
  • Compare with Spiegelhalter and Gigerenzer on how to communicate responsibly when describing uncertainty, probability, and risk.

The manipulative policy analyst

  • Riker helps us understand the relationship between two aspects of agenda setting: the rules/procedures to make choices, and the framing of policy problems and solutions.

The entrepreneurial policy analyst

  • Mintrom shows how to combine insights from studies of policy entrepreneurship and policy analysis, to emphasise the benefits of collaboration and creativity.

The questioning policy analyst

  • Bacchi analyses the wider context in which people give and use such advice, to identify the emancipatory role of analysis and encourage policy analysts to challenge dominant social constructions of problems and populations.

The storytelling policy analyst

  • Stone identifies the ways in which people use storytelling and argumentation techniques to define problems and justify solutions. This process is about politics and power, not objectivity and optimal solutions.

The decolonizing policy analyst

  • L.T. Smith does not describe policy analysis directly, but shows how the ‘decolonization of research methods’ can inform the generation and use of knowledge.
  • Compare with Hindess on the ways in which knowledge-based hierarchies rely on an untenable, circular logic.
  • Compare with Michener’s thread, discussing Doucet’s new essay on (a) the role of power and knowledge in limiting (b) the ways in which we gather evidence to analyse policy problems.

https://twitter.com/povertyscholar/status/1207054211759910912

Using archetypes to define the problem of policy analysis

Studies of the field (e.g. Radin plus Brans, Geva-May, and Howlett) suggest that there are many ways to do policy analysis. Further, as Thissen and Walker describe, such roles are not mutually exclusive, your views on their relative value could change throughout the process of analysis, and you could perform many of these roles.

Moreover, each text describes multiple roles, and some seem clustered together:

  • pragmatic, client-oriented, and communicative could sum up the traditional 5-8 step approaches, while
  • questioning, storytelling, and decolonizing could sum up an important (‘critical’) challenge to narrow ways of thinking about policy analysis and the use of information.

Still, the emphasis matters.

Each text is setting an agenda or defining the problem of policy analysis more-or-less in relation to these roles. Put simply, the more you are reading about economic theory and method, the less you are reading about dominance and manipulation.

How can you read further?

Michener’s ‘Policy Feedback in a Racialized Polity’ connects to studies of historical institutionalism, and reminds us to use insights from policy theories to identify the context for policy analysis.

I have co-authored a lot about uncertainty/ ambiguity in relation to ‘evidence based policymaking’, including:

See also The new policy sciences for a discussion of how these issues inform Lasswell’s original vision for the policy sciences (combining the analysis of and for policy).


Filed under 750 word policy analysis, agenda setting, Evidence Based Policymaking (EBPM), feminism, public policy, Storytelling

Policy Analysis in 750 Words: Who should be involved in the process of policy analysis?

This post forms one part of the Policy Analysis in 750 words series overview.

Think of two visions for policy analysis. It should be primarily:

  1. ‘evidence based’: grounded in the best available evidence of ‘what works’, or
  2. ‘co-produced’: based on respectful conversations between policymakers, stakeholders, and citizens.

These choices are not mutually exclusive, but there are key tensions between them that should not be ignored, such as when we ask:

  • how many people should be involved in policy analysis?
  • whose knowledge counts?
  • who should control policy design?

Perhaps we can only produce a sensible combination of the two if we clarify their often very different implications for policy analysis. Let’s begin with one story for each and see where they take us.

A story of ‘evidence-based policymaking’

One story of ‘evidence based’ policy analysis is that it should be based on the best available evidence of ‘what works’.

Often, the description of the ‘best’ evidence relates to the idea that there is a notional hierarchy of evidence according to the research methods used.

At the top would be the systematic review of randomised control trials, and nearer the bottom would be expertise, practitioner knowledge, and stakeholder feedback.

This kind of hierarchy has major implications for policy learning and transfer, such as when importing policy interventions from abroad or ‘scaling up’ domestic projects.

Put simply, the experimental method is designed to identify the causal effect of a very narrowly defined policy intervention. Its importation or scaling up would be akin to the model of evidence-based medicine, in which the evidence identifies the causal effect of a specific active ingredient, to be administered in the correct dosage. A very strong commitment to such a uniform model precludes the processes we might associate with co-production, in which many voices contribute to a policy design suited to a specific context (see also: the intersection between evidence and policy transfer).

A story of co-production in policymaking

One story of ‘co-produced’ policy analysis is that it should be ‘reflexive’ and based on respectful conversations between a wide range of policymakers and citizens.

Often, the description is of the diversity of valuable policy relevant information, with scientific evidence considered alongside community voices and normative values.

This rejection of a hierarchy of evidence also has major implications for policy learning and transfer. Put simply, a co-production method is designed to identify the positive effect – widespread ‘ownership’ of the problem and commitment to a commonly-agreed solution – of a well-discussed intervention, often in the absence of central government control.

Its use would be akin to a collaborative governance mechanism, in which the causal mechanism is perhaps the process used to foster agreement (including to produce the rules of collective action and the evaluation of success) rather than the intervention itself. A very strong commitment to this process precludes the adoption of a uniform model that we might associate with narrowly-defined stories of evidence based policymaking.

Where can you find these stories in the 750-words series?

  1. Texts focusing on policy analysis as evidence-based/ informed practice (albeit subject to limits) include: Weimer and Vining, Meltzer and Schwartz, Brans, Geva-May, and Howlett (compare with Mintrom, Dunn)
  2. Texts on being careful while gathering and analysing evidence include: Spiegelhalter
  3. Texts that challenge the ‘evidence based’ story include: Bacchi, L.T. Smith, Hindess, Stone

 

How can you read further?

See the EBPM page and special series ‘The politics of evidence-based policymaking: maximising the use of evidence in policy’

There are 101 approaches to co-production, but let’s see if we can get away with two categories:

  1. Co-producing policy (policymakers, analysts, stakeholders). Some key principles can be found in Ostrom’s work and studies of collaborative governance.
  2. Co-producing research to help make it more policy-relevant (academics, stakeholders). See the Social Policy and Administration special issue ‘Inside Co-production’ and Oliver et al’s ‘The dark side of coproduction’ to get started.

To compare ‘epistemic’ and ‘reflexive’ forms of learning, see Dunlop and Radaelli’s ‘The lessons of policy learning: types, triggers, hindrances and pathologies’

My interest has been to understand how governments juggle competing demands, such as to (a) centralise and localise policymaking, (b) encourage uniform and tailored solutions, and (c) embrace and reject a hierarchy of evidence. What could possibly go wrong when they entertain contradictory objectives? For example:

  • Paul Cairney (2019) “The myth of ‘evidence based policymaking’ in a decentred state”, forthcoming in Public Policy and Administration (Special Issue: The Decentred State) (accepted version)
  • Paul Cairney (2019) ‘The UK government’s imaginative use of evidence to make policy’, British Politics, 14, 1, 1-22 (Open Access PDF)
  • Paul Cairney and Kathryn Oliver (2017) ‘Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy?’, Health Research Policy and Systems (HARPS), DOI: 10.1186/s12961-017-0192-x (PDF)
  • Paul Cairney (2017) “Evidence-based best practice is more political than it looks: a case study of the ‘Scottish Approach’”, Evidence and Policy, 13, 3, 499-515 (PDF)

 


Filed under 750 word policy analysis, Evidence Based Policymaking (EBPM), public policy