This issue was initially a prominent feature of UK government rhetoric in March, in which the idea of ministers ‘following the science’ (or the advice of advisers and bodies such as the Royal Colleges – Hancock, 17.4.20: q312) could be used to project a certain form of authority and control (see Weible et al).
It prompted regular debate on the extent to which scientific advisory bodies were subject to group-think and drawn from too-narrow pools of expertise (see, for example, Dingwall, Today programme, 10.6.20, from 8.38am), to which Vallance (17.3.20: q96) responded:
‘If you thought SAGE and the way SAGE works was a cosy consensus of agreeing scientists, you would be very mistaken. It is a lively, robust discussion, with multiple inputs. We do not try to get everybody saying exactly the same thing. The idea is to look at the evidence and come up with the answers as best we can. There are sub-groups that work and feed into SAGE. The membership of SAGE changes, depending on what we are discussing. It is not as though it is the same group of people who always discuss all the topics; there are members who come for specific items’.
Then, when things began to go very wrong, commentators speculated about the extent to which ministers would blame their advisers for their policy and its timing. The latter problem became a regular feature of oral evidence. For example:
- Vallance (5.5.20: 392-6) states that (a) ‘SAGE does not make decisions. SAGE gives advice; it is an advisory body and Ministers of course have to make decisions’, and (b) they need some confidentiality to make sure that ministers get the information to make choices first (‘to be allowed time to make those decisions’).
- Vallance (5.5.20: q406) is emphatic that scientists give advice rather than make policy: ‘we give science advice and then Ministers have to make their decisions. All I can say is that the advice that we have given has been heard and has been taken by the Government. Clearly, what we do not give advice on is absolutely precise policy decisions or absolute timings on things. Those are decisions that Ministers must take on the basis of the science. The correct way of saying it is that the decisions are informed by science. They are not led by science, as you said in opening the question’.
- Vallance (5.5.20: q407) describes how that advice may be presented when the scientists do not agree: ‘… our output is very much in the form of options, in the form of uncertainty and in the form of what could be done and what the potential consequences might be, not, “Here is the answer. Get on and do this.” That is not how it works.’
The UK’s nascent blame-game problem makes Costello’s (17.4.20: q298) suggestion of ‘a no-blame audit’ (‘where were the system errors that led us to have probably the highest death rates in Europe?’), intended to inform planning for a second wave, seem unrealistic. Open debate may be common in some scientific conferences (albeit not the ones I attend), but such learning is competitive and contested in adversarial political systems (see Dunlop, and Dunlop & Radaelli). I think this limitation helps explain Vallance’s (5.5.20: q390) reluctance to reflect openly on what he would have done differently had he had better data on the doubling time of the virus in March (see also Harries, 5.5.20: q414-7 on excess deaths).
- The need to ramp up testing (for many purposes)
- The inadequate supply of personal protective equipment (PPE)
- Defining the policy problem: ‘herd immunity’, long term management, and the containability of COVID-19
- Uncertainty and hesitancy during initial UK coronavirus responses
- Confusion about the language of intervention and stages of intervention
- The relationship between science, science advice, and policy
- Lower profile changes to policy and practice
- Race, ethnicity, and the social determinants of health