Tag Archives: analysis

What have we learned so far from the UK government’s COVID-19 policy?

This post first appeared on LSE British Politics and Policy (27.11.20) and is based on this article in British Politics.

Paul Cairney assesses government policy in the first half of 2020. He outlines the intense criticism of its response so far and encourages more systematic assessments grounded in policy research.

In March 2020, COVID-19 prompted policy change in the UK at a speed and scale only seen during wartime. According to the UK government, policy was informed heavily by science advice. Prime Minister Boris Johnson argued that, ‘At all stages, we have been guided by the science, and we will do the right thing at the right time’. Further, key scientific advisers such as Sir Patrick Vallance emphasised the need to gather evidence continuously to model the epidemic and identify key points at which to intervene, to reduce the size of the peak of population illness initially, then manage the spread of the virus over the longer term.

Both ministers and advisors emphasised the need for individual behavioural change, supplemented by government action, in a liberal democracy in which direct imposition is unusual and unsustainable. However, for its critics, the government experience has quickly become an exemplar of policy failure.

Initial criticisms include that ministers did not take COVID-19 seriously enough in relation to existing evidence, when its devastating effect was apparent in China in January and Italy from February; act as quickly as other countries to test for infection to limit its spread; or introduce swift-enough measures to close schools, businesses, and major social events. Subsequent criticisms highlight problems in securing personal protective equipment (PPE), testing capacity, and an effective test-trace-and-isolate system. Some suggest that the UK government was responding to the ‘wrong pandemic’, assuming that COVID-19 could be treated like influenza. Others blame ministers for not pursuing an elimination strategy to minimise its spread until a vaccine could be developed. Some criticise their over-reliance on models which underestimated the R (rate of transmission) and ‘doubling time’ of cases and contributed to a 2-week delay of lockdown. Many describe these problems and delays as the contributors to the UK’s internationally high number of excess deaths.
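To see why underestimating the doubling time mattered so much, here is a back-of-the-envelope sketch of exponential growth. The doubling times and the 14-day delay below are illustrative assumptions only, not the actual figures used in the UK models:

```python
def projected_cases(initial_cases, doubling_time_days, days):
    """Project case numbers under simple exponential growth:
    cases double every doubling_time_days."""
    return initial_cases * 2 ** (days / doubling_time_days)

# If cases double every 3 days rather than every 6, a 2-week delay
# costs roughly 25x growth instead of roughly 5x.
fast = projected_cases(1, 3, 14)   # ~25.4
slow = projected_cases(1, 6, 14)   # ~5.0
```

The point of the sketch is that a modest error in the estimated doubling time compounds rapidly, so a 2-week delay has very different consequences under each assumption.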

How can we hold ministers to account in a meaningful way?

I argue that these debates are often fruitless and too narrow because they do not involve systematic policy analysis, take into account what policymakers can actually do, or widen debate to consider whose lives matter to policymakers. Drawing on three policy analysis perspectives, I explore the questions that we should ask to hold ministers to account in a way that encourages meaningful learning from early experience.

These questions include:

Was the government’s definition of the problem appropriate?
Much analysis of UK government competence relates to specific deficiencies in preparation (such as shortages in PPE), immediate action (such as to discharge people from hospitals to care homes without testing them for COVID-19), and implementation (such as an imperfect test-trace-and-isolate system). The broader issue relates to its focus on intervening in late March to protect healthcare capacity during a peak of infection, rather than taking a quicker and more precautionary approach. This judgment relates largely to its definition of the policy problem which underpins every subsequent policy intervention.

Did the government select the right policy mix at the right time? Who benefits most from its choices?

Most debates focus on the ‘lock down or not?’ question without exploring fully the unequal impact of any action. The government initially relied on exhortation, based on voluntarism and an appeal to social responsibility. Initial policy inaction had unequal consequences on social groups, including people with underlying health conditions, black and ethnic minority populations more susceptible to mortality at work or discrimination by public services, care home residents, disabled people unable to receive services, non-UK citizens obliged to pay more to live and work while less able to access public funds, and populations (such as prisoners and drug users) that receive minimal public sympathy. Then, in March, its ‘stay at home’ requirement initiated a major new policy and different unequal impacts in relation to the income, employment, and wellbeing of different groups. These inequalities are lost in more general discussions of impacts on the whole population.

Did the UK government make the right choices on the trade-offs between values, and what impacts could the government have reasonably predicted?

Initially, the most high-profile value judgment related to freedom from state coercion to reduce infection versus freedom from the harm of infection caused by others. Then, values underpinned choices on the equitable distribution of measures to mitigate the economic and wellbeing consequences of lockdown. A tendency for the UK government to project centralised and ‘guided by the science’ policymaking has undermined public deliberation on these trade-offs between policies. The latter will be crucial to ongoing debates on the trade-offs associated with national and regional lockdowns.

Did the UK government combine good policy with good policymaking?

A problem like COVID-19 requires trial-and-error policymaking on a scale that seems incomparable to previous experiences. It requires further reflection on how to foster transparent and adaptive policymaking and widespread public ownership for unprecedented policy measures, in a political system characterised by (a) accountability focused incorrectly on strong central government control and (b) adversarial politics that is not conducive to consensus seeking and cooperation.

These additional perspectives and questions show that too-narrow questions – such as whether the UK government was ‘following the science’ – do not help us understand the longer-term development and wider consequences of UK COVID-19 policy. Indeed, such a narrow focus on science marginalises wider discussions of values and the populations that are most disadvantaged by government policy.



Filed under COVID-19, Evidence Based Policymaking (EBPM), POLU9UK, Public health, public policy, UK politics and policy

Policy Analysis in 750 words: William Dunn (2017) Public Policy Analysis

Please see the Policy Analysis in 750 words series overview before reading the summary. This book is a whopper, with almost 500 pages and 101 (excellent) discussions of methods, so 800 words over budget seems OK to me. If you disagree, just read every second word.  By the time you reach the cat hanging in there baby you are about 300 (150) words away from the end.

Dunn 2017 cover

William Dunn (2017) Public Policy Analysis 6th Ed. (Routledge)

‘Policy analysis is a process of multidisciplinary inquiry aiming at the creation, critical assessment, and communication of policy-relevant knowledge … to solve practical problems … Its practitioners are free to choose among a range of scientific methods, qualitative as well as quantitative, and philosophies of science, so long as these yield reliable knowledge’ (Dunn, 2017: 2-3).

Dunn (2017: 4) describes policy analysis as pragmatic and eclectic. It involves synthesising policy relevant (‘usable’) knowledge, and combining it with experience and ‘practical wisdom’, to help solve problems with analysis that people can trust.

This exercise is ‘descriptive’, to define problems, and ‘normative’, to decide how the world should be and how solutions get us there (as opposed to policy studies/ research seeking primarily to explain what happens).

Dunn contrasts the ‘art and craft’ of policy analysts with other practices, including:

  1. The idea of ‘best practice’ characterised by 5-step plans.
  • In practice, analysis is influenced by: the cognitive shortcuts that analysts use to gather information; the role they perform in an organisation; the time constraints and incentive structures in organisations and political systems; the expectations and standards of their profession; and, the need to work with teams consisting of many professions/ disciplines (2017: 15-6)
  • The cost (in terms of time and resources) of conducting multiple research and analytical methods is high, and highly constrained in political environments (2017: 17-8; compare with Lindblom)
  2. The too-narrow idea of evidence-based policymaking
  • The naïve attachment to ‘facts speak for themselves’ or ‘knowledge for its own sake’ undermines a researcher’s ability to adapt well to the evidence-demands of policymakers (2017: 68; compare with Why don’t policymakers listen to your evidence?).

To produce ‘policy-relevant knowledge’ requires us to ask five questions before (Qs1-3) and after (Qs4-5) policy intervention (2017: 5-7; 54-6):

  1. What is the policy problem to be solved?
  • For example, identify its severity, urgency, cause, and our ability to solve it.
  • Don’t define the wrong problem, such as by oversimplifying or defining it with insufficient knowledge.
  • Key aspects of problems include ‘interdependency’ (each problem is inseparable from a host of others, and all problems may be greater than the sum of their parts), ‘subjectivity’ and ‘artificiality’ (people define problems), ‘instability’ (problems change rather than being solved), and ‘hierarchy’ (which level or type of government is responsible) (2017: 70; 75).
  • Problems vary in terms of how many relevant policymakers are involved, how many solutions are on the agenda, the level of value conflict, and the unpredictability of outcomes (high levels suggest ‘wicked’ problems, and low levels ‘tame’) (2017: 75)
  • ‘Problem-structuring methods’ are crucial, to: compare ways to define or interpret a problem, and ward against making too many assumptions about its nature and cause; produce models of cause-and-effect; and make a problem seem solve-able, such as by placing boundaries on its coverage. These methods foster creativity, which is useful when issues seem new and ambiguous, or new solutions are in demand (2017: 54; 69; 77; 81-107).
  • Problem definition draws on evidence, but is primarily the exercise of power to reduce ambiguity through argumentation, such as when defining poverty as the fault of the poor, the elite, the government, or social structures (2017: 79; see Stone).
  2. What effect will each potential policy solution have?
  • Many ‘forecasting’ methods can help provide ‘plausible’ predictions about the future effects of current/ alternative policies (Chapter 4 contains a huge number of methods).
  • ‘Creativity, insight, and the use of tacit knowledge’ may also be helpful (2017: 55).
  • However, even the most-effective expert/ theory-based methods to extrapolate from the past are flawed, and it is important to communicate levels of uncertainty (2017: 118-23; see Spiegelhalter).
  3. Which solutions should we choose, and why?
  • ‘Prescription’ methods help provide a consistent way to compare each potential solution, in terms of its feasibility and predicted outcome, rather than decide too quickly that one is superior (2017: 55; 190-2; 220-42).
  • They help to combine (a) an estimate of each policy alternative’s outcome with (b) a normative assessment.
  • Normative assessments are based on values such as ‘equality, efficiency, security, democracy, enlightenment’ and beliefs about the preferable balance between state, communal, and market/ individual solutions (2017: 6; 205; see Weimer & Vining, Meltzer & Schwartz, and Stone on the meaning of these values).
  • For example, cost benefit analysis (CBA) is an established – but problematic – economics method based on finding one metric – such as a $ value – to predict and compare outcomes (2017: 209-17; compare Weimer & Vining, Meltzer & Schwartz, and Stone)
  • Cost effectiveness analysis uses a $ value for costs, but compared with other units of measurement for benefits (such as outputs per $) (2017: 217-9)
  • Although such methods help us combine information and values to compare choices, note the inescapable role of power to decide whose values (and which outcomes, affecting whom) matter (2017: 204)
  4. What were the policy outcomes?
  • ‘Monitoring’ methods help identify (say): levels of compliance with regulations, if resources and services reach ‘target groups’, if money is spent correctly (such as on clearly defined ‘inputs’ such as public sector wages), and if we can make a causal link between the policy inputs/ activities/ outputs and outcomes (2017: 56; 251-5)
  • Monitoring is crucial because it is so difficult to predict policy success, and unintended consequences are almost inevitable (2017: 250).
  • However, the data gathered are usually no more than proxy indicators of outcomes. Further, the choice of indicators reflects what is available, ‘particular social values’, and ‘the political biases of analysts’ (2017: 262)
  • The idea of ‘evidence based policy’ is linked strongly to the use of experiments and systematic review to identify causality (2017: 273-6; compare with trial-and-error learning in Gigerenzer, complexity theory, and Lindblom).
  5. Did the policy solution work as intended? Did it improve policy outcomes?
  • Although we frame policy interventions as ‘solutions’, few problems are ‘solved’. Instead, try to measure the outcomes and the contribution of your solution, and note that evaluations of success and ‘improvement’ are contested (2017: 57; 332-41).  
  • Policy evaluation is not an objective process in which we can separate facts from values.
  • Rather, values and beliefs are part of the criteria we use to gauge success (and even their meaning is contested – 2017: 322-32).
  • We can gather facts about the policy process, and the impacts of policy on people, but this information has little meaning until we decide whose experiences matter.
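As a rough illustration of the difference between cost benefit and cost effectiveness analysis (question 3 above), here is a minimal sketch in which the option names, costs, benefits, and outcome units are all invented for illustration:

```python
# Hypothetical policy options (all figures invented).
# Each tuple: (name, cost in £, monetised benefit in £, outcome units achieved)
options = [
    ("Option A", 100_000, 250_000, 40),
    ("Option B", 150_000, 300_000, 75),
]

for name, cost, benefit, units in options:
    net_benefit = benefit - cost   # cost benefit analysis: one monetary metric
    cost_per_unit = cost / units   # cost effectiveness: £ per unit of outcome
    print(f"{name}: net benefit £{net_benefit:,}; £{cost_per_unit:,.0f} per outcome unit")
```

In this invented example, CBA ranks the two options equally (each yields a net benefit of £150,000), while cost effectiveness analysis favours Option B (£2,000 rather than £2,500 per outcome unit): a small demonstration of how the choice of metric shapes the comparison, quite apart from whose values and outcomes the metric reflects.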

Overall, the idea of ‘ex ante’ (forecasting) policy analysis is a little misleading, since policymaking is continuous, and evaluations of past choices inform current choices.

Policy analysis methods are ‘interdependent’, and ‘knowledge transformations’ describes the impact of knowledge regarding one question on the other four (2017: 7-13; contrast with Meltzer & Schwartz, Thissen & Walker).

Developing arguments and communicating effectively

Dunn (2017: 19-21; 348-54; 392) argues that ‘policy argumentation’ and the ‘communication of policy-relevant knowledge’ are central to policymaking (see Chapter 9 and Appendices 1-4 for advice on how to write briefs, memos, and executive summaries and prepare oral testimony).

He identifies seven elements of a ‘policy argument’ (2017: 19-21; 348-54), including:

  • The claim itself, such as a description (size, cause) or evaluation (importance, urgency) of a problem, and prescription of a solution
  • The things that support it (including reasoning, knowledge, authority)
  • Incorporating the things that could undermine it (including any ‘qualifier’, the communication of uncertainty about current knowledge, and counter-arguments).

The key stages of communication (2017: 392-7; 405; 432) include:

  1. ‘Analysis’, focusing on ‘technical quality’ (of the information and methods used to gather it), meeting client expectations, challenging the ‘status quo’, albeit while dealing with ‘political and organizational constraints’ and suggesting something that can actually be done.
  2. ‘Documentation’, focusing on synthesising information from many sources, organising it into a coherent argument, translating from jargon or a technical language, simplifying, summarising, and producing user-friendly visuals.
  3. ‘Utilization’, by making sure that (a) communications are tailored to the audience (its size, existing knowledge of policy and methods, attitude to analysts, and openness to challenge), and (b) the process is ‘interactive’ to help analysts and their audiences learn from each other.




Policy analysis and policy theory: systems thinking, evidence based policymaking, and policy cycles

Dunn (2017: 31-40) situates this discussion within a brief history of policy analysis, which culminated in new ways to express old ambitions, such as to:

  1. Use ‘systems thinking’, to understand the interdependence between many elements in complex policymaking systems (see also socio-technical and socio-ecological systems).
  • Note the huge difference between (a) policy analysis discussions of ‘systems thinking’ built on the hope that if we can understand them we can direct them, and (b) policy theory discussions that emphasise ‘emergence’ in the absence of central control (and presence of multi-centric policymaking).
  • Also note that Dunn (2017: 73) describes policy problems – rather than policymaking – as complex systems. I’ll write another post (short, I promise) on the many different (and confusing) ways to use the language of complexity.
  2. Promote ‘evidence based policy’, as the new way to describe an old desire for ‘technocratic’ policymaking that accentuates scientific evidence and downplays politics and values (see also 2017: 60-4).

In that context, see Dunn’s (2017: 47-52) discussion of comprehensive versus bounded rationality:

  • Note the idea of ‘erotetic rationality’ in which people deal with their lack of knowledge of a complex world by giving up on the idea of certainty (accepting their ‘ignorance’), in favour of a continuous process of ‘questioning and answering’.
  • This approach is a pragmatic response to the lack of order and predictability of policymaking systems, which limits the effectiveness of a rigid attachment to ‘rational’ 5-step policy analyses (compare with Meltzer & Schwartz).

Dunn (2017: 41-7) also provides an unusually useful discussion of the policy cycle. Rather than seeing it as a mythical series of orderly stages, Dunn highlights:

  1. Lasswell’s original discussion of policymaking functions (or functional requirements of policy analysis, not actual stages to observe), including: ‘intelligence’ (gathering knowledge), ‘promotion’ (persuasion and argumentation while defining problems), ‘prescription’, ‘invocation’ and ‘application’ (to use authority to make sure that policy is made and carried out), and ‘appraisal’ (2017: 42-3).
  2. The constant interaction between all notional ‘stages’ rather than a linear process: attention to a policy problem fluctuates, actors propose and adopt solutions continuously, actors are making policy (and feeding back on its success) as they implement, evaluation (of policy success) is not a single-shot document, and previous policies set the agenda for new policy (2017: 44-5).

In that context, it is no surprise that the impact of a single policy analyst is usually minimal (2017: 57). Sorry to break it to you. Hang in there, baby.




Filed under 750 word policy analysis, public policy

Putting it all together: dissertation research question and research design (POLU9RM)

Writing a dissertation can be daunting. It is likely the longest piece of work you will plan as an undergraduate (10000 words plus bibliography) but, when you are done, it will not seem long enough.

On the one hand, it is a joyous exploration of research, in which you receive supervision but are in charge. On the other, you don’t want it to go horribly wrong.

It seems unlikely that reading my blog will spark joy, but I can at least give you some tips to avoid unnecessary problems and make your dissertation manageable.

Other advice (such as the reading in your module guide) is available, and I suggest you take it. Indeed, whenever I speak with colleagues about my approach to supervision, it seems relatively conservative and joyless.

On the other hand, why not play it safe with the dissertation then use all the time you’ve saved by seeking joy in a lovely meadow or a summer’s day?

  1. Ask the right research question.

Most undergraduate coursework involves answering your lecturer’s rather generic question. Your task is to produce something a bit different, with some of these characteristics:

  • You should find it interesting and want to answer it.
  • It should be something that you can answer.
  • It should be specific enough to help you manage your time well and answer it with the resources you have.

Compare with Halperin and Heath’s (p164) criteria, in which it should be important, ‘researchable’, and not yet ‘answered definitively’.

(also compare with Dunleavy’s call for full narrative titles)

For example, many projects that I supervise follow roughly the same format: what is policy, how much has it changed, and why?

We can then narrow it down in several ways by choosing a specific issue, political system, time period, and/ or aspect of policy change.

This narrowing can make the difference between:

(a) feeling the need to explain many theories in the literature review, versus

(b) limiting theory selection by focusing on a small number of political system dynamics.

Action point 1

Describe your initial question or theme with your supervisor, and work with them until you are both happy with the question.

  2. Write the abstract and the introduction first?

Many people suggest that your first main piece of work should be the literature review, for quite good reasons:

  • It allows you to gain enough initial knowledge to help you guide your research
  • It allows you to get writing – often a major stumbling block – and then edit later

I suggest that your first piece of work should be the abstract and introduction for these reasons:

  • Writing a half page abstract allows you to describe what your project adds up to.
  • It really helps you discuss your plans with your supervisor
  • Writing the introduction allows you to describe your research design in enough depth to reflect on its coherence and feasibility.
  • All going well, it will be only a small jump from your POLU9RM ‘research project design’ exercise.
  • It allows you to make sense of a quite general format for research publications (in many fields): theory, method, results.

Action point 2

Write the question/ title and abstract, share it with your likely supervisor, and talk about how coherent and feasible your plan looks.

  3. Identify the relevant theory or literature.

In some cases, the potentially relevant literature is vast if you have, for example:

  1. A too-general question about political parties or elections.
  • One good solution is to select a subfield like ‘pledge fulfilment’
  2. A too-general question about policy change.

Action point 3.

Make sure to connect your research question to a well-defined literature (and do a preliminary literature search to see what is out there)

  4. Identify your method to gather information.

Halperin and Heath’s chapter 7 goes into some depth about the principles of research design:

  • what data collection is appropriate
  • what we can deduce from certain data
  • how confident you can be about cause/effect in this case (internal validity)
  • and cause-and-effect more generally (external validity)
  • if someone could do your research and get the same results (reliability)

They also describe the types of design you are unlikely to be able to carry out yourself in a UG dissertation (at least 1 and 2), although you can often get the data to analyse:

  1. Experimental (like an RCT)
  2. Cross-sectional and longitudinal
  3. Comparative (for which I did a separate post)

Then they describe data gathering strategies that you might be tempted to do (subject to ethical clearance):

  • Surveys
  • Interviews
  • Focus groups
  • Ethnographic
  • Discourse analysis

In the lecture, I will put on my dour face and warn you against most of these methods, for reasons such as:

  • Doing a proper survey takes a lot of time and resources, someone has likely already done a better one, and it would be a shame not to find it
  • You can often find things in the public record without interviewing someone (and maybe they will only repeat what is out there)
  • The ethical clearance will be a major issue with ethnographic (and other) methods

I won’t try to put you off entirely. Rather, I will encourage you to ask yourself:

  • Why are you choosing this method?
  1. Does it relate clearly to your research question?
  2. Or, have you begun with the most interesting sounding method?
  3. Or, do you have some sort of connection that gets you access, which seems a shame not to use?
  • Are you prepared to do a literature review on your chosen method?
  • What do you realistically expect to get from your method?
  • What will you do if it goes wrong?

Action point 4

Discuss your choice of data collection with your supervisor.

  5. Think about how you will analyse and interpret the results.

This part tends to make the difference between a very good and an excellent dissertation.

Put most simply, description involves summarising things. Analysis is about telling the reader what the results mean. For example, you might:

  • Evaluate the size of the results according to your expectations. Does a survey result seem unusual?
  • Describe how much one should rely on the results. Does the result seem important after taking into account a margin of error?
  • Describe the wider context. Does the result mark a change over time, or seem different from another country?
  • Relate a case study result to your literature review. Is your case unusual, or as expected?
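To make the margin-of-error point concrete, here is the standard approximation for a survey proportion. It is only a sketch: it assumes simple random sampling, which real surveys rarely achieve exactly, and the 52%/1,000 figures are invented:

```python
import math

def margin_of_error(proportion, sample_size, z=1.96):
    """Approximate 95% margin of error for a survey proportion,
    assuming simple random sampling."""
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# A poll of 1,000 people finding 52% support carries a margin of
# roughly +/- 3.1 points, so it cannot distinguish 52% from 50%.
moe = margin_of_error(0.52, 1000)
```

A result that looks striking in a summary table may therefore be indistinguishable from no change at all once you take the margin of error into account, which is exactly the kind of observation that turns description into analysis.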

Action point 5

Clarify the difference between summary and analysis

  6. Be clear about the conclusion.

Don’t just do the dissertation equivalent of saying ‘cheerio’ (or, my favourite thing, leaving without saying cheerio).

The conclusion differs from:

  • The introduction, because you should use it to summarise your question and approach (perhaps quite briefly) and relate it in some depth to the results.
  • The analysis of results, because you relate the results much more clearly to your overall project.

Don’t think of it as saying: ‘as I have said before …’

Think of it as saying: ‘here is what it all adds up to …’

  7. The end.

Remember to add your bibliography and ask yourself if you need an appendix for your data (which does not count towards the word count).

POLU9RM action points

PS some of my supervisees write policy analysis reports, which differ somewhat from regular dissertations. If you are keen, please see me and/ or read more here.









Filed under Research design