Could policy theories help to understand and facilitate the pursuit of equity (or reduction of unfair inequalities)?
We are producing a series of literature reviews to help answer that question, beginning with the study of equity policy and policymaking in health, education, and gender research.
Each field has a broadly similar focus. Most equity researchers challenge ‘neoliberal’ approaches to policy, which minimise state action in favour of individual responsibility and market forces. They seek ‘social justice’ approaches instead, favouring far greater state intervention to address the social and economic causes of unfair inequalities via redistributive or regulatory measures. They also seek policymaking reforms to reflect the fact that most determinants of inequalities are not contained to one policy sector and cannot be solved in policy ‘silos’. Rather, equity policy initiatives should be mainstreamed via collaboration across (and outside of) government. Each field also projects a profound sense of disenchantment with limited progress, including a tendency to describe a too-large gap between aspirations and actual policy outcomes. Researchers describe high certainty about what needs to happen, but low confidence that equity advocates have the means to achieve it (or to persuade powerful politicians to change course).
Policy theories could offer some practical insights for equity research, but they do not always offer the lessons that some advocates seek. In particular, health equity researchers seek to translate insights on policy processes into a playbook for action, such as to frame policy problems to generate more attention to inequalities, secure high-level commitment to radical change, and improve the coherence of cross-cutting policy measures. Yet, policy theories are more likely to identify the dominance of unhelpful policy frames, the rarity of radical change, and the strong rationale for uncoordinated policymaking across a large number of venues. Rather than fostering technical fixes with a playbook, they encourage more engagement with the inescapable dilemmas and trade-offs inherent to policy choice. This focus on contestation (such as when defining and addressing policy problems) is more of a feature of education and gender equity research.
While we ask what policy theories have to offer other disciplines, in fact the most useful lessons emerge from cross-disciplinary insights. They highlight two very different approaches to transformational political change. One offers the attractive but misleading option of radical change through non-radical action, by mainstreaming equity initiatives into current arrangements and using a toolbox to make continuous progress. Yet, each review highlights a tendency for radical aims to be co-opted and often used to bolster the rules and practices that protect the status quo. The other offers radical change through overtly political action, fostering continuous contestation to keep the issue high on the policy agenda and challenge co-option. There is no clear step-by-step playbook for this option, since political action in complex policymaking systems is necessarily uncertain and often unrewarding. Still, insights from policy theories and equity research show that grappling with these challenges is inescapable.
Ultimately, we conclude that advocates of profound social transformation are wasting each other’s time if they seek short-cuts and technical fixes to enduring political problems. Supporters of policy equity should be cautious about any attempt to turn a transformational political project into a technical process containing a ‘toolbox’ or ‘playbook’.
You can read the original research in Policy & Politics:
My contribution to this interdisciplinary academic-practitioner discussion is to present insights from political science and policy process research, which required me to define some terms (background) before identifying three cautionary messages.
However, note the verb/noun distinction, and common architectural metaphor, to distinguish between the (a) act of design, and (b) the output (e.g. the blueprints).
In terms of the outputs, tools can be defined narrowly as policy instruments – including tax/spending, regulations, staff and other resources for delivery, information sharing, ‘nudging’, etc. – or more widely to include the processes involved in their formulation (such as participatory and deliberative methods). Therefore, we could be describing:
A highly centralized process, involving very few people, to produce the equivalent of a blueprint.
A decentralized, and perhaps uncoordinated, process involving many people, built on the principle that to seek a blueprint would be to miss the point of participation and deliberation.
Policymaking research tends to focus on
(1) measuring policy change with reference to the ‘policy mix’ of these tools/instruments, and generally showing that most policy change is minor (and some is major) (link1, link2, link3, link4), and/or
(2) how to understand the complex policymaking systems or environments in which policy design processes take place.
These studies are the source of my messages of doom.
Three cautionary messages about new policy design
There is a major gap between the act of policy design and actual policies and policy processes. This issue led to the decline of old policy design studies in the 1980s.
While ‘new policy design’ scholars seek to reinvigorate the field, the old issues serve as a cautionary tale, reminding us that (1) policy design is not new, and (2) its decline did not relate to the lack of sophisticated skills or insights among policy designers.
In other words, these old problems will not simply be solved by modern scientific, methodological, or policy design advances. Rather, I encourage policy designers to pay particular attention to:
1. The gap between functional requirements and real world policymaking.
Policy analysts and designers often focus on what they need, or require, to get their job done or produce the outcomes they seek.
Policy process researchers identify the major, inevitable, gaps between those requirements and actual policy processes (to the extent that the link between design and policy is often difficult to identify).
2. The strong rationale for the policy processes that undermine policy design.
Policy processes – and their contribution to policy mixes – may seem incoherent from a design perspective. However, they make sense to the participants involved.
Some relate to choice, including to share responsibility for instruments across many levels or types of government (without focusing on how those responsibilities will connect or be coordinated).
Some result from necessity, to delegate responsibility to many policy communities spread across government, each with their own ways to define and address problems (without the ability to know how those responsibilities will be connected).
3. The policy analysis and design dilemmas that cannot be solved by design methods alone.
When seen from the ‘top down’, design problems often relate to the perceived lack of delivery or follow-through in relation to agreed high level design outputs (great design, poor delivery).
When seen from the ‘bottom up’, they represent legitimate ways to incorporate local stakeholder and citizen perspectives. This process will inevitably produce a gap between different sources and outputs of design, making it difficult to separate poor delivery (bad?) from deviation (good?).
Such dynamics are solved via political choice rather than design processes and techniques.
You can hear my presentation below (it took a while to get going because I wasn’t sure who could hear me):
Notes on the workshop discussion
The workshop discussion prompted us initially to consider how many different ways people would define policy design. The range of responses included seeing policy design as:
a specific process with specific tools to produce a well-defined output (applied to specific areas conducive to design methods)
a more general philosophy or way of thinking about things like policy issues (compare with systems thinking)
a means to encourage experimentation (such as to produce a prototype policy instrument, use it, and reflect or learn about its impact) or change completely how people think about an issue
the production of a policy solution, or one part of a large policy mix
a niche activity in one unit of government, or something mainstreamed across governments
something done in government, or inside and outside of government
producing something new (like writing on a blank sheet of paper), adding to a pile of solutions, or redesigning what exists
primarily a means to empower people to tell their story, or as a means to improve policy advocacy (as in discussions of narrative/ storytelling)
something done with authoritative policymakers like government ministers (in other words, people with the power to make policy changes after they participate in design processes) or given to them (in other words, the same people but as the audience for the outcomes of design)
These definitions matter since they have very different implications for policy and practice. Take, for example, the link – made by Professor Liz Richardson – between policy design and the idea of evidence-based policymaking, to consider two very different scenarios:
A minister is directly involved in policy design processes. They use design thinking to revisit how they think about a policy problem (and target populations), seek to foster participation and deliberation, and use that process – perhaps continuously – to consider how to reconcile very different sources of evidence (including, say, new data from randomised controlled trials and powerful stories from citizens, stakeholders, service users). I reckon that this kind of scenario would be in the minds of people who describe policy design optimistically.
A minister is the intended audience of a report on the outcomes of policy design. You assume that their thoughts on a policy problem are well-established. There is no obvious way for them to reconcile different sources of policy-relevant evidence. Crucially, the fruits of your efforts have made a profound impact on the people involved but, for the minister, the outcome is just one of too-many sources of information (likely produced too soon before or after they want to consider the issue).
The second scenario is closer to the process that I describe in the main post, although policy studies would warn against seeing someone like a government minister as authoritative in the sense that they reside in the centre of government. Rather, studies of multi-centric policymaking remind us that there are many possible centres spread across political systems. If so, policy design – according to approaches like the IAD – is about ways to envisage a much bigger context in which design success depends on the participation and agreement of a large number of influential actors (who have limited or no ability to oblige others to cooperate).
By James Nicholls and Paul Cairney, for the University of Stirling MPH and MPP programmes.
There are strong links between the study of public health and public policy. For example, public health scholars often draw on policy theories to help explain (often low amounts of) policy change to foster population health or reduce health inequalities. Studies include a general focus on public health strategies (such as HiAP) or specific policy instruments (such as a ban on smoking in public places). While public health scholars may seek to evaluate or influence policy, policy theories tend to focus on explaining processes and outcomes.
To demonstrate these links, we present:
A long-read blog post to (a) use an initial description of a key alcohol policy instrument (minimum unit pricing, adopted by the Scottish Government but not the UK Government) to (b) describe the application of policy concepts and theories and reflect on the empirical and practical implications. We then added some examples of further reading.
A 45-minute podcast to describe and explain these developments (click below or scroll to the end)
Minimum Unit Pricing in Scotland: background and development
Minimum Unit Pricing for alcohol was introduced in Scotland in 2018. In 2012, the UK Government had also announced plans to introduce MUP, but within a year dropped the policy following intense industry pressure. What do these two journeys tell us about policy processes?
When MUP was first proposed by Scottish Health Action on Alcohol Problems in 2007, it was a novel policy idea. Public health advocates had long argued that raising the price of alcohol could help tackle harmful consumption. However, conventional tax increases were not always passed on to consumers, so would not necessarily raise prices in the shops (and the Scottish Government did not have such taxation powers). MUP appeared to present a neat solution to this problem. It quickly became a prominent policy goal of public health advocates in Scotland and across the UK, while gaining increasing attention, and support, from the global alcohol policy community.
In 2008, the UK Minister for Health, Dawn Primarolo, had commissioned researchers at the University of Sheffield to look into links between alcohol pricing and harm. The Sheffield team developed economic models to analyse the predicted impact of different pricing systems. MUP was included, and the ‘Sheffield Model’ would go on to play a decisive role in developing the case for the policy.
What problem would MUP help to solve?
Descriptions of the policy problem often differed in relation to each government. In the mid-2000s, alcohol harm had become a political problem for the UK government. Increasing consumption, alongside changes to the night-time economy, had started to gain widespread media attention. In 2004, just as a major liberalisation of the licensing system was underway in England, news stories began documenting the apparent horrors of ‘Binge Britain’: focusing on public drunkenness and disorder, but also growing rates of liver disease and alcohol-related hospital admissions.
Politicians began to respond, and the issue became especially useful for the Conservatives who were developing a narrative that Britain was ‘broken’ under New Labour. Labour’s liberalising reforms of alcohol licensing could conveniently be linked to this political framing. The newly formed Alcohol Health Alliance, a coalition set up under the leadership of Professor Sir Ian Gilmore, was also putting pressure on the UK Government to introduce stricter controls. In Scotland, while much of the debate on alcohol focused on crime and disorder, Scottish advocates were focused on framing the problem as one of public health. Emerging evidence showed that Scotland had dramatically higher rates of alcohol-related illness and death than the rest of Europe – a situation strikingly captured in a chart published in the Lancet.
The notion that Scotland faced an especially acute public health problem with alcohol was supported by key figures in the increasingly powerful Scottish National Party (in government since 2007), which, around this time, had developed working relationships with Alcohol Focus Scotland and other advocacy groups.
What happened next?
The SNP first announced that it would support MUP in 2008, but it did not implement this change until 2018. There are two key reasons for the delay:
Its minority government did not achieve enough parliamentary support to pass legislation. It then formed a majority government in 2011, and its legislation to bring MUP into law was passed in 2012.
Court action took years to resolve. The alcohol industry, which is historically powerful in Scotland, was vehemently opposed. A coalition of industry bodies, led by the Scotch Whisky Association, took the Scottish Government to court in an attempt to prove the policy was illegal. Ultimately, this process would take years, and conclude in rulings by the European Court of Justice (2016), Scottish Court of Session Inner House (2016), and UK Supreme Court (2017) which found in favour of the Scottish Government.
The industry’s public campaign was accompanied by intense behind-the-scenes lobbying, aided by the fact that the leadership of industry groups had close ties to Government and that the All-Party Parliamentary Group on Beer had the largest membership of any APPG in Westminster. The industry campaign made much of the fact that there was little evidence to suggest MUP would reduce crime, but also argued strongly that the modelling produced by Sheffield University was not valid evidence in the first place. A year after adopting the policy, the UK Government announced a U-turn and MUP was dropped.
How can we use policy theories and concepts to interpret these dynamics?
Here are some examples of using policy theories and concepts as a lens to interpret these developments.
1. What was the impact of evidence in the case for policy change?
Policymakers do not control the policy process.
There is no centralised and orderly policy cycle. Rather, policymaking involves policymakers and influencers spread across many authoritative ‘venues’, with each venue having its own rules, networks, and ways of thinking.
In that context, policy theories identify the importance of contestation between policy actors, and describe the development of policy problems, and how evidence fits in. Approaches include:
The acceptability of a policy solution will often depend on how the problem is described. Policymakers use evidence to reduce uncertainty, or a lack of information around problems and how to solve them. However, politics is about exercising power to reduce ambiguity, or the ability to interpret the same problem in different ways.
By suggesting MUP would solve problems around crime, the UK Government made it easier for opponents to claim the policy wasn’t evidence-based. In Scotland, policymakers and advocates focused on health, where the evidence was stronger. In addition, the SNP’s approach fitted within a wider political independence frame, in which more autonomy meant more innovation.
Policy actors tell stories to appeal to the beliefs (or exploit the cognitive shortcuts) of their audiences. A narrative contains a setting (the policy problem), characters (such as the villain who caused it, or the victim of its effects), plot (e.g. a heroic journey to solve the problem), and moral (e.g. the solution to the problem).
Supporters of MUP tended to tell the story that there was an urgent public health crisis, caused largely by the alcohol industry, and with many victims, but that higher alcohol prices pointed to one way out of this hole. Meanwhile opponents told the story of an overbearing ‘nanny state’, whose victims – ordinary, moderate drinkers – should be left alone by government.
Policymakers make strategic and emotional choices, to identify ‘good’ populations deserving of government help, and ‘bad’ populations deserving punishment or little help. These judgements inform policy design (government policies and practices) and provide positive or dispiriting signals to citizens.
For example, opponents of MUP rejected the idea that alcohol harms existed throughout the population. They focused instead on dividing the majority of moderate drinkers from an irresponsible minority of binge drinkers, suggesting that MUP would harm the former more than help the latter.
This competition to frame policy problems takes place in political systems that contain many ‘centres’, or venues for authoritative choice. Some diffusion of power is by choice, such as to share responsibilities with devolved and local governments. Some is by necessity, since policymakers can only pay attention to a small proportion of their responsibilities, and delegate the rest to unelected actors such as civil servants and public bodies (who often rely on interest groups to process policy).
For example, ‘alcohol policy’ is really a collection of instruments made or influenced by many bodies, including (until Brexit) European organisations deciding on the legality of MUP, UK and Scottish governments, as well as local governments responsible for alcohol licensing. In Scotland, this delegation of powers worked in favour of MUP, since Alcohol Focus Scotland were funded by the Scottish Government to help deliver some of its alcohol policy goals, giving them more privileged access than would otherwise have been the case.
The role of evidence in MUP
In the case of MUP, similar evidence was available and communicated to policymakers, but used and interpreted differently, in different centres, by the politicians who favoured or opposed MUP.
In Scotland, the promotion, use of, and receptivity to research evidence – on the size of the problem and potential benefit of a new solution – played a key role in increasing political momentum. The forms of evidence were complementary. The ‘hard’ science on a potentially effective solution seemed authoritative (although few understood the details), and was preceded by easily communicated and digested evidence on a concrete problem:
There was compelling evidence of a public health problem put forward by a well-organised ‘advocacy coalition’ (see below) which focused clearly on health harms. In government, there was strong attention to this evidence, such as the Lancet chart which one civil servant described as ‘look[ing] like the north face of the Eiger’. There were also influential ‘champions’ in Government willing to frame action as supporting the national wellbeing.
Reports from Sheffield University appeared to provide robust evidence that MUP could reduce harm, and advocacy was supported by research from Canada which suggested that similar policies there had been successful.
Advocacy in England was also well-organised and influential, but was dealing with a larger – and less supportive – Government machine, and the dominant political frame for alcohol harms remained crime and disorder rather than health.
Debates on MUP modelling exemplify these differences in evidence communication and use. Those in favour appealed to econometric models, but sometimes simplified their complexity and blurred the distinction between projected outcomes and proof of efficacy. Opponents went the other way and dismissed the modelling as mere speculation. What is striking is the extent to which an incredibly complex, and often poorly understood, set of econometric models – and the ‘Sheffield Model’ in particular – came to occupy centre stage in a national policy debate. Katikireddi and colleagues talked about this as an example of evidence as rhetoric:
Support became less about engagement with the econometric modelling, and more an indicator of general concern about alcohol harm and the power of the industry.
Scepticism was often viewed as the ‘industry position’, and an indicator of scepticism towards public health policy more broadly.
2. Who influences policy change?
Advocacy plays a key role in alcohol policy, with industry and other actors competing with public health groups to define and solve alcohol policy problems. It prompts our attention to policy networks, or the actors who make and influence policy.
People engage in politics to turn their beliefs into policy. They form advocacy coalitions with people who share their beliefs, and compete with other coalitions. The action takes place within a subsystem devoted to a policy issue, and a wider policymaking process that provides constraints and opportunities to coalitions. Beliefs about how to interpret policy problems act as a glue to bind actors together within coalitions. If the policy issue is technical and humdrum, there may be room for routine cooperation. If the issue is highly charged, then people romanticise their own cause and demonise their opponents.
MUP became a highly charged focus of contestation between a coalition of public health advocates, who saw themselves as fighting for the wellbeing of the wider community (and who believed fundamentally that government had a duty to promote population health), and a coalition of industry actors who were defending their commercial interests, while depicting public health policies as illiberal and unfair.
3. Was there a ‘window of opportunity’ for MUP?
Policy theories – including Punctuated Equilibrium Theory – describe a tendency for policy change to be minor in most cases and major in few. Paradigmatic policy change is rare and may take place over decades, as in the case of UK tobacco control where many different policy instruments changed from the 1980s. Therefore, a major change in one instrument could represent a sea-change overall or a modest adjustment to the overall approach.
Multiple Streams Analysis is a popular way to describe the adoption of a new policy solution such as MUP. It describes disorderly policymaking, in which attention to a policy problem does not produce the inevitable development, implementation, and evaluation of solutions. Rather, these ‘stages’ should be seen as separate ‘streams’. A ‘window of opportunity’ for policy change occurs when the three ‘streams’ come together:
Problem stream. There is high attention to one way to define a policy problem.
Policy stream. A technically and politically feasible solution already exists (and is often pushed by a ‘policy entrepreneur’ with the resources and networks to exploit opportunities).
Politics stream. Policymakers have the motive and opportunity to choose that solution.
However, these windows open and close, often quickly, and often without producing policy change.
This approach can help to interpret different developments in relation to Scottish and UK governments:
The Scottish Government paid high attention to public health crises, including the role of high alcohol consumption.
The UK government paid often-high attention to alcohol’s role in crime and anti-social behaviour (‘Binge Britain’ and ‘Broken Britain’).
In Scotland, MUP connected strongly to the dominant framing, offering a technically feasible solution that became politically feasible in 2011.
UK Prime Minister David Cameron made a surprising bid to adopt MUP in 2012, but ministers were divided on its technical feasibility (to address the problem they described) and its political feasibility seemed to be more about distracting from other crises than public health.
The Scottish Government was highly motivated to adopt MUP. MUP was a flagship policy for the SNP; an opportunity to prove its independent credentials, and to be seen to address a national public health problem. It had the opportunity from 2011, then faced interest group opposition that delayed implementation.
The Coalition Government was ideologically more committed to defending commercial interests, and to framing alcohol harms as one of individual (rather than corporate) responsibility. It took less than a year for the alcohol industry to successfully push for a UK government U-turn.
As a result, MUP became policy (eventually) in Scotland, but the window closed (without resolution) in England.
Paul Cairney and Donley Studlar (2014) ‘Public Health Policy in the United Kingdom: After the War on Tobacco, Is a War on Alcohol Brewing?’ World Medical and Health Policy, 6, 3, 308-323. PDF
Niamh Fitzgerald and Paul Cairney (2022) ‘National objectives, local policymaking: public health efforts to translate national legislation into local policy in Scottish alcohol licensing’, Evidence and Policy, https://doi.org/10.1332/174426421X16397418342227. PDF
You can listen directly here:
You can also listen on Spotify or iTunes via Anchor
It would be a mistake to equate public policy with whatever a government says it is doing (or wants to do).
The most obvious, but often unhelpful, explanation for this statement is that politicians are not sincere when making policy promises, or not competent enough to see them through.
This focus on sincerity and ‘political will’ can be useful, but only scratches the surface of explanation.
The bigger source of explanation comes from the routine, pervasive, and inevitable contradictions of policy and policymaking.
The basic idea of contradictory aims and necessary trade-offs
I want to eat crisps and lose weight, but making a commitment to both does not achieve both. Rather, I cycle between each aim, often unpredictably, producing what might appear to be an inconsistent approach to my wellbeing.
These problems only get worse when more people and aims are involved. Indeed, one general description of ‘politics’ is the attempt to find ways to resolve the many different preferences of many people in the same society. These preferences are intransitive, prompting policy actors to try to manipulate choice situations, or produce effective stories or narratives, to encourage one choice over another. Even if successful in one case, the overall impact of political action is not consistent.
The inevitable result of politics is that policymakers want to prioritise many policy aims and the aims that undermine them. When they pursue many contradictory aims, they have to make trade-offs and prioritise some aims over others. Sometimes, this choice is explicit. Sometimes, you have to work out what a government’s real priorities are when they seem sincerely committed to so many things. If so, we should not deduce government policy overall from specific statements and policies.
This basic idea plays out in many different ways, including:
Policymakers need to address many contradictory demands
Contradictions are inevitable when policymakers seek to offer policy benefits to many different groups for different reasons. Some benefits are largely rhetorical, others more substantive.
Ambiguity allows policy actors to downplay contradictions (temporarily) when generating support.
Contradictions are masked by ambiguity, such as when many different actors support the same vague ambition for very different reasons.
Policy silos contribute to contradictory action
Contradictions are exacerbated by inevitable and pervasive policy silos or ‘communities’ that seem immune to ‘holistic’ government. They multiply when governments have many departments pursuing many different aims. There may be a vague hope for joined-up policy, but a strong rationale for policy communities to specialise and become insulated.
The power to make policies – or create or amend policy instruments – is spread across many different venues of authority. If so, a key aim – stated often – is to find ways to cooperate to avoid contradictory policies and practices. The logical consequence of this distribution of powers, and the continuous search for meaningful cooperation, is that such contradictions are routine features, not bugs, of political systems.
Some of these outcomes simply emerge from routine policy delivery, when the actors carrying out policy have different ideas than the actors sending them instructions. Or, implementing actors do not have the resources or clarity to do what they think they are being told.
Examples of contradictions in policy and policymaking
Most governments are committed rhetorically (and often sincerely) to the public health agenda ‘Health in All Policies’, but also to the social and economic policies that undermine it. The same goes for the more general aim of ‘prevention’.
In these kinds of cases, it is tempting to conclude that governments make promises energetically as a substitute for – not a signal of – action.
Levin et al note that the governments seeking to reduce climate change are also responsible for its inevitability.
The US and EU have subsidised the production and/or encouraged the sale of tobacco (to foster economic aims) at the same time as seeking tobacco control and discouraging smoking (to foster public health aims).
I apologise for every word in this post, and the capitalised 5-letter words in particular.
WORDLE is a SIMPLE word game (in US English). The aim is to identify a 5-letter word correctly in 6 guesses or fewer. Each guess has to be a real word, and you receive informative feedback each time: GREEN means you have the letter RIGHT and in the right position; yellow means the right letter in the wrong position; grey MEANS the letter does not appear in the word.
One strategy involves trial-and-error learning via 3 or 4 simple steps:
1. Use your initial knowledge of the English language to inform initial guesses, such as guessing a word with common vowels (I go for E and A) and consonants (e.g. S, T).
2. Learn from feedback on your correct and incorrect estimates.
3. Use your new information and deduction (e.g. about which combinations work when you exclude many options) to make informed guesses.
4. Do so while avoiding unhelpful heuristics, such as assuming that each letter will only appear once (or that the spelling is in UK English).
At least, that is how I play it. I get it in 3 just over half the time, and 4 or 5 in the rest. I make 2-4 ‘errors’ then succeed. In the context of the game’s rules, that is consistent success, RIGHT?
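That trial-and-error loop can be sketched as code. This is a minimal, hypothetical sketch (the function names and word list are my own, not part of the post): score a guess against the answer using the green/yellow/grey rules described above, then prune the remaining candidate words to those consistent with the feedback before guessing again.

```python
from collections import Counter

def wordle_feedback(guess: str, answer: str) -> list[str]:
    """Score a guess against the answer: 'green' = right letter, right
    position; 'yellow' = right letter, wrong position; 'grey' = no
    unmatched copy of that letter remains in the answer."""
    feedback = ["grey"] * len(guess)
    # Count the answer's letters at non-matching positions, so that
    # duplicate letters in the guess are not over-credited as yellow.
    remaining = Counter(a for g, a in zip(guess, answer) if g != a)
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            feedback[i] = "green"
        elif remaining[g] > 0:
            feedback[i] = "yellow"
            remaining[g] -= 1
    return feedback

# Steps 2-3: learn from feedback, then prune the candidate list to
# words that would have produced the same feedback, and guess again.
words = ["stale", "slate", "least", "steal"]
fb = wordle_feedback("slate", "stale")
words = [w for w in words if wordle_feedback("slate", w) == fb]
```

The `Counter` of unmatched letters is what handles the heuristic trap in step 4: a repeated letter in the guess is only marked yellow while an unmatched copy remains in the answer.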
[insert crowbar GIF to try to get away with the segue]
That is the spirit of the idea of trial-and-error learning.
It is informed by previous knowledge, but also a recognition of the benefits of trying things out to generate new information, update your knowledge and skills (the definition of learning), and try again.
A positive normative account of this approach can be found in classic discussions of incrementalism and modern discussions of policymaking informed by complex systems insights:
‘To deal with uncertainty and change, encourage trial-and-error projects, or pilots, that can provide lessons, or be adopted or rejected, relatively quickly’.
Advocates of such approaches also suggest that we change how we describe them, replacing the language of policy failure with ERROR, at least when part of a process of continuous policy learning in the face of uncertainty.
At the heart of such advice are two guiding principles:
1. Recognise the limits to centralism when giving policy advice. There is no powerful centre of government, able to carry out all of its aims successfully, so do not build policy advice on that assumption.
2. Recognise the limits to our knowledge. Policymakers must make and learn from choices in the face of uncertainty, so do not kid yourself that one piece of analysis and action will do.
Much like the first two WORDLE guesses, your existing knowledge alone does not tell you how to proceed (regardless of the number of times that people repeat the slogan of ‘evidence-based policymaking’).
Political problems with trial and error
The main political problem with this approach is that many political systems – including adversarial and/or Westminster systems – are not conducive to learning from error. You may think that adapting continuously to uncertainty is crucial, but also be wary of recommending it to:
1. Politicians who will be held to account for failure. A government’s apparent failure to deliver on promises represents a resource for its opposition.
2. Organisations subject to government targets. Failure to meet strict statutory requirements is not seen as a learning experience.
More generally, your audience may face criticism whenever errors are associated with negative policy consequences (with COVID-19 policy representing a vivid, extreme example).
These limitations produce a major dilemma in policy analysis, in which you believe that you will not learn how to make good policy without trial-and-error but recognise that this approach will not be politically feasible. In many political systems, policymakers need to pretend to their audience that they know what the problem is and that they have the knowledge and power to solve it. You may not be too popular if you encourage open-minded experimentation. This limitation should not warn you against trial-and-error recommendations completely, but rather remind you to relate good-looking ideas to your policymaking context.
Please note that I missed my train stop while writing this post, despite many opportunities to learn from the other times it happened.
Policy studies and policy analysis guidebooks identify the importance of feasible policy solutions:
Technical feasibility: will this solution work as intended if implemented?
Political feasibility: will it be acceptable to enough powerful people?
For example, Kingdon treats feasibility as one of three conditions for major policy change during a ‘window of opportunity’: (1) there is high attention to the policy problem, (2) a feasible solution already exists, and (3) key policymakers have the motive and opportunity to select it.
Guidebooks relate this requirement initially to your policymaker client: what solutions will they rule out, to the extent that they are not even worth researching as options (at least for the short term)?
Further, this assessment relates to types of policy ‘tool’ or ‘instrument’: one simple calculation is that ‘redistributive’ measures are harder to sell than ‘distributive’, while both may be less attractive than regulation (although complex problems likely require a mix of instruments).
Incremental analysis. It is better to research in-depth a small number of feasible options than spread your resources too thinly to consider all possibilities.
Strategic analysis. The feasibility of a solution relates strongly to current policy. The more radical a departure from the current negotiated position, the harder it will be to sell.
As many posts in the Policy Analysis in 750 words series describe, this advice is not entirely useful for actors who seek rapid and radical departures from the status quo. Lindblom’s response to such critics was to seek radical change via a series of non-radical steps (at least in political systems like the US), which (broadly speaking) represents one of two possible approaches.
While incrementalism is not as popular as it once was (as a description of, or prescription for, policymaking), it tapped into the enduring insight that policymaking systems produce huge amounts of minor change. Rapid and radical policy change is rare, and it is even rarer to be able to connect it to influential analysis and action (at least in the absence of a major event). This knowledge should not put people off trying, but rather help them understand the obstacles that they seek to overcome.
Relating feasible solutions and strategies to ‘policy success’
One way to incorporate this kind of advice is to consider how (especially elected) policymakers would describe their own policy success. The determination of success and failure is a highly contested and political process (not simply a technical exercise called ‘evaluation’), and policymakers may refer – often implicitly – to the following questions when seeking success:
Political. Will this policy boost my government’s credibility and chances of re-election?
Process. Will it be straightforward to legitimise and maintain support for this policy?
Programmatic. Will it achieve its stated objectives and produce beneficial outcomes if implemented?
The benefit to analysts, in asking themselves these questions, is that they help to identify the potential solutions that are technically but not politically feasible (or vice versa).
The absence of clear technical feasibility does not necessarily rule out solutions with wider political benefits (for example, it can be beneficial to look like you are trying to do something good). Hence the popular phrase ‘good politics, bad policy’.
Nor does a politically unattractive option rule out a technically feasible solution (not all politicians flee the prospect of ‘good policy, bad politics’). However, it should prompt attention to hard choices about whose support to seek, how long to wait, or how hard to push, to seek policy change. You can see this kind of thinking as ‘entrepreneurial’ or ‘systems thinking’ depending on how much faith you have in agency in highly-unequal political contexts.
It is tempting to conclude that these obstacles to ‘good policy’ reflect the pathological nature of politics. However, if we want to make this argument, we should at least do it well:
1. You can find this kind of argument in fields such as public health and climate change studies, where researchers bemoan the gap between (a) their high-quality evidence on an urgent problem and (b) a disproportionately weak governmental response. To do it well, we need to separate analytically (or at least think about): (a) the motivation and energy of politicians (usually the source of most criticism of low ‘political will’), and (b) the policymaking systems that constrain even the most sincere and energetic policymakers. See the EBPM page for more.
1. Be pragmatic, and change things from the inside
Pragmatism is at the heart of most of the policy analysis texts in this series. They focus on the needs and beliefs of clients (usually policymakers). Policymakers are time-pressed, so keep your analysis short and relevant. See the world through their eyes. Focus on solutions that are politically as well as technically feasible. Propose non-radical steps, which may add up to radical change over the long-term.
This approach will seem familiar to students of research ‘impact’ strategies, which emphasise relationship-building, being available to policymakers, and responding to the agendas of governments to maximise the size of your interested audience.
It will also ring bells for advocates of radical reforms in policy sectors such as (public) health and intersectoral initiatives such as gender mainstreaming:
Health in All Policies is a strategy to encourage radical changes to policy and policymaking to improve population health. Common advice includes to: identify to policymakers how HiAP fits into current policy agendas, seek win-win strategies with partners in other sectors, and go to great lengths to avoid the sense that you are interfering in their work (‘health imperialism’).
Gender mainstreaming is a strategy to consider gender in all aspects of policy and policymaking. An equivalent playbook involves steps to: clarify what gender equality is, and what steps may help achieve it; make sure that these ideas translate across all levels and types of policymaking; adopt tools to ensure that gender is a part of routine government business (such as budget processes); and, modify existing policies or procedures while increasing the representation of women in powerful positions.
In other words, the first approach is to pursue your radical agenda via non-radical means, using a playbook that is explicitly non-confrontational. Use your insider status to exploit opportunities for policy change.
2. Be radical, and challenge things from the outside
Challenging the status quo, for the benefit of marginalised groups, is at the heart of critical policy analysis:
Reject the idea that policy analysis is a rationalist, technical, or evidence-based process. Rather, it involves the exercise of power to (a) depoliticise problems to reduce attention to current solutions, and (b) decide whose knowledge counts.
Identify and question the dominant social constructions of problems and populations, asking who decides how to portray these stories and who benefits from their outcomes.
This approach resonates with frequent criticisms of ‘impact’ advice, emphasising the importance of producing research independent of government interference, to challenge policies that further harm already-marginalised populations.
It will also ring bells among advocates of more confrontational strategies to seek radical changes to policy and policymaking. They include steps to: find more inclusive ways to generate and share knowledge, produce multiple perspectives on policy problems and potential solutions, focus explicitly on the impact of the status quo on marginalised populations, politicise issues continuously to ensure that they receive sufficient attention, and engage in outsider strategies to protest current policies and practices.
Does this dichotomy make sense?
It is tempting to say that this dichotomy is artificial and that we can pursue the best of both worlds, such as working from within when it works and resorting to outsider action and protest when it doesn’t.
However, the blandest versions of this conclusion tend to ignore or downplay the politics of policy analysis in favour of more technical fixes. Sometimes collaboration and consensus politics are a wonderful feat of human endeavour. Sometimes they are a cynical way to depoliticise issues, stifle debate, and marginalise unpopular positions.
This conclusion also suggests that it is possible to establish what strategies work, and when, without really saying how (or providing evidence for success that would appeal to audiences associated with both approaches). Indeed, a recurrent feature of research in these fields is that most attempts to produce radical change prove to be dispiriting struggles. Non-radical strategies tend to be co-opted by more powerful actors, to mainstream new ways of thinking without changing the old. Radical strategies are often too easy to dismiss or counter.
The latter point reminds us to avoid excessively optimistic overemphasis on the strategies of analysts and advocates at the expense of context and audience. The 500 and 1000 words series perhaps tip us too far in the other direction, but provide a useful way to separate (analytically) the reasons for often-minimal policy change. To challenge dominant forms of policy and policymaking requires us to separate the intentional sources of inertia from the systemic issues that would constrain even the most sincere and energetic reformer.
This post forms one part of the Policy Analysis in 750 words series. It draws on work for an in-progress book on learning to reduce inequalities. Some of the text will seem familiar if you have read other posts. Think of it as an adventure game in which the beginning is the same but you don’t know the end.
Policy learning is the use of new information to update policy-relevant knowledge. Policy transfer involves the use of knowledge about policy and policymaking in one government to inform policy and policymaking in another.
These processes may seem to relate primarily to research and expertise, but they require many kinds of political choices (explored in this series). They take place in complex policymaking systems over which no single government has full knowledge or control.
Therefore, while the agency of policy analysts and policymakers still matters, they engage with a policymaking context that constrains or facilitates their action.
Two approaches to policy learning: agency and context-driven stories
Analysts compete to define problems and determine the manner and sources of learning, in a multi-centric environment where different contexts will constrain and facilitate action in different ways. For example, varying structural factors – such as socioeconomic conditions – influence the feasibility of proposed policy change, and each centre’s institutions provide different rules for gathering, interpreting, and using evidence.
Think of two different ways to respond to this description of the policy process with this lovely blue summary of concepts. One is your agency-centred strategic response. The other is me telling you why it won’t be straightforward.
There are many policymakers and influencers spread across many policymaking ‘centres’
Strategic response: find out where the action is and tailor your analysis to different audiences.
Caveat: there is no straightforward way to influence policymaking if multiple venues contribute to policy change and you don’t know who does what.
Each centre has its own ‘institutions’
Strategic response: learn the rules of evidence gathering in each centre: who takes the lead, how do they understand the problem, and how do they use evidence?
Caveat: there is no straightforward way to foster policy learning between political systems if each is unaware of the others’ unwritten rules. Researchers could try to learn those rules to facilitate mutual learning, but with no guarantee of success.
Each centre has its own networks
Strategic response: form alliances with policymakers and influencers in each relevant venue.
Caveat: the pervasiveness of policy communities complicates policy learning because the boundary between formal power and informal influence is not clear.
Well-established ‘ideas’ tend to dominate discussion
Strategic response: learn which ideas are in good currency, and tailor your advice to your audience’s beliefs.
Caveat: the dominance of different ideas precludes many forms of policy learning or transfer. A popular solution in one context may be unthinkable in another.
Many policy conditions (historic-geographic, technological, social and economic factors) command the attention of policymakers and are out of their control. Routine events and non-routine crises prompt policymaker attention to lurch unpredictably.
Caveat: the policy conditions may be so different in each system that policy learning is limited and transfer would be inappropriate. Events can prompt policymakers to pay disproportionately low or high attention to lessons from elsewhere, and this attention relates weakly to evidence from analysts.
Feel free to choose one or both forms of advice. One is useful for people who see analysts and researchers as essential to major policy change. The other is useful if it serves as a source of cautionary tales rather than fatalistic responses.
The latter descriptions, reflecting multi-centric policymaking, seem particularly relevant to major contemporary policy problems – such as global public health and climate crises – in which cooperation across (and outside of) many levels and types of government is essential.
Resolving ambiguity in policy analysis texts
This context helps us to interpret common (Step 1) advice in policy analysis textbooks: define a policy problem for your client, using your skills of research and persuasion but tailoring your advice to your client’s interests and beliefs. Yet, gone are the mythical days of elite analysts communicating to a single core executive in charge of formulating and implementing all policy instruments. Many analysts engage with many centres producing (or co-producing) many instruments. Resolving ambiguity in one centre does not guarantee the delivery of your aims across many.
‘Top down’ accounts see this issue through the lens of a single central government, examining how to reassert central control by minimising implementation gaps.
Policy analysis may focus on (a) defining the policy problem, and (b) ensuring the implementation of its solution.
‘Bottom up’ accounts identify the inevitability (and legitimacy) of policy influence in multiple centres. Policy analysis may focus on how to define the problem in cooperation with other centres, or to set a strategic direction and encourage other centres to make sense of it in their context.
This terminology went out of fashion, but note the existence of each tendency in two ideal-type approaches to contemporary policy problems:
1. Centralised and formalised approaches.
Seek clarity and order to address urgent policy problems. Define the policy problem clearly, translate that definition into strategies for each centre, and develop a common set of effective ‘tools’ to ensure cooperation and delivery.
Policy analysis may focus on technical aspects, such as how to create a fine-detail blueprint for action, backed by performance management and accountability measures that tie actors to specific commitments.
The tagline may be: ambiguity is a problem to be solved, to direct policy actors towards a common goal.
2. Collaborative and decentralised approaches.
Seek collaboration to make sense of, and address, problems. Reject a single definition of the problem, encourage actors in each centre (or in concert) to deliberate to make sense of problems together, and co-create the rules to guide a continuous process of collective behaviour.
Policy analysis may focus on how to contribute to a collaborative process of sense-making and rule-making.
The tagline may be: ambiguity presents an opportunity to energise policy actors, to harness the potential for innovation arising from deliberation.
Pick one approach and stick with it?
Describing these approaches in such binary terms makes the situation – and choice between approaches – look relatively straightforward. However, note the following issues:
Many policy sectors (and intersectoral agendas) are characterised by intense disagreement on which choice to make. These disagreements intersect with others (such as when people seek not only transformative policy change to solve global problems, but also equitable process and outcomes).
Some sectors seem to involve actors seeking the best of both worlds (centralise and localise, formalise and deliberate) without recognising the trade-offs and dilemmas that arise.
I have described these options as choices, but did not establish if anyone is in the position to make or contribute to that choice.
In that context, resolving ambiguity in your favour may still be the prize, but where would you even begin?
Well, that was an unsatisfying end to the post, eh? Maybe I’ll write a better one when some things are published. In the meantime, these papers and posts explore some of the issues:
This page describes a book and many posts on ‘prevention’ policy. We complain that governments use the phrase ‘prevention is better than cure’ without defining prevention, and that they want centralised and decentralised approaches to ‘preventive policymaking’.
A key argument in policy studies is that it is impossible to separate facts and values when making policy. We often treat our beliefs as facts, or describe certain facts as objective, but perhaps only to simplify our lives or support a political strategy (a ‘self-evident’ fact is very handy for an argument). People make empirical claims infused with their values and often fail to realise just how their values or assumptions underpin their claims.
This is not an easy argument to explain. One strategy is to use extreme examples to make the point. For example, Herbert Simon points to Hitler’s Mein Kampf as the ultimate example of value-based claims masquerading as facts. We can also identify historic academic research which asserts that men are more intelligent than women and some races are superior to others. In such cases, we would point out, for example, that the design of the research helped produce such conclusions: our values underpin our (a) assumptions about how to measure intelligence or other measures of superiority, and (b) interpretations of the results.
‘Wait a minute, though’ (you might say). ‘What about simple examples in which you can state facts with relative certainty – such as the statement “there are X number of words in this post”?’ ‘Fair enough’, I’d say (you will have to speak with a philosopher to get a better debate about the meaning of your X words claim; I would simply say that it is trivially true). But this statement doesn’t take you far in policy terms. Instead, you’d want to say that there are too many or too few words, before you decided what to do about it.
In that sense, we have the most practical explanation of the unclear fact/value distinction: the use of facts in policy is to underpin evaluations (assessments based on values). For example, we might point to the routine uses of data to argue that a public service is in ‘crisis’ or that there is a public health epidemic (note: I wrote the post before COVID-19; it referred to crises of ‘non-communicable diseases’). We might argue that people only talk about ‘policy problems’ when they think we have a duty to solve them.
Or, facts and values often seem the hardest to separate when we evaluate the success and failure of policy solutions, since the measures used for evaluation are as political as any other part of the policy process. The gathering and presentation of facts is inherently a political exercise, and our use of facts to encourage a policy response is inseparable from our beliefs about how the world should work.
‘Modern science remains value-laden … even when so many people employ so many systematic methods to increase the replicability of research and reduce the reliance of evidence on individual scientists. The role of values is fundamental. Anyone engaging in research uses professional and personal values and beliefs to decide which research methods are the best; generate research questions, concepts and measures; evaluate the impact and policy relevance of the results; decide which issues are important problems; and assess the relative weight of ‘the evidence’ on policy effectiveness. We cannot simply focus on ‘what works’ to solve a problem without considering how we used our values to identify a problem in the first place. It is also impossible in practice to separate two choices: (1) how to gather the best evidence and (2) whether to centralize or localize policymaking. Most importantly, the assertion that ‘my knowledge claim is superior to yours’ symbolizes one of the most worrying exercises of power. We may decide to favour some forms of evidence over others, but the choice is value-laden and political rather than objective and innocuous’.
Implications for policy analysis
Many highly-intelligent and otherwise-sensible people seem to get very bothered with this kind of argument. For example, it gets in the way of (a) simplistic stories of heroic-objective-fact-based-scientists speaking truth to villainous-stupid-corrupt-emotional-politicians, (b) the ill-considered political slogan that you can’t argue with facts (or ‘science’), (c) the notion that some people draw on facts while others only follow their feelings, and (d) the idea that you can divide populations into super-facty versus post-truthy people.
A more sensible approach is to (1) recognise that all people combine cognition and emotion when assessing information, (2) treat politics and political systems as valuable and essential processes (rather than obstacles to technocratic policymaking), and (3) find ways to communicate evidence-informed analyses in that context. This article and this ‘750 words’ post explore how to reflect on this kind of communication.
One aim of this series is to combine insights from policy research (1000, 500) and policy analysis texts. How might we combine insights to think about effective communication?
1. Insights from policy analysis texts
Most texts in this series relate communication to understanding your audience (or client) and the political context. Your audience has limited attention or time to consider problems. They may have good antennae for the political feasibility of any solution, but less knowledge of (or interest in) the technical details. In that context, your aim is to help them treat the problem as worthy of their energy (e.g. as urgent and important) and the solution as doable. Examples include:
Bardach: communicating with a client requires coherence, clarity, brevity, and minimal jargon.
Dunn: argumentation involves defining the size and urgency of a problem, assessing the claims made for each solution, synthesising information from many sources into a concise and coherent summary, and tailoring reports to your audience.
Smith: your audience makes a quick judgement on whether or not to read your analysis. Ask yourself questions including: how do I frame the problem to make it relevant, what should my audience learn, and how does each solution relate to what has been done before? Maximise interest by keeping communication concise, polite, and tailored to a policymaker’s values and interests.
2. Insights from studies of policymaker psychology
‘Rational’ shortcuts. Goal-oriented reasoning based on prioritizing trusted sources of information.
‘Irrational’ shortcuts. Emotional thinking, or thought fuelled by gut feelings, deeply held beliefs, or habits.
We can use such distinctions to examine the role of evidence-informed communication, to reduce:
Uncertainty, or a lack of policy-relevant knowledge. Focus on generating ‘good’ evidence and concise communication as you collate and synthesise information.
Ambiguity, or the ability to entertain more than one interpretation of a policy problem. Focus on argumentation and framing as you try to maximise attention to (a) one way of defining a problem, and (b) your preferred solution.
Gone are the mythical days of a small number of analysts communicating to a single core executive (and of the heroic researcher changing the world by speaking truth to power). Instead, we have many analysts engaging with many centres, creating a need to not only (a) tailor arguments to different audiences, but also (b) develop wider analytical skills (such as to foster collaboration and the use of ‘design principles’).
How to communicate effectively with policymakers
In that context, we argue that effective communication requires analysts to:
1. Understand your audience and tailor your response (using insights from psychology)
2. Identify ‘windows of opportunity’ for influence (while noting that these windows are outside of anyone’s control)
3. Engage with real world policymaking rather than waiting for a ‘rational’ and orderly process to appear (using insights from policy studies).
This post summarizes a key section of our review of education equity policymaking [see the full article for references to the studies summarized here].
One of the main themes is that many governments present a misleading image of their education policies. There are many variations on this theme, in which policymakers:
Describe the energetic pursuit of equity, and use the right language, as a way to hide limited progress.
Pursue ‘equity for all’ initiatives that ignore or downplay the specific importance of marginalization and minoritization, such as in relation to race and racism, immigration, ethnic minorities, and indigenous populations.
Pursue narrow definitions of equity in terms of access to schools, at the expense of definitions that pay attention to ‘out of school’ factors and social justice.
Minoritization is a strong theme in US studies in particular. US experiences help us categorise multiple modes of marginalisation in relation to race and migration, driven by witting and unwitting action and explicit and implicit bias:
The social construction of students and parents. Examples include: framing white students as ‘gifted’ and more deserving of merit-based education (or victims of equity initiatives); framing non-white students as less intelligent, more in need of special needs or remedial classes, and having cultural or other learning ‘deficits’ that undermine them and disrupt white students; and, describing migrant parents as unable to participate until they learn English.
Maintaining or failing to challenge inequitable policies. Examples include higher funding for schools and colleges with higher white populations, and tracking (segregating students according to perceived ability), which benefit white students disproportionately.
Ignoring social determinants or ‘out of school’ factors.
Creating the illusion of equity with measures that exacerbate inequalities. For example, promoting school choice policies while knowing that the rules restrict access to sought-after schools.
Promoting initiatives to ignore race, including so-called ‘color blind’ or ‘equity for all’ initiatives.
Prioritizing initiatives at the expense of racial or socio-economic equity, such as measures to boost overall national performance at the expense of targeted measures.
Game playing and policy subversion, including school and college selection rules to restrict access and improve metrics.
The wider international – primarily Global North – experience suggests that minoritization and marginalization in relation to race, ethnicity, and migration are routine impediments to equity strategies, albeit with some uncertainty about which policies would have the most impact.
Other country studies describe the poor treatment of citizens in relation to immigration status or ethnicity, often while presenting the image of a more equitable system. Until recently, Finland's global reputation for education equity, built on universalism and comprehensive schools, contrasted with its historic 'othering' of immigrant populations. Japan's reputation for containing a homogeneous population, which allows its governments to present an image of classless egalitarianism and a harmonious society, contrasts with its discrimination against foreign students. Multiple studies of Canadian provinces provide the strongest accounts of the symbolic and cynical use of multiculturalism for political gains and economic ends.
As in the US, many countries use ‘special needs’ categories to segregate immigrant and ethnic minority populations. Mainstreaming versus special needs debates have a clear racial and ethnic dimension when (1) some groups are more likely to be categorised as having learning disabilities or behavioural disorders, and (2) language and cultural barriers are listed as disabilities in many countries. Further, ‘commonwealth’ country studies identify the marginalisation of indigenous populations in ways comparable to the US marginalisation of students of colour.
Overall, these studies generate the sense that the frequently used language of education equity policy can signal a range of possibilities, from (1) high energy and sincere commitment to social justice, to (2) the cynical use of rhetoric and symbolism to protect historic inequalities.
This post first appeared as Who controls public policy? on the UK in a Changing Europe website. There is also a 1-minute video, but you would need to be a completist to want to watch it.
Most coverage of British politics focuses on the powers of a small group of people at the heart of government. In contrast, my research on public policy highlights two major limits to those powers, related to the enormous number of problems that policymakers face, and to the sheer size of the government machine.
First, elected policymakers simply do not have the ability to properly understand, let alone solve, the many complex policy problems they face. They deal with this limitation by paying unusually high attention to a small number of problems and effectively ignoring the rest.
Second, policymakers rely on a huge government machine and network of organisations (containing over 5 million public employees) that are essential to policy delivery, and oversee a statute book which they could not possibly understand.
In other words, they have limited knowledge and even less control of the state, and have to make choices without knowing how they relate to existing policies (or even what happens next).
These limits to ministerial powers should prompt us to think differently about how to hold them to account. If they only have the ability to influence a small proportion of government business, should we blame them for everything that happens in their name?
My approach is to apply these general insights to specific problems in British politics. Three examples help to illustrate their ability to inform British politics in new ways.
First, policymaking can never be ‘evidence based’. Some scientists cling to the idea that the ‘best’ evidence should always catch the attention of policymakers, and assume that ‘speaking truth to power’ helps evidence win the day.
The truth is that policymakers only have the capacity to consider a tiny proportion of all available information. Therefore, they must find efficient ways to ignore almost all evidence to make timely choices.
Second, the UK government cannot ‘take back control’ of policy following Brexit simply because it was not in control of policy before the UK joined. The idea of control is built on the false image of a powerful centre of government led by a small number of elected policymakers.
This way of thinking assumes that sharing power is simply a choice. However, sharing power and responsibility is borne of necessity because the British state is too large to be manageable.
Governments manage this complexity by breaking down their responsibilities into many government departments. Still, ministers can only pay attention to a tiny proportion of issues managed by each department. They delegate most of their responsibilities to civil servants, agencies, and other parts of the public sector.
In turn, those organisations rely on interest groups and experts to provide information and advice.
As a result, most public policy is conducted through small and specialist ‘policy communities’ that operate out of the public spotlight and with minimal elected policymaker involvement.
The logical conclusion is that senior elected politicians are less important than people think. While we like to think of ministers sitting in Whitehall and taking crucial decisions, most of these decisions are taken in their name but without their intervention.
Third, the current pandemic underlines all too clearly the limits of government power. Of course people are pondering the degree to which we can blame UK government ministers for poor choices in relation to Covid-19, or learn from their mistakes to inform better policy.
Many focus on the extent to which ministers were ‘guided by the science’. However, at the onset of a new crisis, government scientists face the same uncertainty about the nature of the policy problem, and ministers are not really able to tell if a Covid-19 policy would work as intended or receive enough public support.
Some examples from the UK experience expose the limited extent to which policymakers can understand, far less control, an emerging crisis.
Prior to the lockdown, neither scientists nor ministers knew how many people were infected, nor when levels of infection would peak.
They had limited capacity to test. They did not know how often (and how well) people wash their hands. They did not expect people to accept and follow strict lockdown rules so readily, and did not know which combination of measures would have the biggest impact.
When supporting businesses and workers during ‘furlough’, they did not know who would be affected and therefore how much the scheme would cost.
In short, while Covid-19 has prompted policy change and state intervention on a scale not witnessed outside of wartime, the government has never really known what impact its measures would have.
Overall, the take-home message is that the UK narrative of strong central government control is damaging to political debate and undermines policy learning. It suggests that every poor outcome is simply the consequence of bad choices by powerful leaders. If so, we are unable to distinguish between the limited competence of some leaders and the limited powers of them all.
On the 23rd March 2020, the UK Government’s Prime Minister Boris Johnson declared: ‘From this evening I must give the British people a very simple instruction – you must stay at home’. He announced measures to help limit the impact of COVID-19, including new regulations on behaviour, police powers to support public health, budgetary measures to support businesses and workers during their economic inactivity, the almost-complete closure of schools, and the major expansion of healthcare capacity via investment in technology, discharge to care homes, and a consolidation of national, private, and new health service capacity (note that many of these measures relate only to England, with devolved governments responsible for public health in Northern Ireland, Scotland, and Wales). Overall, the coronavirus prompted almost-unprecedented policy change, towards state intervention, at a speed and magnitude that seemed unimaginable before 2020.
Yet, many have criticised the UK government’s response as slow and insufficient. Criticisms include that UK ministers and their advisors did not:
take the coronavirus seriously enough in relation to existing evidence (when its devastating effect was increasingly apparent in China in January and Italy from February)
act as quickly as some countries to test for infection to limit its spread, and/or introduce swift measures to close schools, businesses, and major social events, and regulate social behaviour (such as in Taiwan, South Korea, or New Zealand)
introduce strict-enough measures to stop people coming into contact with each other at events and in public transport.
They blame UK ministers for pursuing a ‘mitigation’ strategy, allegedly based on reducing the rate of infection and impact of COVID-19 until the population developed ‘herd immunity’, rather than an elimination strategy to minimise its spread until a vaccine or antiviral could be developed. Or, they criticise the over-reliance on specific models, which underestimated R (the reproduction number) and the ‘doubling time’ of cases, and contributed to a 2-week delay to lockdown.
In contrast, scientific advisers to UK ministers have emphasised the need to gather evidence continuously to model the epidemic and identify key points at which to intervene, to reduce the size of the peak of population illness initially, then manage the spread of the virus over the longer term (e.g. Vallance). Throughout, they emphasised the need for individual behavioural change (hand washing and social distancing), supplemented by government action, in a liberal democracy in which direct imposition is unusual and, according to UK ministers, unsustainable in the long term.
Second, policymakers have a limited understanding, and even less control, of their policymaking environments. No single centre of government has the power to control policy outcomes. Rather, there are many policymakers and influencers spread across a political system, and most choices in government are made in subsystems, with their own rules and networks, over which ministers have limited knowledge and influence. Further, the social and economic context, and events such as a pandemic, often appear to be largely out of their control.
Third, even though they lack full knowledge and control, governments must still make choices. Therefore, their choices are necessarily flawed.
Fourth, their choices produce unequal impacts on different social groups.
Overall, the idea that policy is controlled by a small number of UK government ministers, with the power to solve major policy problems, is still popular in media and public debate, but dismissed in policy research.
Hold the UK government to account via systematic analysis, not trials by social media
To make more sense of current developments in the UK, we need to understand how UK policymakers address these limitations in practice, and widen the scope of debate to consider the impact of policy on inequalities.
A policy theory-informed and real-time account helps us avoid after-the-fact wisdom and bad-faith trials by social media.
UK government action has been deficient in important ways, but we need careful and systematic analysis to help us separate (a) well-informed criticism to foster policy learning and hold ministers to account, from (b) a naïve and partisan rush to judgement that undermines learning and helps let ministers off the hook.
To that end, I combine insights from policy analysis guides, policy theories, and critical policy analysis to analyse the UK government’s initial coronavirus policy. I use the lens of 5-step policy analysis models to identify what analysts and policymakers need to do, the limits to their ability to do it, and the distributional consequences of their choices.
Incisive essay from @bailabomba on studying the use of research evidence through critical perspectives that center the marginalized. There is too much good stuff in here to summarize via twitter (you should just read it). But let me point out a few things that resonated (1/n) https://t.co/nIahyIjwBo
‘maintain power hierarchies and accept social inequity as a given. Indeed, research has been historically and contemporaneously (mis)used to justify a range of social harms from enslavement, colonial conquest, and genocide, to high-stakes testing, disproportionality in child welfare services, and “broken windows” policing’ (Doucet, 2019: 2)
Second, they help us redefine usefulness in relation to:
‘how well research evidence communicates the lived experiences of marginalized groups so that the understanding of the problem and its response is more likely to be impactful to the community in the ways the community itself would want’ (Doucet, 2019: 3)
In that context, potential responses include to:
Recognise the ways in which research and policy combine to reproduce the subordination of social groups.
Commit to social justice, to help ‘eliminate oppressions and to emancipate and empower marginalized groups’, such as by disrupting ‘the policies and practices that disproportionately harm marginalized groups’ (2019: 5-7)
Develop strategies to ‘center race’, ‘democratize’ research production, and ‘leverage’ transdisciplinary methods (including poetry, oral history and narrative, art, and discourse analysis – compare with Lorde) (2019: 10-22)
A key way to understand these processes is to use, and improve, policy theories to explain the dynamics and impacts of a racialized political system. For example, ‘policy feedback theory’ (PFT) draws on elements from historical institutionalism and social construction and policy design (SCPD) to identify the rules, norms, and practices that reinforce subordination.
In particular, Michener’s (2019: 424) ‘Policy Feedback in a Racialized Polity’ develops a ‘racialized feedback framework (RFF)’ to help explain the ‘unrelenting force with which racism and White supremacy have pervaded social, economic, and political institutions in the United States’. Key mechanisms include (2019: 424-6):
‘Channelling resources’, in which the rules used to distribute government resources benefit some social groups and punish others.
Examples include: privileging White populations in social security schemes and the design and provision of education, and punishing Black populations disproportionately in prisons (2019: 428-32).
In that context, pragmatism relates to the idea that policy analysis consists of ‘art and craft’, in which analysts assess what is politically feasible if taking a low-risk client-oriented approach.
In this context, pragmatism may be read as a euphemism for conservatism and status quo protection.
In other words, other posts in the series warn against too-high expectations for entrepreneurial and systems thinking approaches to major policy change, but they should not be read as an excuse to reject ambitious plans for much-needed changes to policy and policy analysis (compare with Meltzer and Schwartz, who engage with this dilemma in client-oriented advice).
Throughout this series you may notice three different conceptions about the scope of policy analysis:
‘Ex ante’ (before the event) policy analysis. Focused primarily on defining a problem, and predicting the effect of solutions, to inform current choice (as described by Meltzer and Schwartz and Thissen and Walker).
‘Ex post’ (after the event) policy analysis. Focused primarily on monitoring and evaluating that choice, perhaps to inform future choice (as described famously by Weiss).
Some combination of both, to treat policy analysis as a continuous (never-ending) process (as described by Dunn).
As usual, these are not hard-and-fast distinctions, but they help us clarify expectations in relation to different scenarios.
The impact of old-school ex ante policy analysis
Radin provides a valuable historical discussion of policymaking with the following elements:
a small number of analysts, generally inside government (such as senior bureaucrats, scientific experts, and – in particular – economists),
giving technical or factual advice,
about policy formulation,
to policymakers at the heart of government,
on the assumption that policy problems would be solved via analysis and action.
This kind of image signals an expectation for high impact: policy analysts face low competition, enjoy a clearly defined and powerful audience, and their analysis is expected to feed directly into choice.
Radin goes on to describe a much different, modern policy environment: more competition, more analysts spread across and outside government, with a less obvious audience, and – even if there is a client – high uncertainty about where the analysis fits into the bigger picture.
Yet, the impetus to seek high and direct impact remains.
This combination of shifting conditions but unshifting hopes and expectations helps explain a lot of the pragmatic forms of policy analysis you will see in this series, including:
Keep it catchy, gather data efficiently, tailor your solutions to your audience, and tell a good story (Bardach)
Speak with an audience in mind, highlight a well-defined problem and purpose, project authority, use the right form of communication, and focus on clarity, precision, conciseness, and credibility (Smith)
Address your client’s question, by their chosen deadline, in a clear and concise way that they can understand (and communicate to others) quickly (Weimer and Vining)
Client-oriented advisors identify the beliefs of policymakers and anticipate the options worth researching (Mintrom)
Identify your client’s resources and motivation, such as how they seek to use your analysis, the format of analysis they favour (make it ‘concise’ and ‘digestible’), their deadline, and their ability to make or influence the policies you might suggest (Meltzer and Schwartz).
‘Advise strategically’, to help a policymaker choose an effective solution within their political context (Thissen and Walker).
Focus on producing ‘policy-relevant knowledge’ by adapting to the evidence-demands of policymakers and rejecting a naïve attachment to ‘facts speaking for themselves’ or ‘knowledge for its own sake’ (Dunn).
The impact of research and policy evaluation
Many of these recommendations are familiar to scientists and researchers, but generally in the context of far lower expectations about their likely impact, particularly if those expectations are informed by policy studies (compare Oliver & Cairney with Cairney & Oliver).
In that context, Weiss’ work is a key reference point. It gives us a menu of ways in which policymakers might use policy evaluation (and research evidence more widely):
to inform solutions to a problem identified by policymakers
as one of many sources of information used by policymakers, alongside ‘stakeholder’ advice and professional and service user experience
as a resource used selectively by politicians, with entrenched positions, to bolster their case
as a tool of government, to show it is acting (by setting up a scientific study), or to measure how well policy is working
as a source of ‘enlightenment’, shaping how people think over the long term (compare with this discussion of ‘evidence based policy’ versus ‘policy based evidence’).
In other words, researchers may have a role, but they struggle (a) to navigate the politics of policy analysis, (b) find the right time to act, and (c) to secure attention, in competition with many other policy actors.
The potential for a form of continuous impact
Dunn suggests that the idea of ‘ex ante’ policy analysis is misleading, since policymaking is continuous, and evaluations of past choices inform current choices. Think of each policy analysis step as ‘interdependent’, in which new knowledge to inform one step also informs the other four. For example, routine monitoring helps identify compliance with regulations, whether resources and services reach ‘target groups’, whether money is spent correctly, and whether we can make a causal link between policy solutions and outcomes. Such monitoring is often better seen as background information with intermittent impact.
Key conclusions to bear in mind
The demand for information from policy analysts may be disproportionately high when policymakers pay attention to a problem, and disproportionately low when they feel that they have addressed it.
Common advice for policy analysts and researchers often looks very similar: keep it concise, tailor it to your audience, make evidence ‘policy relevant’, and give advice (don’t sit on the fence). However, unless researchers are prepared to act quickly, to gather data efficiently (not comprehensively), and to meet a tight brief for a client, they are not really in the impact business described by most policy analysis texts.
A lot of routine, continuous, impact tends to occur out of the public spotlight, based on rules and expectations that most policy actors take for granted.
One aim of this series is to combine insights from policy research (1000, 500) and policy analysis texts.
In this case, modern theories of the policy process help you identify your audience and their capacity to follow your advice. This simple insight may have a profound impact on the advice you give.
Policy analysis for an ideal-type world
For our purposes, an ideal-type is an abstract idea, which highlights hypothetical features of the world, to compare with ‘real world’ descriptions. It need not be an ideal to which we aspire. For example, comprehensive rationality describes the ideal type, and bounded rationality describes the ‘real world’ limitations to the ways in which humans and organisations process information.
Imagine writing policy analysis in the ideal-type world of a single powerful ‘comprehensively rational’ policymaker at the heart of government, making policy via an orderly policy cycle.
Your audience would be easy to identify, your analysis would be relatively simple, and you would not need to worry about what happens after you make a recommendation for policy change.
You could adopt a simple 5-8 step policy analysis method, use widely-used tools such as cost-benefit analysis to compare solutions, and know where the results would feed into the policy process.
I have perhaps over-egged this ideal-type pudding, but I think a lot of traditional policy analyses tapped into this basic idea and focused more on the science of analysis than the political and policymaking context in which it takes place (see Radin and Brans, Geva-May, and Howlett).
Contrast this image with the key features of policy process theories, which describe:
Many policymakers and influencers spread across many levels and types of government (as the venues in which authoritative choice takes place). Consequently, it is not a straightforward task to identify and know your audience, particularly if the problem you seek to solve requires a combination of policy instruments controlled by different actors.
Each venue resembles an institution driven by formal and informal rules. Formal rules are written-down or widely-known. Informal rules are unwritten, difficult to understand, and may not even be understood in the same way by participants. Consequently, it is difficult to know if your solution will be a good fit with the standard operating procedures of organisations (and therefore if it is politically feasible or too challenging).
Policymakers and influencers operate in ‘subsystems’, forming networks built on resources such as trust or coalitions based on shared beliefs. Effective policy analysis may require you to engage with – or become part of – such networks, to allow you to understand the unwritten rules of the game and encourage your audience to trust the messenger. In some cases, the rules relate to your willingness to accept current losses for future gains, to accept the limited impact of your analysis now in the hope of acceptance at the next opportunity.
Actors relate their analysis to shared understandings of the world – how it is, and how it should be – which are often so well-established as to be taken for granted. Common terms include paradigms, hegemons, core beliefs, and monopolies of understandings. These dominant frames of reference give meaning to your policy solution. They prompt you to couch your solutions in terms of, for example, a strong attachment to evidence-based cases in public health, value for money in treasury departments, or with regard to core principles such as liberalism or socialism in different political systems.
Your solutions relate to socioeconomic context and to events that seem (a) impossible to ignore and (b) out of the control of policymakers. Such factors include a political system’s geography, demography, social attitudes, and economy, while events range from routine elections to unexpected crises.
What would you recommend under these conditions? Rethinking 5-step analysis
There is a large gap between policymakers’ (a) formal responsibilities versus (b) actual control of policy processes and outcomes. Even the most sophisticated ‘evidence based’ analysis of a policy problem will fall flat if uninformed by such analyses of the policy process. Further, the terms of your cost-benefit analysis will be highly contested (at least until there is agreement on what the problem is, and how you would measure the success of a solution).
Modern policy analysis texts try to incorporate such insights from policy theories while maintaining a focus on 5-8 steps. For example:
Meltzer and Schwartz contrast their ‘flexible’ and ‘iterative’ approach with a too- rigid ‘rationalistic approach’.
Weimer and Vining invest 200 pages in economic analyses of markets and government, often highlighting a gap between (a) our ability to model and predict economic and social behaviour, and (b) what actually happens when governments intervene.
These choices are not mutually exclusive, but there are key tensions between them that should not be ignored, such as when we ask:
how many people should be involved in policy analysis?
whose knowledge counts?
who should control policy design?
Perhaps we can only produce a sensible combination of the two if we clarify their often very different implications for policy analysis. Let’s begin with one story for each and see where they take us.
A story of ‘evidence-based policymaking’
One story of ‘evidence based’ policy analysis is that it should be based on the best available evidence of ‘what works’.
Often, the description of the ‘best’ evidence relates to the idea that there is a notional hierarchy of evidence according to the research methods used.
At the top would be the systematic review of randomised control trials, and nearer the bottom would be expertise, practitioner knowledge, and stakeholder feedback.
This kind of hierarchy has major implications for policy learning and transfer, such as when importing policy interventions from abroad or ‘scaling up’ domestic projects.
Put simply, the experimental method is designed to identify the causal effect of a very narrowly defined policy intervention. Its importation or scaling up would be akin to the description of medicine, in which the evidence suggests the causal effect of a specific active ingredient to be administered with the correct dosage. A very strong commitment to a uniform model precludes the processes we might associate with co-production, in which many voices contribute to a policy design to suit a specific context (see also: the intersection between evidence and policy transfer).
A story of co-production in policymaking
One story of ‘co-produced’ policy analysis is that it should be ‘reflexive’ and based on respectful conversations between a wide range of policymakers and citizens.
Often, the description is of the diversity of valuable policy relevant information, with scientific evidence considered alongside community voices and normative values.
This rejection of a hierarchy of evidence also has major implications for policy learning and transfer. Put simply, a co-production method is designed to identify the positive effect – widespread ‘ownership’ of the problem and commitment to a commonly-agreed solution – of a well-discussed intervention, often in the absence of central government control.
Its use would be akin to a collaborative governance mechanism, in which the causal mechanism is perhaps the process used to foster agreement (including to produce the rules of collective action and the evaluation of success) rather than the intervention itself. A very strong commitment to this process precludes the adoption of a uniform model that we might associate with narrowly-defined stories of evidence based policymaking.
Where can you find these stories in the 750-words series?
My interest has been to understand how governments juggle competing demands, such as to (a) centralise and localise policymaking, (b) encourage uniform and tailored solutions, and (c) embrace and reject a hierarchy of evidence. What could possibly go wrong when they entertain contradictory objectives? For example:
Paul Cairney (2019) “The myth of ‘evidence based policymaking’ in a decentred state”, forthcoming in Public Policy and Administration (Special Issue, The Decentred State) (accepted version)
Paul Cairney (2019) ‘The UK government’s imaginative use of evidence to make policy’, British Politics, 14, 1, 1-22 Open Access PDF
Paul Cairney and Kathryn Oliver (2017) ‘Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy?’ Health Research Policy and Systems (HARPS), DOI: 10.1186/s12961-017-0192-x PDF
Paul Cairney (2017) “Evidence-based best practice is more political than it looks: a case study of the ‘Scottish Approach’”, Evidence and Policy, 13, 3, 499-515 PDF
Please see the Policy Analysis in 750 words series overview before reading the summary. This book is a whopper, with almost 500 pages and 101 (excellent) discussions of methods, so 800 words over budget seems OK to me. If you disagree, just read every second word. By the time you reach the cat hanging in there baby you are about 300 (150) words away from the end.
‘Policy analysis is a process of multidisciplinary inquiry aiming at the creation, critical assessment, and communication of policy-relevant knowledge … to solve practical problems … Its practitioners are free to choose among a range of scientific methods, qualitative as well as quantitative, and philosophies of science, so long as these yield reliable knowledge’ (Dunn, 2017: 2-3).
Dunn (2017: 4) describes policy analysis as pragmatic and eclectic. It involves synthesising policy relevant (‘usable’) knowledge, and combining it with experience and ‘practical wisdom’, to help solve problems with analysis that people can trust.
This exercise is ‘descriptive’, to define problems, and ‘normative’, to decide how the world should be and how solutions get us there (as opposed to policy studies/research, which seek primarily to explain what happens).
Dunn contrasts the ‘art and craft’ of policy analysts with other practices, including:
The idea of ‘best practice’ characterised by 5-step plans.
In practice, analysis is influenced by: the cognitive shortcuts that analysts use to gather information; the role they perform in an organisation; the time constraints and incentive structures in organisations and political systems; the expectations and standards of their profession; and, the need to work with teams consisting of many professions/ disciplines (2017: 15-6)
The cost (in terms of time and resources) of conducting multiple research and analytical methods is high, and highly constrained in political environments (2017: 17-8; compare with Lindblom)
The naïve attachment to ‘facts speak for themselves’ or ‘knowledge for its own sake’ undermines a researcher’s ability to adapt well to the evidence-demands of policymakers (2017: 68; compare with Why don’t policymakers listen to your evidence?).
To produce ‘policy-relevant knowledge’ requires us to ask five questions before (Qs1-3) and after (Qs4-5) policy intervention (2017: 5-7; 54-6):
What is the policy problem to be solved?
For example, identify its severity, urgency, cause, and our ability to solve it.
Don’t define the wrong problem, such as by oversimplifying or defining it with insufficient knowledge.
Key aspects of problems include ‘interdependency’ (each problem is inseparable from a host of others, and all problems may be greater than the sum of their parts), ‘subjectivity’ and ‘artificiality’ (people define problems), ‘instability’ (problems change rather than being solved), and ‘hierarchy’ (which level or type of government is responsible) (2017: 70; 75).
Problems vary in terms of how many relevant policymakers are involved, how many solutions are on the agenda, the level of value conflict, and the unpredictability of outcomes (high levels suggest ‘wicked’ problems, and low levels ‘tame’) (2017: 75)
‘Problem-structuring methods’ are crucial, to: compare ways to define or interpret a problem, and ward against making too many assumptions about its nature and cause; produce models of cause-and-effect; and make a problem seem solve-able, such as by placing boundaries on its coverage. These methods foster creativity, which is useful when issues seem new and ambiguous, or new solutions are in demand (2017: 54; 69; 77; 81-107).
Problem definition draws on evidence, but is primarily the exercise of power to reduce ambiguity through argumentation, such as when defining poverty as the fault of the poor, the elite, the government, or social structures (2017: 79; see Stone).
What effect will each potential policy solution have?
Many ‘forecasting’ methods can help provide ‘plausible’ predictions about the future effects of current/ alternative policies (Chapter 4 contains a huge number of methods).
‘Creativity, insight, and the use of tacit knowledge’ may also be helpful (2017: 55).
However, even the most-effective expert/ theory-based methods to extrapolate from the past are flawed, and it is important to communicate levels of uncertainty (2017: 118-23; see Spiegelhalter).
Which solutions should we choose, and why?
‘Prescription’ methods help provide a consistent way to compare each potential solution, in terms of its feasibility and predicted outcome, rather than decide too quickly that one is superior (2017: 55; 190-2; 220-42).
They help to combine (a) an estimate of each policy alternative’s outcome with (b) a normative assessment.
Normative assessments are based on values such as ‘equality, efficiency, security, democracy, enlightenment’ and beliefs about the preferable balance between state, communal, and market/ individual solutions (2017: 6; 205 see Weimer & Vining, Meltzer & Schwartz, and Stone on the meaning of these values).
For example, cost benefit analysis (CBA) is an established – but problematic – economics method based on finding one metric – such as a $ value – to predict and compare outcomes (2017: 209-17; compare Weimer & Vining, Meltzer & Schwartz, and Stone)
Cost effectiveness analysis uses a $ value for costs, but compares it with other units of measurement for benefits (such as outputs per $) (2017: 217-9)
Although such methods help us combine information and values to compare choices, note the inescapable role of power to decide whose values (and which outcomes, affecting whom) matter (2017: 204)
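A toy comparison (my own illustration, with invented numbers, not an example from Dunn) shows the difference between the two methods, and why the choice of metric is itself a value-laden choice:

```python
# Invented costs/benefits ($m) and a non-monetary outcome for two alternatives.
options = {
    "Option A": {"cost": 40.0, "benefit_dollars": 90.0, "cases_prevented": 800},
    "Option B": {"cost": 25.0, "benefit_dollars": 50.0, "cases_prevented": 700},
}

for name, o in options.items():
    # Cost-benefit analysis: force every outcome into a single $ metric.
    net_benefit = o["benefit_dollars"] - o["cost"]
    # Cost-effectiveness analysis: $ costs against a non-monetary benefit.
    cost_per_case = o["cost"] * 1_000_000 / o["cases_prevented"]
    print(f"{name}: net benefit ${net_benefit}m, "
          f"${cost_per_case:,.0f} per case prevented")
```

With these numbers, Option A ‘wins’ on net benefit while Option B ‘wins’ on cost per case prevented: whoever chooses the metric has already exercised power over whose outcomes matter.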
What were the policy outcomes?
‘Monitoring’ methods help identify (say): levels of compliance with regulations, if resources and services reach ‘target groups’, if money is spent correctly (such as on clearly defined ‘inputs’ such as public sector wages), and if we can make a causal link between the policy inputs/ activities/ outputs and outcomes (2017: 56; 251-5)
Monitoring is crucial because it is so difficult to predict policy success, and unintended consequences are almost inevitable (2017: 250).
However, the data gathered are usually no more than proxy indicators of outcomes. Further, the choice of indicators reflects what is available, ‘particular social values’, and ‘the political biases of analysts’ (2017: 262)
The idea of ‘evidence based policy’ is linked strongly to the use of experiments and systematic review to identify causality (2017: 273-6; compare with trial-and-error learning in Gigerenzer, complexity theory, and Lindblom).
Did the policy solution work as intended? Did it improve policy outcomes?
Although we frame policy interventions as ‘solutions’, few problems are ‘solved’. Instead, try to measure the outcomes and the contribution of your solution, and note that evaluations of success and ‘improvement’ are contested (2017: 57; 332-41).
Policy evaluation is not an objective process in which we can separate facts from values.
We can gather facts about the policy process, and the impacts of policy on people, but this information has little meaning until we decide whose experiences matter.
Overall, the idea of ‘ex ante’ (forecasting) policy analysis is a little misleading, since policymaking is continuous, and evaluations of past choices inform current choices.
Policy analysis methods are ‘interdependent’, and ‘knowledge transformations’ describes the impact of knowledge regarding one question on the other four (2017: 7-13; contrast with Meltzer & Schwartz, Thissen & Walker).
Developing arguments and communicating effectively
Dunn (2017: 19-21; 348-54; 392) argues that ‘policy argumentation’ and the ‘communication of policy-relevant knowledge’ are central to policymaking (see Chapter 9 and Appendices 1-4 for advice on how to write briefs, memos, and executive summaries and prepare oral testimony).
He identifies seven elements of a ‘policy argument’ (2017: 19-21; 348-54), including:
The claim itself, such as a description (size, cause) or evaluation (importance, urgency) of a problem, and prescription of a solution
The things that support it (including reasoning, knowledge, authority)
Incorporating the things that could undermine it (including any ‘qualifier’, the communication of uncertainty about current knowledge, and counter-arguments).
The key stages of communication (2017: 392-7; 405; 432) include:
‘Analysis’, focusing on ‘technical quality’ (of the information and methods used to gather it), meeting client expectations, challenging the ‘status quo’, albeit while dealing with ‘political and organizational constraints’ and suggesting something that can actually be done.
‘Documentation’, focusing on synthesising information from many sources, organising it into a coherent argument, translating from jargon or a technical language, simplifying, summarising, and producing user-friendly visuals.
‘Utilization’, by making sure that (a) communications are tailored to the audience (its size, existing knowledge of policy and methods, attitude to analysts, and openness to challenge), and (b) the process is ‘interactive’ to help analysts and their audiences learn from each other.
Policy analysis and policy theory: systems thinking, evidence based policymaking, and policy cycles
Dunn (2017: 31-40) situates this discussion within a brief history of policy analysis, which culminated in new ways to express old ambitions, such as to:
Note the huge difference between (a) policy analysis discussions of ‘systems thinking’ built on the hope that if we can understand them we can direct them, and (b) policy theory discussions that emphasise ‘emergence’ in the absence of central control (and presence of multi-centric policymaking).
Also note that Dunn (2017: 73) describes policy problems – rather than policymaking – as complex systems. I’ll write another post (short, I promise) on the many different (and confusing) ways to use the language of complexity.
Promote ‘evidence based policy’, as the new way to describe an old desire for ‘technocratic’ policymaking that accentuates scientific evidence and downplays politics and values (see also 2017: 60-4).
Note the idea of ‘erotetic rationality’ in which people deal with their lack of knowledge of a complex world by giving up on the idea of certainty (accepting their ‘ignorance’), in favour of a continuous process of ‘questioning and answering’.
This approach is a pragmatic response to the lack of order and predictability of policymaking systems, which limits the effectiveness of a rigid attachment to ‘rational’ 5-step policy analyses (compare with Meltzer & Schwartz).
Dunn (2017: 41-7) also provides an unusually useful discussion of the policy cycle. Rather than seeing it as a mythical series of orderly stages, Dunn highlights:
Lasswell’s original discussion of policymaking functions (or functional requirements of policy analysis, not actual stages to observe), including: ‘intelligence’ (gathering knowledge), ‘promotion’ (persuasion and argumentation while defining problems), ‘prescription’, ‘invocation’ and ‘application’ (to use authority to make sure that policy is made and carried out), and ‘appraisal’ (2017: 42-3).
The constant interaction between all notional ‘stages’ rather than a linear process: attention to a policy problem fluctuates, actors propose and adopt solutions continuously, actors are making policy (and feeding back on its success) as they implement, evaluation (of policy success) is not a single-shot document, and previous policies set the agenda for new policy (2017: 44-5).
In that context, it is no surprise that the impact of a single policy analyst is usually minimal (2017: 57). Sorry to break it to you. Hang in there, baby.
‘The basic relationship between a decision-maker (the client) and an analyst has moved from a two-person encounter to an extremely complex and diverse set of interactions’ (Radin, 2019: 2).
Many texts in this series continue to highlight the client-oriented nature of policy analysis (Weimer and Vining), but within a changing policy process that has altered the nature of that relationship profoundly.
We can use Radin’s work to present two main stories of policy analysis:
The old ways of making policy resembled a club, or reflected a clear government hierarchy, involving:
a small number of analysts, generally inside government (such as senior bureaucrats, scientific experts, and – in particular – economists),
giving technical or factual advice,
about policy formulation,
to policymakers at the heart of government,
on the assumption that policy problems would be solved via analysis and action.
Modern policy analysis is characterised by a more open and politicised process in which:
many analysts, inside and outside government,
compete to interpret facts, and give advice,
about setting the agenda, and making, delivering, and evaluating policy,
across many policymaking venues,
often on the assumption that governments have a limited ability to understand and solve complex policy problems.
As a result, the client-analyst relationship is increasingly fluid:
In previous eras, the analyst’s client was a senior policymaker, the main focus was on the analyst-client relationship, and ‘both analysts and clients did not spend much time or energy thinking about the dimensions of the policy environment in which they worked’ (2019: 59). Now, in a multi-centric policymaking environment:
It is tricky to identify the client.
We could imagine the client to be someone paying for the analysis, someone affected by its recommendations, or all policy actors with the ability to act on the advice (2019: 10).
If there is ‘shared authority’ for policymaking within one political system, a ‘client’ (or audience) may be a collection of policymakers and influencers spread across a network containing multiple types of government, non-governmental actors, and actors responsible for policy delivery (2019: 33).
The growth in international cooperation also complicates the idea of a single client for policy advice (2019: 33-4)
This shift may limit the ‘face-to-face encounters’ that would otherwise provide information for – and perhaps trust in – the analyst (2019: 2-3).
It is tricky to identify the analyst.
Radin (2019: 9-25) traces, from the post-war period in the US, a major expansion of policy analysts, from the notional centre of policymaking in federal government towards analysts spread across many venues, inside government (across multiple levels, ‘policy units’, and government agencies) and congressional committees, and outside government (such as in influential think tanks).
Policy analysts can also be specialist external companies contracted by organisations to provide advice (2019: 37-8).
This expansion shifted the image of many analysts, from a small number of trusted insiders towards many being treated as akin to interest groups selling their pet policies (2019: 25-6).
The nature – and impact – of policy analysis has always been a little vague, but now it seems more common to suggest that ‘policy analysts’ may really be ‘policy advocates’ (2019: 44-6).
As such, they may now have to work harder to demonstrate their usefulness (2019: 80-1) and accept that their analysis will have a limited impact (2019: 82, drawing on Weiss’ discussion of ‘enlightenment’).
Consequently, the necessary skills of policy analysis have changed:
Although many people value systematic policy analysis (and many rely on economists), an effective analyst does not simply apply economic or scientific techniques to analyse a problem or solution, or rely on one source of expertise or method, as if it were possible to provide ‘neutral information’ (2019: 26).
Indeed, Radin (2019: 31; 48) compares the old ‘acceptance that analysts would be governed by the norms of neutrality and objectivity’ with
(a) increasing calls to acknowledge that policy analysis is part of a political project to foster some notion of public good or ‘public interest’, and
(b) Stone’s suggestion that the projection of reason and neutrality is a political strategy.
In other words, the fictional divide between political policymakers and neutral analysts is difficult to maintain.
Rather, think of analysts as developing wider skills to operate in a highly political environment in which the nature of the policy issue is contested, responsibility for a policy problem is unclear, and it is not clear how to resolve major debates on values and priorities:
Some analysts will be expected to see the problem from the perspective of a specific client with a particular agenda.
Other analysts may be valued for their flexibility and pragmatism, such as when they acknowledge the role of their own values, maintain or operate within networks, communicate by many means, and supplement ‘quantitative data’ with ‘hunches’ when required (2019: 2-3; 28-9).
Radin (2019: 21) emphasises a shift in skills and status:
The idea of (a) producing new and relatively abstract ideas, based on high control over available information, at the top of a hierarchical organisation, makes way for (b) developing the ability to:
generate a wider understanding of organisational and policy processes, reflecting the diffusion of power across multiple policymaking venues,
recognise the limits to a government’s ability to understand and solve problems (2019: 95-6),
engage with the inescapable conflict over trade-offs between values and goals, which are difficult to resolve simply by weighting each goal (2019: 105-8; see Stone), and
do so flexibly, recognising major variations in problem definition, attention, and networks across different policy sectors and notional ‘stages’ of policymaking (2019: 75-9; 84).
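A small sketch (my own illustration, not Radin’s method, with invented scores) shows why weighting goals does not dissolve the conflict: the same evidence produces a different ‘best’ option under different, equally defensible weights:

```python
# Invented 0-10 scores for two options against two valued goals.
scores = {
    "Option A": {"efficiency": 9, "equity": 3},
    "Option B": {"efficiency": 5, "equity": 8},
}

def best(weights):
    # A weighted sum looks technical, but the weights carry the politics:
    # choosing them is choosing whose priorities win.
    def total(option_scores):
        return sum(weights[goal] * score for goal, score in option_scores.items())
    return max(scores, key=lambda name: total(scores[name]))

print(best({"efficiency": 0.7, "equity": 0.3}))  # favours Option A
print(best({"efficiency": 0.3, "equity": 0.7}))  # favours Option B
```

The analysis is consistent and transparent in both cases; what it cannot do is settle which set of weights is the right one.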
Radin’s (2019: 48) overall list of relevant skills include:
‘Case study methods, Cost-benefit analysis, Ethical analysis, Evaluation, Futures analysis, Historical analysis, Implementation analysis, Interviewing, Legal analysis, Microeconomics, Negotiation, mediation, Operations research, Organizational analysis, Political feasibility analysis, Public speaking, Small-group facilitation, Specific program knowledge, Statistics, Survey research methods, Systems analysis’
They develop alongside analytical experience and status, from the early career analyst trying to secure or keep a job, to the experienced operator looking forward to retirement (2019: 54-5)
A checklist for policy analysts
Based on these skills requirements, the contested nature of evidence, and the complexity of the policymaking environment, Radin (2019: 128-31) produces a 4-page checklist of – 91! – questions for policy analysts.
For me, it serves two main functions:
It is a major contrast to the idea that we can break policy analysis into a mere 5-8 steps (rather, think of these small numbers as marketing for policy analysis students, akin to 7-minute abs)
It presents policy analysis as an overwhelming task with absolutely no guarantee of policy impact.
To me, this cautious, eyes-wide-open, approach is preferable to the sense that policy analysts can change the world if they just get the evidence and the steps right.
Iris Geva-May (2005) ‘Thinking Like a Policy Analyst. Policy Analysis as a Clinical Profession’, in Geva-May (ed) Thinking Like a Policy Analyst. Policy Analysis as a Clinical Profession (Basingstoke: Palgrave)
Although the idea of policy analysis may be changing, Geva-May (2005: 15) argues that it remains a profession with its own set of practices and ways of thinking. As with other professions (like medicine), it would be unwise to practise policy analysis without education and training or otherwise learning the ‘craft’ shared by a policy analysis community (2005: 16-17). For example, while not engaging in clinical diagnosis, policy analysts can draw on a 5-step process to diagnose a policy problem and potential solutions (2005: 18-21). Analysts may also combine these steps with heuristics to determine the technical and political feasibility of their proposals (2005: 22-5), as they address inevitable uncertainty and their own bounded rationality (2005: 26-34; see Gigerenzer on heuristics). As with medicine, some aspects of the role – such as research methods – can be taught in graduate programmes, while others may be better suited to on the job learning (2005: 36-40). If so, it opens up the possibility that there are many policy analysis professions to reflect different cultures in each political system (and perhaps the venues within each system).
These posts introduce you to key concepts in the study of public policy. They are all designed to turn a complex policymaking world into something simple enough to understand. Some of them focus on small parts of the system. Others present ambitious ways to explain the system as a whole. The wide range of concepts should give you a sense of a variety of studies out there, but my aim is to show you that these studies have common themes.