All going well, it will be out in November 2019. We are now at the proofing stage.
I have included below the summaries of the chapters (and each chapter should also have its own entry (or multiple entries) in the 1000 Words and 500 Words series).
Let’s imagine a heroic researcher, producing the best evidence and fearlessly ‘speaking truth to power’. Then, let’s place this person in four scenarios, each of which combines a discussion of evidence, policy, and politics in different ways.
Now, let’s use these scenarios to produce a 5-step way to ‘make evidence count’.
A narrow focus on making the supply of evidence count, via ‘evidence-based policymaking’, will always be dispiriting because it ignores politics or treats political choice as an inconvenience. If we:
In other words, think about the positive and necessary role of democratic politics before bemoaning post-truth politics and policy-based-evidence-making.
Policy is not made in a cycle containing a linear series of separate stages, and we won’t ‘make evidence count’ by using the cycle to inform our practices.
You might not want to give up the cycle image because it presents a simple account of how you should make policy. It suggests that we elect policymakers, who then: identify their aims, identify policies to achieve those aims, select a policy measure, ensure that the selection is legitimised by the population or its legislature, identify the necessary resources, implement, and then evaluate. Or, policymakers aided by expert policy analysts make and legitimise choices, skilful public servants carry them out, and policy analysts assess the results using evidence.
One compromise is to keep the cycle then show how messy it is in practice:
However, there comes a point when there is too much mess, and the image no longer helps you explain (a) to the public what you are doing, or (b) to providers of evidence how they should engage in political systems. By this point, simple messages from more complicated policy theories may be more useful.
Or, we may no longer want a cycle to symbolise a single source of policymaking authority. In a multi-level system, with many ‘centres’ possessing their own sources of legitimate authority, a single and simple policy cycle seems too artificial to be useful.
People are ‘cognitive misers’ seeking ‘rational’ and ‘irrational’ shortcuts to gather information for action, so you won’t get far if you bombard them with too much evidence. Policymakers already have too much evidence and they seek ways to reduce their cognitive load, relying on: (a) trusted sources of concise evidence relevant to their aims, and (b) their own experience, gut instinct, beliefs, and emotions.
The implication of both shortcuts is that we need to tell simple and persuasive stories about the substance and implications of the evidence we present. To say that ‘the evidence does not speak for itself’ may seem trite, but I’ve met too many people who assume naively that it will somehow ‘win the day’. In contrast, civil servants know that the evidence-informed advice they give to ministers needs to relate to the story that government ministers tell to the public.
In a complex or multi-level environment, one story to one audience (such as a minister) is not enough. If there are many key sources of policymaking authority – including public bodies with high autonomy, organisations and practitioners with the discretion to deliver services, and service users involved in designing services – there are many stories being told about what we should be doing and why. We may convince one audience and alienate (or fail to inspire) another with the same story.
Let me give you one example of the dilemmas that must arise when you combine evidence and politics to produce policy: how do you produce a model of ‘evidence-based best practice’ which combines evidence and governance principles in a consistent way? Here are 3 ideal-type models which answer the question in very different ways.
The table helps us think through the tensions between models, built on very different principles of good evidence and governance.
In practice, you may want to combine different elements, perhaps while arguing that the loss of consistency is outweighed by the gain in flexibility. Or, the dynamics of political systems may limit such choices or prompt ad hoc and inconsistent choices.
I built a lot of this analysis on the experiences of the Scottish Government, which juggles all three models, including a key focus on the improvement method in its Early Years Collaborative.
However, Kathryn Oliver and I show that the UK government faces the same basic dilemma and addresses it in similar ways.
The example freshest in my mind is Sure Start. Its rationale was built on RCT evidence and systematic review. However, its roll-out was built more on local flexibility and service design than insistence on fidelity to a model. More recently, the Troubled Families programme initially set the policy agenda and criteria for inclusion, but increasingly invites local public bodies to select the most appropriate interventions, aided by the Early Intervention Foundation which reviews the evidence but does not insist on one-best-way. Emily St Denny and I explore these issues further in our forthcoming book on prevention policy, an exemplar case study of a field in which it is difficult to know how to ‘make evidence count’.
Background. This is a post for my talk at the Government Economic Service and Government Social Research Service Annual Training Conference (15th September 2017). This year’s theme is ‘Impact and Future-Proofing: Making Evidence Count’. My brief is to discuss evidence use in the Scottish Government, but it faces the same basic question as the UK Government: how do you combine principles of evidence quality with principles of good governance? In other words, if you were in a position to design (a) an evidence-gathering system and (b) a political system, you’d soon find major points of tension between them. Resolving those tensions involves political choice, not more evidence. Of course, you are not in a position to design both systems, so the more complicated question is: how do you satisfy principles of evidence and governance in a complex policy process, often driven by policymaker psychology, over which you have little control? Here are 7 different ‘answers’.
Powerpoint Paul Cairney @ GES GSRS 2017
This is a guest post by William L. Swann (left) and Seo Young Kim (right), discussing how to use insights from the Institutional Collective Action Framework to think about how to improve collaborative governance. The full paper has been submitted to the Policy and Politics series Practical Lessons from Policy Theories.
Many public policy problems cannot be addressed effectively by a single, solitary government. Consider the problems facing the Greater Los Angeles Area, a heavily fragmented landscape of 88 cities and numerous unincorporated areas and special districts. Whether it is combatting rising homelessness, abating the country’s worst air pollution, cleaning the toxic L.A. River, or quelling gang violence, any policy alternative pursued unilaterally is limited by overlapping authority and externalities that alter the actions of other governments.
Problems of fragmented authority are not confined to metropolitan areas. They are also found in multi-level governance scenarios such as the restoration of Chesapeake Bay, as well as in international relations as demonstrated by recent global events such as “Brexit” and the U.S.’s withdrawal from the Paris Climate Agreement. In short, fragmentation problems manifest at every scale of governance, horizontally, vertically, and even functionally within governments.
In many cases governments would be better off coordinating and working together, but they face barriers that prevent them from doing so. These barriers are what the policy literature refers to as ‘institutional collective action’ (ICA) dilemmas, or collective action problems in which a government’s incentives do not align with collectively desirable outcomes. For example, all governments in a region benefit from less air pollution, but each government has an incentive to free ride and enjoy cleaner air without contributing to the cost of obtaining it.
The ICA Framework, developed by Professor Richard Feiock, has emerged as a practical analytical instrument for understanding and improving fragmented governance. This framework assumes that governments must match the scale and coerciveness of the policy intervention (or mechanism) to the scale and nature of the policy problem to achieve efficient and desired outcomes.
For example, informal networks (a mechanism) can be highly effective at overcoming simple collective action problems. But as problems become increasingly complex, more obtrusive mechanisms, such as governmental consolidation or imposed collaboration, are needed to achieve collective goals and more efficient outcomes. The more obtrusive the mechanism, however, the more actors’ autonomy diminishes and the higher the transaction costs (monitoring, enforcement, information, and agency) of governing.
We explored what actionable steps policymakers can take to improve their results with collaboration in fragmented systems. Our study offers three general practical recommendations based on the empirical literature that can enhance institutional collaborative governance.
First, institutional collaboration is more likely to emerge and work effectively when policymakers employ networking strategies that incorporate frequent, face-to-face interactions.
Government actors networking with popular, well-endowed actors (“bridging strategies”) as well as developing closer-knit, reciprocal ties with a smaller set of actors (“bonding strategies”) will result in more collaborative participation, especially when policymakers interact often and in-person.
Policy network characteristics are also important to consider. Research on estuary governance indicates that in newly formed, emerging networks, bridging strategies may be more advantageous, at least initially, because they can provide organizational legitimacy and access to resources. However, once collaboratives mature, developing stronger and more reciprocal bonds with fewer actors reduces the likelihood of opportunistic behavior that can hinder collaborative effectiveness.
Second, policymakers should design collaborative arrangements that reduce transaction costs which hinder collaboration.
Well-designed collaborative institutions can lower the barriers to participation and information sharing, make it easier to monitor the behaviors of partners, grant greater flexibility in collaborative work, and allow for more credible commitments from partners.
Research suggests policymakers can achieve this by:
Considering the context, however, is crucial. Collaboratives that thrive on informal, close-knit, reciprocal relations, for example, may be severely damaged by the introduction of monitoring mechanisms that signal distrust.
Third, institutional collaboration is enhanced by the development and harnessing of collaborative capacity.
Research suggests signaling organizational competencies and capacities, such as budget, political support, and human resources, may be more effective at lowering barriers to collaboration than ‘homophily’ (a tendency to associate with similar others in networks). Policymakers can begin building collaborative capacity by seeking political leadership involvement, granting greater managerial autonomy, and looking to higher-level governments (e.g., national, state, or provincial governments) for financial and technical support for collaboration.
Finally, we recognize that not all policymakers operate in similar institutional contexts, and collaboration can often be mandated by higher-level authorities in more centralized nations. Nonetheless, visible joint gains, economic incentives, transparent rules, and equitable distribution of joint benefits and costs are critical components of voluntary or mandated collaboration.
The recommendations offered here are, at best, only the tip of the iceberg of valuable practical insights that can be gleaned from collaborative governance research. While these suggestions are consistent with empirical findings from the broader public management and policy networks literatures, much could be learned from a closer inspection of the overlap between ICA studies and other streams of collaborative governance work.
Collaboration is a valuable tool of governance, and, like any tool, it should be utilized appropriately. Collaboration is not easily managed and can encounter many obstacles. We suggest that governments generally avoid collaborating unless there are joint gains that cannot be achieved alone. But the key to solving many of society’s intractable problems, or just simply improving everyday public service delivery, lies in a clearer understanding of how collaboration can be used effectively within different fragmented systems.
Filed under public policy
By Paul Cairney and Richard Kwiatkowski
Policymakers cannot pay attention to all of the things for which they are responsible, or understand all of the information they use to make decisions. Like all people, they face limits on the information they can process (Baddeley, 2003; Cowan, 2001, 2010; Miller, 1956; Rock, 2008).
They must use shortcuts to gather enough information to make decisions quickly: the ‘rational’, by pursuing clear goals and prioritizing certain kinds of information, and the ‘irrational’, by drawing on emotions, gut feelings, values, beliefs, habits, schemata, scripts, and what is familiar. Unlike most people, they face unusually strong pressures on their cognition and emotion.
Policymakers need to gather information quickly and effectively, often in highly charged political atmospheres, so they develop heuristics to allow them to make what they believe to be good choices. Perhaps their solutions seem to be driven more by their values and emotions than a ‘rational’ analysis of the evidence, often because we hold them to a standard that no human can reach.
If so, and if they have high confidence in their heuristics, they will dismiss criticism from researchers as biased and naïve. Under those circumstances, we suggest that restating the need for ‘rational’ and ‘evidence-based policymaking’ is futile, naively ‘speaking truth to power’ counterproductive, and declaring ‘policy based evidence’ defeatist.
We use psychological insights to recommend a shift in strategy for advocates of the greater use of evidence in policy. The simple recommendation, to adapt to policymakers’ ‘fast thinking’ (Kahneman, 2011) rather than bombard them with evidence in the hope that they will get round to ‘slow thinking’, is already becoming established in evidence-policy studies. However, we provide a more sophisticated understanding of policymaker psychology, to help understand how people think and make decisions as individuals and as part of collective processes. It allows us to (a) combine many relevant psychological principles with policy studies to (b) provide several recommendations for actors seeking to maximise the impact of their evidence.
To ‘show our work’, we first summarise insights from policy studies already drawing on psychology to explain policy process dynamics, and identify key aspects of the psychology literature which show promising areas for future development.
Then, we emphasise the benefit of pragmatic strategies, to develop ways to respond positively to ‘irrational’ policymaking while recognising that the biases we ascribe to policymakers are present in ourselves and our own groups. Instead of bemoaning the irrationality of policymakers, let’s marvel at the heuristics they develop to make quick decisions despite uncertainty. Then, let’s think about how to respond effectively. Instead of identifying only the biases in our competitors, and masking academic examples of group-think, let’s reject our own imagined standards of high-information-led action. This more self-aware and humble approach will help us work more successfully with other actors.
On that basis, we provide three recommendations for actors trying to engage skilfully in the policy process:
These tips are designed to produce effective, not manipulative, communicators. They help foster the clearer communication of important policy-relevant evidence, rather than imply that we should bend evidence to manipulate or trick politicians. We argue that it is pragmatic to work on the assumption that people’s beliefs are honestly held, and policymakers believe that their role is to serve a cause greater than themselves. To persuade them to change course requires showing simple respect and seeking ways to secure their trust, rather than simply ‘speaking truth to power’. Effective engagement requires skilful communication and good judgement as much as good evidence.
This is the introduction to our revised and resubmitted paper to the special issue of Palgrave Communications The politics of evidence-based policymaking: how can we maximise the use of evidence in policy? Please get in touch if you are interested in submitting a paper to the series.
Full paper: Cairney Kwiatkowski Palgrave Comms resubmission CLEAN 14.7.17
See also our project website IMAJINE.
Two recent articles explore the role of academics in the ‘co-production’ of policy and/or knowledge.
Both papers suggest (I think) that academic engagement in the ‘real world’ is highly valuable, and that we should not pretend that we can remain aloof from politics when producing new knowledge (research production is political even if it is not overtly party political). They also suggest that it is fraught with difficulty and, perhaps, an often-thankless task with no guarantee of professional or policy payoffs (intrinsic motivation still trumps extrinsic motivation).
So, what should we do?
I plan to experiment a little bit while conducting some new research over the next 4 years. For example, I am part of a new project called IMAJINE, and plan to speak with policymakers, from the start to the end, about what they want from the research and how they’ll use it. My working assumption is that it will help boost the academic value and policy relevance of the research.
I have mocked up a paper abstract to describe this kind of work:
In this paper, we use policy theory to explain why the ‘co-production’ of comparative research with policymakers makes it more policy relevant: it allows researchers to frame their policy analysis with reference to the ways in which policymakers frame policy problems; and, it helps them identify which policymaking venues matter, and the rules of engagement within them. In other words, theoretically-informed researchers can, to some extent, emulate the strategies of interest groups when they work out ‘where the action is’ and how to adapt to policy agendas to maximise their influence. Successful groups identify their audience and work out what it wants, rather than present their own fixed views to anyone who will listen.
Yet, when described so provocatively, our argument raises several practical and ethical dilemmas about the role of academic research. In abstract discussions, they include questions such as: should you engage this much with politics and policymakers, or maintain a critical distance; and, if you engage, should you simply reflect or seek to influence the policy agenda? In practice, such binary choices are artificial, prompting us to explore how to manage our engagement in politics and reflect on our potential influence.
We explore these issues with reference to a new Horizon 2020 funded project IMAJINE, which includes a work package – led by Cairney – on the use of evidence and learning from the many ways in which EU, national, and regional policymakers have tried to reduce territorial inequalities.
So, in the paper, we (my future research partner and I) would:
Overall, you can see the potential problems: you ‘enter’ the political arena to find that it is highly political! You find that policymakers are mostly interested in (what you believe are) ineffective or inappropriate solutions and/or they think about the problem in ways that make you, say, uncomfortable. So, should you engage in a critical way, risking exclusion from the ‘coproduction’ of policy, or in a pragmatic way, to ‘coproduce’ knowledge and maximise your chances of impact in government?
The case study of territorial inequalities is a key source of such dilemmas …
…partly because it is difficult to tell how policymakers define and want to solve such policy problems. When defining ‘territorial inequalities’, they can refer broadly to geographical spread, such as within the EU Member States, or even within regions of states. They can focus on economic inequalities, inequalities linked strongly to gender, race or ethnicity, mental health, disability, and/ or inequalities spread across generations. They can focus on indicators of inequalities in areas such as health and education outcomes, housing tenure and quality, transport, and engagement with social work and criminal justice. While policymakers might want to address all such issues, they also prioritise the problems they want to solve and the policy instruments they are prepared to use.
When considering solutions, they can choose from three basic categories:
Based on my previous work with Emily St Denny, I’d expect that many governments express a high commitment to reduce inequalities – and it is often sincere – but without wanting to use tax/ spending as the primary means, and faced with limited evidence on the effectiveness of public services and prevention. Or, many will prefer to identify ‘evidence-based’ solutions for individuals rather than to address ‘structural’ factors such as gender, ethnicity, and class. This is when the production and use of evidence becomes overtly ‘political’, because at the heart of many of these discussions is the extent to which individuals or their environments are to blame for unequal outcomes, and whether richer regions should compensate poorer regions.
‘The evidence’ will not ‘win the day’ in such debates. Rather, the choice will be between, for example: (a) pragmatism, to frame evidence to contribute to well-established beliefs, about policy problems and solutions, held by the dominant actors in each political system; and, (b) critical distance, to produce what you feel to be the best evidence generated in the right way, and challenge policymakers to explain why they won’t use it. I suspect that (a) is more effective, but (b) better reflects what most academics thought they were signing up to.
For more on IMAJINE, see New EU study looks at gap between rich and poor and The theory and practice of evidence-based policy transfer: can we learn how to reduce territorial inequalities?
For more on evidence/ policy dilemmas, see Kathryn Oliver and I have just published an article on the relationship between evidence and policy
Filed under Evidence Based Policymaking (EBPM), IMAJINE, public policy
“There is extensive health and public health literature on the ‘evidence-policy gap’, exploring the frustrating experiences of scientists trying to secure a response to the problems and solutions they raise and identifying the need for better evidence to reduce policymaker uncertainty. We offer a new perspective by using policy theory to propose research with greater impact, identifying the need to use persuasion to reduce ambiguity, and to adapt to multi-level policymaking systems”.
We use this table to describe how the policy process works, how effective actors respond, and the dilemmas that arise for advocates of scientific evidence: should they act this way too?
We summarise this argument in two posts for:
The Guardian If scientists want to influence policymaking, they need to understand it
Sax Institute The evidence policy gap: changing the research mindset is only the beginning
The article is part of a wider body of work in which one or both of us considers the relationship between evidence and policy in different ways, including:
Paul Cairney, Kathryn Oliver, and Adam Wellstead (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review PDF
Paul Cairney (2016) The Politics of Evidence-Based Policy Making (PDF)
Oliver, K., Innvar, S., Lorenc, T., Woodman, J. and Thomas, J. (2014a) ‘A systematic review of barriers to and facilitators of the use of evidence by policymakers’ BMC health services research, 14 (1), 2. http://www.biomedcentral.com/1472-6963/14/2
Oliver, K., Lorenc, T., & Innvær, S. (2014b) ‘New directions in evidence-based policy research: a critical analysis of the literature’, Health Research Policy and Systems, 12, 34 http://www.biomedcentral.com/content/pdf/1478-4505-12-34.pdf
Paul Cairney (2016) ‘Evidence-based best practice is more political than it looks’, Evidence and Policy
Many of my blog posts explore how people like scientists or researchers might understand and respond to the policy process:
The Science of Evidence-based Policymaking: How to Be Heard
Policy Concepts in 1000 Words: ‘Evidence Based Policymaking’
‘Evidence-based Policymaking’ and the Study of Public Policy
How far should you go to secure academic ‘impact’ in policymaking?
Psychology Based Policy Studies: 5 heuristics to maximise the use of evidence in policymaking
What 10 questions should we put to evidence for policy experts?
Why doesn’t evidence win the day in policy and policymaking?
We all want ‘evidence based policy making’ but how do we do it?
The Politics of Evidence Based Policymaking: 3 messages
The politics of evidence-based best practice: 4 messages
The politics of implementing evidence-based policies
There are more posts like this on my EBPM page
I am also guest editing a series of articles for the Open Access journal Palgrave Communications on the ‘politics of evidence-based policymaking’ and we are inviting submissions throughout 2017.
There are more details on that series here.
And finally …
… if you’d like to read about the policy theories underpinning these arguments, see Key policy theories and concepts in 1000 words and 500 words.
Filed under Evidence Based Policymaking (EBPM), public policy
It can be quite daunting to produce a policy analysis paper or blog post for the first time. You learn about the constraints of political communication by being obliged to explain your ideas in an unusually small number of words. The tight word limit seems good at first, but then you realise that it makes your life harder: how can you fit all your evidence and key points in? The answer is that you can’t. You have to choose what to say and what to leave out.
You also have to make this presentation ‘not about you’. In a long essay or research report, you have time to show how great you are, to a captive audience. In a policy paper, imagine that you are trying to get the attention and support of someone who may not know or care about the issue you raise. In a blog post, your audience might stop reading at any point, so every sentence counts.
There are many guides out there to help you with the practical side, including the broad guidance I give you in the module guide, and Bardach’s 8-steps. In each case, the basic advice is to (a) identify a policy problem and at least one feasible solution, and (b) tailor the analysis to your audience.
Be concise, be smart
So, for example, I ask you to keep your analysis and presentations super-short on the assumption that you have to make your case quickly to people with 99 other things to do. What can you tell someone in a half-page (to get them to read all 2 pages)? Could you explain and solve a problem if you suddenly bumped into a government minister in a lift/ elevator?
It is tempting to try to tell someone everything you know, because everything is connected and to simplify is to describe a problem simplistically. Instead, be smart enough to know that such self-indulgence won’t impress your audience. They might smile politely, but their eyes are looking at the elevator lights.
Your aim is not to give a full account of a problem – it’s to get someone important to care about it.
Your aim is not to give a painstaking account of all possible solutions – it’s to give a sense that at least one solution is feasible and worth pursuing.
Your guiding statement should be: policymakers will only pay attention to your problem if they think they can solve it, and only if the solution does not seem too costly.
Be creative
I don’t like to give you too much advice because I want you to be creative about your presentation; to be confident enough to take chances and feel that I’ll reward you for making the leap. At the very least, you have three key choices to make about how far you’ll go to make a point:
Be reflective
For our purposes, there are no wrong answers to these questions. Instead, I want you to make and defend your decisions. That is the aim of your policy paper ‘reflection’: to ‘show your work’.
You still have some room to be creative: tell me what you know about policy theory and British politics and how it informed your decisions. Here are some examples, but it is up to you to decide what to highlight:
Be a blogger
With a blog post, your audience is wider. You are trying to make an argument that will capture the attention of a more general audience (interested in politics and policy, but not a specialist) that might access your post from Twitter/ Facebook or via a search engine. This produces new requirements: present a ‘punchy’ title which sums up the whole argument in under 140 characters (a statement is often better than a vague question); summarise the whole argument in (say) 100 words in the first paragraph (what is the problem and solution?); and provide more information up to a maximum of 500 words. The reader can then be invited to read the whole policy analysis.
The style of blog posts varies markedly, so you should consult many examples before attempting your own (compare the LSE with The Conversation and newspaper columns to get a sense of variations in style). When you read other posts, take note of their strengths and weaknesses. For example, many posts associated with newspapers introduce a personal or case study element to ground the discussion in an emotional appeal. Sometimes this works, but sometimes it causes the reader to scroll down quickly to find the main argument. Consider whether it is as, or more, effective to make your argument more direct and easy to find as soon as someone clicks the link on their phone. Many academic posts are too long (well beyond your 500-word limit), take too long to get to the point, and do not make explicit recommendations, so you should not merely emulate them. You should also not just chop down your policy paper – this is about a new kind of communication.
Be reflective once again
Hopefully, by the end, you will appreciate the transferable life skills. I have generated some uncertainty about your task to reflect the sense, among many actors, that they don’t really know how to make a persuasive case or whom to make it to. We can follow some basic Bardach-style guidance, but a lot of this kind of work relies on trial-and-error. I maintain a short word count to encourage you to get to the point, and I bang on about ‘stories’ in our module to encourage you to tell a short and persuasive story to policymakers.
This process seems weird at first, but isn’t it also intuitive? For example, next time you’re in my seminar, measure how long it takes you to get bored and look forward to the weekend. Then imagine that policymakers have the same attention span as you. That’s how long you have to make your case!
See also: Professionalism online with social media
Here is the advice that my former lecturer, Professor Brian Hogwood, gave in 1992. Has the advice changed much since then?
Filed under Evidence Based Policymaking (EBPM), Folksy wisdom, POLU9UK
This is the first of 10 blog posts for the course POLU9UK: Policy and Policymaking in the UK. They will be a fair bit longer than the blog posts I asked you to write. I have also recorded a short lecture to go with it (OK, 22 minutes isn’t short).
In week 1 we’ll identify all that we think we knew about British politics, compare notes, then throw up our hands and declare that the Brexit vote has changed what we thought we knew.
I want to focus on the idea that a vote for the UK to leave the European Union was a vote for UK sovereignty. People voted Leave/ Remain for all sorts of reasons, and bandied around all sorts of ways to justify their position, but the idea of sovereignty and ‘taking back control’ is central to the Leave argument and this module.
For our purposes, it relates to broader ideas about the images we maintain about who makes key decisions in British politics, summed up by the phrases ‘parliamentary sovereignty’ and the ‘Westminster model’, and challenged by terms such as ‘bounded rationality’, ‘policy communities’, ‘multi-level governance’, and ‘complex government’.
Parliamentary Sovereignty
UK sovereignty relates strongly to the idea of parliamentary sovereignty: we vote in constituencies to elect MPs as our representatives, and MPs as a whole represent the final arbiters on policy in the UK. In practice, one party tends to dominate Parliament, and the elected government tends to dominate that party, but the principle remains important.
So, ‘taking back control’ is about responding, finally, to the sense that (a) the UK’s entry to the European Union from 1972 (when it signed the accession treaty) involved giving up far more sovereignty than most people expected, and (b) the European Union’s role has strengthened ever since, at the further expense of parliamentary sovereignty.
The Westminster Model
This idea of parliamentary sovereignty connects strongly to elements of the ‘Westminster model’ (WM), a shorthand phrase to describe key ways in which the UK political system is designed to work.
Our main task is to examine how well the WM: (a) describes what actually happens in British politics, and (b) represents what should happen in British politics. We can separate these two elements analytically but they influence each other in practice. For example, I ask what happens when elected policymakers know their limits but have to pretend that they don’t.
What should happen in British politics?
Perhaps policymaking should reflect strongly the wishes of the public. In representative democracies, political parties engage each other in a battle of ideas, to attract the attention and support of the voting public; the public votes every 4-5 years; the winner forms a government; the government turns its manifesto into policy; and, policy choices are carried out by civil servants and other bodies. In other words, there should be a clear link between public preferences, the strategies and ideas of parties and the final result.
The WM serves this purpose in a particular way: the UK has a plurality (‘first past the post’) voting system which tends to exaggerate support for, and give a majority in Parliament to, the winning party. It has an adversarial (and majoritarian?) style of politics and a ‘winner takes all’ mentality which tends to exclude opposition parties. The executive resides in the legislature and power tends to be concentrated within government – in ministers that head government departments and the Prime Minister who heads (and determines the members of) Cabinet. The government is responsible for the vast majority of public policy and it uses its governing majority, combined with a strong party ‘whip’, to make sure that its legislation is passed by Parliament.
In other words, the WM narrative suggests that the UK policy process is centralised and that the arrangement reflects a ‘British political tradition’: the government is accountable to the public on the assumption that it is powerful and responsible. So, you know who is in charge and therefore who to praise or blame, and elections every 4-5 years are supplemented by parliamentary scrutiny built on holding ministers directly to account.
Pause for further reading: at this point, consider how this WM story links to a wider discussion of centralised policymaking (in particular, read the 1000 Words post on the policy cycle).
What actually happens?
One way into this discussion is to explore modern discussions of disenchantment with distant political elites who seem to operate in a bubble and fail to demonstrate their accountability to the public. For example, there is a literature on the extent to which MPs are likely to share the same backgrounds: white, male, middle class, and educated in private schools and Oxford or Cambridge. Or, the idea of a ‘Westminster bubble’ and distant ‘political class’ comes up in discussions of constitutional change (including the Scottish referendum debate), and was exacerbated by the expenses scandal in 2009.
Another is to focus on the factors that undermine this WM image of central control: maybe Westminster political elites are remote, but they don’t control policy outcomes. Instead, there are many factors which challenge the ability of elected policymakers to control the policy process. We will focus on these challenges throughout the course:
Challenge 1. Bounded rationality
Ministers only have the ability to pay attention to a tiny proportion of the issues over which they have formal responsibility. So, how can they control issues if they have to ignore them? Much of the ‘1000 Words’ series explores the general implications of bounded rationality.
Challenge 2. Policy communities
Ministers don’t quite ignore issues; they delegate responsibility to civil servants at a relatively low level of government. Civil servants make policy in consultation with interest groups and other participants with the ability to trade resources (such as information) for access or influence. Such relationships can endure long after particular ministers or elected governments have come and gone.
In fact, this argument developed partly in response to discussions in the 1970s about the potential for plurality elections to cause huge swings in party success, and therefore frequent changes of government and reversals of government policy. Rather, scholars such as Jordan and Richardson identified policy continuity despite changes of government (although see Richardson’s later work).
Challenge 3. Multi-level governance
‘Multi-level’ refers to a tendency for the UK government to share policymaking responsibility with international, EU, devolved, and local governments.
‘Governance’ extends the logic of policy communities to identify a tendency to delegate or share responsibility with non-governmental and quasi-non-governmental organisations (quangos).
So, MLG can describe a clear separation of powers at many levels and a fairly coherent set of responsibilities in each case. Or, it can describe a ‘patchwork quilt’ of relationships which is difficult to track and understand. In either case, we identify ‘polycentricity’ or the presence of more than one ‘centre’ in British politics.
Challenge 4. Complex government
The phrase ‘complex government’ can be used to describe the complicated world of public policy, with elements including:
Overall, these factors generate a sense of complex government that challenges the Westminster-style notion of accountability. How can we hold elected ministers to account if:
Challenge 5. The policy environment and unpredictable events
Further, such governments operate within a wider environment in which conditions and events are often out of policymakers’ control. For example, how do they deal with demographic change or global economic crisis? Policymakers have some choice about the issues to which they pay attention, and the ways in which they understand and address them. However, they do not control that agenda or policy outcomes in the way we associate with the WM image of central control.
How has the UK government addressed these challenges?
We can discuss two key themes throughout the course:
What does this tell us about our initial discussion of Brexit?
None of these factors help downplay the influence of the EU on the UK. Rather, they prompt us to think harder about the meaning, in practice, of parliamentary sovereignty and the Westminster model which underpins ongoing debates about the UK-EU relationship. In short, we can explore the extent to which a return to ‘parliamentary sovereignty’ describes little more than a principle that is not borne out in practice. Such principles are important, but let’s also focus on what actually happens in British politics.
Filed under POLU9UK, UK politics and policy
Here is the dilemma for ‘evidence-based’ ‘troubled families’ policy: there are many indicators of ‘policy based evidence’ but few (if any) feasible and ‘evidence based’ alternatives.
Viewed from the outside, the Troubled Families (TF) programme looks like a cynical attempt to produce a quick fix to the London riots, stigmatise vulnerable populations, and hoodwink the public into thinking that the central government is controlling local outcomes and generating success.
Viewed from the inside, it is a pragmatic policy solution, informed by promising evidence, which needs to be sold in the right way. For the UK government there may seem to be little alternative to this policy, given the available evidence, the need to do something for the long term, and the need to account for itself in a Westminster system in the short term.
So, in this draft paper, I outline this disconnect between interpretations of ‘evidence based policy’ and ‘policy based evidence’ to help provide some clarity on the pragmatic use of evidence in politics:
cairney-offshoot-troubled-families-ebpm-5-9-16
See also:
In each of these posts, I note that it is difficult to know how, for example, social policy scholars should respond to these issues – but that policy studies help us identify a choice between strategies. In general, pragmatic strategies to influence the use of evidence in policy include: framing issues to catch the attention or manipulate policymaker biases, identifying where the ‘action’ is in multi-level policymaking systems, and forming coalitions with like-minded and well-connected actors. In other words, to influence rather than just comment on policy, we need to understand how policymakers would respond to external evaluation. So, a greater understanding of the routine motives of policymakers can help produce more effective criticism of their problematic use of evidence. In social policy, there is an acute dilemma about the choice between engagement, to influence and be influenced by policymakers, and detachment, to ensure critical distance. If choosing the latter, we need to think harder about how criticism of PBE makes a difference.
See also: The Scottish Parliament would be crap in an independent Scotland and almost no-one cares
The Scottish Government made a recent amendment to the Scottish Ministerial Code to restrict the role of MSPs serving as ‘Parliamentary Liaison Officers’ (PLOs) in the Scottish Parliament. PLOs are not members of the Scottish Government, but they work closely with ministers and sit on committees scrutinising ministers, which blurs the boundary between policymaking and scrutiny.
While previous Labour-led governments made a decent effort to deny that this is a problem (1999-2007), the SNP (from 2007) perfected that denial by allowing PLOs to sit on the very committees scrutinising their ministers.
Now, after some (social and traditional) media and opposition party pressure, the revised guidelines in the 2016 Scottish Ministerial Code remove a large part of the problem:
PLOs may serve on Parliamentary Committees, but they should not serve on Committees with a substantial direct link to their Cabinet Secretary’s portfolio … At the beginning of each Parliamentary session, or when changes to PLO appointments are made, the Minister for Parliamentary Business will advise Parliament which MSPs have been appointed as PLOs. The Minister for Parliamentary Business will also ensure that PLO appointments are brought to the attention of Committee Conveners. PLOs should ensure that they declare their appointment as a PLO on the first occasion they are participating in Parliamentary business related to the portfolio of their Cabinet Secretary.
The only thing that (I think) remains missing is the stipulation in the 2003 code that PLOs ‘should not table oral Parliamentary Questions on issues for which their minister is responsible’. So, we should still expect the odd question along the lines of, ‘Minister, why are you so great?’.
Filed under Scottish politics
Here is a four-step plan to avoid having to talk about how powerless the Scottish Parliament tends to be, in comparison to the old idea of ‘power sharing’ with the Scottish Government:
This pretty much sums up the reaction to the SNP’s use of Parliamentary Liaison Officers (PLOs) on Scottish parliamentary committees: the MSP works closely with a minister and sits on the committee that is supposed to hold the minister to account. The practice ensures that there is no meaningful dividing line between government and parliament, and reinforces the sense that the parliament is not there to provide effective scrutiny and robust challenge to the government. Instead, plenary is there for the pantomime discussion, and committees are there to provide run-of-the-mill, humdrum scrutiny with minimal effect on ministers.
The use of PLOs on parliamentary committees has become yet another example in which the political parties – or, at least, any party with a chance of being in government – put themselves before the principles of the Scottish Parliament (set out in the run-up to devolution). Since devolution, the party of government has gone further than you might expect to establish its influence on parliament: controlling who convenes (its share of) committees and which of its MSPs sit on committees, and moving them around if they get too good at holding ministers to account or asking too-difficult questions. An MSP on the side of government might get a name for themselves if they ask a follow-up question to a minister in a committee instead of nodding appreciatively – and you don’t want that sort of thing to develop. Better to keep it safe and ask your MSPs not to rock the boat, or move them on if they cause a ripple.
So, maybe the early founders of devolution wanted MSPs to sit on the same committees for long periods, to help them develop expertise, build up a good relationship with MSPs from other parties, and therefore work effectively to hold the government to account. Yet, no Scottish government has been willing to let go, to allow that independent role to develop. Instead, they make sure that they have at least one key MSP on each committee to help them agree the party line that all their MSPs are expected to follow. So, this development, of parliamentary aides to ministers corresponding almost exactly with committee membership, might look new, but it is really an extension of longstanding practices to curb the independent power of parliaments and their committees – and the party in government has generally resisted any reforms (including those proposed by the former Presiding Officer Tricia Marwick) to challenge its position.
Maybe the only surprise is that ‘new politics’ seems worse than old Westminster. In Westminster committees, some MPs can make a career as a chair, and their independence from government is far clearer – something that it is keen to reinforce with initiatives such as MPs electing chairs in secret ballots. In comparison, the Scottish Parliament seems like a far poorer relation to its Scottish Government counterpart – partly because of complacency and a lack of continuous reform.
Almost no-one cares about this sort of thing
What is not surprising is the general reaction to the Herald piece on the 15th August – and the follow up on the 16th – which pointed out that the SNP was going further than the use of PLOs it criticised while in opposition.
So, future Scottish Cabinet Secretary Fiona Hyslop – quite rightly – criticised this practice in 2002, arguing that it went against the government’s Scottish Ministerial Code. Note the Labour-led government’s ridiculous defence, which it got away with because (a) almost no-one cares, and (b) the governing parties dominate the parliament.
Then, in 2007, the SNP government’s solution was to remove the offending section from that Code. Problem solved!
Now, its defence is that Labour used to do it and the SNP has been doing it for 9 years, so why complain now? It can get away with it because almost no-one cares. Of those who might care, most only care if it embarrasses one of the parties at the expense of another. When it looks like they might all be at it, it’s OK. Almost no-one pays attention to the principle that the Scottish Parliament should have a strong role independent of government, and that this role should not be subject to the whims of self-interested political parties.
So, I feel the need to provide a reason for SNP and independence supporters to care more about this, and here goes:
It is complacent nonsense, treating the Scottish political system as an afterthought, and it might just come back to bite the SNP in the bum. The implicit argument – that the Scottish Parliament would be just as crap in an independent Scotland as it is now, and almost no-one cares – is a poor one. Or, to put it in terms of the standard of partisan debate on Twitter: shitey whataboutery might make you feel good on Twitter, but it won’t win you any votes in the next referendum.
See also: Lucy Hunter Blackburn’s Patrick Harvie highlights close links between ministerial aides and parliamentary committees
Filed under Scottish independence, Scottish politics
In a series of heroic leaps of logic, I aim to highlight some important links between three current concerns: Labour’s leadership contest, the Brexit vote built on emotion over facts, and the insufficient use of evidence in policy. In each case, there is a notional competition between ‘idealism’ and ‘pragmatism’ (as defined in common use, not philosophy): the often-unrealistic pursuit of a long-term ideal versus the focus on solving more immediate problems, often by compromising ideals and getting your hands dirty. We know what this looks like in party politics, including the compromises that politicians make to win elections and the consequences for their image, but do we know how to make the same compromises when we appeal for a more deliberative referendum or more evidence-informed policymaking?
I searched Google for a few minutes until I found a decent hook for this post. It is a short Forbes article by Susan Gunelius advocating a good mix of pragmatic and idealistic team members:
“Pragmatic leaders focus on the practical, “how do we get this done,” side of any task, initiative or goal. They can erroneously be viewed as negative in their approach when in fact they simply view the entire picture (roadblocks included) to get to the end result. It’s a linear, practical way of thinking and “doing.”
Idealist leaders focus on the visionary, big ideas. It could be argued that they focus more on the end result than the path to get there, and they can erroneously be viewed as looking through rose-colored glasses when, in fact, they simply “see” the end goal and truly believe there is a way to get there”.
On the surface, it’s a neat description of the current battle to win the Labour party, with Jeremy Corbyn representing the idealist willing to lose elections to stay true to the pure ideal, and Owen Smith representing the pragmatist willing to compromise on the ideal to win an election.
In this context, pragmatic politicians face a dilemma that we often take for granted in party politics: they want to look flexible enough to command the ‘centre’ ground, but also appear principled and unwilling to give up completely on their values to secure office. Perhaps pragmatists also accept, to a large extent, that the ends can justify the means: they can compromise their integrity and break a few rules to win office if it means that they serve the long-term greater good as a result (in this case, better a compromised socialist than a Tory government). So, politicians accept that a slightly tarnished image is the price you pay to get what you want.
For current purposes, let us assume that you are the kind of person drawn more to the pragmatist rather than the idealist politician; you despair at the naiveté of the idealist politician, and expect to see them fail rather than gain office.
If so, how might we draw comparisons with other areas in politics and policymaking?
Referendums should be driven by facts and an intelligent public, not lies and emotions
Many people either joke or complain seriously about most of the public being too stupid to engage effectively in elections and referendums. I will use this joke about Trump because I saw it as a meme, and on Facebook it has 49,000 smiley faces already:
A more serious idealistic argument about the Brexit vote goes something like this:
You often have to read between the lines and piece together this argument, but Dame Liz Forgan recently did me a favour by spelling out a key part in a speech to the British Academy:
Democracies require not just literate and numerate electorates. They need people who cannot be sold snake oil by every passing shyster because their critical faculties have been properly honed. Whose popular culture has not degenerated so completely that every shopping channel hostess is classed as a celebrity. Where post-modern irony doesn’t undermine both honest relaxation and serious endeavour. Where the idea of a post-factual age is seen as an acute peril not an amusing cultural meme. If the events of June have taught us anything it is that we need to put the rigour back in our education, the search for truth back in our media.
Of course, I have cherry picked the juiciest part to highlight a sense of idealism that I have seen in many places. Let’s link it back to our despair at the naïvely idealist politician: doesn’t this look quite similar? If we took this line, and pursued public education as our main solution to Brexit, wouldn’t people think that we are doomed to fail in the long term and lose a lot of other votes on the way?
Another (albeit quicker and less idealistic) solution, proposed largely by academics (many of whom are highly critical of the campaigns) is largely institutional: let’s investigate the abuse of facts during the referendum to help us produce new rules of engagement. Yet, if the problem is that people are too stupid or emotional to process facts, it doesn’t seem that much more effective.
At this stage, I’d like to say: instead of clinging to idealism, let’s be pragmatic about this. If you despair of the world, get your hands dirty to win key votes rather than hope that people will do the right thing or wait for a sufficiently ‘rational’ public.
Yet, I don’t think we yet know enough about how to do it and how far ‘experts’ should go, particularly since many experts are funded – directly or indirectly – by the state and are subject to different (albeit often unwritten) rules than politicians. So, in a separate post, I provide some bland advice that might apply to all:
Yet, if we think that other referendum participants are winning because they are lying and cheating, we might also think that honourable strategies won’t tip the balance. We know that, like pragmatic politicians, we might need to go a bit further to win key debates. Anything else is idealism, right?
Policy should be based on evidence, not electoral politics, ideology and emotion
The same can be said for many scientists bemoaning the lack of ‘evidence-based policymaking’ (EBPM). Some express the naïve hope that politicians become trained to think like scientists and/ or the view that evidence-based policymaking should be more like the idea of evidence-based medicine in which there is a hierarchy of evidence. Others try to work out how they can improve the supply of evidence or set up new institutions to get policymakers to pay more attention to facts. This seems to be EBPM’s equivalent of idealism, in which you largely wish for something that won’t exist rather than trying to produce pragmatic strategies for the real world.
A more pragmatic two-step solution is to:
(1) work out how and why policymakers demand information, and the policymaking context in which they operate (which I describe in The Politics of Evidence-Based Policymaking, and with Kathryn Oliver and Adam Wellstead in PAR).
(2) draw on as many interdisciplinary insights as possible to explore how to do something about it, such as to establish the psychology of policymakers and identify good ways to tell simple stories to generate an emotional connection to your evidence (which I describe in a forthcoming special issue in Palgrave Communications).
Should academics remain idealists rather than pragmatists?
Of course, it is legitimate to take what I am calling an idealistic approach. In politics, Corbyn’s idealism is certainly capturing a part of the public imagination (while another part of the public watches on, sniggering or aghast). In the Academy, it may be a part of a legitimate attempt to maintain your integrity by not engaging directly in politics or policymaking, and/or accepting that academics largely contribute to a very long-term ‘enlightenment’ function rather than enjoy immediate impact. All I am saying is that you need to choose and, if you seek more direct impact, you need to forgo idealism and start thinking about what it means to be pragmatic while pursuing ‘evidence-informed’ politics.
It is easy to reject the empirical value of the policy cycle, but difficult to replace it as a practical tool. I identify the implications for students, policymakers, and the actors seeking influence in the policy process.
A policy cycle divides the policy process into a series of stages:
Most academics (and many practitioners) reject it because it oversimplifies, and does not explain, a complex policymaking system in which these stages may not occur (or occur in this order), or in which we would do better to imagine thousands of policy cycles interacting with each other to produce less orderly behaviour and less predictable outputs.
But what do we do about it?
The implications for students are relatively simple: we have dozens of concepts and theories which serve as better ways to understand policymaking. In the 1000 Words series, I give you 25 to get you started.
The implications for policymakers are less simple, because the cycle may be unrealistic yet useful. Stages can be used to organise policymaking in a simple way: identify policymaker aims, identify policies to achieve those aims, select a policy measure, ensure that the selection is legitimised by the population or its legislature, identify the necessary resources, implement, and then evaluate the policy. The idea is simple and the consequent advice to policy practitioners is straightforward. A major unresolved challenge for scholars and practitioners is to describe a more meaningful, more realistic analytical model to policymakers, and to give advice on how to act and justify action in the same straightforward way. So, in this article, I discuss how to reconcile policy advice based on complexity and pragmatism with public and policymaker expectations.
The implications for actors trying to influence policymaking can be dispiriting: how can we engage effectively in the policy process if we struggle to understand it? So, in this page (scroll down – it’s long!), I discuss how to present evidence in complex policymaking systems.
Take home message for students. It is easy to describe then assess the policy cycle as an empirical tool, but don’t stop there. Consider how to turn this insight into action. First, examine the many ways in which we use concepts to provide better descriptions and explanations. Then, think about the practical implications. What useful advice could you give an elected policymaker, trying to juggle pragmatism with accountability? What strategies would you recommend to actors trying to influence the policy process?
Filed under 500 words, public policy
The Scottish Government’s former Permanent Secretary Sir Peter Housden (2013) labelled the ‘Scottish Approach to Policymaking’ (SATP) as an alternative to the UK model of government. He described in broad terms the rejection of command-and-control policymaking and many elements of New Public Management driven delivery. Central to this approach is the potentially distinctive way in which it uses evidence to inform policy and policymaking and, therefore, a distinctive approach to leadership and public service delivery. Yet, there are three different models of evidence-driven policy delivery within the Scottish Government, and they compete with the centralist model, associated with democratic accountability, that must endure despite a Scottish Government commitment to its replacement. In this paper, I describe these models, identify their different implications for leadership and public service delivery, and highlight the enduring tensions in public service delivery when governments must pursue very different and potentially contradictory aims. Overall, the SATP may represent a shift from the UK model, but it is not a radical one.
Cairney QMU Leadership and SATP 11.5.16
The paper is for a workshop called ‘Leading Change in Public Services’ at Queen Margaret University, 13th June 2016.
You should get the impression from the 1000 Words series that most policy changes are small or not radically different from the past: Lindblom identifies incrementalism; punctuated equilibrium highlights a huge number of small changes and a small number of huge changes; the ACF compares routine learning by a dominant coalition with a ‘shock’ which prompts new subsystem and policy dynamics; and multiple streams identifies the conditions (rarely met) for major change.
Yet, I just gave you the impression that we don’t know how to define policy. If we can’t define it well, how can we measure it well enough to come to this conclusion so consistently?
Why is the measurement of policy change important?
We miss a lot if we equate policy with statements rather than outputs/ outcomes. We also miss a lot if we equate policy change with the most visible outputs such as legislation. I list 16 different policy instruments, although they tend to be grouped into smaller categories: focusing on regulation (including legislation) and resources (money and staffing) to accentuate the power at policymakers’ disposal; or regulatory/ distributive/ redistributive to suggest that some policy measures are more difficult to ‘sell’ than others.
We also give a limited picture if we equate change with outputs rather than outcomes, since a key insight from policy studies is that there is generally a gap between policymaker expectations and the actual result.
What are the key issues in measurement?
So, as in defining policy change, we need to make choices about what counts as policy in this instance to measure how much it has changed. For example, I have (a) written on one output as a key exemplar of policy change – legislation to ban smoking in public places for Scotland, England/ Wales, the UK, and (almost) EU – to show that a government is signalling major changes to come, but also (b) situated that policy instrument within a much broader discussion – of many tobacco policies in the UK and across the globe – to examine the extent to which it is already consistent with a well-established direction of travel.
To make such choices we need to consider:
How do we solve the problem?
The problem is that we can produce very different accounts of policy change from the same pool of evidence, by accentuating some measures and ignoring others, or by putting more faith in some data than others (e.g. during interviews).
My preferred solution is sometimes to compare more than one narrative of policy change. Another is simply to ‘show your work’.
Take home message for students: ‘show your work’ means explaining your logical process and step-by-step choices. Don’t just write that it is difficult to define policy and measure change. Instead, explain how you assess policy change in one important way, why you chose this way, and shine a light on the payoffs to your approach. Read up on how other scholars do it, to learn good practice and how to make your results comparable to theirs. Indeed, part of the benefit of using an established theory, to guide our analysis, is that we can engage in research systematically as a group.
PS Here is the way in which I describe these issues to MPP students writing theory-driven coursework on policy and policy change (using the case study of UK tobacco policy as a guide):
Filed under 500 words, public policy, Research design
This paper – Cairney PSA 2016 UK fracking 15.3.16 – collects insights from the comparative study of ‘fracking’ policy, including a forthcoming book using the ‘Advocacy Coalition Framework’ to compare policy and policymaking in the US, Canada and five European countries (Weible, Heikkila, Ingold and Fischer, 2016), the UK chapter, and offshoot article submissions comparing the UK with Switzerland. It is deliberately brief, to reflect the likelihood that, in a 90-minute panel with 5 papers, we will need to keep our initial presentations short and sweet. I am also a member of the no-powerpoint-collective.
See also Three lessons from a comparison of fracking policy in the UK and Switzerland
Filed under agenda setting, Fracking, public policy, UK politics and policy
Here is my talk (2 parts) on EBPM at the School of Public Affairs, University of Colorado Denver 24.2.16 (or download the main talk and Q and A):
You can find more on this topic here: https://paulcairney.wordpress.com/ebpm/
Filed under Evidence Based Policymaking (EBPM)
We can generate new insights on policymaking by connecting the dots between many separate concepts. However, don’t underestimate some major obstacles or how hard these dot-connecting exercises are to understand. They may seem clear in your head, but describing them (and getting people to go along with your description) is another matter. You need to set out these links clearly and in a set of logical steps. I give one example – of the links between evidence and policy transfer – which I have been struggling with for some time.
In this post, I combine three concepts – policy transfer, bounded rationality, and ‘evidence-based policymaking’ – to identify the major dilemmas faced by central government policymakers when they use evidence to identify a successful policy solution and consider how to import it and ‘scale it up’ within their jurisdiction. For example, do they use randomised control trials (RCTs) to establish the effectiveness of interventions and require uniform national delivery (to ensure the correct ‘dosage’), or tell stories of good practice and invite people to learn and adapt to local circumstances? I use these examples to demonstrate that our judgement of good evidence influences our judgement on the mode of policy transfer.
Insights from each concept
From studies of policy transfer, we know that central governments (a) import policies from other countries and/ or (b) encourage the spread (‘diffusion’) of successful policies which originated in regions within their country: but how do they use evidence to identify success and decide how to deliver programs?
From studies of ‘evidence-based policymaking’ (EBPM), we know that providers of scientific evidence identify an ‘evidence-policy gap’ in which policymakers ignore the evidence of a problem and/ or do not select the best evidence-based solution: but can policymakers simply identify the ‘best’ evidence and ‘roll-out’ the ‘best’ evidence-based solutions?
From studies of bounded rationality and the policy cycle (compared with alternative theories, such as multiple streams analysis or the advocacy coalition framework), we know that it is unrealistic to think that a policymaker at the heart of government can simply identify then select a perfect solution, click their fingers, and see it carried out. This limitation is more pronounced when we identify multi-level governance, or the diffusion of policymaking power across many levels and types of government. Even if they were not limited by bounded rationality, they would still face: (a) practical limits to their control of the policy process, and (b) a normative dilemma about how far you should seek to control subnational policymaking to ensure the delivery of policy solutions.
The evidence-based policy transfer dilemma
If we combine these insights we can identify a major policy transfer dilemma for central government policymakers:
Note how closely connected these concerns are: our judgement of the ‘best evidence’ can produce a judgement on how to ‘scale up’ success.
Here are three ideal-type approaches to using evidence to transfer or ‘scale up’ successful interventions. In at least two cases, the choice of ‘best evidence’ seems linked inextricably to the choice of transfer strategy:
With approach 1, you gather evidence of effectiveness with reference to a hierarchy of evidence, with systematic reviews and RCTs at the top (see pages 4, 15, 33). This has a knock-on effect for ‘scaling up’: you introduce the same model in each area, requiring ‘fidelity’ to the model to ensure you administer the correct ‘dosage’ and measure its effectiveness with RCTs.
With approach 2, you reject this hierarchy and place greater value on practitioner and service user testimony. You do not necessarily ‘scale up’. Instead, you identify good practice (or good governance principles) by telling stories based on your experience and inviting other people to learn from them.
With approach 3, you gather evidence of effectiveness based on a mix of evidence. You seek to ‘scale up’ best practice through local experimentation and continuous data gathering (by practitioners trained in ‘improvement methods’).
The comparisons between approaches 1 and 2 (in particular) show us the strong link between a judgement on evidence and transfer. Approach 1 requires particular methods to gather evidence and high policy uniformity when you transfer solutions, while approach 2 places more faith in the knowledge and judgement of practitioners.
Therefore, our choice of what counts as EBPM can determine our policy transfer strategy. Or, a different transfer strategy may – if you adhere to an evidential hierarchy – preclude EBPM.
Further reading
I describe these issues, with concrete examples of each approach here, and in far more depth here:
Evidence-based best practice is more political than it looks: ‘National governments use evidence selectively to argue that a successful policy intervention in one local area should be emulated in others (‘evidence-based best practice’). However, the value of such evidence is always limited because there is: disagreement on the best way to gather evidence of policy success, uncertainty regarding the extent to which we can draw general conclusions from specific evidence, and local policymaker opposition to interventions not developed in local areas. How do governments respond to this dilemma? This article identifies the Scottish Government response: it supports three potentially contradictory ways to gather evidence and encourage emulation’.
Both articles relate to ‘prevention policy’, and the examples (so far) are from my research in Scotland, but in a future paper I’ll try to convince you that the issues are ‘universal’.
My aim is to tell you about the use and value of comparative research by combining (a) a summary of your POLU9RM course reading, and (b) some examples from my research. Of course, I wouldn’t normally blow my own trumpet, but Dr Margulis insisted that I do so. Here is the podcast (25 minutes) which you can download or stream:
The reading for this session is Halperin and Heath’s chapter 9, which makes the following initial points about comparative research:
Key issues when you apply theories to new cases
There are two interesting implications of the final bullet point.
First, note the tendency of a small number of countries to dominate academic publication. For example, a lot of the policy theories to which I refer in this ‘1000 words’ series derive from studies of the US. With many theories, you can see interesting developments when scholars apply them outside of the US:
Second, consider the extent to which we are accumulating knowledge when applying the same theory to many cases. There are now some major reviews or debates of key policy theories, in which the authors highlight the difficulties of systematic research to produce a large number of comparable case studies:
For me, the important common factor in all of these reviews is that many scholars pay insufficient attention to the theory when applying its insights to new cases. Consequently, when you try to review the body of work, you find it difficult to generate an overall sense of developments by comparing many case studies. So, in my humble opinion, we’d be a lot better off if people did a proper review of the literature – and made sure that their concepts were clear and able to be ‘operationalised’ – before jumping into case study analysis. I wrote these blog posts for established scholars and new PhD students, but the argument should also apply to you as an undergraduate: get the basics right (which includes understanding the theory you are applying) to get the comparative research right. This is just as important as your case selection.
From case study to large-N research: choosing between depth and breadth?
Case studies. It is in this context that we might understand Halperin and Heath’s point that case study research (single-N) is comparative. You might be going into much depth to identify key aspects of a single case, but also considering the extent to which it compares meaningfully with other case studies using the same theory. All that we require in such examples is that you justify your case selection. In some examples, you are trying to see if a theory drawn from one country applies to another. Or, you might be interested in how far theories travel, or how applicable they are to cases which seem unusual or ‘deviant’. In some examples, we seek the ‘crucial case study’ that is central to the ‘confirmation or disconfirmation of a theory’ (p207), but don’t make the mistake of concluding that you need a new theory because the old one can only explain so much. Further, although single case studies should not be dismissed, you can only conclude so much from them. So, be careful to spell out and justify any conclusions that you find to be ‘generalisable’ to other cases.
Example 1. In my research, I often use US-origin theories to help explain policymaking by the UK and devolved governments. For example, I used the same approach as Kingdon (documentary analysis and semi-structured interviews) to identify, in great detail, the circumstances under which 4 governments in the UK introduced the same ban on smoking in public places (the take home message: there was more to it than you might think!).
Small-N studies. The systematic comparison of several cases allows you to extend analysis often without compromising on depth. However, there is great potential to bias your outcomes by cherry-picking cases to suit particular theories. So, we look for ways to justify case selection. Halperin and Heath go to some length to identify the problems of ‘selection on the dependent variable’, which can be summed up as: don’t compare cases just because they seem to have the same outcome in common (focus instead on what causes such outcomes). Two well-established approaches are ‘most similar systems design’ (MSSD) and ‘most different systems design’ (MDSD) (p209). With MSSD, you choose cases which share key explanatory factors/ characteristics (e.g. they have the same socio-economic/ political context) so that you can track the effects of one or more difference (perhaps in the spirit of a randomised control trial, but without the substance). With MDSD you choose cases which do not share characteristics so that you can track the effect of a key similarity.
Aside from the problems of case study selection bias, it’s worth noting how difficult it is to produce a clear MSSD or MDSD research design, since you are making value judgements about which shared/ not shared characteristics are crucial (see their discussion of necessary/ sufficient factors).
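To make that logic concrete, here is a minimal sketch – with invented countries and characteristics, not data from any real study – of how you might score candidate case pairs against an MSSD or MDSD logic:

```python
# Hypothetical sketch: scoring candidate case pairs for MSSD or MDSD designs.
# Country profiles and characteristics are invented for illustration only.

CASES = {
    "Country A": {"federal": True,  "majoritarian": True,  "eu_member": True,  "fracking_ban": False},
    "Country B": {"federal": True,  "majoritarian": False, "eu_member": True,  "fracking_ban": False},
    "Country C": {"federal": False, "majoritarian": True,  "eu_member": False, "fracking_ban": True},
}

def shared_characteristics(case1: dict, case2: dict) -> int:
    """Count the characteristics on which two cases take the same value."""
    return sum(case1[k] == case2[k] for k in case1)

def pick_pair(cases: dict, design: str) -> tuple:
    """Return the pair of cases that best fits an MSSD or MDSD logic.

    MSSD: maximise shared characteristics, so a remaining difference can be tracked.
    MDSD: minimise shared characteristics, so a remaining similarity can be tracked.
    """
    names = list(cases)
    pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    key = lambda pair: shared_characteristics(cases[pair[0]], cases[pair[1]])
    return max(pairs, key=key) if design == "MSSD" else min(pairs, key=key)

print("MSSD pair:", pick_pair(CASES, "MSSD"))
print("MDSD pair:", pick_pair(CASES, "MDSD"))
```

The value judgements do not go away: the scores only mean something if you have already decided which characteristics matter and which are necessary or sufficient.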
Example 2. Take the example of a study I did recently with Karin Ingold and Manuel Fischer, comparing ‘fracking’ policy and policymaking in the UK and Switzerland. We used the same theory (the ACF) and the same method (documentary analysis and a survey of key actors) in both countries. The survey allowed us to quantify key relationships between actors: to what extent do they share the same beliefs with other actors, and how likely are they to share information frequently with their allies and competitors?
We describe the research design as MDSD because their political systems represent two contrasting political system archetypes – the ‘majoritarian’ UK and ‘consensus’ Switzerland – which Lijphart describes as key factors in their contrasting policymaking processes. Yet, central to our argument is that there are policymaking processes common to policy subsystems despite their system differences. In effect, we try to measure the effect of political system design on subsystem dynamics and find a subtle but important impact.
We also find that, although these differences exist, their policy outcomes are remarkably similar. So, ‘most different’ systems often produce very similar policymaking processes and policy outcomes. Then, we note that if we had our time again we would have extended the analysis to subnational governments in the UK. The ‘most different’ design prompted us to focus on the UK central government and Swiss Cantons (the alleged locus of power in both cases), but maybe we could have started from an assumption that they are not as different as they look. Have a look and you can see the dilemmas that still play out in (what I think is) a well-designed study.
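For illustration only, here is a minimal sketch of how survey responses might be turned into a belief-similarity score between actors. The actors, belief items, and scores are invented, and the measure is a simple agreement score, not the instrument used in the published study:

```python
# Hypothetical sketch: quantifying how far actors share beliefs, from survey data.
# Actor names and scores are invented; the published study used its own instrument.
import numpy as np

# Each actor gives Likert-style positions (1-5) on three belief items.
beliefs = {
    "Regulator":   np.array([5, 4, 2]),
    "Industry":    np.array([4, 5, 1]),
    "Campaigners": np.array([1, 2, 5]),
}

def belief_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Simple agreement score: 1 means identical positions, 0 means maximally opposed."""
    max_distance = 4 * len(a)  # each item ranges from 1 to 5
    return 1 - np.abs(a - b).sum() / max_distance

for name_a in beliefs:
    for name_b in beliefs:
        if name_a < name_b:  # print each pair once
            score = belief_similarity(beliefs[name_a], beliefs[name_b])
            print(name_a, "-", name_b, round(score, 2))
```

Scores like these can then be compared with reported information exchange, to ask whether actors share information mainly with those who hold similar beliefs.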
Example 3. I face the same difficulties when comparing policy and policymaking by the UK and Scottish Governments. In some respects, they are ‘most different’: ‘new Scottish politics’ was designed to contrast with ‘old Westminster’ (particularly when it came to elections; the Scottish Parliament is also unicameral). In others, they are similar: the ‘architects of devolution’ introduced a system that seems to be of the Westminster family (particularly the executive-legislative relationships). Further, Scotland remains part of the UK, and the UK Government retains responsibility for many policies affecting Scotland. Overall, it is difficult to say for sure how similar or different their systems are (which I discuss in a series of lectures), so the comparison is fraught with difficulty. In such examples, the key solution is to ‘show your working’: describe these problems and state how you work within them (for example, in my case, I try to examine the extent to which policymaking reflects ‘territorial’ context or ‘universal’ drivers, partly by interviewing policymakers in each government).
Large-N (quantitative) studies. Halperin and Heath lay out some of the potential benefits of large-N research. For example, it is easier to come to general conclusions about many countries by studying many countries (rather than trying to generalise from a few, which might not be representative). They also highlight the pitfalls, including the problem of meaning: when you ‘operationalise’ a concept such as democracy or populism, can you provide a simple enough definition to allow you to give each system a number (to denote democratic/ undemocratic or X% democratic) that means the same thing in each case? To this problem, I would add the general issue of breadth and depth. With large-N studies you can examine the effects of a small number of variables, to explain a small part of the politics of many systems. With small-N you can study a large number of variables in a few systems. The classic trade-off is between breadth and depth. Of course, if you are doing an undergraduate dissertation the big Q is: what can you reasonably be expected to do? Maybe your highest aim should be to make sense of the studies which already exist.
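As a toy illustration of the operationalisation problem – invented indicators and weights, not any real index – note how the choice of weights alone can reverse which system looks ‘more democratic’:

```python
# Hypothetical sketch: 'operationalising' democracy as a single number per country.
# Indicator values and weights are invented; the point is that coding choices drive the score.

indicators = {
    "Country X": {"free_elections": 0.5, "press_freedom": 0.9, "judicial_independence": 0.9},
    "Country Y": {"free_elections": 0.9, "press_freedom": 0.6, "judicial_independence": 0.6},
}

def democracy_score(country: dict, weights: dict) -> float:
    """Weighted average of indicator scores (0 = least, 1 = most democratic)."""
    return sum(country[k] * w for k, w in weights.items()) / sum(weights.values())

equal = {"free_elections": 1, "press_freedom": 1, "judicial_independence": 1}
elections_heavy = {"free_elections": 3, "press_freedom": 1, "judicial_independence": 1}

for name, data in indicators.items():
    # With equal weights, Country X scores higher; weighting elections heavily reverses the ranking.
    print(name, round(democracy_score(data, equal), 2), round(democracy_score(data, elections_heavy), 2))
```

The numbers travel across many cases, but whether they mean the same thing in each case is a separate, and harder, question.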
Filed under Research design
The ‘social construction of target populations’ (SCTP) literature identifies:
Schneider, Ingram and Deleon identify the importance of this process in three main steps.
First, when competing for elected public office, people articulate value judgements and make fundamental choices about which social groups should be treated differently by government bodies. They present arguments for rewarding ‘good’ groups with government support and punishing ‘bad’ groups with sanctions. This description, which may seem rather simplistic, highlights a crucial ‘fast thinking’ element to policymaking: policymakers make quick, biased, emotional judgements, then back up their impressions with selective facts to ‘institutionalise’ their understanding of a policy problem and its solution before distributing rewards and sanctions.
Second, these judgements can have an enduring ‘feed-forward’ effect: fundamental choices based on values are reproduced in the institutions devoted to policy delivery. Policy designs based on emotionally driven thinking often become routine and are rarely questioned in government.
Third, this decision has an impact on citizens and groups, who participate more or less in politics according to how they are characterised by government. Some groups can become more or less powerful, and categorised differently, if they have the resources to mobilise and challenge the way they are perceived by policymakers (and the media and public). However, this outcome may take decades in the absence of a major event, such as an economic crisis or game-changing election.
Overall, past policies, based on rapid emotional judgements and policymakers’ values, provide key context for policymaking. The distribution of rewards and sanctions is cumulative, influencing future action by signalling to target populations how they are described and will be treated. Social constructions are difficult to overcome, because a sequence of previous policies, based on a particular framing of target populations, produces ‘hegemony’: the public, media and/ or policymakers take this set of values for granted, as normal or natural, and rarely question them when engaging in politics.
SCTP builds on classic discussions of power, in which actors exercise power to reinforce or challenge policymaker and social attitudes. For example, if most people assume that people in poverty deserve little government help, because they are largely responsible for their own fate, policymakers have little incentive to intervene. In such cases, power and powerlessness relates to the inability of disadvantaged groups to persuade the public, media and/ or government that there is a reason to make policy or a problem to be solved. Or, people may take for granted that criminals should be punished because they are engaging in deviant behaviour. To challenge policies based on this understanding, groups have to challenge fundamental public assumptions, reinforced by government policy, regarding what constitutes normal and deviant behaviour. Yet, many such groups have no obvious way in which to mobilise to pursue their collective interests.
SCTP depicts this dilemma with a notional table (page 102) in which there are two spectrums: one describes the positive or negative ways in which groups are portrayed by policymakers, the other describes the resources available to groups to challenge or reinforce that image. The powerful and positively constructed are ‘advantaged’; the powerful and negatively constructed are ‘contenders’; the powerless and positively constructed are ‘dependents’; the powerless and negatively constructed are ‘deviants’. As such, the table represents an abstract account of policymaking context, in which some groups are more likely to be favoured or stigmatised by government, and some groups are better able to exploit their favourable, or challenge their unfavourable, image.
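If it helps to see the four categories laid out, here is a minimal sketch of the notional table as a simple lookup. The labels come from the description above; everything else is illustrative:

```python
# Minimal sketch of the notional SCTP table: two dimensions, four categories.
# The category labels follow the post; the data structure itself is just an illustration.

SCTP_TYPOLOGY = {
    ("positive construction", "high power"): "advantaged",
    ("negative construction", "high power"): "contenders",
    ("positive construction", "low power"):  "dependents",
    ("negative construction", "low power"):  "deviants",
}

def categorise(construction: str, power: str) -> str:
    """Return the SCTP category for a group's construction and power resources."""
    return SCTP_TYPOLOGY[(construction, power)]

print(categorise("negative construction", "low power"))  # -> "deviants"
```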
It represents the starting point to empirical analysis since, although some examples seem intuitive (many ‘criminals’ are punished by government and have minimal ways in which to mobilise to influence policy), many are time-specific (the ‘feminist movement’ has been more or less active over time) and place-specific (gun manufacturers are high profile in the US, but not the UK). Different populations are also more or less favoured by policymakers at different levels of government – for example, ‘street level’ professionals may treat certain ‘deviant’ populations, such as intravenous drug users, more sympathetically – and may, for example, find it easier to mobilise at local than national levels. Further, people do not fit neatly into these categories – many ‘mothers’ are also ‘scientists’ and/ or part of the ‘feminist movement’ – and may mobilise according to their own perception of their identity.
Still, SCTP demonstrates that policymakers can treat people in certain ways, based on a quick, emotional and simplistic understanding of their background, and that this way of thinking should not be forgotten simply because it is taken for granted. Indeed, governments may go one step further to reinforce these judgements: capitalising on ‘fast thinking’ in the population by constructing simple ‘narratives’ designed to justify policy action to a public that may be prone to accept simple stories that seem plausible, confirm their biases, exploit their emotions, and/or come from a source they trust. Actors compete to tell ‘stories’, quickly assigning blame to one group of people, or praise to another, even though each group is heterogeneous and cause/ effect is multifaceted. The winner of this competition may help produce a policy response which endures for years, if not decades.
This post is part of the ‘1000 words’ series https://paulcairney.wordpress.com/1000-words/
For more on social construction, see:
Social Construction and Policy Design
Who are the most deserving and entitled to government benefits?
Filed under 1000 words, agenda setting, public policy