All going well, it will be out in November 2019. We are now at the proofing stage.
Classic studies suggest that the most profound and worrying kinds of power are the hardest to observe. We often witness highly visible political battles and can use pluralist methods to identify who has material resources, how they use them, and who wins. However, key forms of power ensure that many such battles do not take place. Actors often use their resources to reinforce social attitudes and policymakers’ beliefs, to establish which issues are policy problems worthy of attention and which populations deserve government support or punishment. Key battles may not arise because not enough people think they are worthy of debate. Attention and support for debate may rise, only to be crowded out of a political agenda in which policymakers can only debate a small number of issues.
Studies of power relate these processes to the manipulation of ideas or shared beliefs under conditions of bounded rationality (see for example the NPF). Manipulation might describe some people getting other people to do things they would not otherwise do. They exploit the beliefs of people who do not know enough about the world, or themselves, to know how to identify and pursue their best interests. Or, they encourage social norms – in which we describe some behaviour as acceptable and some as deviant – which are enforced by the state (for example, via criminal justice and mental health policy), but also social groups and individuals who govern their own behaviour with reference to what they feel is expected of them (and the consequences of not living up to expectations).
Such beliefs, norms, and rules are profoundly important because they often remain unspoken and taken for granted. Indeed, some studies equate them with the social structures that appear to close off some action. If so, we may not need to identify manipulation to find unequal power relationships: strong and enduring social practices help some people win at the expense of others, by luck or design.
In practice, these more-or-less-observable forms of power co-exist and often reinforce each other:
Example 1. The control of elected office is highly skewed towards men. Male incumbency, combined with social norms about who should engage in politics and public life, signals to women that their efforts may be relatively unrewarded and routinely punished – for example, in electoral campaigns in which women face verbal and physical misogyny – and the oversupply of men in powerful positions tends to limit debates on feminist issues.
Example 2. ‘Epistemic violence’ describes the act of dismissing an individual, social group, or population by undermining the value of their knowledge or claim to knowledge. Specific discussions include: (a) the colonial West’s subjugation of colonized populations, diminishing the voice of the subaltern; (b) privileging scientific knowledge and dismissing knowledge claims via personal or shared experience; and (c) erasing the voices of women of colour from the history of women’s activism and intellectual history.
It is in this context that we can understand ‘critical’ research designed to ‘produce social change that will empower, enlighten, and emancipate’ (p51). Powerlessness can relate to the visible lack of material economic resources and to factors such as the lack of opportunity to mobilise and be heard.
In policy studies, there is a profound difference between uncertainty and ambiguity:
Both concepts relate to ‘bounded rationality’: policymakers do not have the ability to process all information relevant to policy problems. Instead, they employ two kinds of shortcut:
I make an artificially binary distinction, uncertain versus ambiguous, and relate it to another binary, rational versus irrational, to point out the pitfalls of focusing too much on one aspect of the policy process:
Actors can try to solve uncertainty by: (a) improving the quality of evidence, and (b) making sure that there are no major gaps between the supply of and demand for evidence. Relevant debates include ‘what counts as good evidence?’, focusing on the criteria used to define scientific evidence and its relationship with other forms of knowledge (such as practitioner experience and service user feedback), and ‘what are the barriers between supply and demand?’, focusing on the need for better ways to communicate.
Actors try to solve ambiguity by exercising power to increase attention to, and support for, their favoured interpretation of a policy problem. You will find many examples of such activity spread across the 500 and 1000 words series:
A focus on reducing uncertainty gives the impression that policymaking is a technical process in which people need to produce the best evidence and deliver it to the right people at the right time.
In contrast, a focus on reducing ambiguity gives the impression of a more complicated and political process in which actors are exercising power to compete for attention and dominance of the policy agenda. Uncertainty matters, but primarily to describe the role of a complex policymaking system in which no actor truly understands where they are or how they should exercise power to maximise their success.
For a longer discussion, see Fostering Evidence-informed Policy Making: Uncertainty Versus Ambiguity (PDF)
Or, if you fancy it in French: Favoriser l’élaboration de politiques publiques fondées sur des données probantes : incertitude versus ambiguïté (PDF)
Here is the relevant opening section in UPP:
Let’s imagine a heroic researcher, producing the best evidence and fearlessly ‘speaking truth to power’. Then, let’s place this person in four scenarios, each of which combines a discussion of evidence, policy, and politics in different ways.
Now, let’s use these scenarios to produce a 5-step way to ‘make evidence count’.
A narrow focus on making the supply of evidence count, via ‘evidence-based policymaking’, will always be dispiriting because it ignores politics or treats political choice as an inconvenience. If we:
In other words, think about the positive and necessary role of democratic politics before bemoaning post-truth politics and policy-based-evidence-making.
Policy is not made in a cycle containing a linear series of separate stages, and we won’t ‘make evidence count’ by using the cycle to inform our practices.
You might not want to give up the cycle image because it presents a simple account of how you should make policy. It suggests that we elect policymakers, then: identify their aims, identify policies to achieve those aims, select a policy measure, ensure that the selection is legitimised by the population or its legislature, identify the necessary resources, implement, and then evaluate. Or: policymakers, aided by expert policy analysts, make and legitimise choices; skilful public servants carry them out; and policy analysts assess the results using evidence.
One compromise is to keep the cycle then show how messy it is in practice:
However, there comes a point when there is too much mess, and the image no longer helps you explain (a) to the public what you are doing, or (b) to providers of evidence how they should engage in political systems. By this point, simple messages from more complicated policy theories may be more useful.
Or, we may no longer want a cycle to symbolise a single source of policymaking authority. In a multi-level system, with many ‘centres’ possessing their own sources of legitimate authority, a single and simple policy cycle seems too artificial to be useful.
People are ‘cognitive misers’ seeking ‘rational’ and ‘irrational’ shortcuts to gather information for action, so you won’t get far if you bombard them with too much evidence. Policymakers already have too much evidence and they seek ways to reduce their cognitive load, relying on: (a) trusted sources of concise evidence relevant to their aims, and (b) their own experience, gut instinct, beliefs, and emotions.
The implication of both shortcuts is that we need to tell simple and persuasive stories about the substance and implications of the evidence we present. To say that ‘the evidence does not speak for itself’ may seem trite, but I’ve met too many people who assume naively that it will somehow ‘win the day’. In contrast, civil servants know that the evidence-informed advice they give to ministers needs to relate to the story that government ministers tell to the public.
In a complex or multi-level environment, one story to one audience (such as a minister) is not enough. If there are many key sources of policymaking authority – including public bodies with high autonomy, organisations and practitioners with the discretion to deliver services, and service users involved in designing services – there are many stories being told about what we should be doing and why. We may convince one audience and alienate (or fail to inspire) another with the same story.
Let me give you one example of the dilemmas that must arise when you combine evidence and politics to produce policy: how do you produce a model of ‘evidence-based best practice’ which combines evidence and governance principles in a consistent way? Here are 3 ideal-type models which answer the question in very different ways.
The table helps us think through the tensions between models, built on very different principles of good evidence and governance.
In practice, you may want to combine different elements, perhaps while arguing that the loss of consistency is lower than the gain from flexibility. Or, the dynamics of political systems limit such choice or prompt ad hoc and inconsistent choices.
I built a lot of this analysis on the experiences of the Scottish Government, which juggles all three models, including a key focus on the improvement method in its Early Years Collaborative.
However, Kathryn Oliver and I show that the UK government faces the same basic dilemma and addresses it in similar ways.
The example freshest in my mind is Sure Start. Its rationale was built on RCT evidence and systematic review. However, its roll-out was built more on local flexibility and service design than insistence on fidelity to a model. More recently, the Troubled Families programme initially set the policy agenda and criteria for inclusion, but increasingly invites local public bodies to select the most appropriate interventions, aided by the Early Intervention Foundation which reviews the evidence but does not insist on one-best-way. Emily St Denny and I explore these issues further in our forthcoming book on prevention policy, an exemplar case study of a field in which it is difficult to know how to ‘make evidence count’.
*Background. This is a post for my talk at the Government Economic Service and Government Social Research Service Annual Training Conference (15th September 2017). This year’s theme is ‘Impact and Future-Proofing: Making Evidence Count’. My brief is to discuss evidence use in the Scottish Government, but it faces the same basic question as the UK Government: how do you combine principles of evidence quality with principles of governance? In other words, if you were in a position to design (a) an evidence-gathering system and (b) a political system, you’d soon find major points of tension between them. Resolving those tensions involves political choice, not more evidence. Of course, you are not in a position to design both systems, so the more complicated question is: how do you satisfy principles of evidence and governance in a complex policy process, often driven by policymaker psychology, over which you have little control? Here are 7 different ‘answers’.
PowerPoint: Paul Cairney @ GES GSRS 2017
This is a guest post by Michael D. Jones (left) and Deserai Anderson Crow (right), discussing how to use insights from the Narrative Policy Framework to think about how to tell effective stories to achieve policy goals. The full paper has been submitted to the Policy and Politics series called Practical Lessons from Policy Theories.
Imagine. You are an ecologist. You recently discovered that a chemical that is discharged from a local manufacturing plant is threatening a bird that locals love to watch every spring. Now, imagine that you desperately want your research to be relevant and make a difference to help save these birds. All of your training gives you depth of expertise that few others possess. Your training also gives you the ability to communicate and navigate things such as probabilities, uncertainty, and p-values with ease.
But as NPR’s Robert Krulwich argues, focusing on this very specialized training when you communicate policy problems could lead you in the wrong direction. While remaining true to the science and the best practices of your training, you must also be able to tell a compelling story. Perhaps combine your scientific findings with the story about the little old ladies who feed the birds in their backyards on spring mornings, emphasizing the beauty and majesty of these avian creatures, their role in the community, and how the toxic chemicals are not just a threat to the birds, but are also a threat to the community’s understanding of itself and its sense of place. The latest social science shows that if you tell a good story, your policy communications are likely to be more effective.
The world is complex. We are bombarded with information as we move through our lives and we seek patterns within that information to simplify complexity and reduce ambiguity, so that we can make sense of the world and act within it.
The primary means by which human beings render complexity understandable and reduce ambiguity is the telling of stories. We “fit” the world around us, and the myriad objects and people therein, into story patterns. We are by nature storytelling creatures. And if this is true of us as individuals, then we can safely assume that storytelling also matters for public policy, where complexity and ambiguity abound.
Based on our (hopefully) forthcoming article (which owes a heavy debt to Jones and Peterson, 2017 and Catherine Smith’s popular textbook), here we offer some abridged advice synthesizing some of the most current social science findings about how best to engage in public policy storytelling. We break it down into five easy steps and offer a short discussion of likely intervention points within the policy process.
There are crucial points in the policy process where actors can use narratives to achieve their goals. We call these “intervention points” and all intervention points should be viewed as opportunities to tell a good policy story, although each will have its own constraints.
These intervention points include the most formal types of policy communication such as crafting of legislation or regulation, expert testimony or statements, and evaluation of policies. They also include less formal communications through the media and by citizens to government.
Each of these interventions can be dry and jargon-laden, but it’s important to remember that by employing effective narratives within any of them, you are much more likely to see your policy goals met.
When considering how to construct your story within one or more of the various intervention points, we urge you to first consider several aspects of your role as a narrator.
Without deliberate consideration of your role, audience, the intervention point, and how your narrative links all of these pieces together, you are relying on chance to tell a compelling policy story.
On the other hand, thoughtful and purposeful storytelling that remains true to you, your values, your craft, and your best understanding of the facts, can allow you to be both the ecologist and the bird lover.
This is a guest post by Chris Koski (left) and Sam Workman (right), discussing how to use insights from punctuated equilibrium theory to reform government policy making. The full paper has been submitted to the Policy and Politics series called Practical Lessons from Policy Theories.
Many people assume that the main problem faced by governments is an information deficit. However, the opposite is true. A surfeit of information exists, and institutions have a hard time managing it. At the same time, all the information that exists may still be insufficient to define problems well. Institutions need to develop a capacity to seek out better quality information too.
Institutions – from the national government, to state legislatures, to city councils – try to solve the information processing dilemma by delegating authority to smaller subgroups. Delegation increases the information processing capacity of governments by involving more actors to attend to narrower issues.
The delegation of authority is ultimately a delegation of attention. It solves the ‘flow’ problem, but also introduces new ‘filters’: the preferences, interests, and modes of information search of these subgroups all influence the process. Even narrowly focused smaller organizations face limitations in their capacity to search and are subject to the same forces as the governments which created them – filters for the deluge of information and capacity limitations for information seeking.
Organizational design predisposes institutions to filter information for ideas that support status quo problem definitions – that is, definitions that existed at the time of delegation – and to seek out information based on these status quo understandings. As a result, despite a desire to expand attention and information processing to adapt to changes in problem characteristics, most institutions look for information that supports their identity. Institutional problem definitions stay the same even as the problems change.
Governments eventually face trade-offs between the gains made from delegating decision-making to smaller subgroups and the losses associated with coordinating the information generated by those subgroups.
Governments get stuck in the same ruts as when the delegation process started: status quo bias that doesn’t adjust with changing problem conditions. There is a sense among citizens and academics that governments make bad decisions in part because they respond to the problems of today with the policies of 10 years ago. Government solutions look like hammers in search of nails when they ought to look more like contractors or even urban planners.
When institutions become stultified in their problem definitions, policymakers and citizens often misdiagnose the problem as entirely a coordination problem. The logic here is that a small group of actors have captured policymaking and are using such capture for their own gain. This understanding may be true, or may not, but it leads to the “centralization as savior” fallacy. The idea here is that organizations with broader latitude will be better able to receive a wider variety of information from a broader range of sources.
There are two problems with this strategy. First, centralization might guarantee an outcome, but at the expense of an honest problem search and, likely, at the expense of what we might call policy stability. Second, centralization may offer the opportunity for a broader array of information to bear on policy decisions, but, in practice, will rely on even narrower information filters given the number of issues to which the newly centralized policymaking forum must attend.
The alternative, more delegation, has significant coordination challenges as we find bottlenecks of attention when multiple subsystems bear on decision-points. Also, simply delegating authority can predispose subsystems to a particular solution, which we want to avoid.
Our solutions do not solve the fundamental problems of information processing in terms of sorting and seeking information – such problems are inherent to humans and human-created organizations. However, while governments may be predisposed to prioritize decisions over information, we are optimistic that our recommendations can facilitate better informed policy in the future.