Tag Archives: psychology

Psychology Based Policy Studies: 5 heuristics to maximise the use of evidence in policymaking

Richard Kwiatkowski and I combine policy studies and psychology to (a) take forward ‘Psychology Based Policy Studies’, and (b) produce practical advice for actors engaged in the policy process.

[Image: Cairney Kwiatkowski abstract]

Most policy studies, built on policy theory, explain policy processes without identifying practical lessons. They identify how and why people make decisions, and situate this process of choice within complex systems and environments in which there are many actors at multiple levels of government, subject to rules, norms, and group influences, forming networks, and responding to socioeconomic dynamics. This approach helps generate demand for more evidence of the role of psychology in these areas:

  1. To do more than ‘psychoanalyse’ a small number of key actors at the ‘centre’ of government.
  2. To consider how and why actors identify, understand, follow, reproduce, or seek to shape or challenge, rules within their organisations or networks.
  3. To identify the role of network formation and maintenance, and the extent to which it is built on heuristics to establish trust and the regular flow of information and advice.
  4. To examine the extent to which persuasion can be used to prompt actors to rethink their beliefs – such as when new evidence or a proposed new solution challenges the way that a problem is framed, how much attention it receives, and how it is solved.
  5. To consider (a) the effect of events such as elections on the ways in which policymakers process evidence (e.g. does it encourage short-term and vote-driven calculations?), and (b) what prompts them to pay attention to some contextual factors and not others.

This literature highlights the use of evidence by actors who anticipate or respond to lurches of attention, moral choices, and coalition formation built on bolstering one’s own position, demonising competitors, and discrediting (some) evidence. Although this aspect of choice should not be caricatured – it is not useful simply to bemoan ‘post-truth’ politics and policymaking ‘irrationality’ – it provides a useful corrective to the fantasy of a linear policy process in which evidence can be directed to a single moment of authoritative and ‘comprehensively rational’ choice based only on cognition. Political systems and human psychology combine to create a policy process characterised by many actors competing to influence continuous policy choice built on cognition and emotion.

What are the practical implications?

Few studies consider how those seeking to influence policy should act in such environments or give advice about how they can engage effectively in the policy process. Of course context is important, and advice needs to be tailored and nuanced, but that is not necessarily a reason to side-step the issue of moving beyond description. Further, policymakers and influencers do not have this luxury. They need to gather information quickly and effectively to make good choices. They have to take the risk of action.

To influence this process we need to understand it, and to understand it more we need to study how scientists try to influence it. Psychology-based policy studies can provide important insights to help actors begin to measure and improve the effectiveness of their engagement in policy by: taking into account cognitive and emotional factors and the effect of identity on possible thought; and, considering how political actors are ‘embodied’ and situated in time, place, and social systems.

5 tentative suggestions

However, few psychological insights have been developed from direct studies of policymaking, and there is a limited evidence base. So, we provide preliminary advice by identifying the most relevant avenues of conceptual research and deriving some helpful ‘tools’ for those seeking to influence policy.

Our working assumption is that policymakers need to gather information quickly and effectively, so they develop heuristics to allow them to make what they believe to be good choices. Their solutions often seem to be driven more by their emotions than by a ‘rational’ analysis of the evidence, partly because we hold them to a standard that no human can reach. If so, and if they have high confidence in their heuristics, they will dismiss our criticism as biased and naïve. Under those circumstances, restating the need for ‘evidence-based policymaking’ is futile, and naively ‘speaking truth to power’ counterproductive.

For us, heuristics represent simple alternative strategies, built on psychological insights, for using those insights in policy practice. They are broad prompts towards certain ways of thinking and acting, not specific blueprints for action in all circumstances:

  1. Develop ways to respond positively to ‘irrational’ policymaking

Instead of automatically bemoaning the irrationality of policymakers, let’s marvel at the heuristics they develop to make quick decisions despite uncertainty. Then, let’s think about how to respond in a ‘fast and frugal’ way, to pursue the kind of evidence-informed policymaking that is realistic in a complex and constantly changing policymaking environment.

  2. Tailor framing strategies to policymaker bias

The usual advice is to minimise the cognitive burden of your presentation, and to use strategies tailored to the ways in which people pay attention to, and remember, information (at the beginning and end of statements, with repetition, and using concrete and immediate reference points).

What is the less usual advice? If policymakers are combining cognitive and emotive processes, combine facts with emotional appeals. If policymakers are making quick choices based on their values and simple moral judgements, tell simple stories with a hero and a clear moral. If policymakers are reflecting a group emotion, based on their membership of a coalition with firmly-held beliefs, frame new evidence to be consistent with the ‘lens’ through which actors in those coalitions understand the world.


  3. Identify the right time to influence individuals and processes

Understand what it means to find the right time to exploit ‘windows of opportunity’. ‘Timing’ can refer to the right time to influence an individual, which is relatively difficult to identify but offers the possibility of direct influence, or the right time to act while several political conditions are aligned, which offers less chance of direct impact.

  4. Adapt to real-world dysfunctional organisations rather than waiting for an orderly process to appear

Politicians may appear confident of policy, with a grasp of facts and details, but they are (a) often vulnerable, defensive, and closed to challenging information, and/or (b) inept at organisational politics, or unable to change the rules of their organisations. In the absence of institutional reforms, and the presence of ‘dysfunctional’ processes, develop pragmatic strategies: form relationships in networks, coalitions, or organisations first, then supply challenging information second. To challenge without establishing trust may be counterproductive.

  5. Recognise that the biases we ascribe to policymakers are present in ourselves and our own groups

Identifying only the biases in our competitors may help mask academic/scientific examples of group-think, and it may be counterproductive to use euphemistic terms like ‘low information’ to describe actors whose views we do not respect. This is a particular problem for scholars if they assume that most people do not live up to their own imagined standards of high-information-led action.

It may be more effective to recognise that: (a) people’s beliefs are honestly held, and policymakers believe that their role is to serve a cause greater than themselves; and, (b) a fundamental aspect of evolutionary psychology is that people need to get on with each other, so showing simple respect – or going further, to ‘mirror’ that person’s non-verbal signals – can be useful even if it looks facile.

This leaves open the ethical question of how far we should go to identify our own biases, to accept the need to work with people whose ways of thinking we do not share, and to secure their trust without lying about our beliefs.

3 Comments

Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy

Using psychological insights in politics: can we do it without calling our opponents mental, hysterical, or stupid?

One of the most dispiriting parts of fierce political debate is the casual use of mental illness or old and new psychiatric terms to undermine an opponent: she is mad, he is crazy, she is a nutter, they are wearing tin foil hats, get this guy a straitjacket and the men in white coats because he needs to lie down in a dark room, she is hysterical, his position is bipolar, and so on. This kind of statement reflects badly on the campaigner rather than their opponent.

I say this because, while researching a paper on the psychology of politics and policymaking (this time with Richard Kwiatkowski, as part of this special collection), I have found potentially useful concepts that seem difficult to insulate from such political posturing. There is great potential to use them cynically against opponents rather than benefit from their insights.

The obvious ‘live’ examples relate to ‘rational’ versus ‘irrational’ policymaking. For example, one might argue that, while scientists develop facts and evidence rationally, using tried, trusted, and systematic methods, politicians act irrationally, based on their emotions, ideologies, and groupthink. So, we as scientists are the arbiters of good sense, and they are part of a pathological political process that contributes to ‘post truth’ politics.

The obvious problem with such accounts is that we all combine cognitive and emotional processes to think and act. We are all subject to bias in the gathering and interpretation of evidence. So, the more positive, but less tempting, option is to consider how this process works – when both competing sides act ‘rationally’ and emotionally – and what we can realistically do to mitigate the worst excesses of such exchanges. Otherwise, we will not get beyond demonising our opponents and romanticising our own cause. It gives us the warm and fuzzies on twitter and in academic conferences but contributes little to political conversations.

A less obvious example comes from modern work on the links between genes and attitudes. There is now a research agenda which uses surveys of adult twins to compare the effect of genes and environment on political attitudes. For example, Oskarsson et al (2015: 650) argue that existing studies ‘report that genetic factors account for 30–50% of the variation in issue orientations, ideology, and party identification’. One potential mechanism is cognitive ability: put simply, and rather cautiously and speculatively, with a million caveats, people with lower cognitive ability are more likely to see ‘complexity, novelty, and ambiguity’ as threatening and to respond with fear, risk aversion, and conservatism (2015: 652).
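Estimates of this kind typically infer the genetic share of variance from the gap between identical (MZ) and fraternal (DZ) twin correlations. A minimal sketch of the classic Falconer decomposition that underlies such figures, using illustrative correlations rather than numbers from Oskarsson et al:

```python
# Falconer's classic decomposition of trait variance from twin-study
# correlations: r_mz (identical twins), r_dz (fraternal twins).
def falconer(r_mz: float, r_dz: float) -> dict:
    """Return heritability (h2), shared (c2) and unique (e2) environment."""
    h2 = 2 * (r_mz - r_dz)   # heritability
    c2 = 2 * r_dz - r_mz     # shared environment
    e2 = 1 - r_mz            # unique environment (plus measurement error)
    return {"h2": h2, "c2": c2, "e2": e2}

# Hypothetical correlations for a political-attitude scale:
estimates = falconer(r_mz=0.6, r_dz=0.4)
print(estimates)  # h2 is about 0.4, within the 30-50% range reported above
```

The point of the sketch is how sensitive the headline percentage is to a small difference between two correlations, which is one reason for all the caveats.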

My immediate thought, when reading this stuff, is about how people would use it cynically, even at this relatively speculative stage in testing and evidence gathering: my opponent’s genes make him stupid, which makes him fearful of uncertainty and ambiguity, and therefore anxious about change and conservative in politics (in other words, the Yoda hypothesis applied only to stupid people). It’s not his fault, but his stupidity is an obstacle to progressive politics. If you add in some psychological biases, in which people inflate their own sense of intelligence and underestimate that of their opponents, you have evidence-informed, really shit political debate! ‘My opponent is stupid’ seems a bit better than ‘my opponent is mental’ but only in the sense that eating a cup of cold sick is preferable to eating shit.

I say this as we try to produce some practical recommendations (for scientist and advocates of EBPM) to engage with politicians to improve the use of evidence in policy. I’ll let you know if it goes beyond a simple maxim: adapt to their emotional and cognitive biases, but don’t simply assume they’re stupid.

See also: the many commentaries on how stupid it is to treat your political opponents as stupid

Stop Calling People “Low Information Voters”

Leave a comment

Filed under Evidence Based Policymaking (EBPM), Uncategorized

Is there any hope for evidence in emotional debates and chaotic government?

Two recent news features sum up the role of emotion and ‘chaos’ in policymaking.

The first is ‘Irritation and anger’ may lead to Brexit, says influential psychologist in The Telegraph. The headline suggests that people may vote Leave for emotional reasons, rather than with reference to a more ‘rational’ process in which people identify the best evidence and use it to weigh up the short and long term consequences of action.

Yet, the article confirms that we’re all at it! To be human is to use emotional, gut-level, and habitual thinking to turn a complex world, with too much information, into a good enough decision in a necessarily short amount of time.

In debate, evidence is mentioned a lot, but only to praise the evidence backing my position and reject the evidence backing yours. Or, you only trust the evidence from people you trust. If you trust the evidence from certain scientists, you stress their scientific credentials. If not, you find some from other experts. Or, if all else is lost, you reject experts as condescending elites with a hidden agenda. Or, you say simply that they can’t be that clever if they agree with smarmy Cameron/Johnson.

Lesson 1: you can see these emotional and manipulative approaches to policymaking play out in the EU referendum. Don’t assume that policymaking behind closed doors, on other issues, is any different.

The second feature is Lost in Transit: chaos in government research in The Economist.

It describes a Sense about Science report which (a) was commissioned ‘following a spate of media stories about government research being suppressed or delayed’, and (b) finds that ‘The UK government spends around £2.5 billion a year on research for policy, but does not know how many studies it has commissioned or which of them have been published’.

The Economist reports the perhaps-unexpected result of the inquiry:

But the main gripe is the sheer disorganisation of it all. The report’s afterword states that “Sir Stephen looked for suppression and found chaos”.

Such accounts reflect the two contradictory stories that we often tell about government. The first relates to the Westminster model of democratic accountability which helps concentrate power at the centre of government: if you know who is in charge, you know who to blame.

The second, regarding complex government, describes a complicated world of public policy in which no-one seems to be in control. For example, we make reference to: the huge size and reach of government; the potential for ministerial ‘overload’ and need to simplify decision-making; the blurry boundaries between the actors who make and influence policy; the multi-level nature of policymaking; and, the proliferation of rules and regulations, many of which may undermine each other.

The problem with the first story is that (a) although it is easy to tell during elections and inquiries, (b) you always struggle to find it when you actually study government.

The problem with the second is that, (a) although it seems realistic when you study government, (b) few people will buy it when they are seeking to hold ministers and governments to account. This problem may be exacerbated by the terms of reference of reports: few will accept a pragmatic response, based on the second story of complexity, if you start out by using the first story of central control to say that you will track down and solve the problem!

Lesson 2: if you assume central control you will find chaos (and struggle to produce feasible recommendations to deal with it). The manipulation of evidence takes place in a complex policymaking system over which no individual or ‘core executive’ has control. Indeed, no single person or organisation could even pay attention to all that goes on within government. This insight requires pragmatic inquiries and solutions, not the continuous reassertion of central control and discovery of ‘chaos’.

It might be possible to develop a third lesson if we put these two together. One part of the EU debate reflects our inability to understand EU policymaking and relate it to the relatively clear processes in the UK, in which you know who is in charge and therefore who to blame. The EU seems less democratic because it is so complex and remote. Yet, if we follow this other story about complexity in the UK, we often find that UK politics is also difficult to follow. Its image does not describe reality.

Lesson 3: when you find policymaking complexity in the EU, don’t assume it is any better in the UK! Instead, try to compare like with like.

See also

I expand on both lessons in The Politics of Evidence-Based Policymaking

Cock-up, not conspiracy, conceals evidence for policy

Government buries its own research – and that’s bad for democracy

The rationality paradox of Nudge: rational tools of government in a world of bounded rationality

5 Comments

Filed under Evidence Based Policymaking (EBPM), public policy, UK politics and policy

Policy Concepts in 1000 Words: Framing

[Image: framing main]

(podcast download)

‘Framing’ is a metaphor to describe the ways in which we understand, and use language selectively to portray, policy problems. There are many ways to describe this process in many disciplines, including communications, psychology, and sociology. There is also more than one way to understand the metaphor.

For example, I think that most scholars describe this image (from litemind) of someone deciding which part of the world to focus on.

[Image: framing with hands]

However, I have also seen colleagues use this image, of a timber frame, to highlight the structure of a discussion which is crucial but often unseen and taken for granted:

[Image: timber frame]

  1. Intentional framing and cognition.

The first kind of framing relates to bounded rationality or the effect of our cognitive processes on the ways in which we process information (and influence how others process information):

  • We use major cognitive shortcuts to turn an infinite amount of information into the ‘signals’ we perceive or pay attention to.
  • These cognitive processes often produce interesting conclusions, such as when (a) we place higher value on the things we own/ might lose rather than the things we don’t own/ might gain (‘prospect theory’) or (b) we value, or pay more attention to, the things with which we are most familiar and can process more easily (‘fluency’).
  • We often rely on other people to process and select information on our behalf.
  • We are susceptible to simple manipulation based on the order (or other ways) in which we process information, and the form it takes.

In that context, you can see one meaning of framing: other actors portray information selectively to influence the ways in which we see the world, or which parts of the world capture our attention (here is a simple example of wind farms).

In policy theory, framing studies focus on ambiguity: there are many ways in which we can understand and define the same policy problem (note terms such as ‘problem definition’ and a ‘policy image’). Therefore, actors exercise power to draw attention to, and generate support for, one particular understanding at the expense of others. They do this with simple stories or the selective presentation of facts, often coupled with emotional appeals, to manipulate the ways in which we process information.

  2. Frames as structures

Think about the extent to which we take for granted certain ways to understand or frame issues. We don’t begin each new discussion with reference to ‘first principles’. Instead, we discuss issues with reference to:

(a) debates that have been won and may not seem worth revisiting (imagine, for example, the ways in which ‘socialist’ policies are treated in the US)

(b) other well-established ways to understand the world which, when they seem to dominate our ways of thinking, are often described as ‘hegemonic’ or with reference to paradigms.

In such cases, the timber frame metaphor serves two purposes:

(a) we can conclude that it is difficult but not impossible to change.

(b) if it is hidden by walls, we do not see it; we often take it for granted even though we should know it exists.

Framing the social, not physical, world

These metaphors can only take us so far, because the social world does not have such easily identifiable physical structures. Instead, when we frame issues, we don’t just choose where to look; we also influence how people describe what we are looking at. Or, ‘structural’ frames relate to regular patterns of behaviour or ways of thinking which are more difficult to identify than in a building. Consequently, we do not all describe structural constraints in the same way even though, ostensibly, we are looking at the same thing.

In this respect, for example, the well-known ‘Overton window’ is a sort-of helpful but also problematic concept, since it suggests that policymakers are bound to stay within the limits of what Kingdon calls the ‘national mood’. The public will only accept so much before it punishes you in events such as elections. Yet, of course, there is no such thing as the public mood. Rather, some actors (policymakers) make decisions with reference to their perception of such social constraints (how will the public react?) but they also know that they can influence how we interpret those constraints with reference to one or more proxies, including opinion polls, public consultations, media coverage, and direct action:

[Image: JEPP public opinion]

They might get it wrong, and suffer the consequences, but it still makes sense to say that they have a choice to interpret and adapt to such ‘structural’ constraints.

Framing, power and the role of ideas

We can bring these two ideas about framing together to suggest that some actors exercise power to reinforce dominant ways to think about the world. Power is not simply about visible conflicts in which one group with greater material resources wins and another loses. It also relates to agenda setting. First, actors may exercise power to reinforce social attitudes. If the weight of public opinion is against government action, maybe governments will not intervene. The classic example is poverty – if most people believe that it is caused by fecklessness, what is the role of government? In such cases, power and powerlessness may relate to the (in)ability of groups to persuade the public, media and/ or government that there is a reason to make policy; a problem to be solved.  In other examples, the battle may be about the extent to which issues are private (with no legitimate role for government) or public (and open to legitimate government action), including: should governments intervene in disputes between businesses and workers? Should they intervene in disputes between husbands and wives? Should they try to stop people smoking in private or public places?

Second, policymakers can only pay attention to a tiny amount of issues for which they are responsible. So, actors exercise power to keep some issues on their agenda at the expense of others.  Issues on the agenda are sometimes described as ‘safe’: more attention to these issues means less attention to the imbalances of power within society.

18 Comments

Filed under 1000 words, agenda setting, PhD, public policy

The Psychology of Evidence Based Policymaking: Who Will Speak For the Evidence if it Doesn’t Speak for Itself?

Let’s begin with a simple – and deliberately naïve – prescription for evidence based policymaking (EBPM): there should be a much closer link between (a) the process in which scientists and knowledge brokers identify major policy problems, and (b) the process in which politicians make policy decisions. We should seek to close the ‘evidence-policy gap’. The evidence should come first, and we should bemoan the inability of policymakers to act accordingly. I discuss why that argument is naïve here and here, in terms of the complexity of policy processes and the competing claims to knowledge-based policy. This post is about the link between EBPM and psychology.

Let’s consider the role of two types of thought process common to all people, policymakers included: (a) the intuitive, gut, emotional or other heuristics we use to process and act on information quickly; and (b) goal-oriented and reasoned, thoughtful behaviour. As Daniel Kahneman’s Thinking, Fast and Slow (p 20) puts it: ‘System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control. System 2 allocates attention to the effortful mental activities that demand it, including complex computations … often associated with the subjective experience of agency, choice and concentration’.

The naïve description of EBPM requires System 2 (‘slow’) thinking, but what happens if most policymaking is characterised by System 1 (‘fast’)? The answer is ‘a whole bunch of cognitive shortcuts’, including:

  • the ‘availability heuristic’, when people relate the size, frequency or probability of a problem to how easy it is to remember or imagine
  • the ‘representativeness heuristic’, when people judge the probability of an event by how closely it resembles a familiar case or stereotype
  • ‘prospect theory’, when people value losses more than equivalent gains
  • ‘framing effects’, based on emotional and moral judgements
  • confirmation bias
  • optimism bias, or unrealistic expectations about our aims working out well when we commit to them
  • status quo bias
  • a tendency to use exemplars of social groups to represent general experience; and
  • a ‘need for coherence’ and to establish patterns and causal relationships when they may not exist (see Paul Lewis, p 7).
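To make one of these shortcuts concrete, ‘prospect theory’ is often summarised with Tversky and Kahneman’s value function. A minimal sketch using their published median parameter estimates (the example amounts are illustrative):

```python
# Tversky and Kahneman's (1992) value function for prospect theory,
# with their median parameter estimates: alpha = beta = 0.88, lam = 2.25.
def value(x: float, alpha: float = 0.88, beta: float = 0.88, lam: float = 2.25) -> float:
    """Subjective value of a gain (x >= 0) or loss (x < 0)."""
    if x >= 0:
        return x ** alpha          # diminishing sensitivity to gains
    return -lam * (-x) ** beta     # losses weighted about 2.25x more heavily

# A loss of 100 'feels' roughly 2.25 times as big as a gain of 100:
print(value(100), value(-100))  # about 57.5 and -129.4
```

The asymmetry in the second branch is the formal statement of ‘people value losses more than equivalent gains’.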

The ‘availability heuristic’ may also be linked to more recent studies of ‘processing fluency’ – which suggest that people’s decisions are influenced by their familiarity with things, and by the ease with which they process information (see Alter and Oppenheimer, 2009). Fluency can take several forms, including conceptual, perceptual, and linguistic. For example, people may pay more attention to an issue or statement if they already possess some knowledge of it and find it easy to understand or recall. They may pay attention to people when their faces seem familiar, and find fewer faults with systems they comprehend. They may place more value on things they find familiar, such as their domestic currency, items that they own compared to items they would have to buy, or the stocks of companies with more pronounceable names – even if they are otherwise identical. Or, their ability to imagine things in an abstract or concrete form may relate to their psychological ‘distance’ from them.

Is fast thinking bad thinking? Views from psychology

Alter and Oppenheimer use these insights to warn policymakers against taking the wrong attitude to regulation or spending based on flawed assessments of risk – for example, they might spend disproportionate amounts of money on projects designed to address the risks with which they are most familiar (Slovic suggests that feelings towards risk may even be influenced by the way in which it is described, for example as a percentage versus a 1 in X probability). Alter and Oppenheimer also worry about medical and legal judgements swayed by fluid diagnoses and stories. Haidt argues that the identification of the ‘intuitive basis of moral judgment’ can be used to help policymakers ‘avoid mistakes’, or to allow people to develop ‘programs’ or an ‘environment’ to ‘improve the quality of moral judgment and behavior’. These studies contrast with arguments focusing on the positive role of emotions in decision-making, either individually (Frank) or as part of social groups, with emotional responses providing useful information in the form of social cues (Van Kleef et al).

Is fast thinking bad thinking? Views from the political and policy sciences

Social Construction Theory suggests that policymakers make quick, biased, emotional judgements, then back up their actions with selective facts to ‘institutionalize’ their understanding of a policy problem and its solution. They ‘socially construct’ their target populations to argue that they are deserving either of governmental benefits or punishments. Schneider and Ingram (forthcoming) argue that the outcomes of social construction are often dysfunctional and not based on a well-reasoned, goal-oriented strategy: ‘Studies have shown that rules, tools, rationales and implementation structures inspired by social constructions send dysfunctional messages and poor choices may hamper the effectiveness of policy’.

However, not all policy scholars make such normative pronouncements. Indeed, the value of policy theory is often to show that policy results from the interaction between large numbers of people and institutions. So, the actions of a small number of policymakers would not be the issue; we need to know more about the cumulative effect of individual emotional decision making in a collective decision-making environment – in organisations, networks and systems. For example:

  • The Advocacy Coalition Framework suggests that people engage in coordinated activity to cooperate with each other and compete with other coalitions, based on their shared beliefs and a tendency to demonise their opponents. In some cases, there are commonly accepted ways to interpret the evidence. In others, it is a battle of ideas.
  • Multiple Streams Analysis and Punctuated Equilibrium Theory focus on uncertainty and ambiguity, exploring the potential for policymaker attention to lurch dramatically from one problem or ‘image’ (the way the problem is viewed or understood) to another. They identify the framing strategies – of actors such as ‘entrepreneurs’, ‘venue shoppers’ and ‘monopolists’ – based on a mixture of empirical facts and ‘emotional appeals’.
  • The Narrative Policy Framework combines a discussion of emotion with the identification of narrative strategies. Each narrative has a setting, characters, plot and moral. They can be compared to marketing, as persuasion based more on appealing to an audience’s beliefs (or exploiting their thought processes) than the evidence. People will pay attention to certain narratives because they are boundedly rational, seeking shortcuts to gather sufficient information – and prone to accept simple stories that seem plausible, confirm their biases, exploit their emotions, and/ or come from a source they trust.
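The narrative elements listed above (setting, characters, plot, moral) can be sketched as a simple data structure; the class and example content below are hypothetical illustrations, not taken from the Narrative Policy Framework’s own coding instruments:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical representation of the four narrative elements named above.
@dataclass
class PolicyNarrative:
    setting: str           # the policy context in which the story unfolds
    characters: List[str]  # heroes, villains, and victims
    plot: str              # how the problem arose and develops
    moral: str             # the solution the story points towards

smoking = PolicyNarrative(
    setting="tobacco control",
    characters=["industry (villain)", "smokers (victims)", "regulators (heroes)"],
    plot="an industry conceals evidence of harm from the public",
    moral="restrict advertising and smoking in public places",
)
print(smoking.moral)
```

Coding real narratives this way is one route to comparing competing stories about the same policy problem.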

In each case, we might see our aim as going beyond the simple phrase: ‘the evidence doesn’t speak for itself’. If ‘fast thinking’ is pervasive in policymaking, then ‘the evidence’ may only be influential if it can be provided in ways that are consistent with the thought processes of certain policymakers – such as by provoking a strong emotional reaction (to confirm or challenge biases), or framing messages in terms that are familiar to (and can be easily processed by) policymakers.

These issues are discussed further in these posts:

Is Evidence-Based Policymaking the same as good policymaking?

Policy Concepts in 1000 Words: The Psychology of Policymaking

And at more length in these papers:

PSA 2014 Cairney Psychology Policymaking 7.4.14

Cairney PSA 2014 EBPM 5.3.14

See also: Joseph Rowntree Foundation, Evidence alone won’t bring about social change

Discover Society (Delaney and Henderson) Risk and Choice in the Scottish Independence debate



Policy Concepts in 1000 Words: The Psychology of Policymaking

(podcast download)

Psychology is at the heart of policymaking, but the literature on psychology is not always at the heart of policy theory. Most theories identify ‘bounded rationality’, which, on its own, is little more than a truism: people do not have the time, resources or cognitive ability to consider all information, all possibilities, all solutions, or to anticipate all consequences of their actions. Consequently, they use informational shortcuts or heuristics – perhaps to produce ‘good-enough’ decisions. This is where psychology comes in, to:

  1. Describe the thought processes that people use to turn a complex world into something simple enough to understand and/or respond to; and
  2. Compare types of thought process, such as (a) goal-oriented, reasoned, thoughtful behaviour and (b) the intuitive, gut, emotional or other heuristics we use to process and act on information quickly.

Where does policy theory come in? It seeks to situate these processes within a wider examination of policymaking systems and their environments, identifying the role of:

  • A wide range of actors making choices.
  • Institutions, as the rules, norms, and practices that influence behaviour.
  • Policy networks, as the relationships between policymakers and the ‘pressure participants’ with which they consult and negotiate.
  • Ideas – a broad term to describe beliefs, and the extent to which they are shared within groups, organisations, networks and political systems.
  • Context and events, to describe the extent to which a policymaker’s environment is in her control or how it influences her decisions.

Putting these approaches together is not easy. It presents us with an important choice regarding how to treat the role of psychology within explanations of complex policymaking systems – or, at least, on which aspect to focus.

Our first choice is to focus specifically on micro-level psychological processes, to produce hypotheses to test propositions regarding individual thought and action. There are many from which to choose, although from Daniel Kahneman’s Thinking, Fast and Slow (p 20) we can identify a basic distinction between two kinds of thinking: ‘System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control. System 2 allocates attention to the effortful mental activities that demand it, including complex computations … often associated with the subjective experience of agency, choice and concentration’. Further, System 1 can be related to a series of cognitive shortcuts which develop over time as people learn from experience, including:

  • the ‘availability heuristic’, when people relate the size, frequency or probability of a problem to how easy it is to remember or imagine
  • the ‘representativeness heuristic’, when people judge probability by how closely something resembles a typical case, rather than by its actual frequency
  • loss aversion, central to ‘prospect theory’, when people weigh losses more heavily than equivalent gains
  • ‘framing effects’, based on emotional and moral judgements
  • confirmation bias
  • optimism bias, or unrealistic expectations about our aims working out well when we commit to them
  • status quo bias
  • a tendency to use exemplars of social groups to represent general experience; and
  • a ‘need for coherence’ and to establish patterns and causal relationships when they may not exist (see Paul Lewis, p 7).

The ‘availability heuristic’ may also be linked to more recent studies of ‘processing fluency’, which suggest that people’s decisions are influenced by their familiarity with things, and by the ease with which they process information (see Alter and Oppenheimer, 2009). Fluency can take several forms, including conceptual, perceptual, and linguistic. For example, people may pay more attention to an issue or statement if they already possess some knowledge of it and find it easy to understand or recall. They may pay attention to people whose faces seem familiar, and find fewer faults with systems they comprehend. They may place more value on things they find familiar, such as their domestic currency, items that they own compared to items they would have to buy, or the stocks of companies with more pronounceable names – even if they are otherwise identical. Or, their ability to imagine things in an abstract or concrete form may relate to their psychological ‘distance’ from them.

Our second choice is to treat these propositions as assumptions, allowing us to build larger (‘meso’ or ‘macro’ level) models that produce other hypotheses. We ask what would happen if these assumptions were true, allowing us to theorise a social system containing huge numbers of people, and/or to focus on the influence of the system or environment in which people make decisions.

These choices are made in different ways in the policy theory literature:

  • The Advocacy Coalition Framework has tested the idea of ‘devil shift’ (coalitions romanticise their own cause and demonise their opponents, misperceiving their power, beliefs and/or motives), but also makes assumptions about belief systems and prospect theory to build models and generate further hypotheses.
  • Multiple Streams Analysis and Punctuated Equilibrium Theory focus on uncertainty and ambiguity, exploring the potential for policymaker attention to lurch dramatically from one problem or ‘image’ (the way a problem is viewed or understood) to another. They identify the framing strategies of actors such as ‘entrepreneurs’, ‘venue shoppers’ and ‘monopolists’.
  • Social Construction Theory argues that policymakers make quick, biased, emotional judgements, then back up their actions with selective facts to ‘institutionalize’ their understanding of a policy problem and its solution.
  • The Narrative Policy Framework combines a discussion of emotion with the identification of ‘homo narrans’ (humans as storytellers – in stated contrast to ‘homo economicus’, or humans as rational beings). Narratives are used strategically to reinforce or oppose policy measures. Each narrative has a setting, characters, plot and moral. Narratives can be compared to marketing, as persuasion based more on appealing to an audience’s beliefs (or exploiting their thought processes) than on the evidence. People pay attention to certain narratives because they are boundedly rational, seeking shortcuts to gather sufficient information – and prone to accept simple stories that seem plausible, confirm their biases, exploit their emotions, and/or come from a source they trust.

These issues are discussed at more length in this paper: PSA 2014 Cairney Psychology Policymaking 7.4.14

