Tag Archives: policymaking

How far should you go to secure academic ‘impact’ in policymaking? From ‘honest brokers’ to ‘research purists’ and Machiavellian manipulators

Long read for Political Studies Association annual conference 2017 panel Rethinking Impact: Narratives of Research-Policy Relations. There is a paper too, but I’ve hidden it in the text like an Easter Egg hunt.

I’ve watched a lot of film and TV dramas over the decades. Many have the same basic theme, characters, and moral:

  1. There is a villain getting away with something, such as cheating at sport or trying to evict people to make money on a property deal.
  2. There are some characters who complain that life is unfair and there’s nothing they can do about it.
  3. A hero emerges to inspire the other characters to act as a team/ fight the system and win the day. Think of a range from Wyldstyle to Michael Corleone.

For many scientists right now, the villains are people like Trump or Farage; Trump’s election and Brexit symbolise unfairness on a grand scale; and there’s little they can do about it in a ‘post-truth’ era in which people have had enough of facts and experts. Or, when people try to mobilise, they are unsure about what to do or how far they are willing to go to win the day.

These issues are playing out in different ways, from the March for Science to the conferences informing debates on modern principles of government-science advice (see INGSA). Yet, the basic question is the same when scientists are trying to re-establish a particular role for science in the world: can you present science as (a) a universal principle and (b) an unequivocal resource for good, producing (c) evidence so pure that it speaks for itself, regardless of (d) the context in which specific forms of scientific evidence are produced and used?

Of course not. Instead, we are trying to privilege the role of science and scientific evidence in politics and policymaking without always acknowledging that these activities are political acts:

(a) selling scientific values rather than self-evident truths, and

(b) using particular values to cement the status of particular groups at the expense of others, either within the scientific profession (in which some disciplines and social groups win systematically) or within society (in which scientific experts generally enjoy privileged positions in policymaking arenas).

Politics is about exercising power to win disputes, from visible acts to win ‘key choices’, to less visible acts to keep issues off agendas and reinforce the attitudes and behaviours that systematically benefit some groups at the expense of others.

To deny this link between science, politics and power – in the name of ‘science’ – is (a) silly, and (b) not scientific, since there is a wealth of policy science out there which highlights this relationship.

Instead, academic and working scientists should make better use of their political-thinking-time to consider this basic dilemma regarding political engagement: how far are you willing to go to make an impact and get what you want?  Here are three examples.

  1. How energetically should you give science advice?

My impression is that most scientists feel most comfortable with the unfortunate idea of separating facts from values (rejected by Douglas), and living life as ‘honest brokers’ rather than ‘issue advocates’ (a pursuit described by Pielke and critiqued by Jasanoff). For me, this is generally a cop-out since it puts the responsibility on politicians to understand the implications of scientific evidence, as if they were self-evident, rather than on scientists to explain the significance in a language familiar to their audience.

On the other hand, the alternative is not really clear. ‘Getting your hands dirty’, to maximise the uptake of evidence in politics, is a great metaphor but a hopeless blueprint, especially when you, as part of a notional ‘scientific community’, face trade-offs between doing what you think is the right thing and getting what you want.

There are 101 examples of these individual choices, which add up to one big engagement dilemma. One of my favourite examples from table 1 is as follows:

One argument stated frequently is that, to be effective in policy, you should put forward scientists with a particular background trusted by policymakers: white men in their 50s with international reputations and strong networks in their scientific field. This way, they resemble the profile of key policymakers who tend to trust people already familiar to them. Another is that we should widen out science and science advice, investing in a new and diverse generation of science-policy specialists, to address the charge that science is an elite endeavour contributing to inequalities.

  2. How far should you go to ensure that the ‘best’ scientific evidence underpins policy?

Kathryn Oliver and I identify the dilemmas that arise when principles of evidence-production meet (a) principles of governance and (b) real world policymaking. Should scientists learn how to be manipulative, to combine evidence and emotional appeals to win the day? Should they reject other forms of knowledge, and particular forms of governance, if they think these get in the way of the use of the best evidence in policymaking?

Cairney Oliver 2017 table 1

  3. Is it OK to use psychological insights to manipulate policymakers?

Richard Kwiatkowski and I mostly discuss how to be manipulative if you make that leap. Or, to put it less dramatically, how to identify relevant insights from psychology, apply them to policymaking, and decide how best to respond. Here, we propose five heuristics for engagement:

  1. developing heuristics to respond positively to ‘irrational’ policymaking
  2. tailoring framing strategies to policymaker bias
  3. identifying the right time to influence individuals and processes
  4. adapting to real-world (dysfunctional) organisations rather than waiting for an orderly process to appear, and
  5. recognising that the biases we ascribe to policymakers are present in ourselves and our own groups

Then there is the impact agenda, which describes something very different

I say these things to link to our PSA panel, in which Christina Boswell and Katherine Smith sum up (in their abstract) the difference between the ways in which we are expected to demonstrate academic impact, and the practices that might actually produce real impact:

“Political scientists are increasingly exhorted to ensure their research has policy ‘impact’, most notably in the form of REF impact case studies, and ‘pathways to impact’ plans in ESRC funding. Yet the assumptions underpinning these frameworks are frequently problematic. Notions of ‘impact’, ‘engagement’ and ‘knowledge exchange’ are typically premised on simplistic and linear models of the policy process, according to which policy-makers are keen to ‘utilise’ expertise to produce more effective policy interventions”.

I then sum up the same thing but with different words in my abstract:

“The impact agenda prompts strategies which reflect the science literature on ‘barriers’ between evidence and policy: produce more accessible reports, find the right time to engage, encourage academic-practitioner workshops, and hope that policymakers have the skills to understand and motive to respond to your evidence. Such strategies are built on the idea that scientists serve to reduce policymaker uncertainty, with a linear connection between evidence and policy. Yet, the literature informed by policy theory suggests that successful actors combine evidence and persuasion to reduce ambiguity, particularly when they know where the ‘action’ is within complex policymaking systems”.

The implications for the impact agenda are interesting, because there is a big difference between (a) the fairly banal ways in which we might make it easier for policymakers to see our work, and (b) the more exciting and sinister-looking ways in which we might make more persuasive cases. Yet, our incentive remains to produce the research and play it safe, producing examples of ‘impact’ that, on the whole, seem more reportable than remarkable.


Filed under Evidence Based Policymaking (EBPM), Public health, public policy

Using psychological insights in politics: can we do it without calling our opponents mental, hysterical, or stupid?

One of the most dispiriting parts of fierce political debate is the casual use of mental illness or old and new psychiatric terms to undermine an opponent: she is mad, he is crazy, she is a nutter, they are wearing tin foil hats, get this guy a straitjacket and the men in white coats because he needs to lie down in a dark room, she is hysterical, his position is bipolar, and so on. This kind of statement reflects badly on the campaigner rather than their opponent.

I say this because, while doing some research on a paper on the psychology of politics and policymaking (this time with Richard Kwiatkowski, as part of this special collection), there are potentially useful concepts that seem difficult to insulate from such political posturing. There is great potential to use them cynically against opponents rather than benefit from their insights.

The obvious ‘live’ examples relate to ‘rational’ versus ‘irrational’ policymaking. For example, one might argue that, while scientists develop facts and evidence rationally, using tried and trusted and systematic methods, politicians act irrationally, based on their emotions, ideologies, and groupthink. So, we as scientists are the arbiters of good sense and they are part of a pathological political process that contributes to ‘post truth’ politics.

The obvious problem with such accounts is that we all combine cognitive and emotional processes to think and act. We are all subject to bias in the gathering and interpretation of evidence. So, the more positive, but less tempting, option is to consider how this process works – when both competing sides act ‘rationally’ and emotionally – and what we can realistically do to mitigate the worst excesses of such exchanges. Otherwise, we will not get beyond demonising our opponents and romanticising our own cause. It gives us the warm and fuzzies on twitter and in academic conferences but contributes little to political conversations.

A less obvious example comes from modern work on the links between genes and attitudes. There is now a research agenda which uses surveys of adult twins to compare the effect of genes and environment on political attitudes. For example, Oskarsson et al (2015: 650) argue that existing studies ‘report that genetic factors account for 30–50% of the variation in issue orientations, ideology, and party identification’. One potential mechanism is cognitive ability: put simply, and rather cautiously and speculatively, with a million caveats, people with lower cognitive ability are more likely to see ‘complexity, novelty, and ambiguity’ as threatening and to respond with fear, risk aversion, and conservatism (2015: 652).

My immediate thought, when reading this stuff, is about how people would use it cynically, even at this relatively speculative stage in testing and evidence gathering: my opponent’s genes make him stupid, which makes him fearful of uncertainty and ambiguity, and therefore anxious about change and conservative in politics (in other words, the Yoda hypothesis applied only to stupid people). It’s not his fault, but his stupidity is an obstacle to progressive politics. If you add in some psychological biases, in which people inflate their own sense of intelligence and underestimate that of their opponents, you have evidence-informed, really shit political debate! ‘My opponent is stupid’ seems a bit better than ‘my opponent is mental’ but only in the sense that eating a cup of cold sick is preferable to eating shit.

I say this as we try to produce some practical recommendations (for scientist and advocates of EBPM) to engage with politicians to improve the use of evidence in policy. I’ll let you know if it goes beyond a simple maxim: adapt to their emotional and cognitive biases, but don’t simply assume they’re stupid.

See also: the many commentaries on how stupid it is to treat your political opponents as stupid

Stop Calling People “Low Information Voters”


Filed under Evidence Based Policymaking (EBPM), Uncategorized

We all want ‘evidence based policy making’ but how do we do it?

Here are some notes for my talk to the Scottish Government on Thursday as part of its inaugural ‘evidence in policy week’. The advertised abstract is as follows:

A key aim in government is to produce ‘evidence based’ (or ‘informed’) policy and policymaking, but it is easier said than done. It involves two key choices about (1) what evidence counts and how you should gather it, and (2) the extent to which central governments should encourage subnational policymakers to act on that evidence. Ideally, the principles we use to decide on the best evidence should be consistent with the governance principles we adopt to use evidence to make policy, but what happens when they seem to collide? Cairney provides three main ways in which to combine evidence and governance-based principles to help clarify those choices.

I plan to use the same basic structure of the talks I gave to the OSF (New York) and EUI-EP (Florence) in which I argue that every aspect of ‘evidence based policy making’ is riddled with the necessity to make political choices (even when we define EBPM):

ebpm-5-things-to-do

I’ll then ‘zoom in’ on points 4 and 5 regarding the relationship between EBPM and governance principles. They are going to videotape the whole discussion to use for internal discussions, but I can post the initial talk here when it becomes available. Please don’t expect a TED talk (especially the E part of TED).

EBPM and good governance principles

The Scottish Government has a reputation for taking certain governance principles seriously, to promote high stakeholder ‘ownership’ and ‘localism’ on policy, and produce the image of a:

  1. Consensual consultation style in which it works closely with interest groups, public bodies, local government organisations, voluntary sector and professional bodies, and unions when making policy.
  2. Trust-based implementation style indicating a relative ability or willingness to devolve the delivery of policy to public bodies, including local authorities, in a meaningful way.

Many aspects of this image were cultivated by former Permanent Secretaries: Sir John Elvidge described a ‘Scottish Model’ focused on joined-up government and outcomes-based approaches to policymaking and delivery, and Sir Peter Housden labelled the ‘Scottish Approach to Policymaking’ (SATP) as an alternative to the UK’s command-and-control model of government, focusing on the ‘co-production’ of policy with local communities and citizens.

The ‘Scottish Approach’ has implications for evidence based policy making

Note the major implication for our definition of EBPM. One possible definition, derived from ‘evidence based medicine’, refers to a hierarchy of evidence in which randomised control trials and their systematic review are at the top, while expertise, professional experience and service user feedback are close to the bottom. An uncompromising use of RCTs in policy requires that we maintain a uniform model, with the same basic intervention adopted and rolled out within many areas. The focus is on identifying an intervention’s ‘active ingredient’, applying the correct dosage, and evaluating its success continuously.

This approach seems to challenge the commitment to localism and ‘co-production’.

At the other end of the spectrum is a storytelling approach to the use of evidence in policy. In this case, we begin with key governance principles – such as valuing the ‘assets’ of individuals and communities – and inviting people to help make and deliver policy. Practitioners and service users share stories of their experiences and invite others to learn from them. There is no model of delivery and no ‘active ingredient’.

This approach seems to challenge the commitment to ‘evidence based policy’.

The Goldilocks approach to evidence based policy making: the improvement method

We can understand the Scottish Government’s often-preferred method in that context. It has made a commitment to:

Service performance and improvement underpinned by data, evidence and the application of improvement methodologies

So, policymakers use many sources of evidence to identify promising interventions, make broad recommendations to practitioners about the outcomes they seek, and train practitioners in the improvement method (a form of continuous learning summed up by a ‘Plan-Do-Study-Act’ cycle).

Table 1 Three ideal types of EBPM

This approach appears to offer the best of both worlds; just the right mix of central direction and local discretion, with the promise of combining well-established evidence from sources including RCTs with evidence from local experimentation and experience.

Four unresolved issues in decentralised evidence-based policy making

Not surprisingly, our story does not end there. I think there are four unresolved issues in this process:

  1. The Scottish Government often indicates a preference for improvement methods but actually supports all three of the methods I describe. This might reflect an explicit decision to ‘let a thousand flowers bloom’ or the inability to establish a favoured approach.
  2. There is not a single way of understanding ‘improvement methodology’. I describe something akin to a localist model here, but other people describe a far more research-led and centrally coordinated process.
  3. Anecdotally, I hear regularly that key stakeholders do not like the improvement method. One could interpret this as a temporary problem, before people really get it and it starts to work, or a fundamental difference between some people in government and many of the local stakeholders so important to the ‘Scottish approach’.

4. The spectre of democratic accountability and the politics of EBPM

The fourth unresolved issue is the biggest: it’s difficult to know how this approach connects with the most important reference in Scottish politics: the need to maintain Westminster-style democratic accountability, through periodic elections and more regular reports by ministers to the Scottish Parliament. This requires a strong sense of central government and ministerial control – if you know who is in charge, you know who to hold to account or reward or punish in the next election.

In principle, the ‘Scottish approach’ provides a way to bring together key aims into a single narrative. An open and accessible consultation style maximises the gathering of information and advice and fosters group ownership. A national strategic framework, with cross-cutting aims, reduces departmental silos and balances an image of democratic accountability with the pursuit of administrative devolution, through partnership agreements with local authorities, the formation of community planning partnerships, and the encouragement of community and user-driven design of public services. The formation of relationships with public bodies and other organisations delivering services, based on trust, fosters the production of common aims across the public sector, and reduces the need for top-down policymaking. An outcomes-focus provides space for evidence-based and continuous learning about what works.

In practice, a government often needs to appear to take quick and decisive action from the centre, demonstrate policy progress and its role in that progress, and intervene when things go wrong. So, alongside localism it maintains a legislative, financial, and performance management framework which limits localism.

How far do you go to ensure EBPM?

So, when I describe the ‘5 things to do’, usually the fifth element is about how far scientists may want to go, to insist on one model of EBPM when it has the potential to contradict important governance principles relating to consultation and localism. For a central government, the question is starker:

Do you have much choice about your model of EBPM when the democratic imperative is so striking?

I’ll leave it there on a cliff hanger, since these are largely questions to prompt discussion in specific workshops. If you can’t attend, there is further reading on the EBPM and EVIDENCE tabs on this blog, and specific papers on the Scottish dimension:

The ‘Scottish Approach to Policy Making’: Implications for Public Service Delivery

Paul Cairney, Siabhainn Russell and Emily St Denny (2016) “The ‘Scottish approach’ to policy and policymaking: what issues are territorial and what are universal?” Policy and Politics, 44, 3, 333-50

The politics of evidence-based best practice: 4 messages


Filed under ESRC Scottish Centre for Constitutional Change, Evidence Based Policymaking (EBPM), public policy, Scottish politics, Storytelling

How can political actors take into account the limitations of evidence-based policy-making? 5 key points

These notes are for my brief panel talk at the European Parliament-European University Institute ‘Policy Roundtable’: Evidence and Analysis in EU Policy-Making: Concepts, Practice and Governance. As you can see from the programme description, the broader theme is about how EU institutions demonstrate their legitimacy through initiatives such as stakeholder participation and evidence-based policymaking (EBPM). So, part of my talk is about what happens when EBPM does not exist.

The post is a slightly modified version of my (recorded) talk for Open Society Foundations (New York) but different audiences make sense of these same basic points in very different ways.

  1. Recognise that the phrase ‘evidence-based policy-making’ means everything and nothing

The main limitation to ‘evidence-based policy-making’ is that no-one really knows what it is or what the phrase means. So, each actor makes sense of EBPM in different ways and you can tell a lot about each actor by the way in which they answer these questions:

  • Should you use restrictive criteria to determine what counts as ‘evidence’? Some actors equate evidence with scientific evidence and adhere to specific criteria – such as evidence-based medicine’s hierarchy of evidence – to determine what is scientific. Others have more respect for expertise, professional experience, and stakeholder and service user feedback as sources of evidence.
  • Which metaphor, ‘evidence based’ or ‘informed’, is best? ‘Evidence based’ is often rejected by experienced policy participants as unrealistic; they prefer ‘informed’ to reflect pragmatism about mixing evidence and political calculations.
  • How far do you go to pursue EBPM? It is unrealistic to treat ‘policy’ as a one-off statement of intent by a single authoritative actor. Instead, it is made and delivered by many actors in a continuous policymaking process within a complicated policy environment (outlined in point 3). This is relevant to EU institutions with limited resources: the Commission often makes key decisions but relies on Member States to make and deliver policy, and the Parliament may only have the ability to monitor ‘key decisions’. It is also relevant to stakeholders trying to ensure the use of evidence throughout the process, from supranational to local action.
  • Which actors count as policymakers? Policymaking is done by ‘policymakers’, but many are unelected and the division between policymaker/ influencer is often unclear. The study of policymaking involves identifying networks of decision-making by elected and unelected policymakers and their stakeholders, while the actual practice is about deciding where to draw the line between influence and action.
  2. Respond to ‘rational’ and ‘irrational’ thought.

‘Comprehensive rationality’ describes the absence of ambiguity and uncertainty when policymakers know what problem they want to solve and how to solve it, partly because they can gather and understand all information required to measure the problem and determine the effectiveness of solutions.

Instead, we talk of ‘bounded rationality’ and how policymakers deal with it. They employ two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, habits, and familiarity to make decisions quickly.

I say ‘irrational’ provocatively, to raise a key strategic question: do you criticise emotional policymaking (describing it as ‘policy based evidence’) and try somehow to minimise it, adapt pragmatically to it, or see ‘fast thinking’ more positively in terms of ‘fast and frugal heuristics’? Regardless, policymakers will think that their heuristics make sense to them, and it can be counterproductive to simply criticise their alleged irrationality.

  3. Think about how to engage in complex systems or policy environments.

‘Policy cycle’ describes the idea that there is a core group of policymakers at the ‘centre’, making policy from the ‘top down’, and pursuing their goals in a series of clearly defined and well-ordered stages, such as: agenda setting, policy formulation, legitimation, implementation, and evaluation. In this context, one might identify how to influence a singular point of central government decision.

However, a cycle model does not describe policymaking well. Instead, we tend to identify the role of less ordered and more unpredictable complex systems, or policy environments containing:

  • A wide range of actors (individuals and organisations) influencing policy at many levels of government. Scientists and practitioners are competing with many actors to present evidence in a particular way to secure a policymaker audience.
  • A proliferation of rules and norms maintained by different levels or types of government. Support for particular ‘evidence based’ solutions varies according to which organisation takes the lead and how it understands the problem.
  • Important relationships (‘networks’) between policymakers and powerful actors. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour particular sources of evidence and some participants over others, and there is a language – indicating what ways of thinking are in good ‘currency’ – that takes time to learn.
  • A tendency for certain ‘core beliefs’ or ‘paradigms’ to dominate discussion. Well-established beliefs provide the context for policymaking: new evidence on the effectiveness of a policy solution has to be accompanied by a shift of attention and successful persuasion.
  • Policy conditions and events that can reinforce stability or prompt policymaker attention to lurch at short notice. In some cases, social or economic ‘crises’ can prompt lurches of attention from one issue to another, and some forms of evidence can be used to encourage that shift, but major policy change is rare.

For stakeholders, an effective engagement strategy is not straightforward: it takes time to know ‘where the action is’, how and where to engage with policymakers, and with whom to form coalitions. For the Commission, it is difficult to know what will happen to policy after it is made (although we know the end point will not resemble the starting point). For the Parliament, it is difficult even to know where to look.

  4. Recognise that EBPM is only one of many legitimate ‘good governance’ principles.

There are several principles of ‘good’ policymaking and only one is EBPM. Others relate to the value of pragmatism and consensus building, combining science advice with public values, improving policy delivery by generating ‘ownership’ of policy among key stakeholders, and sharing responsibility with elected national and local policymakers.

Our choices about which principles and forms of evidence to privilege are inextricably linked. For example, some forms of evidence gathering seem to require uniform models and limited local or stakeholder discretion to modify policy delivery. The classic example is a programme whose value is established using randomised control trials (RCTs). Others begin with local discretion, seeking evidence from stakeholders, professional groups, service user and local practitioner experience. This principle seems to rule out the use of RCTs, at least as a source of a uniform model to be rolled out and evaluated. Of course, one can try to pursue both approaches and a compromise between them, but the outcome may not satisfy advocates of either approach to EBPM or help produce the evidence that they favour.

  5. Decide how far you’ll go to achieve EBPM.

These insights should prompt us to consider how far we are willing, and ought, to go to promote the use of certain forms of evidence in policymaking:

  • If policymakers and the public are emotional decision-makers, should we seek to manipulate their thought processes by using simple stories with heroes, villains, and clear but rather simplistic morals?
  • If policymaking systems are so complex, should stakeholders devote huge amounts of resources to make sure they’re effective at each stage?
  • Should proponents of scientific evidence go to great lengths to make sure that EBPM is based on a hierarchy of evidence? There is a live debate on science advice to government on the extent to which scientists should be more than ‘honest brokers’.
  • Should policymakers try to direct the use of evidence in policy as well as policy itself?

Where we go from there is up to you

The value of policy theory to this topic is to help us reject simplistic models of EBPM and think through the implications of more sophisticated and complicated processes. It does not provide a blueprint for action (how could it?), but instead a series of questions that you should answer when you seek to use evidence to get what you want. They are political choices based on value judgements, not issues that can be resolved by producing more evidence.



Filed under Evidence Based Policymaking (EBPM), public policy

Evidence Based Policy Making: 5 things you need to know and do

These are some opening remarks for my talk on EBPM at Open Society Foundations (New York), 24th October 2016. The OSF recorded the talk, so you can listen below, externally, or by right clicking and saving. Please note that it was a lunchtime talk, so the background noises are plates and glasses.

‘Evidence based policy making’ is a good political slogan, but not a good description of the policy process. If you expect to see it, you will be disappointed. If you seek more thoughtful ways to understand and act within political systems, you need to understand five key points then decide how to respond.

  1. Decide what it means.

EBPM looks like a valence issue in which most of us agree that policy and policymaking should be ‘evidence based’ (perhaps like ‘evidence based medicine’). Yet, valence issues only command broad agreement on vague proposals. By defining each term we highlight ambiguity and the need to make political choices to make sense of key terms:

  • Should you use restrictive criteria to determine what counts as ‘evidence’ and scientific evidence?
  • Which metaphor, evidence based or informed, describes how pragmatic you will be?
  • The unclear meaning of ‘policy’ prompts you to consider how far you’d go to pursue EBPM, from a one-off statement of intent by a key actor, to delivery by many actors, to the sense of continuous policymaking requiring us to be always engaged.
  • Policymaking is done by policymakers, but many are unelected and the division between policy maker/ influencer is often unclear. So, should you seek to influence policy by influencing influencers?
  2. Respond to ‘rational’ and ‘irrational’ thought.

‘Comprehensive rationality’ describes the absence of ambiguity and uncertainty when policymakers know what problem they want to solve and how to solve it, partly because they can gather and understand all information required to measure the problem and determine the effectiveness of solutions.

Instead, we talk of ‘bounded rationality’ and how policymakers deal with it. They employ two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, habits, and familiarity to make decisions quickly.

I say ‘irrational’ provocatively, to raise a key strategic question: do you criticise emotional policymaking (describing it as ‘policy based evidence’) and try somehow to minimise it, adapt pragmatically to it, or see ‘fast thinking’ more positively in terms of ‘fast and frugal heuristics’? Regardless, policymakers will think that their heuristics make sense to them, and it can be counterproductive to simply criticise their alleged irrationality.

  3. Think about how to engage in complex systems or policy environments.

‘Policy cycle’ describes the idea that there is a core group of policymakers at the ‘centre’, making policy from the ‘top down’, and pursuing their goals in a series of clearly defined and well-ordered stages, such as: agenda setting, policy formulation, legitimation, implementation, and evaluation. In this context, one might identify how to influence a singular point of central government decision.

However, a cycle model does not describe policymaking well. Instead, we tend to identify the role of less ordered and more unpredictable complex systems, or policy environments containing:

  • A wide range of actors (individuals and organisations) influencing policy at many levels of government. Scientists and practitioners are competing with many actors to present evidence in a particular way to secure a policymaker audience.
  • A proliferation of rules and norms maintained by different levels or types of government. Support for particular ‘evidence based’ solutions varies according to which organisation takes the lead and how it understands the problem.
  • Important relationships (‘networks’) between policymakers and powerful actors. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour particular sources of evidence and some participants over others, and there is a language – indicating what ways of thinking are in good ‘currency’ – that takes time to learn.
  • A tendency for certain ‘core beliefs’ or ‘paradigms’ to dominate discussion. Well-established beliefs provide the context for policymaking: new evidence on the effectiveness of a policy solution has to be accompanied by a shift of attention and successful persuasion.
  • Policy conditions and events that can reinforce stability or prompt policymaker attention to lurch at short notice. In some cases, social or economic ‘crises’ can prompt lurches of attention from one issue to another, and some forms of evidence can be used to encourage that shift, but major policy change is rare.

These factors suggest that an effective engagement strategy is not straightforward: our instinct may be to influence elected policymakers at the ‘centre’ making authoritative choices, but the ‘return on investment’ is not clear. So, you need to decide how and where to engage, but it takes time to know ‘where the action is’ and with whom to form coalitions.

  4. Recognise that EBPM is only one of many legitimate ‘good governance’ principles.

There are several principles of ‘good’ policymaking and only one is EBPM. Others relate to the value of pragmatism and consensus building, combining science advice with public values, improving policy delivery by generating ‘ownership’ of policy among key stakeholders, and sharing responsibility with elected local policymakers.

Our choices of which principle to emphasise and which forms of evidence to privilege are inextricably linked. For example, some forms of evidence gathering seem to require uniform models and limited local or stakeholder discretion to modify policy delivery. The classic example is a programme whose value is established using randomised control trials (RCTs). Others begin with local discretion, seeking evidence from service user and local practitioner experience. This principle seems to rule out the use of RCTs. Of course, one can try to pursue both approaches and a compromise between them, but the outcome may not satisfy advocates of either approach or help produce the evidence that they favour.

  5. Decide how far you’ll go to achieve EBPM.

These insights should prompt us to see how far we are willing, and should, go to promote the use of certain forms of evidence in policymaking. For example, if policymakers and the public are emotional decision-makers, should we seek to manipulate their thought processes by using simple stories with heroes, villains, and clear but rather simplistic morals? If policymaking systems are so complex, should we devote huge amounts of resources to make sure we’re effective? Kathryn Oliver and I also explore the implications for proponents of scientific evidence, and there is a live debate on science advice to government on the extent to which scientists should be more than ‘honest brokers’.

Where we go from there is up to you

The value of policy theory to this topic is to help us reject simplistic models of EBPM and think through the implications of more sophisticated and complicated processes. It does not provide a blueprint for action (how could it?), but instead a series of questions that you should answer when you seek to use evidence to get what you want. They are political choices based on value judgements, not issues that can be resolved by producing more evidence.



Filed under Evidence Based Policymaking (EBPM)

The theory and practice of evidence-based policy transfer: can we learn how to reduce territorial inequalities?

I am now part of a large EU-funded Horizon2020 project called IMAJINE (Integrative Mechanisms for Addressing Spatial Justice and Territorial Inequalities in Europe), which begins in January 2017. It is led by Professor Michael Woods at Aberystwyth University and has a dozen partners across the EU. I’ll be leading one work package in partnership with Professor Michael Keating.


The aim in our ‘work package’ is deceptively simple: generate evidence to identify how EU countries try to reduce territorial inequalities, see who is the most successful, and recommend the transfer of that success to other countries.

Life is not that simple, though, is it?! If it were, we’d know for sure what ‘territorial inequalities’ are, what causes them, what governments are willing to do to reduce them, and if they’ll succeed if they really try.

Instead, here are some of the problems you encounter along the way, including an inability to identify:

  • Which policies are designed explicitly to reduce inequalities. Instead, we piece together many intentions, actions, instruments, and outputs, at many levels and types of government, and call it ‘policy’.
  • The link between ‘policy’ and policy outcomes, because many factors interact to produce those outcomes.
  • Success. Even if we could solve the methodological problems, to separate cause and effect, we face a political problem about choosing measures to evaluate and report success.
  • Good ways to transfer successful policies. A policy is not like a #gbbo cake, in which you can produce a great product and give out the recipe. In that scenario, you can assume that we all have the same aims (we all want cake, and of course chocolate is the best), starting point (basically the same shops and kitchens), and language to describe the task (use loads of sugar and cocoa). In policy, governments describe and seek to solve similar-looking problems in very different ways and, if they look elsewhere for lessons, those insights have to be relevant to their context (and the evidence-gathering process has to fit their idea of good governance). They also ‘transfer’ some policies while maintaining their own, and a key finding from our previous work is that governments simultaneously pursue policies to reduce inequalities and undermine their inequality-reducing policies.

So, academics like me tend to spend their time highlighting problems, explaining why such processes are not ‘evidence-based’, and identifying all the things that will go wrong from your perspective if you think policymaking and policy transfer can ever be straightforward.

Yet, policymakers do not have the luxury of identifying problems, finding them interesting, then going home. Instead, they have to make decisions in the face of ambiguity (what problem are they trying to solve?), uncertainty (evidence will help, but will always be limited), and limited time.

So, academics like me are now increasingly focused on trying to help address the problems we raise. On the plus side, it prompts us to speak with policymakers from start to finish, to try to understand what evidence they’re interested in and how they’ll use it. On the less positive side (at least if you are a purist about research), it might prompt all sorts of compromises about how to combine research and policy advice if you want policymakers to use your evidence (on, for example, the line between science and advice, and the blurry boundaries between evidence and advice). If you are interested, please let me know, or follow the IMAJINE category on this site (and #IMAJINE).

See also:

New EU study looks at gap between rich and poor

New research project examines regional inequalities in Europe

Understanding the transfer of policy failure: bricolage, experimentalism and translation by Diane Stone

 


Filed under Evidence Based Policymaking (EBPM), IMAJINE, public policy

Writing a policy paper and blog post #POLU9UK

It can be quite daunting to produce a policy analysis paper or blog post for the first time. You learn about the constraints of political communication by being obliged to explain your ideas in an unusually small number of words. The short word length seems good at first, but then you realise that it makes your life harder: how can you fit all your evidence and key points in? The answer is that you can’t. You have to choose what to say and what to leave out.

You also have to make this presentation ‘not about you’. In a long essay or research report you have time to show how great you are, to a captive audience. In a policy paper, imagine that you are trying to get the attention and support of someone who may not know or care about the issue you raise. In a blog post, your audience might stop reading at any point, so every sentence counts.

There are many guides out there to help you with the practical side, including the broad guidance I give you in the module guide, and Bardach’s 8-steps. In each case, the basic advice is to (a) identify a policy problem and at least one feasible solution, and (b) tailor the analysis to your audience.


Be concise, be smart

So, for example, I ask you to keep your analysis and presentations super-short on the assumption that you have to make your case quickly to people with 99 other things to do. What can you tell someone in a half-page (to get them to read all 2 pages)? Could you explain and solve a problem if you suddenly bumped into a government minister in a lift/ elevator?

It is tempting to try to tell someone everything you know, because everything is connected and to simplify is to describe a problem simplistically. Instead, be smart enough to know that such self-indulgence won’t impress your audience. They might smile politely, but their eyes are looking at the elevator lights.

Your aim is not to give a full account of a problem – it’s to get someone important to care about it.

Your aim is not to give a painstaking account of all possible solutions – it’s to give a sense that at least one solution is feasible and worth pursuing.

Your guiding statement should be: policymakers will only pay attention to your problem if they think they can solve it, and only if the solution is not too costly.

Be creative

I don’t like to give you too much advice because I want you to be creative about your presentation; to be confident enough to take chances and feel that I’ll reward you for making the leap. At the very least, you have three key choices to make about how far you’ll go to make a point:

  1. Who is your audience? Our discussion of the limits to centralised policymaking suggests that your most influential audience will not necessarily be a UK government minister – but who else would it be?
  2. How manipulative should you be? Our discussions of ‘bounded rationality’ and ‘evidence-based policymaking’ suggest that policymakers combine ‘rational’ and ‘irrational’ shortcuts to gather information and make choices. So, do you appeal to their desire to set goals and gather a lot of scientific information and/or make an emotional and manipulative appeal?
  3. Are you an advocate or an ‘honest broker’? Contemporary discussions of science advice to government highlight unresolved debates about the role of unelected advisors: should you simply lay out some possible solutions or advocate one solution strongly?

Be reflective

For our purposes, there are no wrong answers to these questions. Instead, I want you to make and defend your decisions. That is the aim of your policy paper ‘reflection’: to ‘show your work’.

You still have some room to be creative: tell me what you know about policy theory and British politics and how it informed your decisions. Here are some examples, but it is up to you to decide what to highlight:

  • Show how your understanding of policymaker psychology helped you decide how to present information on problems and solutions.
  • Extract insights from policy theories, such as from punctuated equilibrium theory on policymaker attention, multiple streams analysis on timing and feasibility, or the NPF on how to tell persuasive stories.
  • Explore the implications of the lack of ‘comprehensive rationality’ and absence of a ‘policy cycle’: feasibility is partly about identifying the extent to which a solution is ‘doable’ when central governments have limited powers. What ‘policy style’ or policy instruments would be appropriate for the solution you favour?

Be a blogger

With a blog post, your audience is wider. You are trying to make an argument that will capture the attention of a more general audience (interested in politics and policy, but not specialist) that might reach your post from Twitter/ Facebook or via a search engine. This produces new requirements: present a ‘punchy’ title which sums up the whole argument in under 140 characters (a statement is often better than a vague question); summarise the whole argument in (say) 100 words in the first paragraph (what is the problem and solution?); and provide more information up to a maximum of 500 words. The reader can then be invited to read the whole policy analysis.

The style of blog posts varies markedly, so you should consult many examples before attempting your own (compare the LSE with The Conversation and newspaper columns to get a sense of variations in style). When you read other posts, take note of their strengths and weaknesses. For example, many posts associated with newspapers introduce a personal or case study element to ground the discussion in an emotional appeal. Sometimes this works, but sometimes it causes the reader to scroll down quickly to find the main argument. Consider whether it is as, or more, effective to make your argument more direct and easy to find as soon as someone clicks the link on their phone. Many academic posts are too long (well beyond your 500-word limit), take too long to get to the point, and do not make explicit recommendations, so you should not merely emulate them. You should also not just chop down your policy paper – this is about a new kind of communication.

Be reflective once again

Hopefully, by the end, you will appreciate the transferable life skills. I have generated some uncertainty about your task to reflect the sense among many actors that they don’t really know how to make a persuasive case or whom to make it to. We can follow some basic Bardach-style guidance, but a lot of this kind of work relies on trial-and-error. I maintain a short word count to encourage you to get to the point, and I bang on about ‘stories’ in our module to encourage you to make a short and persuasive story to policymakers.

This process seems weird at first, but isn’t it also intuitive? For example, next time you’re in my seminar, measure how long it takes you to get bored and look forward to the weekend. Then imagine that policymakers have the same attention span as you. That’s how long you have to make your case!

See also: Professionalism online with social media

Here is the advice that my former lecturer, Professor Brian Hogwood, gave in 1992. Has the advice changed much since then?

[Photographs of Brian Hogwood’s 1992 advice]

 


Filed under Evidence Based Policymaking (EBPM), Folksy wisdom, POLU9UK