Tag Archives: Policy studies

Writing an essay on politics, policymaking, and policy change

I tend to set this simple-looking question for coursework in policy modules: what is policy, how much has it changed, and why? Students get to choose the policy issue, timeframe (and sometimes the political system), and relevant explanatory concepts.

On the face of it, it looks super-simple: A+ for everyone!

Give it a few more seconds, and you can see the difficulties:

  1. We spent a lot of time agreeing that it seems almost impossible to define policy (explained in 1000 Words and 500 Words)
  2. There are a gazillion possible measures of policy change (1000 Words and 500 Words)
  3. There is an almost unmanageable number of models, concepts, and theories to use to explain policy dynamics (I describe about 25 in 1000 Words each)

I try to encourage some creativity when solving this problem, but also advise students to keep their discussion as simple and jargon-free as possible (often by stretching an analogy with diving, in which a well-executed simple essay can score higher than a belly-flopped hard essay).

Choosing a format: the initial advice

  1. Choose a policy area (such as health) or issue (such as alcohol policy).
  2. Describe the nature of policy, and the extent of policy change, in a particular time period (such as in the post-war era, since UK devolution, or since a change in government).
  3. Select one or more policy concepts or theories to help structure your discussion and explain how and why policy has changed.

For example, a question might be: What is tobacco policy in the UK, how much has it changed since the 1980s, and why? I use this example because I try to answer that – UK and global – question myself, even though my 2007 article on the UK is too theory-packed to be a good model for an undergraduate essay.

Choosing a format: the cautionary advice

You may be surprised by how difficult it is to answer a simple question like ‘what is policy?’, and I will give you considerable credit for considering how to define and measure it: by identifying, for example, the use of legislation/regulation, funding, staff, and ‘nodality’, and/or by considering the difference between, say, policy as a statement of intent and policy as a long-term outcome. In turn, a good description and explanation of policy change is difficult. If you are feeling ambitious, you can go further to compare, say, two issues (such as tobacco and alcohol) or places (such as UK Government policy and the policy of another country), but sometimes a simple and narrow discussion can be as, or more, effective. Similarly, you can use many theories or concepts to aid explanation, but often one theory will do. Note that (a) your description of your research question, and your essay structure, is more important than (b) your decision on which topic to focus on or which concepts to use.

Choosing a topic: the ‘joined up’ advice

The wider aim is to encourage students to think about the relationship between different perspectives on policy theory and analysis. For example, in a blog and policy analysis paper they try to generate attention to a policy problem and advocate a solution. Then, they draw on policy theories and concepts to reflect on their papers, highlighting (say): the need to identify the most important audience; the importance of framing issues with a mixture of evidence and emotional appeals; and, the need to present ‘feasible’ solutions.

The reflection can provide a useful segue to the essay, since we’re already identifying important policy problems, advocating change, reflecting on how best to encourage it – such as by presenting modest objectives – and then, in the essay, trying to explain (say) why governments have not taken that advice in the past. Their interest in the policy issue can prompt interest in researching the issue further; their knowledge of the issue and the policy process can help them develop politically-aware policy analysis. All going well, it produces a virtuous circle.

Some examples from my pet subject

Let me outline how I would begin to answer the three questions with reference to UK tobacco policy. I’m offering a brief summary of each section rather than presenting a full essay with more detail (partly to hold on to that idea of creativity – I don’t want students to use this description as a blueprint).

What is modern UK tobacco policy?

Tobacco policy in the UK is now one of the most restrictive in the world. The UK government has introduced a large number of policy instruments to encourage a major reduction of smoking in the population. They include: legislation to ban smoking in public places; legislation to limit tobacco advertising, promotion, and sponsorship; high taxes on tobacco products; unequivocal health education; regulations on tobacco ingredients; significant spending on customs and enforcement measures; and, plain packaging measures.

[Note that I selected only a few key measures to define policy. A fuller analysis might expand on why I chose them and why they are so important].

How much has policy changed since the 1980s?

Policy has changed radically since the post-war period. Most policy change began from the 1980s, but it was not until the 2000s that the UK cemented its place as one of the most restrictive countries. The shift from the 1980s relates strongly to the replacement of voluntary agreements and weakly enforced measures with legislation and stronger enforcement. The legislation to ban tobacco advertising, passed in 2002, replaced limited bans combined with voluntary agreements to (for example) keep billboards a certain distance from schools. The legislation to ban smoking in public places, passed in 2006 (2005 in Scotland), replaced voluntary measures which allowed smoking in most pubs and restaurants. Plain packaging measures, combined with large and graphic health warnings, replaced branded packets which once had no warnings. Health education warnings have gone from stating the facts and inviting smokers to decide, and the promotion of harm reduction (smoke ‘low tar’), to an unequivocal message on the harms of smoking and passive smoking.

[Note that I describe these changes in broad terms. Other articles might ‘zoom in’ on specific instruments to show how exactly they changed].

Why has it changed?

This is the section of the essay in which we have to make a judgement about the type of explanation: should you choose one or many concepts; if many, do you focus on their competing or complementary insights; should you provide an extensive discussion of your chosen theory?

I normally recommend a very small number of concepts, or a simple discussion, largely because there is only so much you can say in an essay of 2,000-3,000 words.

For example, a simple ‘hook’ is to ask if the main driver was the scientific evidence: did policy change as the evidence on smoking (and then passive smoking) related harm became more apparent? Is it a good case of ‘evidence based policymaking’? The answer may then note that policy change seemed to be 20-30 years behind the evidence [although I’d have to explain that statement in more depth] and set out the conditions in which this driver would have an effect.

In short, one might identify the need for a ‘policy environment’, shaped by policymakers, and conducive to a strong policy response based on the evidence of harm and a political choice to restrict tobacco use. It would relate to decisions by policymakers to: frame tobacco as a public health epidemic requiring a major government response (rather than primarily as an economic good or issue of civil liberties); place health departments or organisations at the heart of policy development; form networks with medical and public health groups at the expense of tobacco companies; and respond to greater public support for control, reduced smoking prevalence, and the diminishing economic value of tobacco.

This discussion can proceed conceptually, in a relatively straightforward way, or with the further aid of policy theories which ask further questions and help structure the answers.

For example, one might draw on punctuated equilibrium theory to help describe and explain shifts of public/media/policymaker attention to tobacco, from low and positive in the 1950s to high and negative from the 1980s.

Or, one might draw on the Advocacy Coalition Framework (ACF) to explain how pro-tobacco coalitions helped slow down policy change by interpreting new scientific evidence through the ‘lens’ of well-established beliefs or approaches (examples from the 1950s include filter tips, low tar brands, and ventilation as alternatives to greater restrictions on smoking).

One might even draw on multiple streams analysis to identify a ‘window of opportunity’ for change (as I did when examining the adoption of bans on smoking in public places).

Any of these approaches will do, as long as you describe and justify your choice well. One cannot explain everything, so it may be better to try to explain one thing well.


Filed under 1000 words, 500 words, POLU9UK, tobacco, tobacco policy, UK politics and policy

How can political actors take into account the limitations of evidence-based policy-making? 5 key points

These notes are for my brief panel talk at the European Parliament-European University Institute ‘Policy Roundtable’: Evidence and Analysis in EU Policy-Making: Concepts, Practice and Governance. As you can see from the programme description, the broader theme is about how EU institutions demonstrate their legitimacy through initiatives such as stakeholder participation and evidence-based policymaking (EBPM). So, part of my talk is about what happens when EBPM does not exist.

The post is a slightly modified version of my (recorded) talk for Open Society Foundations (New York) but different audiences make sense of these same basic points in very different ways.

  1. Recognise that the phrase ‘evidence-based policy-making’ means everything and nothing

The main limitation to ‘evidence-based policy-making’ is that no-one really knows what it is or what the phrase means. So, each actor makes sense of EBPM in different ways and you can tell a lot about each actor by the way in which they answer these questions:

  • Should you use restrictive criteria to determine what counts as ‘evidence’? Some actors equate evidence with scientific evidence and adhere to specific criteria – such as evidence-based medicine’s hierarchy of evidence – to determine what is scientific. Others have more respect for expertise, professional experience, and stakeholder and service user feedback as sources of evidence.
  • Which metaphor, evidence based or informed, is best? Experienced policy participants often reject ‘evidence based’ as unrealistic, preferring ‘informed’ to reflect pragmatism about mixing evidence and political calculations.
  • How far do you go to pursue EBPM? It is unrealistic to treat ‘policy’ as a one-off statement of intent by a single authoritative actor. Instead, it is made and delivered by many actors in a continuous policymaking process within a complicated policy environment (outlined in point 3). This is relevant to EU institutions with limited resources: the Commission often makes key decisions but relies on Member States to make and deliver policy, and the Parliament may only have the ability to monitor ‘key decisions’. It is also relevant to stakeholders trying to ensure the use of evidence throughout the process, from supranational to local action.
  • Which actors count as policymakers? Policymaking is done by ‘policymakers’, but many are unelected and the division between policymaker/influencer is often unclear. The study of policymaking involves identifying networks of decision-making by elected and unelected policymakers and their stakeholders, while the actual practice is about deciding where to draw the line between influence and action.
  2. Respond to ‘rational’ and ‘irrational’ thought.

‘Comprehensive rationality’ describes the absence of ambiguity and uncertainty when policymakers know what problem they want to solve and how to solve it, partly because they can gather and understand all information required to measure the problem and determine the effectiveness of solutions.

Instead, we talk of ‘bounded rationality’ and how policymakers deal with it. They employ two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, habits, and familiarity to make decisions quickly.

I say ‘irrational’ provocatively, to raise a key strategic question: do you criticise emotional policymaking (describing it as ‘policy based evidence’) and try somehow to minimise it, adapt pragmatically to it, or see ‘fast thinking’ more positively in terms of ‘fast and frugal heuristics’? Regardless, policymakers will think that their heuristics make sense to them, and it can be counterproductive to simply criticise their alleged irrationality.

  3. Think about how to engage in complex systems or policy environments.

‘Policy cycle’ describes the idea that there is a core group of policymakers at the ‘centre’, making policy from the ‘top down’, and pursuing their goals in a series of clearly defined and well-ordered stages, such as: agenda setting, policy formulation, legitimation, implementation, and evaluation. In this context, one might identify how to influence a singular point of central government decision.

However, a cycle model does not describe policymaking well. Instead, we tend to identify the role of less ordered and more unpredictable complex systems, or policy environments containing:

  • A wide range of actors (individuals and organisations) influencing policy at many levels of government. Scientists and practitioners are competing with many actors to present evidence in a particular way to secure a policymaker audience.
  • A proliferation of rules and norms maintained by different levels or types of government. Support for particular ‘evidence based’ solutions varies according to which organisation takes the lead and how it understands the problem.
  • Important relationships (‘networks’) between policymakers and powerful actors. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour particular sources of evidence and some participants over others, and there is a language – indicating what ways of thinking are in good ‘currency’ – that takes time to learn.
  • A tendency for certain ‘core beliefs’ or ‘paradigms’ to dominate discussion. Well-established beliefs provide the context for policymaking: new evidence on the effectiveness of a policy solution has to be accompanied by a shift of attention and successful persuasion.
  • Policy conditions and events that can reinforce stability or prompt policymaker attention to lurch at short notice. In some cases, social or economic ‘crises’ can prompt lurches of attention from one issue to another, and some forms of evidence can be used to encourage that shift, but major policy change is rare.

For stakeholders, an effective engagement strategy is not straightforward: it takes time to know ‘where the action is’, how and where to engage with policymakers, and with whom to form coalitions. For the Commission, it is difficult to know what will happen to policy after it is made (although we know the end point will not resemble the starting point). For the Parliament, it is difficult even to know where to look.

  4. Recognise that EBPM is only one of many legitimate ‘good governance’ principles.

There are several principles of ‘good’ policymaking and only one is EBPM. Others relate to the value of pragmatism and consensus building, combining science advice with public values, improving policy delivery by generating ‘ownership’ of policy among key stakeholders, and sharing responsibility with elected national and local policymakers.

Our choice of which principle and forms of evidence to privilege are inextricably linked. For example, some forms of evidence gathering seem to require uniform models and limited local or stakeholder discretion to modify policy delivery. The classic example is a programme whose value is established using randomised control trials (RCTs). Others begin with local discretion, seeking evidence from stakeholders, professional groups, service user and local practitioner experience. This principle seems to rule out the use of RCTs, at least as a source of a uniform model to be rolled out and evaluated. Of course, one can try to pursue both approaches and a compromise between them, but the outcome may not satisfy advocates of either approach to EBPM or help produce the evidence that they favour.

  5. Decide how far you’ll go to achieve EBPM.

These insights should prompt us to consider how far we are willing to go, and how far we should go, to promote the use of certain forms of evidence in policymaking.

  • If policymakers and the public are emotional decision-makers, should we seek to manipulate their thought processes by using simple stories with heroes, villains, and clear but rather simplistic morals?
  • If policymaking systems are so complex, should stakeholders devote huge amounts of resources to make sure they’re effective at each stage?
  • Should proponents of scientific evidence go to great lengths to make sure that EBPM is based on a hierarchy of evidence? There is a live debate on science advice to government about the extent to which scientists should be more than ‘honest brokers’.
  • Should policymakers try to direct the use of evidence in policy as well as policy itself?

Where we go from there is up to you

The value of policy theory to this topic is to help us reject simplistic models of EBPM and think through the implications of more sophisticated and complicated processes. It does not provide a blueprint for action (how could it?), but instead a series of questions that you should answer when you seek to use evidence to get what you want. They are political choices based on value judgements, not issues that can be resolved by producing more evidence.



Filed under Evidence Based Policymaking (EBPM), public policy

Storytelling for Policy Change: promise and problems

I went to a fantastic workshop on storytelling for policy change. It was hosted by Open Society Foundations New York (25/6 October), and brought together a wide range of people from different backgrounds: Narativ, people experienced in telling their own story, advocacy and professional groups using stories to promote social or policy change, major funders, journalists, and academics. There was already a lot of goodwill in the room at the beginning, and by the end there was more than a lot!

The OSF plans to write up a summary of the whole discussion, so my aim is to highlight the relevance for ‘evidence-based policymaking’ and for scientists and academics seeking more ‘impact’ for their research. In short, although I recommend that scientists ‘turn a large amount of scientific evidence into simple and effective stories that appeal to the biases of policymakers’, it’s easier said than done, and not something scientists are trained in. Good storytellers might enthuse people already committed to the idea of storytelling for policy, but what about scientists more committed to the language of scientific evidence and perhaps sceptical about the need to develop this new skill (particularly those who describe stories pejoratively as ‘anecdata’)? What would make them take a leap in the dark, to give up precious research time to develop skills in storytelling?

So, let me tell you why I thought the workshop was brilliant – including outlining its key insights – and why you might not!

Why I thought it was brilliant

Academic conferences can be horrible: a seemingly never-ending list of panels with 4-5 paper givers and a discussant, in which too-long, often self-indulgent PowerPoint presentations take up almost all of the talking time and leave little room for meaningful discussion. It’s a test of meeting deadlines for the presenter and an endurance test for the listener.

This workshop was different: the organisers thought about what it means to talk and listen, and therefore how to encourage people to talk in an interesting way and encourage high attention and engagement.

There were three main ‘listening exercises’: a personal exercise in which you closed your eyes and thought about the obstacles to listening (I confess that I cheated on that one); a paired exercise in which one person listened and thought of three poses to sum up the other’s short story; and a group exercise in which people paired up, told and then summarised each other’s stories, and spoke as a group about the implications.

This final exercise was powerful: we told often-revealing stories to strangers, built up trust very quickly, and became emotionally involved in each other’s accounts. It was interesting to watch how quickly we could become personally invested in each other’s discussion, form networks, and listen intently to each other.

For me, it was a good exercise in demonstrating what you need in a policymaker audience: ideally, they should care about the problem you raise, be personally invested in trying to solve it, and trust you and therefore your description of the most feasible solutions. If it helps recreate these conditions, a storytelling scientist may be more effective than an ‘honest broker’. Without a good story to engage your audience, your evidence will be like a drop in the ocean and your audience might be checking its email or playing Pokemon Go while you present.

Key insights and impressions

Most participants expressed strong optimism about the effect of stories on society and policy, particularly when the aim is more expressive than instrumental: the act itself of telling one’s story and being heard can be empowering, particularly within marginalised groups from which we hear few voices. It can also be remarkably powerful, remarkably quickly: most of us were crying or laughing instantly and frequently as we heard many moving stories about many issues. It’s hard to overstate just how effective many of these stories were when you heard them in person.

When discussing more instrumental concerns – can we use a story to get what we want? – the optimism was more cautious and qualified. Key themes included:

  • The balance between effective and ethical storytelling, accepting that if a story is a commodity it can be used by people less sympathetic to our aims, and exploring the ethics of using the lens of sympathetic characters (e.g. white grandparents) to make the case for marginalised groups.
  • This ethical dimension was reinforced continuously by stories of vulnerable storytellers and the balance between telling their story and protecting their safety.
  • The importance of context: the same story may have more or less impact depending on the nature of the audience; many examples conveyed the sense that a story with huge impact now would have failed 10 or 20 years ago and/or in a different region.
  • The importance of tailoring stories to the biases of audiences and trying to reframe the implications of your audience’s beliefs (one particularly interesting example was of portraying equal marriage in Ireland as the Christian thing to do).
  • Many campaigns used humour and positive stories with heroes, based on the assumption that audiences would be put off by depressing stories of problems with no obvious solution but energised by a message of new possibilities.
  • Many warned against stories that were too personal, identifying the potential for an audience to want to criticise or fix an individual’s life rather than solve a systemic problem (and this individualistic interpretation was most pronounced among people identifying with right-wing parties).
  • Many described the need to experiment/ engage in trial-and-error to identify what works with each audience, including the length of written messages, choice of media, and choice of ‘thin’ stories with clear messages to generate quick attention or ‘thick’ stories which might be more memorable if we have the resources to tell them and people take the time to listen.

Many of these points will seem familiar if you study psychology or the psychology of policymaking. So, the benefit of these experiences is that they tell us how people have applied such insights and how it has helped their cause. Most speakers were confident that they were making an impact.

Why you may not be as impressed: two reasons

The first barrier to getting you enthusiastic is that you weren’t there. If emotional engagement is such a key part of storytelling, and you weren’t there to hear it, why would you care? So, a key barrier to making an ‘impact’ with storytelling is that it is difficult to increase its scale. You might persuade someone if they spent enough time with you, but what if you only had a few seconds in which to impress them or, worse still, you couldn’t impress them because they weren’t interested in the first place? Our worry may be that we can only influence people who are already open to our idea. This isn’t the end of the world, since a key political aim may be to enthuse people who share your beliefs and get them to act (for example, to spread the word to their friends). However, it prompts us to wonder about the varying effect of the same message and the extent to which our message’s power comes from our audience rather than our story.

The second barrier is that, when the question is framed for an academic audience – what is the scientific evidence on the impact of stories? – the answer is not clear.

On the panel devoted to this question (and in a previous session), there were some convincing accounts of the impact of initiatives such as: the Women’s Policy Institute ‘grass roots’ training in California (leading to advocacy prompting two dozen bills to be signed over 13 years); Purpose’s branding campaign for the White Helmets (including the miracle baby video which has received tens of millions of views); and, the FrameWorks Institute’s ability to change minds with very brief interventions (for example, getting people to think in terms of problem systems more than problem individuals in areas like criminal justice).

However, the academic analysis – with contributions from Francesca Polletta, Jeff Niederdeppe, Douglas Storey, Michael Jones – tended to stress caution or note limited effects:

  • Stories work when they create an empathic reaction, but intensely personal experiences are difficult to recreate when mediated (soap operas and ‘edutainment’ come closest).
  • Randomised control trials suggest that stories have a measurable effect, but it is small and measured against no intervention at all rather than against a competing approach such as providing evidence in reports (note that most experimental studies do not draw on skilful storytellers).
  • Stories work most clearly when they reinforce the beliefs of your allies (and when you refer to heroes, not villains), but the effects are indirect at best with your opponents.

More research required?!

So, you might want more convincing evidence before you take that giant leap to train to become a skilful storyteller: why go for it when its effects are so unclear and difficult to measure?

For me, that response might seem sensible but is also a cop-out: unequivocal evidence may never arrive, and good science often involves researching as you go. A key insight into policymaking concerns a continuous sense of urgency to solve problems: policymakers don’t wait for the evidence to become unequivocal before they act, partly because that sense of clarity may never happen. They feel the need to act on the basis of available evidence. Perhaps scientists should at least think about doing the same when they seek to act on research rather than simply do the research: how long should you postpone potentially valuable action with the old cliché ‘more research required’?



Filed under Evidence Based Policymaking (EBPM), public policy, Storytelling

Evidence Based Policy Making: 5 things you need to know and do

These are some opening remarks for my talk on EBPM at Open Society Foundations (New York), 24th October 2016. The OSF recorded the talk, so you can listen below, externally, or by right clicking and saving. Please note that it was a lunchtime talk, so the background noises are plates and glasses.

‘Evidence based policy making’ is a good political slogan, but not a good description of the policy process. If you expect to see it, you will be disappointed. If you seek more thoughtful ways to understand and act within political systems, you need to understand five key points then decide how to respond.

  1. Decide what it means.

EBPM looks like a valence issue in which most of us agree that policy and policymaking should be ‘evidence based’ (perhaps like ‘evidence based medicine’). Yet, valence issues only command broad agreement on vague proposals. By defining each term we highlight ambiguity and the need to make political choices to make sense of key terms:

  • Should you use restrictive criteria to determine what counts as ‘evidence’ and scientific evidence?
  • Which metaphor, evidence based or informed, describes how pragmatic you will be?
  • The unclear meaning of ‘policy’ prompts you to consider how far you’d go to pursue EBPM, from a one-off statement of intent by a key actor, to delivery by many actors, to the sense of continuous policymaking requiring us to be always engaged.
  • Policymaking is done by policymakers, but many are unelected and the division between policymaker/influencer is often unclear. So, should you seek to influence policy by influencing influencers?
  2. Respond to ‘rational’ and ‘irrational’ thought.

‘Comprehensive rationality’ describes the absence of ambiguity and uncertainty when policymakers know what problem they want to solve and how to solve it, partly because they can gather and understand all information required to measure the problem and determine the effectiveness of solutions.

Instead, we talk of ‘bounded rationality’ and how policymakers deal with it. They employ two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, habits, and familiarity to make decisions quickly.

I say ‘irrational’ provocatively, to raise a key strategic question: do you criticise emotional policymaking (describing it as ‘policy based evidence’) and try somehow to minimise it, adapt pragmatically to it, or see ‘fast thinking’ more positively in terms of ‘fast and frugal heuristics’? Regardless, policymakers will think that their heuristics make sense to them, and it can be counterproductive to simply criticise their alleged irrationality.

  3. Think about how to engage in complex systems or policy environments.

‘Policy cycle’ describes the idea that there is a core group of policymakers at the ‘centre’, making policy from the ‘top down’, and pursuing their goals in a series of clearly defined and well-ordered stages, such as: agenda setting, policy formulation, legitimation, implementation, and evaluation. In this context, one might identify how to influence a singular point of central government decision.

However, a cycle model does not describe policymaking well. Instead, we tend to identify the role of less ordered and more unpredictable complex systems, or policy environments containing:

  • A wide range of actors (individuals and organisations) influencing policy at many levels of government. Scientists and practitioners are competing with many actors to present evidence in a particular way to secure a policymaker audience.
  • A proliferation of rules and norms maintained by different levels or types of government. Support for particular ‘evidence based’ solutions varies according to which organisation takes the lead and how it understands the problem.
  • Important relationships (‘networks’) between policymakers and powerful actors. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour particular sources of evidence and some participants over others, and there is a language – indicating what ways of thinking are in good ‘currency’ – that takes time to learn.
  • A tendency for certain ‘core beliefs’ or ‘paradigms’ to dominate discussion. Well-established beliefs provide the context for policymaking: new evidence on the effectiveness of a policy solution has to be accompanied by a shift of attention and successful persuasion.
  • Policy conditions and events that can reinforce stability or prompt policymaker attention to lurch at short notice. In some cases, social or economic ‘crises’ can prompt lurches of attention from one issue to another, and some forms of evidence can be used to encourage that shift, but major policy change is rare.

These factors suggest that an effective engagement strategy is not straightforward: our instinct may be to influence elected policymakers at the ‘centre’ making authoritative choices, but the ‘return on investment’ is not clear. So, you need to decide how and where to engage, but it takes time to know ‘where the action is’ and with whom to form coalitions.

  1. Recognise that EBPM is only one of many legitimate ‘good governance’ principles.

There are several principles of ‘good’ policymaking, and EBPM is only one of them. Others relate to the value of pragmatism and consensus building, combining science advice with public values, improving policy delivery by generating ‘ownership’ of policy among key stakeholders, and sharing responsibility with elected local policymakers.

Our choices of which principles, and which forms of evidence, to privilege are inextricably linked. For example, some forms of evidence gathering seem to require uniform models and limited local or stakeholder discretion to modify policy delivery. The classic example is a programme whose value is established using randomised control trials (RCTs). Others begin with local discretion, seeking evidence from service user and local practitioner experience. This principle seems to rule out the use of RCTs. Of course, one can try to pursue both approaches and a compromise between them, but the outcome may not satisfy advocates of either approach or help produce the evidence that they favour.

  1. Decide how far you’ll go to achieve EBPM.

These insights should prompt us to see how far we are willing, and should, go to promote the use of certain forms of evidence in policymaking. For example, if policymakers and the public are emotional decision-makers, should we seek to manipulate their thought processes by using simple stories with heroes, villains, and clear but rather simplistic morals? If policymaking systems are so complex, should we devote huge amounts of resources to make sure we’re effective? Kathryn Oliver and I also explore the implications for proponents of scientific evidence, and there is a live debate on science advice to government on the extent to which scientists should be more than ‘honest brokers’.

Where we go from there is up to you

The value of policy theory to this topic is to help us reject simplistic models of EBPM and think through the implications of more sophisticated and complicated processes. It does not provide a blueprint for action (how could it?), but instead a series of questions that you should answer when you seek to use evidence to get what you want. They are political choices based on value judgements, not issues that can be resolved by producing more evidence.



Filed under Evidence Based Policymaking (EBPM)

Realistic ‘realist’ reviews: why do you need them and what might they look like?

This discussion is based on my impressions so far of realist reviews and the potential for policy studies to play a role in their effectiveness. The objectives section formed one part of a recent team bid for external funding (so, I acknowledge the influence of colleagues on this discussion, but not enough to blame them personally). We didn’t get the funding, but at least I got a lengthy blog post and a dozen hits out of it.

I like the idea of a ‘realistic’ review of evidence to inform policy, alongside a promising uptake in the use of ‘realist review’. The latter doesn’t mean realistic: it refers to a specific method or approach – realist evaluation, realist synthesis.

The agenda of the realist review already takes us along a useful path towards policy relevance, driven partly by the idea that many policy and practice ‘interventions’ are too complex to be subject to meaningful ‘systematic review’.

The latter’s aim – which we should be careful not to caricature – may be to identify something as close as possible to a general law: if you do X, the result will generally be Y, and you can be reasonably sure because the studies (such as randomised control trials) meet the ‘gold standard’ of research.

The former’s aim is to focus extensively on the context in which interventions take place: if you do X, the result will be Y under these conditions. So, for example, you identify the outcome that you want, the mechanism that causes it, and the context in which the mechanism causes the outcome. Maybe you’ll even include a few more studies, not meeting the ‘gold standard’, if they meet other criteria of high quality research (I declare that I am a qualitative researcher, so you can tell who I’m rooting for).

Realist reviews come increasingly with guide books and discussions on how to do them systematically. However, my impression is that when people do them, they find that there is an art to applying discretion to identify what exactly is going on. It is often difficult to identify or describe the mechanism fully (often because source reports are not clear on that point), say for sure it caused the outcome even in particular circumstances, and separate the mechanism from the context.

I italicised the last point because it is super-important. I think that it is often difficult to separate mechanism from context because (a) the context is often associated with a particular country’s political system and governing arrangements, and (b) it might be better to treat governing context as another mechanism in a notional chain of causality.

In other words, my impression is that realist reviews focus on the mechanism at the point of delivery; the last link in the chain in which the delivery of an intervention causes an outcome. It may be wise to also identify the governance mechanism that causes the final mechanism to work.

Why would you complicate an already complicated review?

I aim to complicate things then simplify them heroically at the end.

Here are five objectives that I maybe think we should pursue in an evidence review for policymakers (I can’t say for sure until we all agree on the principles of science advice):

  1. Focus on ways to turn evidence into feasible political action, identifying a clear set of policy conditions and mechanisms necessary to produce intended outcomes.
  2. Produce a manageable number of simple lessons and heuristics for policymakers, practitioners, and communities.
  3. Review a wider range of evidence sources than in traditional systematic reviews, to recognise the potential trade-offs between measures of high quality and high impact evidence.
  4. Identify a complex policymaking environment in which there is a need to connect the disparate evidence on each part of the ‘causal chain’.
  5. Recognise the need to understand individual countries and their political systems in depth, to know how the same evidence will be interpreted and used very differently by actors in different contexts.

Objective 1: evidence into action by addressing the politics of evidence-based policymaking

There is no shortage of scientific evidence of policy problems. Yet, we lack a way to use evidence to produce politically feasible action. The ‘politics of evidence-based policymaking’ produces scientists frustrated with the gap between their evidence and a proportionate policy response, and politicians frustrated that evidence is not available in a usable form when they pay attention to a problem and need to solve it quickly. The most common responses in key fields, such as environmental and health studies, do not solve this problem. The literature on ‘barriers’ between evidence and policy recommends initiatives such as: clearer scientific messages, knowledge brokerage and academic-practitioner workshops, timely engagement in politics, scientific training for politicians, and participation to combine evidence and community engagement.

This literature makes limited reference to policy theory and has two limitations. First, studies focus on reducing empirical uncertainty, not ‘framing’ issues to reduce ambiguity. Too many scientific publications go unread in the absence of a process of persuasion to influence policymaker demand for that information (particularly when more politically relevant and paywall-free evidence is available elsewhere). Second, few studies appreciate the multi-level nature of political systems or understand the strategies actors use to influence policy. This involves experience and cultural awareness to help learn: where key decisions are made, including in networks between policymakers and influential actors; the ‘rules of the game’ of networks; how to form coalitions with key actors; and, that these processes unfold over years or decades.

The solution is to produce knowledge that will be used by policymakers, community leaders, and ‘street level’ actors. It requires a shift in focus from the quality of scientific evidence to (a) who is involved in policymaking and the extent to which there is a ‘delivery chain’ from national to local, and (b) how actors demand, interpret, and use evidence to make decisions. For example, simple qualitative stories with a clear moral may be more effective than highly sophisticated decision-making models or quantitative evidence presented without enough translation.

Objective 2: produce simple lessons and heuristics

We know that the world is too complex to fully comprehend, yet people need to act despite uncertainty. They rely on ‘rational’ methods to gather evidence from sources they trust, and ‘irrational’ means to draw on gut feeling, emotion, and beliefs as short cuts to action (or system 2 and system 1 thinking, respectively). Scientific evidence can help reduce some uncertainty, but not tell people how to behave. Scientific information strategies can be ineffective, by expecting audiences to appreciate the detail and scale of evidence, understand the methods used to gather it, and possess the skills to interpret and act on it. The unintended consequence is that key actors fall back on familiar heuristics and pay minimal attention to inaccessible scientific information. The solution is to tailor evidence reviews to audiences: examining their practices and ways of thinking; identifying the heuristics they use; and, describing simple lessons and new heuristics and practices.

Objective 3: produce a pragmatic review of the evidence

To review a wider range of evidence sources than in traditional systematic reviews is to recognise the trade-offs between measures of high quality (based on a hierarchy of methods and journal quality) and high impact (based on familiarity and availability). If scientists reject and refuse to analyse evidence that policymakers routinely take more seriously (such as the ‘grey’ literature), they have little influence on key parts of policy analysis. Instead, provide a framework that recognises complexity but produces research that is manageable at scale and translatable into key messages:

  • Context. Identify the role of factors described routinely by policy theories as the key parts of policy environments: the actors involved in multiple policymaking venues at many levels of government; the role of informal and formal rules of each venue; networks between policymakers and influential actors; socio-economic conditions; and, the ‘paradigms’ or ways of thinking that underpin the consideration of policy problems and solutions.
  • Mechanisms. Focus on the connection between three mechanisms: the cause of outcomes at the point of policy delivery (intervention); the cause of ‘community’ or individual ‘ownership’ of effective interventions; and, the governance arrangements that support high levels of community ownership and the effective delivery of the most effective interventions. These connections are not linear. For example, community ownership and effective interventions may develop more usefully from the ‘bottom up’, scientists may convince national but not local policymakers of the value of interventions (or vice versa), or political support for long term strategies may only be temporary or conditional on short term measures of success.
  • Outcomes. Identify key indicators of good policy outcomes in partnership with the people you need to make policy work. Work with those audiences to identify a small number of specific positive outcomes, and synthesise the best available evidence to explain which mechanisms produce those outcomes under the conditions associated with your region of study.

This narrow focus is crucial to the development of a research question, limiting analysis to the most relevant studies to produce a rigorous review in a challenging timeframe. Then, the idea from realist reviews is that you ‘test’ your hypotheses and clarify the theories that underpin this analysis. This should involve a test for political as well as technical feasibility: speak regularly with key actors to gauge the likelihood that the mechanisms you recommend will be acted upon, the extent to which the context of policy delivery is stable and predictable, and whether the mechanisms will work consistently under those conditions.

Objective 4: identify key links in the ‘causal chain’ via interdisciplinary study

We all talk about combining perspectives from multiple disciplines but I totally mean it, especially if it boosts the role of political scientists who can’t predict elections. For example, health or environmental scientists can identify the most effective interventions to produce good health or environmental outcomes, but not how to work with and influence key people. Policy scholars can identify how the policy process works and how to maximise the use of scientific evidence within it. Social science scholars can identify mechanisms to encourage community participation and the ownership of policies. Anthropologists can provide insights on the particular cultural practices and beliefs underpinning the ways in which people understand and act according to scientific evidence.

Perhaps more importantly, interdisciplinarity provides political cover: we got the best minds in many disciplines and locked them in a room until they produced an answer.

We need this cover for something I’ll call ‘informed extrapolation’ and justify with reference to pragmatism: if we do not provide well-informed analyses of the links between each mechanism, other less-informed actors will fill the gap without appreciating key aspects of causality. For example, if we identify a mechanism for the delivery of successful interventions – e.g. high levels of understanding and implementation of key procedures – there is still uncertainty: do these mechanisms develop organically through ‘bottom up’ collaboration or can they be introduced quickly from the ‘top’ to address an urgent issue? A simple heuristic for central governments could be to introduce training immediately or to resist the temptation for a quick fix.

Relatively-informed analysis, to recommend one of those choices, may only be used if we can back it up with interdisciplinary weight and produce recommendations that are unequivocal (although, again, other approaches are available).

Objective 5: focus intensively on one region, and one key issue, not ‘one size fits all’

We need to understand individual countries or regions – their political systems, communities, and cultural practices – and specific issues in depth, to know how abstract mechanisms work in concrete contexts, and how the same evidence will be interpreted and used differently by actors in those contexts. We need to avoid politically insensitive approaches based on the assumption that a policy that works in countries like (say) the UK will work in countries that are not (say) the UK, and/ or that actors in each country will understand policy problems in the same way.

But why?

It all looks incredibly complicated, doesn’t it? There’s no time to do all that, is there? It will end up as a bit of a too-rushed jumble of high-and-low quality evidence and advice, won’t it?

My argument is that these problems are actually virtues because they provide more insight into how busy policymakers will gather and use evidence. Most policymakers will not know how to do a systematic review or understand why you are so attached to them. Maybe you’ll impress them enough to get them to trust your evidence, but have you put yourself into a position to know what they’ll do with it? Have you thought about the connection between the evidence you’ve gathered, what people need to do, who needs to do it, and who you need to speak to about getting them to do it? Maybe you don’t have to, if you want to be no more than a ‘neutral scientist’ or ‘honest broker’ – but you do if you want to give science advice to policymakers that policymakers can use.

 


Filed under Evidence Based Policymaking (EBPM), public policy

Policy in 500 Words: if the policy cycle does not exist, what do we do?

It is easy to reject the empirical value of the policy cycle, but difficult to replace it as a practical tool. I identify the implications for students, policymakers, and the actors seeking influence in the policy process.


A policy cycle divides the policy process into a series of stages:

  • Agenda setting. Identifying problems that require government attention, deciding which issues deserve the most attention and defining the nature of the problem.
  • Policy formulation. Setting objectives, identifying the cost and estimating the effect of solutions, choosing from a list of solutions and selecting policy instruments.
  • Legitimation. Ensuring that the chosen policy instruments have support. It can involve one or a combination of: legislative approval, executive approval, seeking consent through consultation with interest groups, and referenda.
  • Implementation. Establishing or employing an organization to take responsibility for implementation, ensuring that the organization has the resources (such as staffing, money and legal authority) to do so, and making sure that policy decisions are carried out as planned.
  • Evaluation. Assessing the extent to which the policy was successful or the policy decision was the correct one; if it was implemented correctly and, if so, had the desired effect.
  • Policy maintenance, succession or termination. Considering if the policy should be continued, modified or discontinued.

Most academics (and many practitioners) reject it because it oversimplifies, and does not explain, a complex policymaking system in which: these stages may not occur (or may not occur in this order), and it may be better to imagine thousands of policy cycles interacting with each other to produce less orderly behaviour and less predictable outputs.

But what do we do about it?

The implications for students are relatively simple: we have dozens of concepts and theories which serve as better ways to understand policymaking. In the 1000 Words series, I give you 25 to get you started.

The implications for policymakers are less simple because the cycle may be unrealistic yet useful. Stages can be used to organise policymaking in a simple way: identify policymaker aims, identify policies to achieve those aims, select a policy measure, ensure that the selection is legitimised by the population or its legislature, identify the necessary resources, implement and then evaluate the policy. The idea is simple and the consequent advice to policy practitioners is straightforward. A major unresolved challenge for scholars and practitioners is to describe a more meaningful, more realistic analytical model to policymakers and give advice on how to act and justify action in the same straightforward way. So, in this article, I discuss how to reconcile policy advice based on complexity and pragmatism with public and policymaker expectations.

The implications for actors trying to influence policymaking can be dispiriting: how can we engage effectively in the policy process if we struggle to understand it? So, in this page (scroll down – it’s long!), I discuss how to present evidence in complex policymaking systems.

Take home message for students. It is easy to describe then assess the policy cycle as an empirical tool, but don’t stop there. Consider how to turn this insight into action. First, examine the many ways in which we use concepts to provide better descriptions and explanations. Then, think about the practical implications. What useful advice could you give an elected policymaker, trying to juggle pragmatism with accountability? What strategies would you recommend to actors trying to influence the policy process?


Filed under 500 words, public policy

The politics of evidence-based best practice: 4 messages

Well, it’s really a set of messages, geared towards slightly different audiences, and summed up by this table:

[Table 1: Three ideal types of EBBP]

  1. This academic journal article (in Evidence and Policy) highlights the dilemmas faced by policymakers when they have to make two choices at once, to decide: (1) what is the best evidence, and (2) how strongly they should insist that local policymakers use it. It uses the case study of the ‘Scottish Approach’ to show that it often seems to favour one approach (‘approach 3’) but actually maintains three approaches. What interests me is the extent to which each approach contradicts the other. We might then consider the cause: is it an explicit decision to ‘let a thousand flowers bloom’ or an unintended outcome of complex government?
  2. I explore some of the scientific issues in more depth in posts which explore: the political significance of the family nurse partnership (as a symbol of the value of randomised control trials in government), and the assumptions we make about levels of control in the use of RCTs in policy.
  3. For local governments, I outline three ways to gather and use evidence of best practice (for example, on interventions to support prevention policy).
  4. For students and fans of policy theory, I show the links between the use of evidence and policy transfer.

Further reading (links):

My academic articles on these topics

The Politics of Evidence Based Policymaking

Key policy theories and concepts in 1000 words

Prevention policy


Filed under 1000 words, ESRC Scottish Centre for Constitutional Change, Evidence Based Policymaking (EBPM), Prevention policy, Scottish politics, UK politics and policy