Tag Archives: policymaking

‘Co-producing’ comparative policy research: how far should we go to secure policy impact?

See also our project website IMAJINE.

Two recent articles explore the role of academics in the ‘co-production’ of policy and/or knowledge.

Both papers suggest (I think) that academic engagement in the ‘real world’ is highly valuable, and that we should not pretend that we can remain aloof from politics when producing new knowledge (research production is political even if it is not overtly party political). They also suggest that it is fraught with difficulty and, perhaps, an often-thankless task with no guarantee of professional or policy payoffs (intrinsic motivation still trumps extrinsic motivation).

So, what should we do?

I plan to experiment a little bit while conducting some new research over the next 4 years. For example, I am part of a new project called IMAJINE, and plan to speak with policymakers, from the start to the end, about what they want from the research and how they’ll use it. My working assumption is that it will help boost the academic value and policy relevance of the research.

I have mocked up a paper abstract to describe this kind of work:

In this paper, we use policy theory to explain why the ‘co-production’ of comparative research with policymakers makes it more policy relevant: it allows researchers to frame their policy analysis with reference to the ways in which policymakers frame policy problems; and, it helps them identify which policymaking venues matter, and the rules of engagement within them.  In other words, theoretically-informed researchers can, to some extent, emulate the strategies of interest groups when they work out ‘where the action is’ and how to adapt to policy agendas to maximise their influence. Successful groups identify their audience and work out what it wants, rather than present their own fixed views to anyone who will listen.

Yet, when described so provocatively, our argument raises several practical and ethical dilemmas about the role of academic research. In abstract discussions, they include questions such as: should you engage this much with politics and policymakers, or maintain a critical distance; and, if you engage, should you simply reflect or seek to influence the policy agenda? In practice, such binary choices are artificial, prompting us to explore how to manage our engagement in politics and reflect on our potential influence.

We explore these issues with reference to a new Horizon 2020 funded project IMAJINE, which includes a work package – led by Cairney – on the use of evidence and learning from the many ways in which EU, national, and regional policymakers have tried to reduce territorial inequalities.

So, in the paper, we (my future research partner and I) would:

  • Outline the payoffs to this engage-early approach. Early engagement will inform the research questions you ask, how you ask them, and how you ‘frame’ the results. It should also help produce more academic publications (which is still the key consideration for many academics), partly because this early approach will help us speak with some authority about policy and policymaking in many countries.
  • Describe the complications of engaging with different policymakers in many ‘venues’ in different countries: you would expect very different questions to arise, and perhaps struggle to manage competing audience demands.
  • Raise practical questions about the research audience, including: should we interview key advocacy groups and private sources of funding for applied research, as well as policymakers, when refining questions? I ask this question partly because it can be more effective to communicate evidence via policy influencers rather than try to engage directly with policymakers.
  • Raise ethical questions, including: what if policymaker interviewees want the ‘wrong’ questions answered? What if they are only interested in policy solutions that we think are misguided, either because the evidence-base is limited (and yet they seek a magic bullet) or their aims are based primarily on ideology (an allegedly typical dilemma regards left-wing academics providing research for right-wing governments)?

Overall, you can see the potential problems: you ‘enter’ the political arena to find that it is highly political! You find that policymakers are mostly interested in (what you believe are) ineffective or inappropriate solutions and/or they think about the problem in ways that make you, say, uncomfortable. So, should you engage in a critical way, risking exclusion from the ‘co-production’ of policy, or in a pragmatic way, to ‘co-produce’ knowledge and maximise your chances of impact in government?

The case study of territorial inequalities is a key source of such dilemmas …

…partly because it is difficult to tell how policymakers define and want to solve such policy problems. When defining ‘territorial inequalities’, they can refer broadly to geographical spread, such as within the EU Member States, or even within regions of states. They can focus on economic inequalities, inequalities linked strongly to gender, race or ethnicity, mental health, disability, and/or inequalities spread across generations. They can focus on indicators of inequalities in areas such as health and education outcomes, housing tenure and quality, transport, and engagement with social work and criminal justice. While policymakers might want to address all such issues, they also prioritise the problems they want to solve and the policy instruments they are prepared to use.

When considering solutions, they can choose from three basic categories:

  1. Tax and spending to redistribute income and wealth, perhaps treating economic inequalities as the source of most others (such as health and education inequalities).
  2. The provision of public services to help mitigate the effects of economic and other inequalities (such as free healthcare and education, and public transport in urban and rural areas).
  3. The adoption of ‘prevention’ strategies to engage as early as possible in people’s lives, on the assumption that key inequalities are well-established by the time children are three years old.

Based on my previous work with Emily St Denny, I’d expect that many governments express a high commitment to reduce inequalities – and it is often sincere – but without wanting to use tax/spending as the primary means, and faced with limited evidence on the effectiveness of public services and prevention. Or, many will prefer to identify ‘evidence-based’ solutions for individuals rather than to address ‘structural’ factors such as gender, ethnicity, and class. This is when the production and use of evidence becomes overtly ‘political’, because at the heart of many of these discussions is the extent to which individuals or their environments are to blame for unequal outcomes, and whether richer regions should compensate poorer regions.

‘The evidence’ will not ‘win the day’ in such debates. Rather, the choice will be between, for example: (a) pragmatism, framing evidence to contribute to well-established beliefs about policy problems and solutions held by the dominant actors in each political system; and (b) critical distance, producing what you feel to be the best evidence generated in the right way, and challenging policymakers to explain why they won’t use it. I suspect that (a) is more effective, but (b) better reflects what most academics thought they were signing up to.

For more on IMAJINE, see New EU study looks at gap between rich and poor and The theory and practice of evidence-based policy transfer: can we learn how to reduce territorial inequalities?

For more on evidence/ policy dilemmas, see Kathryn Oliver and I have just published an article on the relationship between evidence and policy

 


Filed under Evidence Based Policymaking (EBPM), IMAJINE, public policy

Kathryn Oliver and I have just published an article on the relationship between evidence and policy

Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy?

“There is extensive health and public health literature on the ‘evidence-policy gap’, exploring the frustrating experiences of scientists trying to secure a response to the problems and solutions they raise and identifying the need for better evidence to reduce policymaker uncertainty. We offer a new perspective by using policy theory to propose research with greater impact, identifying the need to use persuasion to reduce ambiguity, and to adapt to multi-level policymaking systems”.

We use this table to describe how the policy process works, how effective actors respond, and the dilemmas that arise for advocates of scientific evidence: should they act this way too?

We summarise this argument in two posts for:

The Guardian If scientists want to influence policymaking, they need to understand it

Sax Institute The evidence policy gap: changing the research mindset is only the beginning

The article is part of a wider body of work in which one or both of us considers the relationship between evidence and policy in different ways, including:

Paul Cairney, Kathryn Oliver, and Adam Wellstead (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review PDF

Paul Cairney (2016) The Politics of Evidence-Based Policy Making (PDF)

Oliver, K., Innvar, S., Lorenc, T., Woodman, J. and Thomas, J. (2014a) ‘A systematic review of barriers to and facilitators of the use of evidence by policymakers’ BMC health services research, 14 (1), 2. http://www.biomedcentral.com/1472-6963/14/2

Oliver, K., Lorenc, T., & Innvær, S. (2014b) ‘New directions in evidence-based policy research: a critical analysis of the literature’, Health Research Policy and Systems, 12, 34 http://www.biomedcentral.com/content/pdf/1478-4505-12-34.pdf

Paul Cairney (2016) Evidence-based best practice is more political than it looks in Evidence and Policy

Many of my blog posts explore how people like scientists or researchers might understand and respond to the policy process:

The Science of Evidence-based Policymaking: How to Be Heard

When presenting evidence to policymakers, engage with the policy process that exists, not the process you wish existed

Policy Concepts in 1000 Words: ‘Evidence Based Policymaking’

‘Evidence-based Policymaking’ and the Study of Public Policy

How far should you go to secure academic ‘impact’ in policymaking?

Political science improves our understanding of evidence-based policymaking, but does it produce better advice?

Psychology Based Policy Studies: 5 heuristics to maximise the use of evidence in policymaking

What 10 questions should we put to evidence for policy experts?

Why doesn’t evidence win the day in policy and policymaking?

We all want ‘evidence based policy making’ but how do we do it?

How can political actors take into account the limitations of evidence-based policy-making? 5 key points

The Politics of Evidence Based Policymaking: 3 messages

The politics of evidence-based best practice: 4 messages

The politics of implementing evidence-based policies

There are more posts like this on my EBPM page

I am also guest editing a series of articles for the Open Access journal Palgrave Communications on the ‘politics of evidence-based policymaking’ and we are inviting submissions throughout 2017.

There are more details on that series here.

And finally ..

… if you’d like to read about the policy theories underpinning these arguments, see Key policy theories and concepts in 1000 words and 500 words.

 

 


Filed under Evidence Based Policymaking (EBPM), public policy

How far should you go to secure academic ‘impact’ in policymaking? From ‘honest brokers’ to ‘research purists’ and Machiavellian manipulators

Long read for Political Studies Association annual conference 2017 panel Rethinking Impact: Narratives of Research-Policy Relations. There is a paper too, but I’ve hidden it in the text like an Easter Egg hunt.

I’ve watched a lot of film and TV dramas over the decades. Many have the same basic theme, characters, and moral:

  1. There is a villain getting away with something, such as cheating at sport or trying to evict people to make money on a property deal.
  2. There are some characters who complain that life is unfair and there’s nothing they can do about it.
  3. A hero emerges to inspire the other characters to act as a team/ fight the system and win the day. Think of a range from Wyldstyle to Michael Corleone.

For many scientists right now, the villains are people like Trump or Farage, Trump’s election and Brexit symbolise an unfairness on a grand scale, and there’s little they can do about it in a ‘post-truth’ era in which people have had enough of facts and experts. Or, when people try to mobilise, they are unsure about what to do or how far they are willing to go to win the day.

These issues are playing out in different ways, from the March for Science to the conferences informing debates on modern principles of government-science advice (see INGSA). Yet, the basic question is the same when scientists are trying to re-establish a particular role for science in the world: can you present science as (a) a universal principle and (b) unequivocal resource for good, producing (c) evidence so pure that it speaks for itself, regardless of (d) the context in which specific forms of scientific evidence are produced and used?

Of course not. Instead, we are trying to privilege the role of science and scientific evidence in politics and policymaking without always acknowledging that these activities are political acts:

(a) selling scientific values rather than self-evidence truths, and

(b) using particular values to cement the status of particular groups at the expense of others, either within the scientific profession (in which some disciplines and social groups win systematically) or within society (in which scientific experts generally enjoy privileged positions in policymaking arenas).

Politics is about exercising power to win disputes, from visible acts to win ‘key choices’, to less visible acts to keep issues off agendas and reinforce the attitudes and behaviours that systematically benefit some groups at the expense of others.

To deny this link between science, politics and power – in the name of ‘science’ – is (a) silly, and (b) not scientific, since there is a wealth of policy science out there which highlights this relationship.

Instead, academic and working scientists should make better use of their political-thinking-time to consider this basic dilemma regarding political engagement: how far are you willing to go to make an impact and get what you want?  Here are three examples.

  1. How energetically should you give science advice?

My impression is that most scientists feel most comfortable with the unfortunate idea of separating facts from values (rejected by Douglas), and living life as ‘honest brokers’ rather than ‘issue advocates’ (a pursuit described by Pielke and critiqued by Jasanoff). For me, this is generally a cop-out since it puts the responsibility on politicians to understand the implications of scientific evidence, as if they were self-evident, rather than on scientists to explain the significance in a language familiar to their audience.

On the other hand, the alternative is not really clear. ‘Getting your hands dirty’, to maximise the uptake of evidence in politics, is a great metaphor but a hopeless blueprint, especially when you, as part of a notional ‘scientific community’, face trade-offs between doing what you think is the right thing and getting what you want.

There are 101 examples of these individual choices that make up one big engagement dilemma. One of my favourite examples from table 1 is as follows:

One argument stated frequently is that, to be effective in policy, you should put forward scientists with a particular background trusted by policymakers: white men in their 50s with international reputations and strong networks in their scientific field. This way, they resemble the profile of key policymakers who tend to trust people already familiar to them. Another is that we should widen out science and science advice, investing in a new and diverse generation of science-policy specialists, to address the charge that science is an elite endeavour contributing to inequalities.

  2. How far should you go to ensure that the ‘best’ scientific evidence underpins policy?

Kathryn Oliver and I identify the dilemmas that arise when principles of evidence-production meet (a) principles of governance and (b) real-world policymaking. Should scientists learn how to be manipulative, to combine evidence and emotional appeals to win the day? Should they reject other forms of knowledge, and particular forms of governance, if they think they get in the way of the use of the best evidence in policymaking?

Cairney and Oliver (2017), Table 1

  3. Is it OK to use psychological insights to manipulate policymakers?

Richard Kwiatkowski and I mostly discuss how to be manipulative if you make that leap. Or, to put it less dramatically, how to identify relevant insights from psychology, apply them to policymaking, and decide how best to respond. Here, we propose five heuristics for engagement:

  1. developing heuristics to respond positively to ‘irrational’ policymaking
  2. tailoring framing strategies to policymaker bias
  3. identifying the right time to influence individuals and processes
  4. adapting to real-world (dysfunctional) organisations rather than waiting for an orderly process to appear, and
  5. recognising that the biases we ascribe to policymakers are present in ourselves and our own groups

Then there is the impact agenda, which describes something very different

I say these things to link to our PSA panel, in which Christina Boswell and Katherine Smith sum up (in their abstract) the difference between the ways in which we are expected to demonstrate academic impact, and the practices that might actually produce real impact:

"Political scientists are increasingly exhorted to ensure their research has policy ‘impact’, most notably in the form of REF impact case studies, and ‘pathways to impact’ plans in ESRC funding. Yet the assumptions underpinning these frameworks are frequently problematic. Notions of ‘impact’, ‘engagement’ and ‘knowledge exchange’ are typically premised on simplistic and linear models of the policy process, according to which policy-makers are keen to ‘utilise’ expertise to produce more effective policy interventions".

I then sum up the same thing but with different words in my abstract:

“The impact agenda prompts strategies which reflect the science literature on ‘barriers’ between evidence and policy: produce more accessible reports, find the right time to engage, encourage academic-practitioner workshops, and hope that policymakers have the skills to understand and motive to respond to your evidence. Such strategies are built on the idea that scientists serve to reduce policymaker uncertainty, with a linear connection between evidence and policy. Yet, the literature informed by policy theory suggests that successful actors combine evidence and persuasion to reduce ambiguity, particularly when they know where the ‘action’ is within complex policymaking systems”.

The implications for the impact agenda are interesting, because there is a big difference between (a) the fairly banal ways in which we might make it easier for policymakers to see our work, and (b) the more exciting and sinister-looking ways in which we might make more persuasive cases. Yet, our incentive remains to produce the research and play it safe, producing examples of ‘impact’ that, on the whole, seem more reportable than remarkable.


Filed under Evidence Based Policymaking (EBPM), Public health, public policy

Using psychological insights in politics: can we do it without calling our opponents mental, hysterical, or stupid?

One of the most dispiriting parts of fierce political debate is the casual use of mental illness or old and new psychiatric terms to undermine an opponent: she is mad, he is crazy, she is a nutter, they are wearing tin foil hats, get this guy a straitjacket and the men in white coats because he needs to lie down in a dark room, she is hysterical, his position is bipolar, and so on. This kind of statement reflects badly on the campaigner rather than their opponent.

I say this because, while doing some research on a paper on the psychology of politics and policymaking (this time with Richard Kwiatkowski, as part of this special collection), there are potentially useful concepts that seem difficult to insulate from such political posturing. There is great potential to use them cynically against opponents rather than benefit from their insights.

The obvious ‘live’ examples relate to ‘rational’ versus ‘irrational’ policymaking. For example, one might argue that, while scientists develop facts and evidence rationally, using tried and trusted and systematic methods, politicians act irrationally, based on their emotions, ideologies, and groupthink. So, we as scientists are the arbiters of good sense and they are part of a pathological political process that contributes to ‘post truth’ politics.

The obvious problem with such accounts is that we all combine cognitive and emotional processes to think and act. We are all subject to bias in the gathering and interpretation of evidence. So, the more positive, but less tempting, option is to consider how this process works – when both competing sides act ‘rationally’ and emotionally – and what we can realistically do to mitigate the worst excesses of such exchanges. Otherwise, we will not get beyond demonising our opponents and romanticising our own cause. It gives us the warm and fuzzies on twitter and in academic conferences but contributes little to political conversations.

A less obvious example comes from modern work on the links between genes and attitudes. There is now a research agenda which uses surveys of adult twins to compare the effect of genes and environment on political attitudes. For example, Oskarsson et al (2015: 650) argue that existing studies ‘report that genetic factors account for 30–50% of the variation in issue orientations, ideology, and party identification’. One potential mechanism is cognitive ability: put simply, and rather cautiously and speculatively, with a million caveats, people with lower cognitive ability are more likely to see ‘complexity, novelty, and ambiguity’ as threatening and to respond with fear, risk aversion, and conservatism (2015: 652).

My immediate thought, when reading this stuff, is about how people would use it cynically, even at this relatively speculative stage in testing and evidence gathering: my opponent’s genes make him stupid, which makes him fearful of uncertainty and ambiguity, and therefore anxious about change and conservative in politics (in other words, the Yoda hypothesis applied only to stupid people). It’s not his fault, but his stupidity is an obstacle to progressive politics. If you add in some psychological biases, in which people inflate their own sense of intelligence and underestimate that of their opponents, you have evidence-informed, really shit political debate! ‘My opponent is stupid’ seems a bit better than ‘my opponent is mental’ but only in the sense that eating a cup of cold sick is preferable to eating shit.

I say this as we try to produce some practical recommendations (for scientists and advocates of EBPM) to engage with politicians to improve the use of evidence in policy. I’ll let you know if it goes beyond a simple maxim: adapt to their emotional and cognitive biases, but don’t simply assume they’re stupid.

See also: the many commentaries on how stupid it is to treat your political opponents as stupid

Stop Calling People "Low Information Voters"


Filed under Evidence Based Policymaking (EBPM), Uncategorized

We all want ‘evidence based policy making’ but how do we do it?

Here are some notes for my talk to the Scottish Government on Thursday as part of its inaugural ‘evidence in policy week’. The advertised abstract is as follows:

A key aim in government is to produce ‘evidence based’ (or ‘informed’) policy and policymaking, but it is easier said than done. It involves two key choices about (1) what evidence counts and how you should gather it, and (2) the extent to which central governments should encourage subnational policymakers to act on that evidence. Ideally, the principles we use to decide on the best evidence should be consistent with the governance principles we adopt to use evidence to make policy, but what happens when they seem to collide? Cairney provides three main ways in which to combine evidence and governance-based principles to help clarify those choices.

I plan to use the same basic structure of the talks I gave to the OSF (New York) and EUI-EP (Florence) in which I argue that every aspect of ‘evidence based policy making’ is riddled with the necessity to make political choices (even when we define EBPM):

ebpm-5-things-to-do

I’ll then ‘zoom in’ on points 4 and 5 regarding the relationship between EBPM and governance principles. They are going to videotape the whole discussion to use for internal discussions, but I can post the initial talk here when it becomes available. Please don’t expect a TED talk (especially the E part of TED).

EBPM and good governance principles

The Scottish Government has a reputation for taking certain governance principles seriously, to promote high stakeholder ‘ownership’ and ‘localism’ on policy, and produce the image of a:

  1. Consensual consultation style in which it works closely with interest groups, public bodies, local government organisations, voluntary sector and professional bodies, and unions when making policy.
  2. Trust-based implementation style indicating a relative ability or willingness to devolve the delivery of policy to public bodies, including local authorities, in a meaningful way.

Many aspects of this image were cultivated by former Permanent Secretaries: Sir John Elvidge described a ‘Scottish Model’ focused on joined-up government and outcomes-based approaches to policymaking and delivery, and Sir Peter Housden labelled the ‘Scottish Approach to Policymaking’ (SATP) as an alternative to the UK’s command-and-control model of government, focusing on the ‘co-production’ of policy with local communities and citizens.

The ‘Scottish Approach’ has implications for evidence based policy making

Note the major implication for our definition of EBPM. One possible definition, derived from ‘evidence based medicine’, refers to a hierarchy of evidence in which randomised control trials and their systematic review are at the top, while expertise, professional experience and service user feedback are close to the bottom. An uncompromising use of RCTs in policy requires that we maintain a uniform model, with the same basic intervention adopted and rolled out within many areas. The focus is on identifying an intervention’s ‘active ingredient’, applying the correct dosage, and evaluating its success continuously.

This approach seems to challenge the commitment to localism and ‘co-production’.

At the other end of the spectrum is a storytelling approach to the use of evidence in policy. In this case, we begin with key governance principles – such as valuing the ‘assets’ of individuals and communities – and invite people to help make and deliver policy. Practitioners and service users share stories of their experiences and invite others to learn from them. There is no model of delivery and no ‘active ingredient’.

This approach seems to challenge the commitment to ‘evidence based policy’

The Goldilocks approach to evidence based policy making: the improvement method

We can understand the Scottish Government’s often-preferred method in that context. It has made a commitment to:

Service performance and improvement underpinned by data, evidence and the application of improvement methodologies

So, policymakers use many sources of evidence to identify promising solutions, make broad recommendations to practitioners about the outcomes they seek, and train practitioners in the improvement method (a form of continuous learning summed up by a ‘Plan-Do-Study-Act’ cycle).

Table 1: Three ideal types of EBBP

This approach appears to offer the best of both worlds; just the right mix of central direction and local discretion, with the promise of combining well-established evidence from sources including RCTs with evidence from local experimentation and experience.

Four unresolved issues in decentralised evidence-based policy making

Not surprisingly, our story does not end there. I think there are four unresolved issues in this process:

  1. The Scottish Government often indicates a preference for improvement methods but actually supports all three of the methods I describe. This might reflect an explicit decision to ‘let a thousand flowers bloom’ or the inability to establish a favoured approach.
  2. There is not a single way of understanding ‘improvement methodology’. I describe something akin to a localist model here, but other people describe a far more research-led and centrally coordinated process.
  3. Anecdotally, I hear regularly that key stakeholders do not like the improvement method. One could interpret this as a temporary problem, before people really get it and it starts to work, or a fundamental difference between some people in government and many of the local stakeholders so important to the ‘Scottish approach’.

4. The spectre of democratic accountability and the politics of EBPM

The fourth unresolved issue is the biggest: it’s difficult to know how this approach connects with the most important reference in Scottish politics: the need to maintain Westminster-style democratic accountability, through periodic elections and more regular reports by ministers to the Scottish Parliament. This requires a strong sense of central government and ministerial control – if you know who is in charge, you know who to hold to account or reward or punish in the next election.

In principle, the ‘Scottish approach’ provides a way to bring together key aims into a single narrative. An open and accessible consultation style maximises the gathering of information and advice and fosters group ownership. A national strategic framework, with cross-cutting aims, reduces departmental silos and balances an image of democratic accountability with the pursuit of administrative devolution, through partnership agreements with local authorities, the formation of community planning partnerships, and the encouragement of community and user-driven design of public services. The formation of relationships with public bodies and other organisations delivering services, based on trust, fosters the production of common aims across the public sector, and reduces the need for top-down policymaking. An outcomes-focus provides space for evidence-based and continuous learning about what works.

In practice, a government often needs to appear to take quick and decisive action from the centre, demonstrate policy progress and its role in that progress, and intervene when things go wrong. So, alongside localism it maintains a legislative, financial, and performance management framework which limits localism.

How far do you go to ensure EBPM?

So, when I describe the ‘5 things to do’, usually the fifth element is about how far scientists may want to go, to insist on one model of EBPM when it has the potential to contradict important governance principles relating to consultation and localism. For a central government, the question is starker:

Do you have much choice about your model of EBPM when the democratic imperative is so striking?

I’ll leave it there on a cliffhanger, since these are largely questions to prompt discussion in specific workshops. If you can’t attend, there is further reading on the EBPM and EVIDENCE tabs on this blog, and specific papers on the Scottish dimension:

The ‘Scottish Approach to Policy Making’: Implications for Public Service Delivery

Paul Cairney, Siabhainn Russell and Emily St Denny (2016) “The ‘Scottish approach’ to policy and policymaking: what issues are territorial and what are universal?” Policy and Politics, 44, 3, 333-50

The politics of evidence-based best practice: 4 messages


Filed under ESRC Scottish Centre for Constitutional Change, Evidence Based Policymaking (EBPM), public policy, Scottish politics, Storytelling

How can political actors take into account the limitations of evidence-based policy-making? 5 key points

These notes are for my brief panel talk at the European Parliament-European University Institute ‘Policy Roundtable’: Evidence and Analysis in EU Policy-Making: Concepts, Practice and Governance. As you can see from the programme description, the broader theme is about how EU institutions demonstrate their legitimacy through initiatives such as stakeholder participation and evidence-based policymaking (EBPM). So, part of my talk is about what happens when EBPM does not exist.

The post is a slightly modified version of my (recorded) talk for Open Society Foundations (New York) but different audiences make sense of these same basic points in very different ways.

  1. Recognise that the phrase ‘evidence-based policy-making’ means everything and nothing

The main limitation to ‘evidence-based policy-making’ is that no-one really knows what it is or what the phrase means. So, each actor makes sense of EBPM in different ways and you can tell a lot about each actor by the way in which they answer these questions:

  • Should you use restrictive criteria to determine what counts as ‘evidence’? Some actors equate evidence with scientific evidence and adhere to specific criteria – such as evidence-based medicine’s hierarchy of evidence – to determine what is scientific. Others have more respect for expertise, professional experience, and stakeholder and service user feedback as sources of evidence.
  • Which metaphor, evidence based or informed, is best? Experienced policy participants often reject ‘evidence based’ as unrealistic, preferring ‘informed’ to reflect pragmatism about mixing evidence and political calculations.
  • How far do you go to pursue EBPM? It is unrealistic to treat ‘policy’ as a one-off statement of intent by a single authoritative actor. Instead, it is made and delivered by many actors in a continuous policymaking process within a complicated policy environment (outlined in point 3). This is relevant to EU institutions with limited resources: the Commission often makes key decisions but relies on Member States to make and deliver policy, and the Parliament may only have the ability to monitor ‘key decisions’. It is also relevant to stakeholders trying to ensure the use of evidence throughout the process, from supranational to local action.
  • Which actors count as policymakers? Policymaking is done by ‘policymakers’, but many are unelected and the division between policymaker/ influencer is often unclear. The study of policymaking involves identifying networks of decision-making by elected and unelected policymakers and their stakeholders, while the actual practice is about deciding where to draw the line between influence and action.
  2. Respond to ‘rational’ and ‘irrational’ thought.

‘Comprehensive rationality’ describes the absence of ambiguity and uncertainty when policymakers know what problem they want to solve and how to solve it, partly because they can gather and understand all information required to measure the problem and determine the effectiveness of solutions.

Instead, we talk of ‘bounded rationality’ and how policymakers deal with it. They employ two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, habits, and familiarity to make decisions quickly.

I say ‘irrational’ provocatively, to raise a key strategic question: do you criticise emotional policymaking (describing it as ‘policy based evidence’) and try somehow to minimise it, adapt pragmatically to it, or see ‘fast thinking’ more positively in terms of ‘fast and frugal heuristics’? Regardless, policymakers will think that their heuristics make sense to them, and it can be counterproductive to simply criticise their alleged irrationality.

  3. Think about how to engage in complex systems or policy environments.

‘Policy cycle’ describes the idea that there is a core group of policymakers at the ‘centre’, making policy from the ‘top down’, and pursuing their goals in a series of clearly defined and well-ordered stages, such as: agenda setting, policy formulation, legitimation, implementation, and evaluation. In this context, one might identify how to influence a singular point of central government decision.

However, a cycle model does not describe policymaking well. Instead, we tend to identify the role of less ordered and more unpredictable complex systems, or policy environments containing:

  • A wide range of actors (individuals and organisations) influencing policy at many levels of government. Scientists and practitioners are competing with many actors to present evidence in a particular way to secure a policymaker audience.
  • A proliferation of rules and norms maintained by different levels or types of government. Support for particular ‘evidence based’ solutions varies according to which organisation takes the lead and how it understands the problem.
  • Important relationships (‘networks’) between policymakers and powerful actors. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour particular sources of evidence and some participants over others, and there is a language – indicating what ways of thinking are in good ‘currency’ – that takes time to learn.
  • A tendency for certain ‘core beliefs’ or ‘paradigms’ to dominate discussion. Well-established beliefs provide the context for policymaking: new evidence on the effectiveness of a policy solution has to be accompanied by a shift of attention and successful persuasion.
  • Policy conditions and events that can reinforce stability or prompt policymaker attention to lurch at short notice. In some cases, social or economic ‘crises’ can prompt lurches of attention from one issue to another, and some forms of evidence can be used to encourage that shift, but major policy change is rare.

For stakeholders, an effective engagement strategy is not straightforward: it takes time to know ‘where the action is’, how and where to engage with policymakers, and with whom to form coalitions. For the Commission, it is difficult to know what will happen to policy after it is made (although we know the end point will not resemble the starting point). For the Parliament, it is difficult even to know where to look.

  4. Recognise that EBPM is only one of many legitimate ‘good governance’ principles.

There are several principles of ‘good’ policymaking and only one is EBPM. Others relate to the value of pragmatism and consensus building, combining science advice with public values, improving policy delivery by generating ‘ownership’ of policy among key stakeholders, and sharing responsibility with elected national and local policymakers.

Our choice of which principle and forms of evidence to privilege are inextricably linked. For example, some forms of evidence gathering seem to require uniform models and limited local or stakeholder discretion to modify policy delivery. The classic example is a programme whose value is established using randomised control trials (RCTs). Others begin with local discretion, seeking evidence from stakeholders, professional groups, service user and local practitioner experience. This principle seems to rule out the use of RCTs, at least as a source of a uniform model to be rolled out and evaluated. Of course, one can try to pursue both approaches and a compromise between them, but the outcome may not satisfy advocates of either approach to EBPM or help produce the evidence that they favour.

  5. Decide how far you’ll go to achieve EBPM.

These insights should prompt us to ask how far we are willing, and ought, to go to promote the use of certain forms of evidence in policymaking:

  • If policymakers and the public are emotional decision-makers, should we seek to manipulate their thought processes by using simple stories with heroes, villains, and clear but rather simplistic morals?
  • If policymaking systems are so complex, should stakeholders devote huge amounts of resources to make sure they’re effective at each stage?
  • Should proponents of scientific evidence go to great lengths to make sure that EBPM is based on a hierarchy of evidence? There is a live debate on science advice to government on the extent to which scientists should be more than ‘honest brokers’.
  • Should policymakers try to direct the use of evidence in policy as well as policy itself?

Where we go from there is up to you

The value of policy theory to this topic is to help us reject simplistic models of EBPM and think through the implications of more sophisticated and complicated processes. It does not provide a blueprint for action (how could it?), but instead a series of questions that you should answer when you seek to use evidence to get what you want. They are political choices based on value judgements, not issues that can be resolved by producing more evidence.


Filed under Evidence Based Policymaking (EBPM), public policy

Evidence Based Policy Making: 5 things you need to know and do

These are some opening remarks for my talk on EBPM at Open Society Foundations (New York), 24th October 2016. The OSF recorded the talk, so you can listen below, externally, or by right clicking and saving. Please note that it was a lunchtime talk, so the background noises are plates and glasses.

‘Evidence based policy making’ is a good political slogan, but not a good description of the policy process. If you expect to see it, you will be disappointed. If you seek more thoughtful ways to understand and act within political systems, you need to understand five key points, then decide how to respond.

  1. Decide what it means.

EBPM looks like a valence issue in which most of us agree that policy and policymaking should be ‘evidence based’ (perhaps like ‘evidence based medicine’). Yet, valence issues only command broad agreement on vague proposals. By defining each term we highlight ambiguity and the need to make political choices to make sense of key terms:

  • Should you use restrictive criteria to determine what counts as ‘evidence’ and scientific evidence?
  • Which metaphor, evidence based or informed, describes how pragmatic you will be?
  • The unclear meaning of ‘policy’ prompts you to consider how far you’d go to pursue EBPM, from a one-off statement of intent by a key actor, to delivery by many actors, to the sense of continuous policymaking requiring us to be always engaged.
  • Policymaking is done by policymakers, but many are unelected and the division between policy maker/ influencer is often unclear. So, should you seek to influence policy by influencing influencers?
  2. Respond to ‘rational’ and ‘irrational’ thought.

‘Comprehensive rationality’ describes the absence of ambiguity and uncertainty when policymakers know what problem they want to solve and how to solve it, partly because they can gather and understand all information required to measure the problem and determine the effectiveness of solutions.

Instead, we talk of ‘bounded rationality’ and how policymakers deal with it. They employ two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, habits, and familiarity to make decisions quickly.

I say ‘irrational’ provocatively, to raise a key strategic question: do you criticise emotional policymaking (describing it as ‘policy based evidence’) and try somehow to minimise it, adapt pragmatically to it, or see ‘fast thinking’ more positively in terms of ‘fast and frugal heuristics’? Regardless, policymakers will think that their heuristics make sense to them, and it can be counterproductive to simply criticise their alleged irrationality.

  3. Think about how to engage in complex systems or policy environments.

‘Policy cycle’ describes the idea that there is a core group of policymakers at the ‘centre’, making policy from the ‘top down’, and pursuing their goals in a series of clearly defined and well-ordered stages, such as: agenda setting, policy formulation, legitimation, implementation, and evaluation. In this context, one might identify how to influence a singular point of central government decision.

However, a cycle model does not describe policymaking well. Instead, we tend to identify the role of less ordered and more unpredictable complex systems, or policy environments containing:

  • A wide range of actors (individuals and organisations) influencing policy at many levels of government. Scientists and practitioners are competing with many actors to present evidence in a particular way to secure a policymaker audience.
  • A proliferation of rules and norms maintained by different levels or types of government. Support for particular ‘evidence based’ solutions varies according to which organisation takes the lead and how it understands the problem.
  • Important relationships (‘networks’) between policymakers and powerful actors. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour particular sources of evidence and some participants over others, and there is a language – indicating what ways of thinking are in good ‘currency’ – that takes time to learn.
  • A tendency for certain ‘core beliefs’ or ‘paradigms’ to dominate discussion. Well-established beliefs provide the context for policymaking: new evidence on the effectiveness of a policy solution has to be accompanied by a shift of attention and successful persuasion.
  • Policy conditions and events that can reinforce stability or prompt policymaker attention to lurch at short notice. In some cases, social or economic ‘crises’ can prompt lurches of attention from one issue to another, and some forms of evidence can be used to encourage that shift, but major policy change is rare.

These factors suggest that an effective engagement strategy is not straightforward: our instinct may be to influence elected policymakers at the ‘centre’ making authoritative choices, but the ‘return on investment’ is not clear. So, you need to decide how and where to engage, but it takes time to know ‘where the action is’ and with whom to form coalitions.

  4. Recognise that EBPM is only one of many legitimate ‘good governance’ principles.

There are several principles of ‘good’ policymaking and only one is EBPM. Others relate to the value of pragmatism and consensus building, combining science advice with public values, improving policy delivery by generating ‘ownership’ of policy among key stakeholders, and sharing responsibility with elected local policymakers.

Our choice of which principle and forms of evidence to privilege are inextricably linked. For example, some forms of evidence gathering seem to require uniform models and limited local or stakeholder discretion to modify policy delivery. The classic example is a programme whose value is established using randomised control trials (RCTs). Others begin with local discretion, seeking evidence from service user and local practitioner experience. This principle seems to rule out the use of RCTs. Of course, one can try to pursue both approaches and a compromise between them, but the outcome may not satisfy advocates of either approach or help produce the evidence that they favour.

  5. Decide how far you’ll go to achieve EBPM.

These insights should prompt us to ask how far we are willing, and ought, to go to promote the use of certain forms of evidence in policymaking. For example, if policymakers and the public are emotional decision-makers, should we seek to manipulate their thought processes by using simple stories with heroes, villains, and clear but rather simplistic morals? If policymaking systems are so complex, should we devote huge amounts of resources to make sure we’re effective? Kathryn Oliver and I also explore the implications for proponents of scientific evidence, and there is a live debate on science advice to government on the extent to which scientists should be more than ‘honest brokers’.

Where we go from there is up to you

The value of policy theory to this topic is to help us reject simplistic models of EBPM and think through the implications of more sophisticated and complicated processes. It does not provide a blueprint for action (how could it?), but instead a series of questions that you should answer when you seek to use evidence to get what you want. They are political choices based on value judgements, not issues that can be resolved by producing more evidence.


Filed under Evidence Based Policymaking (EBPM)

The theory and practice of evidence-based policy transfer: can we learn how to reduce territorial inequalities?

I am now part of a large EU-funded Horizon2020 project called IMAJINE (Integrative Mechanisms for Addressing Spatial Justice and Territorial Inequalities in Europe), which begins in January 2017. It is led by Professor Michael Woods at Aberystwyth University and has a dozen partners across the EU. I’ll be leading one work package in partnership with Professor Michael Keating.


The aim in our ‘work package’ is deceptively simple: generate evidence to identify how EU countries try to reduce territorial inequalities, see who is the most successful, and recommend the transfer of that success to other countries.

Life is not that simple, though, is it?! If it were, we’d know for sure what ‘territorial inequalities’ are, what causes them, what governments are willing to do to reduce them, and if they’ll succeed if they really try.

Instead, here are some of the problems you encounter along the way, including an inability to identify:

  • What policies are designed explicitly to reduce inequalities. Instead, we piece together many intentions, actions, instruments, and outputs, at many levels and types of government, and call it ‘policy’.
  • The link between ‘policy’ and policy outcomes, because many factors interact to produce those outcomes.
  • Success. Even if we could solve the methodological problems, to separate cause and effect, we face a political problem about choosing measures to evaluate and report success.
  • Good ways to transfer successful policies. A policy is not like a #gbbo cake, in which you can produce a great product and give out the recipe. In that scenario, you can assume that we all have the same aims (we all want cake, and of course chocolate is the best), starting point (basically the same shops and kitchens), and language to describe the task (use loads of sugar and cocoa). In policy, governments describe and seek to solve similar-looking problems in very different ways and, if they look elsewhere for lessons, those insights have to be relevant to their context (and the evidence-gathering process has to fit their idea of good governance). They also ‘transfer’ some policies while maintaining their own, and a key finding from our previous work is that governments simultaneously pursue policies to reduce inequalities and undermine their inequality-reducing policies.

So, academics like me tend to spend their time highlighting problems, explaining why such processes are not ‘evidence-based’, and identifying all the things that will go wrong from your perspective if you think policymaking and policy transfer can ever be straightforward.

Yet, policymakers do not have the luxury of identifying problems, finding them interesting, then going home. Instead, they have to make decisions in the face of ambiguity (what problem are they trying to solve?), uncertainty (evidence will help, but will always be limited), and limited time.

So, academics like me are now focused increasingly on trying to help address the problems we raise. On the plus side, it prompts us to speak with policymakers from start to finish, to try to understand what evidence they’re interested in and how they’ll use it. On the less positive side (at least if you are a purist about research), it might prompt all sorts of compromises about how to combine research and policy advice if you want policymakers to use your evidence (on, for example, the line between science and advice, and the blurry boundaries between evidence and advice). If you are interested, please let me know, or follow the IMAJINE category on this site (and #IMAJINE).

See also:

New EU study looks at gap between rich and poor

New research project examines regional inequalities in Europe

Understanding the transfer of policy failure: bricolage, experimentalism and translation by Diane Stone


Filed under Evidence Based Policymaking (EBPM), IMAJINE, public policy

Writing a policy paper and blog post #POLU9UK

It can be quite daunting to produce a policy analysis paper or blog post for the first time. You learn about the constraints of political communication by being obliged to explain your ideas in an unusually small number of words. The short word length seems good at first, but then you realise that it makes your life harder: how can you fit all your evidence and key points in? The answer is that you can’t. You have to choose what to say and what to leave out.

You also have to make this presentation ‘not about you’. In a long essay or research report you have time to show how great you are, to a captive audience. In a policy paper, imagine that you are trying to get the attention and support of someone who may not know or care about the issue you raise. In a blog post, your audience might stop reading at any point, so every sentence counts.

There are many guides out there to help you with the practical side, including the broad guidance I give you in the module guide, and Bardach’s 8-steps. In each case, the basic advice is to (a) identify a policy problem and at least one feasible solution, and (b) tailor the analysis to your audience.


Be concise, be smart

So, for example, I ask you to keep your analysis and presentations super-short on the assumption that you have to make your case quickly to people with 99 other things to do. What can you tell someone in a half-page (to get them to read all 2 pages)? Could you explain and solve a problem if you suddenly bumped into a government minister in a lift/ elevator?

It is tempting to try to tell someone everything you know, because everything is connected and to simplify is to describe a problem simplistically. Instead, be smart enough to know that such self-indulgence won’t impress your audience. They might smile politely, but their eyes are looking at the elevator lights.

Your aim is not to give a full account of a problem – it’s to get someone important to care about it.

Your aim is not to give a painstaking account of all possible solutions – it’s to give a sense that at least one solution is feasible and worth pursuing.

Your guiding statement should be: policymakers will only pay attention to your problem if they think they can solve it, and without that solution being too costly.

Be creative

I don’t like to give you too much advice because I want you to be creative about your presentation; to be confident enough to take chances and feel that I’ll reward you for making the leap. At the very least, you have three key choices to make about how far you’ll go to make a point:

  1. Who is your audience? Our discussion of the limits to centralised policymaking suggests that your most influential audience will not necessarily be a UK government minister – but who else would it be?
  2. How manipulative should you be? Our discussions of ‘bounded rationality’ and ‘evidence-based policymaking’ suggest that policymakers combine ‘rational’ and ‘irrational’ shortcuts to gather information and make choices. So, do you appeal to their desire to set goals and gather a lot of scientific information and/or make an emotional and manipulative appeal?
  3. Are you an advocate or an ‘honest broker’? Contemporary discussions of science advice to government highlight unresolved debates about the role of unelected advisors: should you simply lay out some possible solutions or advocate one solution strongly?

Be reflective

For our purposes, there are no wrong answers to these questions. Instead, I want you to make and defend your decisions. That is the aim of your policy paper ‘reflection’: to ‘show your work’.

You still have some room to be creative: tell me what you know about policy theory and British politics and how it informed your decisions. Here are some examples, but it is up to you to decide what to highlight:

  • Show how your understanding of policymaker psychology helped you decide how to present information on problems and solutions.
  • Extract insights from policy theories, such as from punctuated equilibrium theory on policymaker attention, multiple streams analysis on timing and feasibility, or the NPF on how to tell persuasive stories.
  • Explore the implications of the lack of ‘comprehensive rationality’ and absence of a ‘policy cycle’: feasibility is partly about identifying the extent to which a solution is ‘doable’ when central governments have limited powers. What ‘policy style’ or policy instruments would be appropriate for the solution you favour?

Be a blogger

With a blog post, your audience is wider. You are trying to make an argument that will capture the attention of a more general audience (interested in politics and policy, but not specialist) that might access your post from Twitter/ Facebook or via a search engine. This produces a new requirement: present a ‘punchy’ title which sums up the whole argument in under 140 characters (a statement is often better than a vague question); summarise the whole argument in (say) 100 words in the first paragraph (what is the problem and solution?); and provide more information up to a maximum of 500 words. The reader can then be invited to read the whole policy analysis.

The style of blog posts varies markedly, so you should consult many examples before attempting your own (compare the LSE with The Conversation and newspaper columns to get a sense of variations in style). When you read other posts, take note of their strengths and weaknesses. For example, many posts associated with newspapers introduce a personal or case study element to ground the discussion in an emotional appeal. Sometimes this works, but sometimes it causes the reader to scroll down quickly to find the main argument. Consider if it is as, or more, effective to make your argument more direct and easy to find as soon as someone clicks the link on their phone. Many academic posts are too long (well beyond your 500-word limit), take too long to get to the point, and do not make explicit recommendations, so you should not merely emulate them. You should also not just chop down your policy paper – this is about a new kind of communication.

Be reflective once again

Hopefully, by the end, you will appreciate the transferable life skills. I have generated some uncertainty about your task to reflect the sense among many actors that they don’t really know how to make a persuasive case and who to make it to. We can follow some basic Bardach-style guidance, but a lot of this kind of work relies on trial-and-error. I maintain a short word count to encourage you to get to the point, and I bang on about ‘stories’ in our module to encourage you to make a short and persuasive story to policymakers.

This process seems weird at first, but isn’t it also intuitive? For example, next time you’re in my seminar, measure how long it takes you to get bored and look forward to the weekend. Then imagine that policymakers have the same attention span as you. That’s how long you have to make your case!

See also: Professionalism online with social media

Here is the advice that my former lecturer, Professor Brian Hogwood, gave in 1992. Has the advice changed much since then?

[scanned pages of Brian Hogwood’s 1992 handout]


Filed under Evidence Based Policymaking (EBPM), Folksy wisdom, POLU9UK

Realistic ‘realist’ reviews: why do you need them and what might they look like?

This discussion is based on my impressions so far of realist reviews and the potential for policy studies to play a role in their effectiveness. The objectives section formed one part of a recent team bid for external funding (so, I acknowledge the influence of colleagues on this discussion, but not enough to blame them personally). We didn’t get the funding, but at least I got a lengthy blog post and a dozen hits out of it.

I like the idea of a ‘realistic’ review of evidence to inform policy, alongside a promising uptake in the use of ‘realist review’. The latter doesn’t mean realistic: it refers to a specific method or approach – realist evaluation, realist synthesis.

The agenda of the realist review already takes us along a useful path towards policy relevance, driven partly by the idea that many policy and practice ‘interventions’ are too complex to be subject to meaningful ‘systematic review’.

A systematic review’s aim – which we should be careful not to caricature – may be to identify something as close as possible to a general law: if you do X, the result will generally be Y, and you can be reasonably sure because the studies (such as randomised control trials) meet the ‘gold standard’ of research.

A realist review’s aim is to focus extensively on the context in which interventions take place: if you do X, the result will be Y under these conditions. So, for example, you identify the outcome that you want, the mechanism that causes it, and the context in which the mechanism causes the outcome. Maybe you’ll even include a few more studies, not meeting the ‘gold standard’, if they meet other criteria of high quality research (I declare that I am a qualitative researcher, so you can tell who I’m rooting for).

Realist reviews come increasingly with guide books and discussions on how to do them systematically. However, my impression is that when people do them, they find that there is an art to applying discretion to identify what exactly is going on. It is often difficult to identify or describe the mechanism fully (often because source reports are not clear on that point), say for sure it caused the outcome even in particular circumstances, and separate the mechanism from the context.

I italicised the last point because it is super-important. I think that it is often difficult to separate mechanism from context because (a) the context is often associated with a particular country’s political system and governing arrangements, and (b) it might be better to treat governing context as another mechanism in a notional chain of causality.

In other words, my impression is that realist reviews focus on the mechanism at the point of delivery; the last link in the chain in which the delivery of an intervention causes an outcome. It may be wise to also identify the governance mechanism that causes the final mechanism to work.

Why would you complicate an already complicated review?

I aim to complicate things then simplify them heroically at the end.

Here are five objectives that I maybe think we should pursue in an evidence review for policymakers (I can’t say for sure until we all agree on the principles of science advice):

  1. Focus on ways to turn evidence into feasible political action, identifying a clear set of policy conditions and mechanisms necessary to produce intended outcomes.
  2. Produce a manageable number of simple lessons and heuristics for policymakers, practitioners, and communities.
  3. Review a wider range of evidence sources than in traditional systematic reviews, to recognise the potential trade-offs between measures of high quality and high impact evidence.
  4. Identify a complex policymaking environment in which there is a need to connect the disparate evidence on each part of the ‘causal chain’.
  5. Recognise the need to understand individual countries and their political systems in depth, to know how the same evidence will be interpreted and used very differently by actors in different contexts.

Objective 1: evidence into action by addressing the politics of evidence-based policymaking

There is no shortage of scientific evidence of policy problems. Yet, we lack a way to use evidence to produce politically feasible action. The ‘politics of evidence-based policymaking’ produces scientists frustrated with the gap between their evidence and a proportionate policy response, and politicians frustrated that evidence is not available in a usable form when they pay attention to a problem and need to solve it quickly. The most common responses in key fields, such as environmental and health studies, do not solve this problem. The literature on ‘barriers’ between evidence and policy recommends initiatives such as: clearer scientific messages, knowledge brokerage and academic-practitioner workshops, timely engagement in politics, scientific training for politicians, and participation to combine evidence and community engagement.

This literature makes limited reference to policy theory and has two limitations. First, studies focus on reducing empirical uncertainty, not ‘framing’ issues to reduce ambiguity. Too many scientific publications go unread in the absence of a process of persuasion to influence policymaker demand for that information (particularly when more politically relevant and paywall-free evidence is available elsewhere). Second, few studies appreciate the multi-level nature of political systems or understand the strategies actors use to influence policy. This involves experience and cultural awareness to help learn: where key decisions are made, including in networks between policymakers and influential actors; the ‘rules of the game’ of networks; how to form coalitions with key actors; and, that these processes unfold over years or decades.

The solution is to produce knowledge that will be used by policymakers, community leaders, and ‘street level’ actors. It requires a (23%) shift in focus from the quality of scientific evidence to (a) who is involved in policymaking and the extent to which there is a ‘delivery chain’ from national to local, and (b) how actors demand, interpret, and use evidence to make decisions. For example, simple qualitative stories with a clear moral may be more effective than highly sophisticated decision-making models or quantitative evidence presented without enough translation.

Objective 2: produce simple lessons and heuristics

We know that the world is too complex to fully comprehend, yet people need to act despite uncertainty. They rely on ‘rational’ methods to gather evidence from sources they trust, and ‘irrational’ means to draw on gut feeling, emotion, and beliefs as short cuts to action (or system 2 and system 1 thinking, respectively). Scientific evidence can help reduce some uncertainty, but it cannot tell people how to behave. Scientific information strategies can be ineffective when they expect audiences to appreciate the detail and scale of evidence, understand the methods used to gather it, and possess the skills to interpret and act on it. The unintended consequence is that key actors fall back on familiar heuristics and pay minimal attention to inaccessible scientific information. The solution is to tailor evidence reviews to audiences: examining their practices and ways of thinking; identifying the heuristics they use; and, describing simple lessons and new heuristics and practices.

Objective 3: produce a pragmatic review of the evidence

To review a wider range of evidence sources than in traditional systematic reviews is to recognise the trade-offs between measures of high quality (based on a hierarchy of methods and journal quality) and high impact (based on familiarity and availability). If scientists reject and refuse to analyse evidence that policymakers routinely take more seriously (such as the ‘grey’ literature), they have little influence on key parts of policy analysis. Instead, provide a framework that recognises complexity but produces research that is manageable at scale and translatable into key messages:

  • Context. Identify the role of factors described routinely by policy theories as the key parts of policy environments: the actors involved in multiple policymaking venues at many levels of government; the role of informal and formal rules of each venue; networks between policymakers and influential actors; socio-economic conditions; and, the ‘paradigms’ or ways of thinking that underpin the consideration of policy problems and solutions.
  • Mechanisms. Focus on the connection between three mechanisms: the cause of outcomes at the point of policy delivery (intervention); the cause of ‘community’ or individual ‘ownership’ of effective interventions; and, the governance arrangements that support high levels of community ownership and the effective delivery of the most effective interventions. These connections are not linear. For example, community ownership and effective interventions may develop more usefully from the ‘bottom up’, scientists may convince national but not local policymakers of the value of interventions (or vice versa), or political support for long term strategies may only be temporary or conditional on short term measures of success.
  • Outcomes. Identify key indicators of good policy outcomes in partnership with the people you need to make policy work. Work with those audiences to identify a small number of specific positive outcomes, and synthesise the best available evidence to explain which mechanisms produce those outcomes under the conditions associated with your region of study.

This narrow focus is crucial to the development of a research question, limiting analysis to the most relevant studies to produce a rigorous review in a challenging timeframe. Then, the idea from realist reviews is that you ‘test’ your hypotheses and clarify the theories that underpin this analysis. This should involve a test for political as well as technical feasibility: speak regularly with key actors to gauge the likelihood that the mechanisms you recommend will be acted upon, the extent to which the context of policy delivery is stable and predictable, and whether the mechanisms will work consistently under those conditions.

Objective 4: identify key links in the ‘causal chain’ via interdisciplinary study

We all talk about combining perspectives from multiple disciplines but I totally mean it, especially if it boosts the role of political scientists who can’t predict elections. For example, health or environmental scientists can identify the most effective interventions to produce good health or environmental outcomes, but not how to work with and influence key people. Policy scholars can identify how the policy process works and how to maximise the use of scientific evidence within it. Social science scholars can identify mechanisms to encourage community participation and the ownership of policies. Anthropologists can provide insights on the particular cultural practices and beliefs underpinning the ways in which people understand and act according to scientific evidence.

Perhaps more importantly, interdisciplinarity provides political cover: we got the best minds in many disciplines and locked them in a room until they produced an answer.

We need this cover for something I’ll call ‘informed extrapolation’ and justify with reference to pragmatism: if we do not provide well-informed analyses of the links between each mechanism, other less-informed actors will fill the gap without appreciating key aspects of causality. For example, if we identify a mechanism for the delivery of successful interventions – e.g. high levels of understanding and implementation of key procedures – there is still uncertainty: do these mechanisms develop organically through ‘bottom up’ collaboration or can they be introduced quickly from the ‘top’ to address an urgent issue? A simple heuristic for central governments could be to introduce training immediately or to resist the temptation for a quick fix.

Relatively-informed analysis, to recommend one of those choices, may only be used if we can back it up with interdisciplinary weight and produce recommendations that are unequivocal (although, again, other approaches are available).

Objective 5: focus intensively on one region, and one key issue, not ‘one size fits all’

We need to understand individual countries or regions – their political systems, communities, and cultural practices – and specific issues in depth, to know how abstract mechanisms work in concrete contexts, and how the same evidence will be interpreted and used differently by actors in those contexts. We need to avoid politically insensitive approaches based on the assumption that a policy that works in countries like (say) the UK will work in countries that are not (say) the UK, and/ or that actors in each country will understand policy problems in the same way.

But why?

It all looks incredibly complicated, doesn’t it? There’s no time to do all that, is there? It will end up as a bit of a too-rushed jumble of high-and-low quality evidence and advice, won’t it?

My argument is that these problems are actually virtues because they provide more insight into how busy policymakers will gather and use evidence. Most policymakers will not know how to do a systematic review or understand why you are so attached to them. Maybe you’ll impress them enough to get them to trust your evidence, but have you put yourself into a position to know what they’ll do with it? Have you thought about the connection between the evidence you’ve gathered, what people need to do, who needs to do it, and who you need to speak to about getting them to do it? Maybe you don’t have to, if you want to be no more than a ‘neutral scientist’ or ‘honest broker’ – but you do if you want to give science advice to policymakers that policymakers can use.

 


Filed under Evidence Based Policymaking (EBPM), public policy

Week 2. Two stories of British politics: the Westminster model versus Complex Government #POLU9UK

I want you to think about the simple presentation of complex thought.

  • How do we turn a world which seems infinitely complex into an explanation which describes that world in a few minutes or seconds?
  • How do we choose the information on which to focus, at the expense of all other information, and generate support for that choice?
  • How do we persuade other people to act on that information?

To that end, this week we focus on two stories of politics, and next month you can use these questions to underpin your coursework.

Imagine the study of British politics as the telling of policymaking stories.

We can’t understand or explain everything about politics. Instead, we turn a complex world into a set of simple stories in which we identify, for example, the key actors, events and outcomes. Maybe we’ll stick to dry description, or maybe we’ll identify excitement, heroes, villains, and a moral. Then, we can compare these tales, to see if they add up to a comprehensive account of politics, or if they give us contradictory stories and force us to choose between them.

As scholars, we tell these stories to help explain what is happening, and do research to help us decide which story seems most convincing. However, we also study policymakers who use such stories to justify their action, or the commentators using them to criticise the ineffectiveness of those policymakers. So, one intriguing and potentially confusing prospect is that we can tell stories about policymakers (or their critics) who tell misleading stories!

Remember King Canute (Cnut)


If you’re still with me, have a quick look at Hay’s King Canute article (or my summary of it). Yes, that’s right: he got a whole article out of King Canute. I couldn’t believe it either. I was gobsmacked when I realised how good it was too. For our purposes, it highlights three things:

  1. We’ll use the same shorthand terms – ‘Westminster model’, ‘complex government’ – but let’s check if we tell the same stories in the same way.
  2. Let’s check if we pick the same moral. For example, if ministers don’t get what they want, is it because of bad policymaking or factors outside of their control? Further, are we making empirical evaluations and/or moral judgements?
  3. Let’s identify how policymakers tell that story, and what impact the telling has on the outcome. For example, does it help get them re-elected? Does the need or desire to present policymaking help or hinder actual policymaking? Is ‘heresthetic’ a real word?

The two stories

This week, we’ll initially compare two stories about British politics: the Westminster Model and Complex Government. I present them largely as contrasting accounts of politics and policymaking, but only to keep things simple at first.

One is about central control in the hands of a small number of ministers. It contains some or all of these elements, depending on who is doing the telling:

  1. Key parts of the Westminster political system help concentrate power in the executive. Representative democracy is the basis for most participation and accountability. The UK is a unitary state built on parliamentary sovereignty and a fusion of executive and legislature, not a delegation or division of powers. The plurality electoral system exaggerates single party majorities, the whip helps maintain party control of Parliament, the government holds the whip, and the Prime Minister controls membership of the government.
  2. So, you get centralised government and you know who is in charge and therefore to blame.

Another is about the profound limits to the WM:

  1. No-one seems to be in control. The huge size and reach of government, the potential for ministerial ‘overload’ and need to simplify decision-making, the blurry boundaries between the actors who make and influence policy, the multi-level nature of policymaking, and, the proliferation of rules and regulations, many of which may undermine each other, all contribute to this perception.
  2. If elected policymakers can’t govern from the centre, you don’t get top-down government.

What is the moral of these stories?

For us, a moral relates to (a) how the world works or should work, (b) what happens when it doesn’t work in the way we expect, (c) who is to blame for that, and/ or (d) what we should do about it.

For example, what if we start with the WM as a good thing: you get strong, decisive, and responsible government and you know who is in charge and therefore to blame. If it doesn’t quite work out like that, we might jump straight to pragmatism: if elected policymakers can’t govern from the centre, you don’t get strong and decisive government, it makes little sense to blame elected policymakers for things outside of their control, and so we need more realistic forms of accountability (including institutional, local, and service-user).

Who would buy that story though? We need someone to blame!

Yet, things get complicated when you try to identify a moral built on who to blame for it:

There is a ‘universal’ part of the story, and it is difficult to hold a grudge against the universe. In other words, think of the aspects of policymaking that seem to relate to limitations such as ‘bounded rationality’. Ministers can only pay attention to a fraction of the things for which they are formally in charge. So, they pay disproportionate attention to a small number of issues and ignore the rest. They delegate responsibility for those tasks to civil servants, who consult with stakeholders to produce policy. Consequently, there is a blurry boundary between formal responsibility and informal influence, often summed up by the term governance rather than government. A huge number of actors are involved in the policy process and it is difficult to separate their effects. Instead, think of policy outcomes as the product of collective action, only some of which is coordinated by central government. Or, policy outcomes seem to ‘emerge’ from local practices and rules, often despite central government attempts to control them.

There is a UK-specific part of the story, but it’s difficult to blame policymakers who are no longer in government. UK Governments have exacerbated the ‘governance problem’, or the gap between an appearance of central control and what central governments can actually do. A collection of administrative reforms from the 1980s, many of which were perhaps designed to reassert central government power, has reinforced a fragmented public landscape and a periodic sense that no one is in control. Examples include privatisation, civil service reforms, and the use of quangos and non-governmental organisations to deliver policies. Further, a collection of constitutional reforms has shifted power up to the EU and down to devolved and regional or local authorities.

How do policymakers (and their critics) tell these stories, how should they tell them, and what is the effect in each case?

Let’s see how many different stories we can come up with, perhaps with reference to specific examples. Their basic characteristics might include:

  • Referring primarily to the WM, to blame elected governments for not fulfilling their promises or for being ineffectual. If they are in charge, and they don’t follow through, it’s their fault linked to poor judgement.
  • Referring to elements of both stories, but still blaming ministers. Yes, there are limits to central control but it’s up to ministers to overcome them.
  • Referring to elements of both stories, and blaming other people. Ministers gave you this task, so why didn’t you deliver?
  • Referring to CG, and blaming more people. Yes, there are many actors, but why the hell can’t they get together to fix this?
  • Referring to CG and wondering if it makes sense to blame anyone in particular. It’s the whole damn system! Government is a mystery wrapped in a riddle inside an enigma.

In broader terms, let’s discuss what happens when our two initial stories collide: when policymakers need to find a way to balance a pragmatic approach to complexity and the need to describe their activities in a way that the public can understand and support.

For example, do they try to take less responsibility for policy outcomes, to reflect their limited role in complex government, and/ or try to reassert central control, on the assumption that they may as well be more influential if they will be held responsible?

The answer, I think, is that they try out lots of solutions at the same time:

  • They try to deliver as many manifesto promises as possible, and the manifesto remains a key reference point for ministers and civil servants.
  • They often deal with ‘bounded rationality’ by making quick emotional and moral choices about ‘target populations’ before thinking through the consequences.
  • In cases of ‘low politics’, they might rely on policy communities and/ or seek to delegate responsibility to other public bodies.
  • In cases of ‘high politics’, they need to present an image of governing competence based on central control, so they intervene regularly.
  • Sometimes low politics becomes high politics, and vice versa, so they intervene on an ad hoc basis before ignoring important issues for long periods.
  • They try to delegate and centralise simultaneously, for example via performance management based on metrics and targets.

We might also talk, yet again, about Brexit. If Brexit is in part a response to these problems of diminished control, what stories can we identify about how ministers plan to take it back? What, for example, are the Three Musketeers saying these days? And how much control can they take back, given that the EU is one small part of our discussion?

Illustrative example: (1) troubled families

I can tell you a quick story about ‘troubled families’ policy, because I think it sums up neatly the UK Government’s attempt to look in control of a process over which it has limited influence:

  • It provides a simple story with a moral about who was to blame for the riots in England in 2011: bad parents and their unruly children (and perhaps the public sector professionals being too soft on them).
  • It sets out an immediate response from the centre: identify the families, pump in the money, turn their lives around.
  • But, if you look below the surface, you see the lack of control: it’s not that easy to identify ‘troubled families’, the government relies on many local public bodies to get anywhere, and few lives are actually being ‘turned around’.
  • We can see a double whammy of ‘wicked problems’: the policy problem often seems impervious to government action, and there is a lack of central control of that action.
  • So, governments focus on how they present their action, to look in control even when they recognise their limits.

Illustrative example: (2) prevention and early intervention

If you are still interested by this stage, look at this issue in its broader context, of the desire of governments to intervene early in the lives of (say) families to prevent bad things happening. With Emily St Denny, I ask why governments seem to make a sincere commitment to this task but fall far short of their expectations. The key passage is here:

“Our simple answer is that, when they make a sincere commitment to prevention, they do not know what it means or appreciate the scale of their task. They soon find a set of policymaking constraints that will always be present. When they ‘operationalise’ prevention, they face several fundamental problems, including: the identification of ‘wicked’ problems (Rittel and Webber, 1973) which are difficult to define and seem impossible to solve; inescapable choices on how far they should go to redistribute income, distribute public resources, and intervene in people’s lives; major competition from more salient policy aims which prompt them to maintain existing public services; and, a democratic system which limits their ability to reform the ways in which they make policy. These problems may never be overcome. More importantly, policymakers soon think that their task is impossible. Therefore, there is high potential for an initial period of enthusiasm and activity to be replaced by disenchantment and inactivity, and for this cycle to be repeated without resolution”.

Group exercise.

Here is what I’ll ask you to do this week:

  • Describe the WM and CG stories in some depth in your groups, then we’ll compare your accounts.
  • Think of historical and contemporary examples of decision-making which seem to reinforce one story or the other, to help us decide which story seems most convincing in each case.
  • Try to describe the heroes/ villains in these stories, or their moral. For example, if the WM doesn’t explain the examples you describe, what should policymakers do about it? Will we only respect them if they refuse to give up, like Forrest Gump or the ‘never give up, never surrender’ guy in Galaxy Quest? Or, if we would like to see pragmatic politicians, how would we sell their behaviour as equally heroic?


Filed under POLU9UK, public policy, UK politics and policy

British politics, Brexit and UK sovereignty: what does it all mean? #POLU9UK

This is the first of 10 blog posts for the course POLU9UK: Policy and Policymaking in the UK. They will be a fair bit longer than the blog posts I asked you to write. I have also recorded a short lecture to go with it (OK, 22 minutes isn’t short).

In week 1 we’ll identify all that we think we knew about British politics, compare notes, then throw up our hands and declare that the Brexit vote has changed what we thought we knew.

I want to focus on the idea that a vote for the UK to leave the European Union was a vote for UK sovereignty. People voted Leave/ Remain for all sorts of reasons, and bandied around all sorts of ways to justify their position, but the idea of sovereignty and ‘taking back control’ is central to the Leave argument and this module.

For our purposes, it relates to broader ideas about the images we maintain about who makes key decisions in British politics, summed up by the phrases ‘parliamentary sovereignty’ and the ‘Westminster model’, and challenged by terms such as ‘bounded rationality’, ‘policy communities’, ‘multi-level governance’, and ‘complex government’.

Parliamentary Sovereignty

UK sovereignty relates strongly to the idea of parliamentary sovereignty: we vote in constituencies to elect MPs as our representatives, and MPs as a whole represent the final arbiters on policy in the UK. In practice, one party tends to dominate Parliament, and the elected government tends to dominate that party, but the principle remains important.

So, ‘taking back control’ is about responding, finally, to the sense that (a) the UK’s entry to the European Communities in 1973 (it signed the accession treaty in 1972) involved giving up far more sovereignty than most people expected, and (b) the European Union’s role has strengthened ever since, at the further expense of parliamentary sovereignty.

The Westminster Model

This idea of parliamentary sovereignty connects strongly to elements of the ‘Westminster model’ (WM), a shorthand phrase to describe key ways in which the UK political system is designed to work.

Our main task is to examine how well the WM: (a) describes what actually happens in British politics, and (b) represents what should happen in British politics. We can separate these two elements analytically but they influence each other in practice. For example, I ask what happens when elected policymakers know their limits but have to pretend that they don’t.

What should happen in British politics?

Perhaps policymaking should reflect strongly the wishes of the public. In representative democracies, political parties engage each other in a battle of ideas, to attract the attention and support of the voting public; the public votes every 4-5 years; the winner forms a government; the government turns its manifesto into policy; and, policy choices are carried out by civil servants and other bodies. In other words, there should be a clear link between public preferences, the strategies and ideas of parties and the final result.

The WM serves this purpose in a particular way: the UK has a plurality (‘first past the post’) voting system which tends to exaggerate support for, and give a majority in Parliament to, the winning party. It has an adversarial (and majoritarian?) style of politics and a ‘winner takes all’ mentality which tends to exclude opposition parties. The executive resides in the legislature and power tends to be concentrated within government – in ministers that head government departments and the Prime Minister who heads (and determines the members of) Cabinet. The government is responsible for the vast majority of public policy and it uses its governing majority, combined with a strong party ‘whip’, to make sure that its legislation is passed by Parliament.

In other words, the WM narrative suggests that the UK policy process is centralised and that the arrangement reflects a ‘British political tradition’: the government is accountable to the public on the assumption that it is powerful and responsible. So, you know who is in charge and therefore who to praise or blame, and elections every 4-5 years are supplemented by parliamentary scrutiny built on holding ministers directly to account.

Pause for further reading: at this point, consider how this WM story links to a wider discussion of centralised policymaking (in particular, read the 1000 Words post on the policy cycle).

What actually happens?

One way into this discussion is to explore modern discussions of disenchantment with distant political elites who seem to operate in a bubble and not demonstrate their accountability to the public. For example, there is a literature on the extent to which MPs are likely to share the same backgrounds: white, male, middle class, and educated in private schools and Oxford or Cambridge. Or, the idea of a ‘Westminster bubble’ and distant ‘political class’ comes up in discussions of constitutional change (including the Scottish referendum debate), and was exacerbated during the expenses scandal in 2009.

Another is to focus on the factors that undermine this WM image of central control: maybe Westminster political elites are remote, but they don’t control policy outcomes. Instead, there are many factors which challenge the ability of elected policymakers to control the policy process. We will focus on these challenges throughout the course:

Challenge 1. Bounded rationality

Ministers only have the ability to pay attention to a tiny proportion of the issues over which they have formal responsibility. So, how can they control issues if they have to ignore them? Much of the ‘1000 Words’ series explores the general implications of bounded rationality.

Challenge 2. Policy communities

Ministers don’t quite ignore issues; they delegate responsibility to civil servants at a quite-low level of government. Civil servants make policy in consultation with interest groups and other participants with the ability to trade resources (such as information) for access or influence. Such relationships can endure long after particular ministers or elected governments have come and gone.

In fact, this argument developed partly in response to discussions in the 1970s about the potential for plurality elections to cause huge swings in party success, and therefore frequent changes of government and reversals of government policy. Rather, scholars such as Jordan and Richardson identified policy continuity despite changes of government (although see Richardson’s later work).

Challenge 3. Multi-level governance

‘Multi-level’ refers to a tendency for the UK government to share policymaking responsibility with international, EU, devolved, and local governments.

‘Governance’ extends the logic of policy communities to identify a tendency to delegate or share responsibility with non-governmental and quasi-non-governmental organisations (quangos).

So, MLG can describe a clear separation of powers at many levels and a fairly coherent set of responsibilities in each case. Or, it can describe a ‘patchwork quilt’ of relationships which is difficult to track and understand. In either case, we identify ‘polycentricity’ or the presence of more than one ‘centre’ in British politics.

Challenge 4. Complex government

The phrase ‘complex government’ can be used to describe the complicated world of public policy, with elements including:

  • the huge size and reach of government – most aspects of our lives are regulated by the state
  • the potential for ministerial ‘overload’ and the need to simplify decision-making
  • the blurry boundaries between the actors who make policy and those who seek to influence and/ or implement it (public policy results from their relationships and interactions)
  • the multi-level nature of policymaking
  • the complicated network of interactions between policy actors and many different ‘institutions’
  • the complexity of the statute book and the proliferation of rules and regulations, many of which may undermine each other.

Overall, these factors generate a sense of complex government that challenges the Westminster-style notion of accountability. How can we hold elected ministers to account if:

  1. they seem to have no hope of paying attention to much of complex government, far less control it
  2. there is so much interaction with unpredictable effects
  3. we don’t understand enough about how this process works to know if ministers are acting effectively?

Challenge 5. The policy environment and unpredictable events

Further, such governments operate within a wider environment in which conditions and events are often out of policymakers’ control. For example, how do they deal with demographic change or global economic crisis? Policymakers have some choice about the issues to which they pay attention, and the ways in which they understand and address them. However, they do not control that agenda or policy outcomes in the way we associate with the WM image of central control.

How has the UK government addressed these challenges?

We can discuss two key themes throughout the course:

  1. UK central governments have to balance two stories of British politics. One is the need to be pragmatic in the face of these five challenges to their power and sense of control. Another is the need to construct an image of governing competence, and most governments do so by portraying an image of power and central control!
  2. This dynamic contributes to state reform. There has been a massive build-up and partial knock-down of the ‘welfare state’ in the post-war period (please have a think about the key elements). This process links strongly to that idea of pragmatism versus central control: governments often reform the state to (a) deliver key policy outcomes (the development of the welfare state and aims such as full employment), or (b) reinvigorate central control (for example, to produce a ‘lean state’ or ‘hollowing state’).

What does this discussion tell us about our initial discussion of Brexit?

None of these factors helps us downplay the influence of the EU on the UK. Rather, they prompt us to think harder about the meaning, in practice, of parliamentary sovereignty and the Westminster model which underpins ongoing debates about the UK–EU relationship. In short, we can explore the extent to which a return to ‘parliamentary sovereignty’ describes little more than a principle, not evidence of practice. Such principles are important, but let’s also focus on what actually happens in British politics.

 


We need better descriptions than ‘evidence-based policy’ and ‘policy-based evidence’: the case of UK government ‘troubled families’ policy

Here is the dilemma for ‘evidence-based’ ‘troubled families’ policy: there are many indicators of ‘policy based evidence’ but few (if any) feasible and ‘evidence based’ alternatives.

Viewed from the outside, TF looks like a cynical attempt to produce a quick fix to the London riots, stigmatise vulnerable populations, and hoodwink the public into thinking that the central government is controlling local outcomes and generating success.

Viewed from the inside, it is a pragmatic policy solution, informed by promising evidence which needs to be sold in the right way. For the UK government there may seem to be little alternative to this policy, given the available evidence, the need to do something for the long term and to account for itself in a Westminster system in the short term.

So, in this draft paper, I outline this disconnect between interpretations of ‘evidence based policy’ and ‘policy based evidence’ to help provide some clarity on the pragmatic use of evidence in politics:

cairney-offshoot-troubled-families-ebpm-5-9-16

See also:

Governments think it’s OK to use bad evidence to make good policy: the case of the UK Government’s ‘troubled families’

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

In each of these posts, I note that it is difficult to know how, for example, social policy scholars should respond to these issues – but that policy studies help us identify a choice between strategies. In general, pragmatic strategies to influence the use of evidence in policy include: framing issues to catch the attention or manipulate policymaker biases, identifying where the ‘action’ is in multi-level policymaking systems, and forming coalitions with like-minded and well-connected actors. In other words, to influence rather than just comment on policy, we need to understand how policymakers would respond to external evaluation. So, a greater understanding of the routine motives of policymakers can help produce more effective criticism of their problematic use of evidence. In social policy, there is an acute dilemma about the choice between engagement, to influence and be influenced by policymakers, and detachment to ensure critical distance. If choosing the latter, we need to think harder about how criticism of PBE makes a difference.


Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

Imagine this as your ‘early intervention’ policy choice: (a) a universal and non-stigmatising programme for all parents/ children, with minimal evidence of effectiveness, high cost, and potential public opposition about the state intervening in family life; or (b) a targeted, stigmatising programme for a small number, with more evidence, less cost, but the sense that you are not really intervening ‘early’ (instead, you are waiting for problems to arise before you intervene). What would you do, and how would you sell your choice to the public?

I ask this question because ‘early intervention’ seems to be the classic valence issue with a twist. Most people seem to want it in the abstract: isn’t it best to intervene as early as possible in a child’s life to protect them or improve their life chances?

However, profound problems or controversies arise when governments try to pursue it. There are many more choices than I presented, but the same basic trade-offs arise in each case. So, at the start, it looks like you have lucked onto a policy that almost everyone loves. At the end, you realise that you can’t win. There is no such thing as a valence issue at the point of policy choice and delivery.

To expand on these dilemmas in more depth, I compare cases of Scottish and UK Government ‘families policies’. In previous posts, I portrayed their differences – at least in the field of prevention and early intervention policies – as more difficult to pin down than you might think. Often, they either say the same things but ‘operationalise’ them in very different ways, or describe very different problems then select very similar solutions.

This basic description sums up very similar waves of key ‘families policies’ since devolution: an initial focus on social inclusion, then anti-social behaviour, followed by a contemporary focus on ‘whole family’ approaches and early intervention. I will show how they often go their own ways, but note the same basic context for choice, and similar choices, which help qualify that picture.

Early intervention & prevention policies are valence issues …

A valence (or ‘motherhood and apple pie’) issue is one in which you can generate huge support because the aim seems, to most people, to be obviously good. Broad aims include ‘freedom’ and ‘democracy’. In the UK, specific aims include a national health service free at the point of use. We often focus on valence issues to highlight the importance of a political party’s or leader’s image of governing competence: it is not so much what we want (when the main parties support very similar things), but whom we trust to deliver it.

Early intervention seems to fit the bill: who would want to intervene late, or too late, in someone’s life when you can intervene early, to boost their life chances at as early a stage as possible? All we have to do is work out how to do it well, with reference to some good evidence. Yet, as I discuss below, things get complicated as soon as we consider the types of early intervention available, generally described as a spectrum from primary (stop a problem occurring, and focus on the whole population – like a virus inoculation), through secondary (address a problem at an early stage, using proxy indicators to identify high-risk groups), to tertiary (stop a problem getting worse in already-affected groups).

Similarly, look at how Emily St Denny and I describe prevention policy. Would many people object to the basic principles?

“In the name of prevention, the UK and Scottish Governments propose to radically change policy and policymaking across the whole of government. Their deceptively simple definition of ‘prevention policy’ is: a major shift in resources, from the delivery of reactive public services to solve acute problems, to the prevention of those problems before they occur. The results they promise are transformative, to address three crises in politics simultaneously: a major reduction in socioeconomic equalities by focusing on their ‘root causes’; a solution to unsustainable public spending which is pushing public services to breaking point; and, new forms of localised policymaking, built on community and service user engagement, to restore trust in politics”.

… but the evidence on their effectiveness is inconvenient …

A good simple rule about ‘evidence-based policymaking’ is that there is never a ‘magic bullet’ to tell you what to do or take the place of judgement. Politics is about making choices which benefit some people while others lose out. You can use evidence to help clarify those choices, but not produce a ‘technical’ solution. A further rule with ‘wicked’ problems is that the evidence is not good enough even to generate clarity about the cause of the problem. Or, you simply find out things you don’t want to hear.

Early intervention seems to be a good candidate for the latter, for three main reasons:

  1. Very few interventions live up to high evidence standards

There are two main types of relevant ‘evidence based’ interventions in this field. The first are ‘family intervention projects’ (FIPs). They generally focus on low income, often lone parent, families at risk of eviction linked to factors such as antisocial behaviour, and provide two forms of intervention: intensive 24/7 support, including after school clubs for children and parenting skills classes, and treatment for addiction or depression in some cases, in dedicated core accommodation with strict rules on access and behaviour; and an outreach model of support and training. The evidence of success comes from evaluation and a counterfactual: this intervention is expensive, but we think that it would have cost far more money and heartache if we had not intervened to prevent (for example) family homelessness. There is generally no randomised control trial (RCT) to establish the cause of improved outcomes, or demonstrate that those outcomes would not have happened without an intervention of this sort.

The second are projects imported from other countries (primarily the US and Australia) based on their reputation for success. This reputation has been generated according to evidential rules associated with ‘evidence based medicine’ (EBM), in which there is relatively strong adherence to a hierarchy of evidence, with RCTs and their systematic review at the top, and the belief that there should be ‘fidelity’ to programmes to make sure that the ‘dosage’ of the intervention is delivered properly and its effect measured. Key examples include the Family Nurse Partnership (although its first UK RCT evaluation was not promising), Triple P (although James Coyne has his doubts!), and Incredible Years (but note the importance of ‘indicated’ versus ‘selective’ programmes, below). In this approach, there may be more quantitative evidence of success, but it is still difficult to know if the project can be transferred effectively and if its success can be replicated in another country with very different political drivers, problems, and levels of existing services. We know that some interventions are associated with positive outcomes, but we struggle to establish definitively that they caused them (solely, separate from their context).

  2. The evidence on ‘scaling up’ for primary prevention is relatively weak

Kenneth Dodge (2009) sums up a general problem with primary prevention in this field. It is difficult to see much evidence of success because: there are few examples of taking effective specialist projects ‘to scale’; there are major issues around ‘fidelity’ to the original project when you scale up (including the need to oversee a major expansion in well-trained practitioners); and, it is difficult to predict the effect of a programme, which showed promise when applied to one population, to a new and different population.

  3. The evidence on secondary early intervention is also weak

This point about different populations with different motivations is demonstrated in a more recent (published 2014) study by Stephen Scott et al of two Incredible Years interventions – to address ‘oppositional defiant disorder symptoms and antisocial personality character traits’ in children aged 3-7 (for a wider discussion of such programmes see the Early Intervention Foundation’s Foundations for life: what works to support parent child interaction in the early years?).

They highlight a classic dilemma in early intervention: the evidence of effectiveness is only clear when children have been clinically referred (‘indicated approach’), but unclear when children have been identified as high risk using socioeconomic predictors (‘selective approach’):

An indicated approach is simpler to administer, as there are fewer children with severe problems, they are easier to identify, and their parents are usually prepared to engage in treatment; however, the problems may already be too entrenched to treat. In contrast, a selective approach targets milder cases, but because problems are less established, whole populations have to be screened and fewer cases will go on to develop serious problems.

For our purposes, this may represent the most inconvenient form of evidence on early intervention: you can intervene early on the back of very limited evidence of likely success, or have a far higher likelihood of success when you intervene later, when you are running out of time to call it ‘early intervention’.

… so governments have to make and defend highly ‘political’ choices …

I think this is key context in which we can try to understand the often-different choices by the UK and Scottish Governments. Faced with the same broad aim, to intervene early to prevent poor outcomes, the same uncertainty and lack of evidence that their interventions will produce the desired effect, and the same need to DO SOMETHING rather than wait for the evidence that may never arise, what do they do?

Both governments often did remarkably similar things before they did different things

From the late 1990s, both governments placed primary emphasis initially on a positive social inclusion agenda, followed by a relatively negative focus on anti-social behaviour (ASB), before a renewed focus on the social determinants of inequalities and the use of early intervention to prevent poor outcomes.

Both governments link families policies strongly to parenting skills, reinforcing the idea that parents are primarily responsible for the life chances of their children.

Both governments talk about getting away from deficit models of intervention (the Scottish Government in particular focuses on the ‘assets’ of individuals, families, and communities) but use deficit-model proxies to identify families in need of support, including: lone parenthood, debt problems, ill health (including disability and depression), and at least one member subject to domestic abuse or intergenerational violence, as well as professional judgements on the ‘chaotic’ or ‘dysfunctional’ nature of family life and of the likelihood of ‘family breakdown’ when, for example, a child is taken into care.

So, when we consider their headline-grabbing differences, note this common set of problems and drivers, and similar responses.

… and selling their early intervention choices is remarkably difficult …

Although our starting point was valence politics, prevention and early intervention policies are incredibly hard to get off the ground. As Emily St Denny and I describe elsewhere, when policymakers ‘make a sincere commitment to prevention, they do not know what it means or appreciate the scale of their task. They soon find a set of policymaking constraints that will always be present. When they ‘operationalise’ prevention, they face several fundamental problems, including: the identification of ‘wicked’ problems which are difficult to define and seem impossible to solve; inescapable choices on how far they should go to redistribute income, distribute public resources, and intervene in people’s lives; major competition from more salient policy aims which prompt them to maintain existing public services; and, a democratic system which limits their ability to reform the ways in which they make policy. These problems may never be overcome. More importantly, policymakers soon think that their task is impossible. Therefore, there is high potential for an initial period of enthusiasm and activity to be replaced by disenchantment and inactivity, and for this cycle to be repeated without resolution’.

These constraints refer to the broad idea of prevention policy, while specific policies can involve different drivers and constraints. With general prevention policy, it is difficult to know what government policy is and how you measure its success. ‘Prevention’ is vague, plus governments encourage local discretion to adapt the evidence of ‘what works’ to local circumstances.

Governments don’t get away with this regarding specific policies. Instead, Westminster politics is built on a simple idea of accountability in which you know who is in charge and therefore to blame. UK central governments have to maintain some semblance of control because they know that people will try to hold them to account in elections and general debate. This ‘top down’ perspective has an enduring effect, particularly in the UK, but also the Scottish, government.

… so the UK Government goes for it and faces the consequences ….

‘Troubled Families’ in England: the massive expansion of secondary prevention?

So, although prevention policy is vague, individual programmes such as ‘troubled families’ contain enough detail to generate intense debate on central government policy and performance. They also contain elements which emphasise high central direction – including sustained ministerial commitment, a determination to demonstrate early success to justify a further rollout of policy, and performance management geared towards specific, measurable, short-term outcomes – even if the broader aim is to encourage local discretion and successful long-term outcomes.

In the absence of unequivocally supportive evidence (which may never appear), the UK government relied on a crisis (the London riots in 2011) to sell policy, and ridiculous processes of estimation of the size of the problem and performance measurement to sell the success of its solution. In this system, ministers perceive the need to display strength, show certainty that they have correctly diagnosed a problem and its solution, and claim success using the ‘currency’ of Westminster politics – and to do these things far more quickly than the people gathering evidence of more substantive success. There is a lot of criticism of the programme in terms of its lack, or cynical use, of evidence but little of it considers policy from an elected government’s perspective.

…while the Scottish Government is more careful, but faces unintended consequences

This particular UK Government response has no parallel in Scotland. The UK Government is far more likely than its Scottish counterpart to link families policies to a moral agenda in response to crisis, and there is no Scottish Government equivalent to ‘payment by results’ and massive programme expansion. Instead, the Scottish Government has continued more modest roll-outs in partnership with local public bodies. Indeed, if we ‘zoom in’ to this one example, at this point in time, the comparison confirms the idea of a ‘Scottish Approach’ to policy and policymaking.

Yet, the Scottish Government has not solved the problems I describe in this post: it has not found an alternative ‘evidence based’ way to ‘scale up’ early intervention significantly and move from secondary/ tertiary forms of prevention to the more universal/ primary initiatives that you might associate intuitively with prevention policy.

Instead, its different experiences have highlighted different issues. For example, its key vehicle for early intervention and prevention is the ‘collaborative’ approach, such as in the Early Years Collaborative. Possibly, it represents the opposite of the UK’s attempt to centralise and performance-manage-the-hell-out-of the direction of major expansion.

Table 1: Three ideal types of EBBP

Certainly, with this approach, your main aim is not to generate evidence of the success of interventions – at least not in the way we associate with ‘evidence based medicine’, randomised control trials, and the star ratings developed by the Early Intervention Foundation. Rather, the aim is to train local practitioners to use existing evidence and adapt it to local circumstances, experimenting as you go, and gathering/using data on progress in ways not associated with, for example, the family nurse partnership.

So, in terms of the discussion so far, perhaps its main advantage is that a government does not have to sell its political choices (it is more of a delivery system than a specific intervention) or back them up with evidence of success elsewhere. In the absence of much public, media, or political party attention, maybe it’s a nice pragmatic political solution built more on governance principles than specific evidence.

Yet, despite our fixation with the constitution, some policy issues do occasionally get discussed. For our purposes, the most relevant is the ‘named person’ scheme because it looks like a way to ‘scale up’ an initiative to support a universal or primary prevention approach and avoid stigmatising some groups by offering a service to everyone (in this respect, it is the antithesis to ‘troubled families’). In this case, all children in Scotland (and their parents or guardians) get access to a senior member of a public service, and that person acts as a way to ‘join up’ a public sector response to a child’s problems.

Interestingly, this universal approach has its own problems. ‘Troubled families’ sets up a distinction between troubled and untroubled families to limit its proposed intervention in family life. Its problem is the potential to stigmatise and demoralise ‘troubled’ families. ‘Named person’ shows the potential for greater outcry when governments try not to identify and stigmatise specific families. The scheme is largely a response to the continuous suggestion – made after high-profile cases of child abuse or neglect – that children can suffer when no agency takes overall responsibility for their care, but it has been opposed as an excessive infringement on normal family life and data protection, successfully enough to delay its implementation.

[Update 20.9.19: Named person scheme scrapped by Scottish Government]

The punchline to early intervention as a valence issue

Problems arise almost instantly when you try to turn a valence issue into something concrete. A vague and widely-supported policy, to intervene early to prevent bad outcomes, becomes a set of policy choices based on how governments frame the balance between ideology, stigma, and the evidence of the impact and cost-effectiveness of key interventions (which is often very limited).

Their experiences are not always directly comparable, but the UK and Scottish Governments have helped show us the pitfalls of concrete approaches to prevention and early intervention. They help us show that your basic policy choices include: (a) targeted programmes which increase stigma; (b) ‘indicated’ approaches which don’t always look like early intervention; (c) ‘selective’ approaches which seem to be less effective despite intervening at an earlier stage; (d) universal programmes which might cross a notional line between the state and the family; and (e) approaches which focus primarily on local experimentation with uncertain outcomes.

None of these approaches provide a solution to the early intervention dilemmas that all governments face, and there is no easy way to choose between approaches. We can make these choices more informed and systematic, by highlighting how all of the pieces of the jigsaw fit together, and somehow comparing their intended and unintended consequences. However, this process does not replace political judgement – and quite right too – because there is no such thing as a valence issue at the point of policy choice and delivery.

See also:

Paul Cairney (2019) ‘The UK government’s imaginative use of evidence to make policy’, British Politics, 14, 1, 1-22 Open Access PDF

Paul Cairney and Emily St Denny (in press, January 2020) Why Isn’t Government Policy More Preventive? (Oxford: Oxford University Press) Preview Introduction Preview Conclusion


Governments think it’s OK to use bad evidence to make good policy: the case of the UK Government’s ‘troubled families’

The UK Government’s ‘troubled families’ policy appears to be a classic top-down, evidence-free, and quick emotional reaction to crisis. It developed after riots in England (primarily in London) in August 2011. Within one week, and before announcing an inquiry into them, then Prime Minister David Cameron made a speech linking behaviour directly to ‘thugs’ and immorality – ‘people showing indifference to right and wrong…people with a twisted moral code…people with a complete absence of self-restraint’ – before identifying a breakdown in family life as a major factor (Cameron, 2011a).

Although the development of parenting programmes was already government policy, Cameron used the riots to raise parenting to the top of the agenda:

We are working on ways to help improve parenting – well now I want that work accelerated, expanded and implemented as quickly as possible. This has got to be right at the top of our priority list. And we need more urgent action, too, on the families that some people call ‘problem’, others call ‘troubled’. The ones that everyone in their neighbourhood knows and often avoids …Now that the riots have happened I will make sure that we clear away the red tape and the bureaucratic wrangling, and put rocket boosters under this programme …with a clear ambition that within the lifetime of this Parliament we will turn around the lives of the 120,000 most troubled families in the country.

Cameron reinforced this agenda in December 2011 by stressing the need for individuals and families to take moral responsibility for their actions, and for the state to intervene earlier in their lives to reduce public spending in the long term:

Officialdom might call them ‘families with multiple disadvantages’. Some in the press might call them ‘neighbours from hell’. Whatever you call them, we’ve known for years that a relatively small number of families are the source of a large proportion of the problems in society. Drug addiction. Alcohol abuse. Crime. A culture of disruption and irresponsibility that cascades through generations. We’ve always known that these families cost an extraordinary amount of money…but now we’ve come up with the actual figures. Last year the state spent an estimated £9 billion on just 120,000 families…that is around £75,000 per family.

The policy – primarily of expanding the provision of ‘family intervention’ approaches – is often described as a ‘classic case of policy based evidence’: policymakers cherry pick or tell tall tales about evidence to justify action. It is a great case study for two reasons:

  1. Within this one programme are many different kinds of evidence-use which attract the ire of academic commentators, from an obviously dodgy estimate and performance management system to a more-sincere-but-still-criticised use of evaluations and neuroscience.
  2. It is easy to criticise the UK government’s actions but more difficult to say – when viewing the policy problem from its perspective – what the government should do instead.

In other words, it is useful to note that the UK government is not winning awards for ‘evidence-based policymaking’ (EBPM) in this area, but less useful to deny the politics of EBPM and hold it up to a standard that no government can meet.

The UK Government’s problematic use of evidence

Take your pick from the following ways in which the UK Government has been criticised for its use of evidence to make and defend ‘troubled families’ policy.

Its identification of the most troubled families: cherry picking or inventing evidence

At the heart of the programme is the assertion that we know who the ‘troubled families’ are, what causes their behaviour, and how to stop it. Yet, much of the programme is built on value judgements about feckless parents, a tipping of the balance from support to sanctions, and unsubstantiated anecdotes about key aspects such as the tendency of ‘worklessness’ or ‘welfare dependency’ to pass from one generation to the next.

The UK government’s target of almost 120,000 families was based speculatively on previous Cabinet Office estimates in 2006 that about ‘2% of families in England experience multiple and complex difficulties’. This estimate was based on limited survey data and modelling to identify families who met five of seven criteria relating to unemployment, poor housing, parental education, the mental health of the mother, the chronic illness or disability of either parent, an income below 60% of the median, and an inability to buy certain items of food or clothing.

It then gave locally specific estimates to each local authority and asked them to find that number of families, identifying households with: (1) at least one under-18-year-old who has committed an offence in the last year or is subject to an ASBO; and/or (2) a child who has been excluded from school permanently, suspended in three consecutive terms, placed in a Pupil Referral Unit, taken off the school roll, or recorded as having over 15% unauthorised absences across three consecutive terms; and (3) an adult on out-of-work benefits.

If the household met all three criteria, they would automatically be included. Otherwise, local authorities had the discretion to identify further troubled families meeting two of the criteria and other indicators of concerns about ‘high costs’ of late intervention such as, ‘a child who is on a Child Protection Plan’, ‘Families subject to frequent police call-outs or arrests’, and ‘Families with health problems’ linked to mental health, addiction, chronic conditions, domestic abuse, and teenage pregnancy.

Its measure of success: ‘turning around’ troubled families

The UK government declared almost-complete success without convincing evidence. Success ‘in the last 6 months’ to identify a ‘turned around family’ is measured in two main ways: (1) the child no longer having three exclusions in a row, a reduction in the child offending rate of 33% or ASB rate of 60%, and/or the adult entering a relevant ‘progress to work’ programme; or (2) at least one adult moving from out-of-work benefits to continuous employment. It was self-declared by local authorities, and both parties had a high incentive to declare it: local authorities received £4,000 per-family payments and the UK government received a temporary way to declare progress without long-term evidence.

The declaration is in stark contrast to an allegedly suppressed report to the government which stated that the programme had ‘no discernible effect on unemployment, truancy or criminality’. This lack of impact was partly confirmed by FOI requests by The Guardian – demonstrating that at least 8,000 families received no intervention, but showed improvement anyway – and analysis by Levitas and Crossley which suggests that local authorities could only identify families by departing from the DCLG’s initial criteria.

Its investment in programmes with limited evidence of success

The UK government’s massive expansion of ‘family intervention projects’, and related initiatives, is based on limited evidence of success from a small sample of people in a small number of pilots. The ‘evidence for the effectiveness of family intervention projects is weak’, and a government-commissioned systematic review suggests that there are no good quality evaluations to demonstrate (well) the effectiveness or value-for-money of key processes such as coordinated service provision. The impact of other, previously well-regarded interventions has been unclear, such as the Family Nurse Partnership imported from the US, which has so far produced ‘no additional short-term benefit’. Overall, Crossley and Lambert suggest that “the weight of evidence surrounding ‘family intervention’ and similar approaches, over the longue durée, actually suggests that the approach doesn’t work”. There is also no evidence to support the heroic claim that spending £10,000 per family will save £65,000.

Its faith in sketchy neuroscientific evidence on the benefits of early intervention

The government is driven partly by a belief in the benefits of early intervention in the lives of children (from 0-3, or even before birth), which is based partly on the ‘now or never’ argument found in key reviews by Munro and Allen (one and two).


Policymakers take liberties with neuroscientific evidence to emphasise the profound effect of stress on early brain development (measured, for example, by levels of cortisol found in hair samples). These accounts underpinning the urgency of early intervention are received far more critically in fields such as social science, neuroscience, and psychology. For example, Wastell and White find no good quality scientific evidence behind the comparison of child brain development reproduced in Allen’s reports.

Now let’s try to interpret and explain these points partly from a government perspective

Westminster politics necessitates this presentation of ‘prevention’ policies

If you strip away the rhetoric, the troubled families programme is a classic attempt at early intervention to prevent poor outcomes. In this general field, it is difficult to know what government policy is – what it stands for and how you measure its success. ‘Prevention’ is vague, and governments combine a commitment to meaningful local discretion with the sense that local actors should be guided by the evidence of ‘what works’ and its applicability to local circumstances.

This approach is not tolerated in Westminster politics, built on the simple idea of accountability in which you know who is in charge and therefore whom to blame! UK central governments have to maintain some semblance of control because they know that people will try to hold them to account in elections and general debate. This ‘top down’ perspective has an enduring effect. Although prevention policy is vague, individual programmes such as ‘troubled families’ contain enough detail to generate intense debate on central government policy and performance, and they emphasise high central direction: sustained ministerial commitment, a determination to demonstrate early success to justify a further rollout of policy, and performance management geared towards specific measurable outcomes – even if the broader aim is to encourage local discretion.

This context helps explain why governments appear to exploit crises to sell existing policies, and pursue ridiculous processes of estimation and performance measurement. They need to display strength, show certainty that they have correctly diagnosed a problem and its solution, and claim success using the ‘currency’ of Westminster politics – and they have to do these things very quickly.

Consequently, for example, they will not worry about some academics complaining about policy based evidence – they are more concerned about their media and public reception and the ability of the opposition to exploit their failures – and few people in politics have the time (that many academics take for granted) to wait for research. This is the lens through which we should view all discussions of the use of evidence in politics and policy.

Unequivocal evidence is impossible to produce and we can’t wait forever

The argument for evidence-based policy rather than policy-based evidence suggests that we know what the evidence is. Yet, in this field in particular, there is potential for major disagreement about the ‘bar’ we set for evidence.

Table 1: Three ideal types of EBPM

For some, it relates to a hierarchy of evidence in which randomised control trials (RCTs) and their systematic review are at the top: the aim is to demonstrate that an intervention’s effect was positive, and more positive than another intervention or non-intervention. This requires experiments: to compare the effects of interventions in controlled settings, in ways that are directly comparable with other experiments.

As table 1 suggests, some other academics do not adhere to – and some reject – this hierarchy. This context highlights three major issues for policymakers:

  1. In general, when they seek evidence, they find this debate about how to gather and analyse it (and the implications for policy delivery).
  2. When seeking evidence on interventions, they find some academics using the hierarchy to argue that the ‘evidence for the effectiveness of family intervention projects is weak’. This adherence to a hierarchy to determine research value also doomed a government-commissioned systematic review to failure: the review applied a hierarchy of evidence to its analysis of reports by authors who did not adhere to the same model. The latter tend to be more pragmatic in their research design (and often more positive about their findings), and their government audience rarely adheres to the same evidential standard built on a hierarchy. In the absence of someone giving ground, some researchers will never be satisfied with the available evidence, and elected policymakers are unlikely to listen to them.
  3. The evidence generated from RCTs is often disappointing. The so-far-discouraging experience of the Family Nurse Partnership has a particularly symbolic impact, and policymakers can easily pick up a general sense of uncertainty about the best policies in which to invest.

So, if your main viewpoint is academic, you can easily conclude that the available evidence does not yet justify massive expansion in the troubled families programme (perhaps you might prefer the Scottish approach of smaller scale piloting, or for the government to abandon certain interventions altogether).

However, if you are a UK government policymaker feeling the need to act – and knowing that you always have to make decisions despite uncertainty – you may also feel that there will never be enough evidence on which to draw. Given the problems outlined above, you may as well act now rather than wait for years for little to change.

The ends justify the means

Policymakers may feel that the ends of such policies – investment in early intervention by shifting funds from late intervention – justify the means, which can include a ridiculous oversimplification of evidence. It may seem almost impossible for governments to find other ways to secure the shift, given the multiple factors which undermine its progress.

Governments sometimes hint at this approach when simplifying key figures – effectively to argue that late intervention costs £9bn while early intervention will only cost £448m – to reinforce policy change: ‘the critical point for the Government was not necessarily the precise figure, but whether a sufficiently compelling case for a new approach was made’.

Similarly, the vivid comparison of healthy versus neglected brains provides shocking reference points to justify early intervention. Their rhetorical value far outweighs their evidential value. As in all EBPM, the choice for policymakers is to play the game, to generate some influence in not-ideal circumstances, or to hope that science and reason will save the day (and the latter tends to be based on hope rather than evidence). So, the UK appeared to follow the US example, in which neuroscience ‘was chosen as the scientific vehicle for the public relations campaign to promote early childhood programs more for rhetorical, than scientific reasons’, partly because a focus on, for example, permanent damage to brain circuitry is less abstract than a focus on behaviour.

Overall, policymakers seem willing to build their case on major simplifications and partial truths to secure what they believe to be a worthy programme (although it would be interesting to find out which policymakers actually believe the things they say). If so, pointing out their mistakes or alleging lies can often have a minimal impact (or worse, if policymakers ‘double down’ in the face of criticism).

Implications for academics, practitioners, and ‘policy based evidence’

I have been writing on ‘troubled families’ while encouraging academics and practitioners to describe pragmatic strategies to increase the use of evidence in policy.


Our starting point is relevant to this discussion – since it asks what we should do if policymakers don’t think like academics:

  • They worry more about Westminster politics – their media and public reception and the ability of the opposition party to exploit their failures – than what academics think of their actions.
  • They do not follow the same rules of evidence generation and analysis.
  • They do not have the luxury of uncertainty and time.

Generally, this is a useful lens through which we should view discussions of the realistic use of evidence in politics and policy. Without being pragmatic – to recognise that policymakers will never think like scientists, and always face different pressures – we might simply declare ‘policy based evidence’ in all cases. Although a commitment to pragmatism does not solve these problems, at least it prompts us to be more specific about categories of PBE, the criteria we use to identify it, if our colleagues share a commitment to those criteria, what we can reasonably expect of policymakers, and how we might respond.

In disciplines like social policy we might identify a further issue, linked to:

  1. A tradition of providing critical accounts of government policy to help hold elected policymakers to account. If so, the primary aim may be to publicise key flaws without engaging directly with policymakers to help fix them – and perhaps even to criticise other scholars for doing so – because effective criticism requires critical distance.
  2. A tendency of many other social policy scholars to engage directly in evaluations of government policy, with the potential to influence and be influenced by policymakers.

This dynamic highlights well the difficulty of separating empirical and normative evaluations when critics point to the inappropriate nature of the programmes as they interrogate the evidence for their effectiveness. This difficulty is often more hidden in other fields, but it is always a factor.

For example, Parr noted in 2009 that ‘despite ostensibly favourable evidence … it has been argued that the apparent benign-welfarism of family and parenting-based antisocial behaviour interventions hide a growing punitive authoritarianism’. The most extreme version of the latter argument comes from Garrett in 2007, who compares residential FIPs (‘sin bins’) to post-war Dutch programmes resembling Nazi social engineering and criticises social policy scholars for giving them favourable evaluations – an argument criticised in turn by Nixon and Bennister et al.

For present purposes, note Nixon’s identification of ‘an unusual case of policy being directly informed by independent research’, referring to the possible impact of favourable evaluations of FIPs on the UK Government’s move away from (a) an intense focus on anti-social behaviour and sanctions towards (b) greater support. While it would be a stretch to suggest that academics can set government agendas, they can at least enhance their impact by framing their analysis in a way that secures policymaker interest. If academics seek influence, rather than critical distance, they may need to get their hands dirty: seeking to understand policymakers to find alternative policies that still give them what they want.


Filed under Prevention policy, public policy, UK politics and policy

The Politics of Evidence-based Policymaking in 2500 words

Here is a 2,500-word draft of an entry on EBPM for the Oxford Research Encyclopaedia (public administration and policy). It brings together some thoughts from previous posts and articles.

Evidence-based Policymaking (EBPM) has become one of many valence terms that seem difficult to oppose: who would not want policy to be evidence based? It appears to be the most recent incarnation of a focus on ‘rational’ policymaking, in which we could ask the same question in a more classic way: who would not want policymaking to be based on reason and collecting all of the facts necessary to make good decisions?

Yet, as we know from classic discussions, there are three main issues with such an optimistic starting point. The first is definitional: valence terms only seem so appealing because they are vague. When we define key terms, and produce one definition at the expense of others, we see differences of approach and unresolved issues. The second is descriptive: ‘rational’ policymaking does not exist in the real world. Instead, we treat ‘comprehensive’ or ‘synoptic’ rationality as an ideal-type, to help us think about the consequences of ‘bounded rationality’ (Simon, 1976). Most contemporary policy theories have bounded rationality as a key starting point for explanation (Cairney and Heikkila, 2014). The third is prescriptive. Like EBPM, comprehensive rationality seems – initially – to be unequivocally good. Yet, when we identify its necessary conditions, or what we would have to do to secure this aim, we begin to question EBPM and comprehensive rationality as an ideal scenario.

‘What is evidence-based policymaking?’ is a lot like ‘what is policy?’, but more so!

Trying to define EBPM is like magnifying the problem of defining policy. As the entries in this encyclopaedia suggest, it is difficult to say what policy is and measure how much it has changed. I use the working definition, ‘the sum total of government action, from signals of intent to the final outcomes’ (Cairney, 2012: 5) not to provide something definitive, but to raise important qualifications, including: there is a difference between what people say they will do, what they actually do, and the outcome; and, policymaking is also about the power not to do something.

So, the idea of a ‘sum total’ of policy sounds intuitively appealing, but masks the difficulty of identifying the many policy instruments that make up ‘policy’ (and the absence of others), including: the level of spending; the use of economic incentives/ penalties; regulations and laws; the use of voluntary agreements and codes of conduct; the provision of public services; education campaigns; funding for scientific studies or advocacy; organisational change; and, the levels of resources/ methods dedicated to policy implementation and evaluation (2012: 26). In that context, we are trying to capture a process in which actors make and deliver ‘policy’ continuously, not identify a set-piece event providing a single opportunity to use a piece of scientific evidence to prompt a policymaker response.

Similarly, for the sake of simplicity, we refer to ‘policymakers’ but in the knowledge that it leads to further qualifications and distinctions, such as: (1) between elected and unelected participants, since people such as civil servants also make important decisions; and (2) between people and organisations, with the latter used as a shorthand to refer to a group of people making decisions collectively and subject to rules of collective engagement (see ‘institutions’). There are blurry dividing lines between the people who make and influence policy, and decisions are made by a collection of people with formal responsibility and informal influence (see ‘networks’). Consequently, we need to make clear what we mean by ‘policymakers’ when we identify how they use evidence.

A reference to EBPM provides two further definitional problems (Cairney, 2016: 3-4). The first is to define evidence beyond the vague idea of an argument backed by information. Advocates of EBPM are often talking about scientific evidence which describes information produced in a particular way. Some describe ‘scientific’ broadly, to refer to information gathered systematically using recognised methods, while others refer to a specific hierarchy of methods. The latter has an important reference point – evidence based medicine (EBM) – in which the aim is to generate the best evidence of the best interventions and exhort clinicians to use it. At the top of the methodological hierarchy are randomized control trials (RCTs) to determine the evidence, and the systematic review of RCTs to demonstrate the replicated success of interventions in multiple contexts, published in the top scientific journals (Oliver et al, 2014a; 2014b).

This reference to EBM is crucial in two main ways. First, it highlights a basic difference in attitude between the scientists proposing a hierarchy and the policymakers using a wider range of sources from a far less exclusive list of publications: ‘The tools and programs of evidence-based medicine … are of little relevance to civil servants trying to incorporate evidence in policy advice’ (Lomas and Brown 2009: 906).  Instead, their focus is on finding as much information as possible in a short space of time – including from the ‘grey’ or unpublished/non-peer reviewed literature, and incorporating evidence on factors such as public opinion – to generate policy analysis and make policy quickly. Therefore, second, EBM provides an ideal that is difficult to match in politics, proposing: “that policymakers adhere to the same hierarchy of scientific evidence; that ‘the evidence’ has a direct effect on policy and practice; and that the scientific profession, which identifies problems, is in the best place to identify the most appropriate solutions, based on scientific and professionally driven criteria” (Cairney, 2016: 52; Stoker 2010: 53).

These differences are summed up in the metaphor ‘evidence-based’ which, for proponents of EBM suggests that scientific evidence comes first and acts as the primary reference point for a decision: how do we translate this evidence of a problem into a proportionate response, or how do we make sure that the evidence of an intervention’s success is reflected in policy? The more pragmatic phrase ‘evidence-informed’ sums up a more rounded view of scientific evidence, in which policymakers know that they have to take into account a wider range of factors (Nutley et al, 2007).

Overall, the phrases ‘evidence-based policy’ and ‘evidence-based policymaking’ are less clear than ‘policy’. This problem puts an onus on advocates of EBPM to state what they mean, and to clarify if they are referring to an ideal-type to aid description of the real world, or advocating a process that, to all intents and purposes, would be devoid of politics (see below). The latter tends to accompany often fruitless discussions about ‘policy based evidence’, which seems to describe a range of mistakes by policymakers – including ignoring evidence, using the wrong kinds, ‘cherry picking’ evidence to suit their agendas, and/ or producing a disproportionate response to evidence – without describing a realistic standard to which to hold them.

For example, Haskins and Margolis (2015) provide a pie chart of ‘factors that influence legislation’ in the US, to suggest that research contributes 1% to a final decision compared to, for example, ‘the public’ (16%), the ‘administration’ (11%), political parties (8%) and the budget (8%). Theirs is a ‘whimsical’ exercise to lampoon the lack of EBPM in government (compare with Prewitt et al’s 2012 account built more on social science studies), but it sums up a sense in some scientific circles about their frustrations with the inability of the policymaking world to keep up with science.

Indeed, there is an extensive literature in health science (Oliver et al, 2014a; 2014b), emulated largely in environmental studies (Cairney, 2016: 85; Cairney et al, 2016), which bemoans the ‘barriers’ between evidence and policy. Some identify problems with the supply of evidence, recommending the need to simplify reports and key messages. Others note the difficulties of providing timely evidence in a chaotic-looking process in which the demand for information is unpredictable and fleeting. A final main category relates to a sense of different ‘cultures’ in science and policymaking, which can be addressed in academic-practitioner workshops (to learn about each other’s perspectives) and more scientific training for policymakers. The latter recommendation is often based on practitioner experiences and a superficial analysis of policy studies (Oliver et al, 2014b; Embrett and Randall, 2014).

EBPM as a misleading description

Consequently, such analysis tends to introduce reference points that policy scholars would describe as ideal-types. Many accounts refer to the notion of a policy cycle, in which there is a core group of policymakers at the ‘centre’, making policy from the ‘top down’, breaking down their task into clearly defined and well-ordered stages (Cairney, 2016: 16-18). The hope may be that scientists can help policymakers make good decisions by getting them as close as possible to ‘comprehensive rationality’ in which they have the best information available to inform all options and consequences. In that context, policy studies provides two key insights (2016; Cairney et al, 2016).

  1. The role of multi-level policymaking environments, not cycles

Policymaking takes place in less ordered and predictable policy environments, exhibiting:

  • a wide range of actors (individuals and organisations) influencing policy in many levels and types of government
  • a proliferation of rules and norms followed in different venues
  • close relationships (‘networks’) between policymakers and powerful actors
  • a tendency for certain beliefs or ‘paradigms’ to dominate discussion
  • policy conditions and events that can prompt policymaker attention to lurch at short notice.

A focus on this bigger picture shifts our attention from the use of scientific evidence by an elite group of elected policymakers at the ‘top’ to its use by a wide range of influential actors in a multilevel policy process. It shows scientists that they are competing with many actors to present evidence in a particular way to secure a policymaker audience. Support for particular solutions varies according to which organisation takes the lead and how it understands the problem. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour particular sources of evidence and some participants over others, and there is a language – indicating what ways of thinking are in good ‘currency’ – that takes time to learn. Well-established beliefs provide the context for policymaking: new evidence on the effectiveness of a policy solution has to be accompanied by a shift of attention and successful persuasion. In some cases, social or economic ‘crises’ can prompt lurches of attention from one issue to another, and some forms of evidence can be used to encourage that shift – but major policy change is rare.

  2. Policymakers use two ‘shortcuts’ to deal with bounded rationality and make decisions

Policymakers deal with ‘bounded rationality’ by employing two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, beliefs, habits, and familiar reference points to make decisions quickly. Consequently, the focus of policy theories is on the links between evidence, persuasion, and framing.

Framing refers to the ways in which we understand, portray, and categorise issues. Problems are multi-faceted, but bounded rationality limits the attention of policymakers, and actors compete to highlight one image at the expense of others. The outcome of this process determines who is involved (for example, portraying an issue as technical limits involvement to experts), and responsible for policy, how much attention they pay, and what kind of solution they favour. Scientific evidence plays a part in this process, but we should not exaggerate the ability of scientists to win the day with evidence. Rather, policy theories signal the strategies that actors use to increase demand for their evidence:

  • to combine facts with emotional appeals, to prompt lurches of policymaker attention from one policy image to another (True, Jones, and Baumgartner 2007)
  • to tell simple stories which are easy to understand, help manipulate people’s biases, apportion praise and blame, and highlight the moral and political value of solutions (Jones, Shanahan, and McBeth 2014)
  • to interpret new evidence through the lens of the pre-existing beliefs of actors within coalitions, some of which dominate policy networks (Weible, Heikkila, and Sabatier 2012)
  • to produce a policy solution that is feasible and exploit a time when policymakers have the opportunity to adopt it (Kingdon 1984).

Further, the impact of a framing strategy may not be immediate, even if it appears to be successful. Scientific evidence may prompt a lurch of attention to a policy problem, prompting a shift of views in one venue or the new involvement of actors from other venues. However, it can take years to produce support for an ‘evidence-based’ policy solution, built on its technical and political feasibility (will it work as intended, and do policymakers have the motive and opportunity to select it?).

EBPM as a problematic prescription

A pragmatic solution to the policy process would involve: identifying the key venues in which the ‘action’ takes place; learning the ‘rules of the game’ within key networks and institutions; developing framing and persuasion techniques; forming coalitions with allies; and engaging for the long term (Cairney, 2016: 124; Weible et al, 2012: 9-15). The alternative is to seek reforms to make EBPM in practice more like the EBM ideal.

Yet, EBM is defendable because the actors involved agree to make primary reference to scientific evidence and be guided by what works (combined with their clinical expertise and judgement). In politics, there are other – and generally more defendable – principles of ‘good’ policymaking (Cairney, 2016: 125-6). They include the need to legitimise policy: to be accountable to the public in free and fair elections, consult far and wide to generate evidence from multiple perspectives, and negotiate policy across political parties and multiple venues with a legitimate role in policymaking. In that context, we may want scientific evidence to play a major role in policy and policymaking, but pause to reflect on how far we would go to secure a primary role for unelected experts and evidence that few can understand.

Conclusion: the inescapable and desirable politics of evidence-informed policymaking

Many contemporary discussions of policymaking begin with the naïve belief in the possibility and desirability of an evidence-based policy process free from the pathologies of politics. The buzz phrase for any complaint about politicians not living up to this ideal is ‘policy based evidence’: biased politicians decide first what they want to do, then cherry pick any evidence that backs up their case. Yet, without additional thought, they put in its place a technocratic process in which unelected experts are in charge, deciding on the best evidence of a problem and its best solution.

In other words, new discussions of EBPM raise old discussions of rationality that have occupied policy scholars for many decades. The difference since the days of Simon and Lindblom (1959) is that we now have the scientific technology and methods to gather information in ways beyond the dreams of our predecessors. Yet, such advances in technology and knowledge have only increased our ability to reduce, but not eradicate, uncertainty about the details of a problem. They do not remove ambiguity, which describes the ways in which people understand problems in the first place, then seek information to help them understand them further and seek to solve them. Nor do they reduce the need to meet important principles in politics, such as to sell or justify policies to the public (to respond to democratic elections) and to address the fact that there are many venues of policymaking at multiple levels (partly to uphold a principled commitment, in many political systems, to devolve or share power). Policy theories do not tell us what to do about these limits to EBPM, but they help us to separate pragmatism from often-misplaced idealism.

References

Cairney, Paul (2012) Understanding Public Policy (Basingstoke: Palgrave)

Cairney, Paul (2016) The Politics of Evidence-based Policy Making (Basingstoke: Palgrave)

Cairney, Paul and Heikkila, Tanya (2014) ‘A Comparison of Theories of the Policy Process’ in Sabatier, P. and Weible, C. (eds.) Theories of the Policy Process 3rd edition (Chicago: Westview Press)

Cairney, Paul, Oliver, Kathryn and Wellstead, Adam (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review, Early view, DOI:10.1111/puar.12555

Embrett, M. and Randall, G. (2014) ‘Social determinants of health and health equity policy research: Exploring the use, misuse, and nonuse of policy analysis theory’, Social Science and Medicine, 108, 147-55

Haskins, Ron and Margolis, Greg (2015) Show Me the Evidence: Obama’s fight for rigor and results in social policy (Washington DC: Brookings Institution Press)

Kingdon, J. (1984) Agendas, Alternatives and Public Policies 1st ed. (New York, NY: Harper Collins)

Lindblom, C. (1959) ‘The Science of Muddling Through’, Public Administration Review, 19: 79–88

Lomas J. and Brown A. (2009) ‘Research and advice giving: a functional view of evidence-informed policy advice in a Canadian ministry of health’, Milbank Quarterly, 87, 4, 903–926

McBeth, M., Jones, M. and Shanahan, E. (2014) ‘The Narrative Policy Framework’ in Sabatier, P. and Weible, C. (eds.) Theories of the Policy Process 3rd edition (Chicago: Westview Press)

Nutley, S., Walter, I. and Davies, H. (2007) Using evidence: how research can inform public services (Bristol: The Policy Press)

Oliver, K., Innvar, S., Lorenc, T., Woodman, J. and Thomas, J. (2014a) ‘A systematic review of barriers to and facilitators of the use of evidence by policymakers’ BMC health services research, 14 (1), 2. http://www.biomedcentral.com/1472-6963/14/2

Oliver, K., Lorenc, T., & Innvær, S. (2014b) ‘New directions in evidence-based policy research: a critical analysis of the literature’, Health Research Policy and Systems, 12, 34 http://www.biomedcentral.com/content/pdf/1478-4505-12-34.pdf

Prewitt, Kenneth, Schwandt, Thomas A. and Straf, Miron L. (eds.) (2012) Using Science as Evidence in Public Policy http://www.nap.edu/catalog.php?record_id=13460

Simon, H. (1976) Administrative Behavior, 3rd ed. (London: Macmillan)

Stoker, G. (2010) ‘Translating experiments into policy’, The ANNALS of the American Academy of Political and Social Science, 628, 1, 47-58

True, J. L., Jones, B. D. and Baumgartner, F. R. (2007) ‘Punctuated Equilibrium Theory’ in P. Sabatier (ed.) Theories of the Policy Process, 2nd ed (Cambridge, MA: Westview Press)

Weible, C., Heikkila, T., deLeon, P. and Sabatier, P. (2012) ‘Understanding and influencing the policy process’, Policy Sciences, 45, 1, 1–21



Filed under Evidence Based Policymaking (EBPM)

Idealism versus pragmatism in politics and policymaking: Labour, Brexit, and evidence-based policymaking

In a series of heroic leaps of logic, I aim to highlight some important links between three current concerns: Labour’s leadership contest, the Brexit vote built on emotion over facts, and the insufficient use of evidence in policy. In each case, there is a notional competition between ‘idealism’ and ‘pragmatism’ (as defined in common use, not philosophy): the often-unrealistic pursuit of a long-term ideal versus a focus on solving more immediate problems, often by compromising ideals and getting your hands dirty. We know what this looks like in party politics, including the compromises that politicians make to win elections and the consequences for their image, but do we know how to make the same compromises when we appeal for a more deliberative referendum or more evidence-informed policymaking?

I searched Google for a few minutes until I found a decent hook for this post. It is a short Forbes article by Susan Gunelius advocating a good mix of pragmatic and idealistic team members:

Pragmatic leaders focus on the practical, “how do we get this done,” side of any task, initiative or goal.  They can erroneously be viewed as negative in their approach when in fact they simply view the entire picture (roadblocks included) to get to the end result.  It’s a linear, practical way of thinking and “doing.”

Idealist leaders focus on the visionary, big ideas.  It could be argued that they focus more on the end result than the path to get there, and they can erroneously be viewed as looking through rose-colored glasses when, in fact, they simply “see” the end goal and truly believe there is a way to get there.

On the surface, it’s a neat description of the current battle to win the Labour party, with Jeremy Corbyn representing the idealist willing to lose elections to stay true to the pure ideal, and Owen Smith representing the pragmatist willing to compromise on the ideal to win an election.

In this context, pragmatic politicians face a dilemma that we often take for granted in party politics: they want to look flexible enough to command the ‘centre’ ground, but also appear principled and unwilling to give up completely on their values to secure office. Perhaps pragmatists also accept to a large extent that the ends can justify the means: they can compromise their integrity and break a few rules to win office if it means that they serve the long-term greater good as a result (in this case, better a compromised socialist than a Tory government). So, politicians accept that a slightly tarnished image is the price you pay to get what you want.

For current purposes, let us assume that you are the kind of person drawn more to the pragmatist rather than the idealist politician; you despair at the naiveté of the idealist politician, and expect to see them fail rather than gain office.

If so, how might we draw comparisons with other areas in politics and policymaking?

Referendums should be driven by facts and an intelligent public, not lies and emotions

Many people either joke or complain seriously about most of the public being too stupid to engage effectively in elections and referendums. I will use this joke about Trump because I saw it as a meme, and on Facebook it has 49,000 smiley faces already:

[image: Borowitz meme]

A more serious idealistic argument about the Brexit vote goes something like this:

  • the case for Remain was relatively strong and backed by most of the best experts
  • most Leave voters ignored or mistrusted the experts
  • the Leave campaign was riddled with lies and exaggerations; and,
  • a large chunk of the public was not intelligent enough to separate the lies from the facts.

You often have to read between the lines and piece together this argument, but Dame Liz Forgan recently did me a favour by spelling out a key part in a speech to the British Academy:

Democracies require not just literate and numerate electorates. They need people who cannot be sold snake oil by every passing shyster because their critical faculties have been properly honed. Whose popular culture has not degenerated so completely that every shopping channel hostess is classed as a celebrity. Where post-modern irony doesn’t undermine both honest relaxation and serious endeavour. Where the idea of a post-factual age is seen as an acute peril not an amusing cultural meme. If the events of June have taught us anything it is that we need to put the rigour back in our education, the search for truth back in our media.

Of course, I have cherry picked the juiciest part to highlight a sense of idealism that I have seen in many places. Let’s link it back to our despair at the naïvely idealist politician: doesn’t this look quite similar? If we took this line, and pursued public education as our main solution to Brexit, wouldn’t people think that we are doomed to fail in the long term and lose a lot of other votes on the way?

Another (albeit quicker and less idealistic) solution, proposed largely by academics (many of whom are highly critical of the campaigns) is largely institutional: let’s investigate the abuse of facts during the referendum to help us produce new rules of engagement. Yet, if the problem is that people are too stupid or emotional to process facts, it doesn’t seem that much more effective.

At this stage, I’d like to say: instead of clinging to idealism, let’s be pragmatic about this. If you despair of the world, get your hands dirty to win key votes rather than hope that people will do the right thing or wait for a sufficiently ‘rational’ public.

Yet, I don’t think we yet know enough about how to do it and how far ‘experts’ should go, particularly since many experts are funded – directly or indirectly – by the state and are subject to different (albeit often unwritten) rules than politicians. So, in a separate post, I provide some bland advice that might apply to all:

  • Don’t simply supply people with more information when you think they are not paying enough attention to it. Instead, try to work out how they think, to examine how they are likely to demand and interpret information.
  • Don’t just bemoan the tendency of people to accept simple stories that reinforce their biases. Instead, try to work out how to produce evidence-based stories that can compete for attention with those of campaigners.
  • Don’t stop at providing simpler and more accessible information. People might be more likely to read a blog post than a book or lengthy report, but most people are likely to remain blissfully unaware of most academic blogs.

Yet, if we think that other referendum participants are winning because they are lying and cheating, we might also think that honourable strategies won’t tip the balance. We know that, like pragmatic politicians, we might need to go a bit further to win key debates. Anything else is idealism, right?

Policy should be based on evidence, not electoral politics, ideology and emotion

The same can be said for many scientists bemoaning the lack of ‘evidence-based policymaking’ (EBPM). Some express the naïve hope that politicians become trained to think like scientists and/or the view that evidence-based policymaking should be more like the idea of evidence-based medicine, in which there is a hierarchy of evidence. Others try to work out how they can improve the supply of evidence or set up new institutions to get policymakers to pay more attention to facts. This seems to be EBPM’s equivalent of idealism, in which you largely wish for something that won’t exist rather than trying to produce pragmatic strategies for the real world.

A more pragmatic two-step solution is to:

(1) work out how and why policymakers demand information, and the policymaking context in which they operate (which I describe in The Politics of Evidence-Based Policymaking, and with Kathryn Oliver and Adam Wellstead in PAR).

(2) draw on as many interdisciplinary insights as possible to explore how to do something about it, such as to establish the psychology of policymakers and identify good ways to tell simple stories to generate an emotional connection to your evidence (which I describe in a forthcoming special issue in Palgrave Communications).

Should academics remain idealists rather than pragmatists?

Of course, it is legitimate to take what I am calling an idealistic approach. In politics, Corbyn’s idealism is certainly capturing a part of the public imagination (while another part of the public watches on, sniggering or aghast). In the Academy, it may be a part of a legitimate attempt to maintain your integrity by not engaging directly in politics or policymaking, and/or accepting that academics largely contribute to a very long-term enlightenment function rather than enjoy immediate impact. All I am saying is that you need to choose and, if you seek more direct impact, you need to forgo idealism and start thinking about what it means to be pragmatic while pursuing ‘evidence informed’ politics.


Filed under agenda setting, Evidence Based Policymaking (EBPM), public policy, UK politics and policy

What we know about consultation and policy-making

Here are some notes for today’s workshop on ‘consultation’, at the Law Reform and Public Policy Group, School of Law, University of Glasgow. I discuss key insights from policy theory, the idea of a ‘draft Act’, and how the ‘Scottish policy style’ fits into this discussion.

My second favourite phrase, as an undergraduate at Glasgow’s top University, was Brian Hogwood’s: ‘if consultation means everything, then maybe it means nothing’. It was a call to ‘unpack’ the term, whose meaning could range from cosmetic consultation in public to crucial interventions in private.

My first favourite was Grant Jordan and Jeremy Richardson’s ‘policy community’, which describes an important relationship between policymakers and some of the actors they consult (see also ‘informal governance’, including Alison Woodward’s example of the ‘velvet triangle’). The logic is as follows:

1. Policymakers are subject to ‘bounded rationality’: they cannot process issues comprehensively. By necessity, they have to make decisions in the face of uncertainty and ambiguity. Uncertainty relates to the amount of information we have to inform policy and policymaking. Ambiguity relates to the way in which we understand policy problems. The policy process is therefore about (1) the short cuts that policymakers use to gather information and understand complex issues, and (2) the ways in which policy participants compete to determine which information is used and how policymakers understand problems.

2. Policymakers and key actors form policy networks or communities. To deal with bounded rationality, they delegate responsibility to civil servants who, in turn, rely on specialist organisations for information and advice. Those organisations trade information for access to government. This process often becomes routine: civil servants begin to trust and rely on certain organisations and they form meaningful relationships. If so, most public policy is conducted primarily through small and specialist ‘policy communities’ that process issues at a level of government not particularly visible to the public, and with minimal senior policymaker involvement. Network theories tend to consider the key implications, including a tendency for governments to contain ‘silos’ and struggle to ‘join up’ government when policy is made in so many different places.

3. Note the relevance to our current focus on ‘evidence-based policymaking’. In most cases, policymakers use consultation to reduce uncertainty: they gather information to help identify the size of a problem on which they already have a view. In some, they use it to reduce ambiguity: only some actors will influence how they understand and try to solve a policy problem, and therefore what further information they will seek.

4. Note the importance of pluralist democracy to some policymakers. Consider the implications of bounded rationality for democracy: we put our faith in representative democracy, but ministers can only pay attention to a tiny proportion of their responsibilities. A lot of practical responsibility is in the hands of civil servants, who partly seek legitimacy by consulting far and wide, to generate the ‘ownership’ of policy among key actors, professional groups, and perhaps ‘civil society’. This takes place well before, for example, a government presents a draft Act to Parliament and, in many cases, it produces policy without subsequent reference to Parliament.

If you put all these things together, you can see the importance of different kinds of consultation.

Consider a simple spectrum of consultation. At one end is cosmetic consultation: policymakers are paying high attention and they already know how they feel about a problem. So, they consult as part of a ‘standard operating procedure’ in which governments seek legitimacy, but the consultation will not influence their decision. Perhaps it comes towards the end of their deliberations.

At the other end is super-meaningful consultation, but perhaps with a small number of key people: they get together regularly, identify the issues that deserve most attention, and agree on how to ‘frame’ or understand the problem.

So, in our discussions, we might discuss how a range of activities fit in: the ‘trawling’ exercises to gain as many views as possible; working groups to generate questions for consultation; commissions to process rather technical looking issues, generally out of the public spotlight; other ‘pre-consultation’ with key actors before public consultation; and so on.

You can also see how the ‘Scottish Policy Style’ or ‘Scottish Approach’ fits in

We talk about a distinctive Scottish style or approach or system, but within the context that I describe above. There are three key reference points:

  1. Cultural. In the olden days we talked of new Scottish politics versus old Westminster: Scottish policymaking would be more consensual and participative; the Scottish Government would share power with the Scottish Parliament; the Parliament would be the hub for much consultation, or at least oversee the Scottish Government process; the Parliament would have a better gender balance; and, policymakers would not simply consult the ‘usual suspects’.
  2. Practical. In reality, most explanations for a Scottish policymaking culture relate to size and scale: it is easier for senior policymakers to form personal networks with key actors in interest and professional groups and with leaders of public bodies; and, a government with relatively low capacity relies relatively highly on external sources of information and advice.
  3. Storytelling. Note that the Scottish Government tells a particular story about its approach, built on high consultation and trust in public bodies, stakeholders and service users, which informs its approach to evidence gathering and policy delivery: the Scottish Approach to Policymaking.

An agenda for the study of consultation in Scotland

It is useful to talk of a Scottish style of consultation, but to investigate rather than assume its distinctiveness, and to interpret that distinctiveness rather than assume it relates broadly to a Scottish political culture.

This is true even before we consider consultation in a multi-level system, in which the Scottish Government is one of many governments that groups may want to consult.

I usually do this research through interviews with pressure participants and civil servants, which often involves trying to separate the story we tell about Scotland (where everyone knows everyone else) from the other drivers for policy and policymaking (education, mental health legislation, general, more general, even more general).

Other methods worth discussing include consultation analysis (Darren Halpin used to analyse the number and types of responses to open consultations), network analysis to gauge the interaction between policymakers and participants, and comparative analyses in which we examine, for example, the extent to which consultation on legal reform resembles that of other sectors. If not, what makes ‘the law’ distinctive?


Filed under agenda setting, Evidence Based Policymaking (EBPM), public policy, Scottish politics

Policy bubbles and emotional policymaking

I am at a workshop today on policy ‘bubbles’, or (real and perceived) disproportionate policy responses. For Moshe Maor, a bubble describes an over-reaction to a problem, and a negative policy bubble describes under-reaction.

For Maor, this focus on bubbles is one way into our increasing focus on the role of emotion in policymaking: we pay disproportionate attention to problems, and try to solve some but not others, based on the ways in which we engage emotionally with information.

This focus on psychology is, I think, gaining a lot of traction in political science now, and I think it is crucial to explaining, for example, processes associated with ‘evidence-based policymaking’.

In taking this agenda forward, there remain some outstanding issues:

How much of the psychology literature is already reflected in policy studies? For example, see the social construction of target populations (emotion-driven treatment of social groups), ACF (on loss aversion and the devil shift), and the NPF (telling stories to exploit cognitive biases).

What insights remain untapped from key fields such as organisational psychology? I’ll say more about this in a forthcoming post.

How can we study the psychology of policymaking? Most policy theory begins with some reference to bounded rationality, including PET and the identification of disproportionate information processing (policymakers pay disproportionate attention to some issues and ignore the rest). It is largely deductive then empirical: we make some logical steps about the implications of bounded rationality, then study the process in that light.

Similarly, I think most studies of emotion/ policymaking take insights from psychology (e.g. people value losses more than gains, or they make moral judgements then seek evidence to justify them) and then apply them indirectly to policymaking (asking, for example, what is the effect of prospect theory on the behaviour of coalitions).

Can we do more, by studying more directly the actions of policymakers rather than merely interpreting their actions? The problem, of course, is that few policymakers may be keen on engaging in the types of study (e.g. experiments with control groups) that psychologists have used to establish things like fluency effects.

How does policymaker psychology fit into broader explanations of policymaking? The psychology of policymakers is one part of the story. The other is the system or environment in which they operate. So, we have some choices to make about future studies. Some might ‘zoom in’ to focus on emotionally-driven policymaking in key actors, perhaps at the centre of government.

Others may ‘zoom out’. The latter may involve ascribing the same basic thought processes to a large number of actors, examining that process at a relatively abstract level. This is the necessary consequence of trying to account for the effects of a very large number of actors, and to take into account the role of a policymaking environment, only some of which is in the control of policymakers.

Can we really demonstrate disproportionate policy action? The idea of a proportionate policy response interests me, because I think it is always in the eye of the beholder. We make moral and other personal evaluative statements when we describe a proportionate solution in relation to the size of the problem.

For example, in tobacco policy, a well-established argument in public health is that a proportionate policy response to the health effects of smoking and passive smoking (a) has been 20-30 years behind the evidence in ‘leading countries’, and (b) has yet to happen in ‘laggard’ countries. The counterargument is that the identification of a problem does not necessitate the favoured public health solution (comprehensive tobacco control, towards the ‘endgame’ of zero smoking) because it involves major limits to personal liberties and choice.

Is emotion-driven policymaking necessarily a bad thing?

[excerpt from my 2014 PSA paper] This is partly the focus of Alter and Oppenheimer (2008) when they argue that policymakers spend disproportionate amounts of money on risks with which they are familiar, at the expense of spending money on things with more negative effects, producing a ‘dramatic misallocation of funds’. They draw on Sunstein (2002), who suggests that emotional bases for attention to environmental problems from the 1970s prompted many regulations to be disproportionate to the risk involved. Further, Slovic’s work suggests that people’s feelings towards risk may even be influenced by the way in which it is described, for example as a percentage versus a 1 in X probability (Slovic, 2010: xxii).

Haidt (2001: 815) argues that a focus on psychology can be used to improve policymaking: the identification of the ‘intuitive basis of moral judgment’ can be used to help policymakers ‘avoid mistakes’ or allow people to develop ‘programs’ or an ‘environment’ to ‘improve the quality of moral judgment and behavior’. Similarly, Alter and Oppenheimer (2009: 232) worry about medical and legal judgements swayed by fluent diagnoses and stories.

These studies compare with arguments focusing on the positive role of emotions in decision-making, either individually (see Constantinescu, 2012, drawing on Frank, 1988 and Elster, 2000 on the decisions of judges) or as part of social groups, with emotional responses providing useful information in the form of social cues (Van Kleef et al, 2010).

Policy theory does not shy away from these issues. For example, Schneider and Ingram (2014) argue that the outcomes of social construction are often dysfunctional and not based on a well-reasoned, goal-oriented strategy: ‘Studies have shown that rules, tools, rationales and implementation structures inspired by social constructions send dysfunctional messages and poor choices may hamper the effectiveness of policy’. However, part of the value of policy theory is to show that policy results from the interaction of large numbers of people and institutions. So, the poor actions of one policymaker would not be the issue; we need to know more about the cumulative effect of individual emotional decision making in collective decision-making – not only in discrete organisations, but also networks and systems.

And finally: if it is a bad thing, should we do something about it?

Our choice is to find it interesting then go home (this might appeal to the academics) or try to limit the damage/ maximise the benefits of policymaker psychology to policy and society (this might appeal to practitioners). There is no obvious way to do something, though, is there?


Filed under agenda setting, Evidence Based Policymaking (EBPM), public policy

Did the Scottish Parliament just vote to ban fracking?

Not really.

Almost every headline reports that the Scottish Parliament voted to ban fracking on the 1st June 2016 (Guardian, BBC, Scotsman, National, STV, Holyrood).

The headlines are technically correct but super-misleading.

If watching from afar, you might deduce that Scottish Government policy is now (or about to be) in favour of a complete ban. Or, if you know more about the Scottish Parliament process, you might at least see it as a major defeat for the SNP under minority government even if the vote is not binding (indeed, the Guardian’s second headline states that the ‘Vote does not create binding policy but is significant defeat for SNP so soon into new parliamentary term’).

In both cases, you would be wrong because:

  • 33 of 124 available MSPs voted for the ban, 29 opposed, and 62 abstained.
  • The 33 were from the 3 smallest parties in the Scottish Parliament.
  • It is clear to everyone that the amendment to the motion only passed because the SNP abstained.

The vote was embarrassing (particularly since it was on an amendment to a motion proposed by the SNP’s Environment Secretary Roseanna Cunningham) rather than binding. Its main effect is to produce this picture (source: BBC News) of the SNP squirming in the chamber.


In the past, a vote like this might have had a more important effect. For example, the SNP agreed in 2007 (at the beginning of its previous spell of minority government) to reconsider the Edinburgh trams project after most of the opposition parties voted in its favour. That motion was not binding, but the SNP took it far more seriously because the other parties could generate a vague sense of the ‘will of the Parliament’.

In the case of fracking, there is no such sense. Instead, the three smallest parties are restating their manifesto commitments, the now-more-important Conservatives are voting the other way, and the SNP is trying to ignore the whole thing.

This vote is unlikely to change the course of events too much: the SNP government still intends to delay things (while maintaining a moratorium) while it commissions and processes more research. The biggest factors are still likely to be public opinion, business versus environmental group pressure, and the level of disagreement within the SNP itself.

For more on fracking in Scotland, see:

Briefing: Unconventional Onshore Oil and Gas (or here)

Fracking posts

Holyrood election 2016 briefing



Filed under Fracking, Scottish politics