Practical Lessons from Policy Theories

Paul Cairney: Politics & Public Policy

The links below to blog posts (the underlined headings) and tweets (each linking to its full article) describe a new special issue of Policy and Politics, published in April 2018 and free to access until the end of May.


Three habits of successful policy entrepreneurs



Filed under Uncategorized

Evidence-based policymaking: political strategies for scientists living in the real world

Note: I wrote the following discussion (last year) to be a Nature Comment but it was not to be!

Nature articles on evidence-based policymaking often present what scientists would like to see: rules to minimise bias caused by the cognitive limits of policymakers, and a simple policy process in which we know how and when to present the best evidence.[1]  What if neither requirement is ever met? Scientists will despair of policymaking while their competitors engage pragmatically and more effectively.[2]

Alternatively, if scientists learned from successful interest groups, or used insights from policy studies, they could develop three ‘take home messages’: understand and engage with policymaking in the real world; learn how and when evidence ‘wins the day’; and decide how far you should go to maximise the use of scientific evidence. Political science helps explain this process,[3] and new systematic and thematic reviews add further insights.[4][5][6][7]

Understand and engage with policymaking in the real world

Scientists are drawn to the ‘policy cycle’, because it offers a simple – but misleading – model for engagement with policymaking.[3] It identifies a core group of policymakers at the ‘centre’ of government, perhaps giving the impression that scientists should identify the correct ‘stages’ in which to engage (such as ‘agenda setting’ and ‘policy formulation’) to ensure the best use of evidence at the point of authoritative choice. This is certainly the image generated most frequently by health and environmental scientists when they seek insights from policy studies.[8]

Yet, this model does not describe reality. Many policymakers, in many levels and types of government, adopt and implement many measures at different times. For simplicity, we call the result ‘policy’ but almost no modern policy theory retains the linear policy cycle concept. In fact, it is more common to describe counterintuitive processes in which, for example, by the time policymaker attention rises to a policy problem at the ‘agenda setting’ stage, it is too late to formulate a solution. Instead, ‘policy entrepreneurs’ develop technically and politically feasible solutions then wait for attention to rise and for policymakers to have the motive and opportunity to act.[9]

Experienced government science advisors recognise this inability of the policy cycle image to describe real world policymaking. For example, Sir Peter Gluckman presents an amended version of this model, in which there are many interacting cycles in a kaleidoscope of activity, defying attempts to produce simple flow charts or decision trees. He describes the ‘art and craft’ of policy engagement, using simple heuristics to deal with a complex and ‘messy’ policy system.[10]

Policy studies help us identify two such heuristics or simple strategies.

First, respond to policymaker psychology by adapting to the short cuts they use to gather enough information quickly: ‘rational’, via trusted sources of oral and written evidence, and ‘irrational’, via their beliefs, emotions, and habits. Policy theories describe many interest group or ‘advocacy coalition’ strategies, including a tendency to combine evidence with emotional appeals, romanticise their own cause and demonise their opponents, or tell simple emotional stories with a hero and moral to exploit the biases of their audience.[11]

Second, adapt to complex ‘policy environments’ including: many policymakers at many levels and types of government, each with their own rules of evidence gathering, network formation, and ways of understanding policy problems and relevant socioeconomic conditions.[2] For example, advocates of international treaties often find that the evidence-based arguments their international audience takes for granted become hotly contested at national or subnational levels (even if the national government is a signatory), while the same interest groups presenting the same evidence of a problem can be key insiders in one government department but ignored in another.[3]

Learn the conditions under which evidence ‘wins the day’ in policymaking

Consequently, the availability of scientific evidence, on the nature of problems and the effectiveness of solutions, is a necessary but insufficient condition for evidence-informed policy. Three others must be met: actors use scientific evidence to persuade policymakers to pay attention to, and shift their understanding of, policy problems; the policy environment becomes broadly conducive to policy change; and actors exploit attention to a problem, the availability of a feasible solution, and the motivation of policymakers, during a ‘window of opportunity’, to adopt specific policy instruments.[10]

Tobacco control represents a ‘best case’ example (box 1) from which we can draw key lessons for ecological and environmental policies, giving us a sense of perspective by highlighting the long term potential for major evidence-informed policy change. However, unlike their colleagues in public health, environmental scientists have not developed a clear sense of how to produce policy instruments that are technically and politically feasible, so the delivery of comparable policy change is not inevitable.[12]

Box 1: Tobacco policy as a best case and cautionary tale of evidence-based policymaking

Tobacco policy is a key example – and useful comparator for ecological and environmental policies – since it represents a best case scenario and cautionary tale.[13] On the one hand, the scientific evidence on the links between smoking, mortality, and preventable death forms the basis for modern tobacco control policy. Leading countries – and the World Health Organisation, which oversees the Framework Convention on Tobacco Control (FCTC) – frame tobacco use as a public health ‘epidemic’ and allow their health departments to take the policy lead. Health departments foster networks with public health and medical groups at the expense of the tobacco industry, and emphasise the socioeconomic conditions – reductions in (a) smoking prevalence, (b) opposition to tobacco control, and (c) economic benefits to tobacco – most supportive of tobacco control. This framing, and conducive policymaking environment, helps give policymakers the motive and opportunity to choose policy instruments, such as bans on smoking in public places, which would otherwise seem politically infeasible.

On the other hand, even in a small handful of leading countries such as the UK, it took twenty to thirty years to go from the supply of the evidence to a proportionate government response: from the early evidence on smoking in the 1950s prompting major changes from the 1980s, to the evidence on passive smoking in the 1980s prompting public bans from the 2000s onwards. In most countries, the production of a ‘comprehensive’ set of policy measures is not yet complete, even though most signed the FCTC.

Decide how far you’ll go to maximise the use of scientific evidence in policymaking

These insights help challenge the naïve position that, if policymaking can change to become less dysfunctional[1], scientists can be ‘honest brokers’[14] and expect policymakers to use their evidence quickly, routinely, and sincerely. Even in the best case scenario, evidence-informed change takes hard work, persistence, and decades to achieve.

Since policymaking will always appear ‘irrational’ and ‘complex’[3], scientists need to think harder about their role, then choose to engage more effectively or accept their lack of influence.

To deal with ‘irrational’ policymakers, they should combine evidence with persuasion, simple stories, and emotional appeals, and frame their evidence to make the implications consistent with policymakers’ beliefs.

To deal with complex environments, they should engage for the long term to work out how to form alliances with influencers who share their beliefs, understand in which ‘venues’ authoritative decisions are made and carried out, the rules of information processing in those venues, and the ‘currency’ used by policymakers when they describe policy problems and feasible solutions.[2] In other words, develop skills that do not come with scientific training, avoid waiting for others to share your scientific mindset or respect for scientific evidence, and plan for the likely eventuality that policymaking will never become ‘evidence based’.

This approach may be taken for granted in policy studies[15], but it raises uncomfortable dilemmas regarding how far scientists should go, to maximise the use of scientific evidence in policy, using persuasion and coalition-building.

These dilemmas are too frequently overshadowed by claims – more comforting to scientists – that politicians are to blame because they do not understand how to generate, analyse, and use the best evidence. Scientists may only become effective in politics if they apply the same critical analysis to themselves.

[1] Sutherland, W.J. & Burgman, M. Nature 526, 317–318 (2015).

[2] Cairney, P. et al. Public Administration Review 76, 3, 399-402 (2016)

[3] Cairney, P. The Politics of Evidence-Based Policy Making (Palgrave Springer, 2016).

[4] Langer, L. et al. The Science of Using Science (EPPI, 2016)

[5] Breckon, J. & Dodson, J. Using Evidence. What Works? (Alliance for Useful Evidence, 2016)

[6] Palgrave Communications series The politics of evidence-based policymaking (ed. Cairney, P.)

[7] Practical lessons from policy theories (eds. Weible, C & Cairney, P.) Policy and Politics April 2018

[8] Oliver, K. et al. Health Research Policy and Systems, 12, 34 (2016)

[9] Kingdon, J. Agendas, Alternatives and Public Policies (Harper Collins, 1984)

[10] Gluckman, P. Understanding the challenges and opportunities at the science-policy interface

[11] Cairney, P. & Kwiatkowski, R. Palgrave Communications.

[12] Biesbroek et al. Nature Climate Change, 5, 6, 493–494 (2015)

[13] Cairney, P. & Yamazaki, M. Journal of Comparative Policy Analysis

[14] Pielke Jr, R. originated the specific term The honest broker (Cambridge University Press, 2007) but this role is described more loosely by other commentators.

[15] Cairney, P. & Oliver, K. Health Research Policy and Systems 15:35 (2017)


Filed under Evidence Based Policymaking (EBPM), public policy

How you want your presentation to go, versus how it goes.

At the Los Angeles County Museum of Art

At MOCA

At Broad

At Six Flags MM

At a novelty sweet shop at Citywalk

At the Hollywood Museum

On the walk of fame

At the Getty Museum

At the California Academy of Sciences

At the de Young Museum, San Francisco

… It’s more of a comment than a question …

At the Legion of Honor, San Francisco

At the San Francisco Museum of Modern Art

In retrospect, this constant search for new photos for a joke that became tired by photo 3 could have ruined the trip, but it totally didn’t. No room for The Thinker, though. This look would be welcome during a presentation.



Filed under Uncategorized

One year research post at Stirling, energy politics and policy in a multi-level UK

Update: the deadline has passed and the interviews will be May 1st.

We are recruiting: Research Assistant in Energy Politics and Public Policy.

The post is one year full time and it begins as soon as possible (hopefully early May). The deadline is Thursday 5th April (3 days!) and I think we will interview on the week beginning 23rd April.

The panel will likely be three people: me (describing the specific project), Dr Emily St Denny (subject expert, likely probing your knowledge of politics/ public policy research), and Professor Richard Oram (Head of Faculty, likely probing your wider transferable skills). It will be less daunting than the usual panel for a lectureship (at least five people, including one of the most senior managers).

I apologise for such a short contract (and short notice). We have tried to do some things to make it more rewarding:

  1. We consolidated our funding, and the University of Stirling contributed extra funding, to make the post full time and include some space for training/ career development.
  2. If you and I are being honest, the biggest part of career development – at least if you seek a permanent lectureship or similar University post – is to add to your list of publications. It is harder to make any guarantees in this regard, but I will work with you (and our wider team) to make sure that your contribution to the project is recognised fully and reflected accurately in our outputs. One or two decent team-authored journal articles seems realistic (albeit published after the end of the contract). I will also deal with ‘reviewer 2’ on our behalf.
  3. ‘Impact’ is also central to academic careers. You should have the chance to, for example, take part in academic-practitioner workshops and develop ‘networking’ skills (I apologise for turning network into a verb).
  4. I’ll do all I can to be flexible, to support your choice about how you work most effectively (perhaps this will be the first conversation with the successful candidate).

The full description of the advert is here and there is some background on our project here.


Filed under UKERC

Why don’t policymakers listen to your evidence?

Since 2016, my most common academic presentation to interdisciplinary scientist/ researcher audiences is a variant of the question, ‘why don’t policymakers listen to your evidence?’

I tend to provide three main answers.

1. Many policymakers have many different ideas about what counts as good evidence

Few policymakers know or care about the criteria developed by some scientists to describe a hierarchy of scientific evidence. For some scientists, at the top of this hierarchy is the randomised control trial (RCT) and the systematic review of RCTs, with expertise much further down the list, followed by practitioner experience and service user feedback near the bottom.

Yet, most policymakers – and many academics – prefer a wider range of sources of information, combining their own experience with information ranging from peer reviewed scientific evidence and the ‘grey’ literature, to public opinion and feedback from consultation.

While it may be possible to persuade some central government departments or agencies to privilege scientific evidence, they also pursue other key principles, such as fostering consensus-driven policymaking or a shift from centralist to localist practices.

Consequently, they often only recommend interventions rather than impose one uniform evidence-based position. If local actors favour a different policy solution, we may find that the same type of evidence may have more or less effect in different parts of government.

2. Policymakers have to ignore almost all evidence and almost every decision taken in their name

Many scientists articulate the idea that policymakers and scientists should cooperate to use the best evidence to determine ‘what works’ in policy (in forums such as INGSA, European Commission, OECD). Their language is often reminiscent of 1950s discussions of the pursuit of ‘comprehensive rationality’ in policymaking.

The key difference is that EBPM is often described as an ideal by scientists, to be compared with the more disappointing processes they find when they engage in politics. In contrast, ‘comprehensive rationality’ is an ideal-type, used to describe what cannot happen, and the practical implications of that impossibility.

The ideal-type involves a core group of elected policymakers at the ‘top’, identifying their values or the problems they seek to solve, and translating their policies into action to maximise benefits to society, aided by neutral organisations gathering all the facts necessary to produce policy solutions. Yet, in practice, they are unable to: separate values from facts in any meaningful way; rank policy aims in a logical and consistent manner; gather information comprehensively; or possess the cognitive ability to process it.

Instead, Simon famously described policymakers addressing ‘bounded rationality’ by using ‘rules of thumb’ to limit their analysis and produce ‘good enough’ decisions. More recently, punctuated equilibrium theory uses bounded rationality to show that policymakers can only pay attention to a tiny proportion of their responsibilities, which limits their control of the many decisions made in their name.

More recent discussions focus on the ‘rational’ short cuts that policymakers use to identify good enough sources of information, combined with the ‘irrational’ ways in which they use their beliefs, emotions, habits, and familiarity with issues to identify policy problems and solutions. Or, they explore how individuals communicate their narrow expertise within a system of which they have almost no knowledge. In each case, ‘most members of the system are not paying attention to most issues most of the time’.

This scarcity of attention helps explain, for example, why policymakers ignore most issues in the absence of a focusing event, why policymaking organisations routinely conduct searches for information that miss key elements, and why organisations fail to respond proportionately to events or changing circumstances.

In that context, attempts to describe a policy agenda focusing merely on ‘what works’ are based on misleading expectations. Rather, we can describe key parts of the policymaking environment – such as institutions, policy communities/ networks, or paradigms – as a reflection of the ways in which policymakers deal with their bounded rationality and lack of control of the policy process.

3. Policymakers do not control the policy process (in the way that a policy cycle suggests)

Scientists often appear to be drawn to the idea of a linear and orderly policy cycle with discrete stages – such as agenda setting, policy formulation, legitimation, implementation, evaluation, policy maintenance/ succession/ termination – because it offers a simple and appealing model which gives clear advice on how to engage.

Indeed, the stages approach began partly as a proposal to make the policy process more scientific and based on systematic policy analysis. It offers an idea of how policy should be made: elected policymakers in central government, aided by expert policy analysts, make and legitimise choices; skilful public servants carry them out; and, policy analysts assess the results with the aid of scientific evidence.

Yet, few policy theories describe this cycle as useful, while most – including the advocacy coalition framework and the multiple streams approach – are based on a rejection of the explanatory value of orderly stages.

Policy theories also suggest that the cycle provides misleading practical advice: you will generally not find an orderly process with a clearly defined debate on problem definition, a single moment of authoritative choice, and a clear chance to use scientific evidence to evaluate policy before deciding whether or not to continue. Instead, the cycle exists as a story for policymakers to tell about their work, partly because it is consistent with the idea of elected policymakers being in charge and accountable.

Some scholars also question the appropriateness of a stages ideal, since it suggests that there should be a core group of policymakers making policy from the ‘top down’ and obliging others to carry out their aims, which does not leave room for, for example, the diffusion of power in multi-level systems, or the use of ‘localism’ to tailor policy to local needs and desires.

Further Reading

The politics of evidence-based policymaking

The politics of evidence-based policymaking: maximising the use of evidence in policy

Images of the policy process

How to communicate effectively with policymakers

Forthcoming special issue in Policy and Politics called ‘Practical lessons from policy theories’, which includes my discussion of how to be a ‘policy entrepreneur’.


Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, Public health, public policy

Policy concepts in 1000 words: Institutional memory

Guest post by Jack Corbett, Dennis Grube, Heather Lovell and Rodney Scott

Democratic governance is defined by the regular rotation of elected leaders. Amidst the churn, the civil service is expected to act as the repository of received wisdom about past policies, including assessments of what works and what doesn’t. The claim is that to avoid repeating the same mistakes we need to know what happened last time and what were the effects. Institutional memory is thus central to the pragmatic task of governing.

What is institutional memory? And, how is it different to policy learning?

Despite increasing recognition of the role that memory can or should play in the policy process, the concept has defied easy scholarly definition.

In the classic account, institutional memory is the sum total of files, procedures, and knowledge held by an organisation. Christopher Pollitt, who pioneered the study of institutional memory, describes it as the accumulated knowledge and experience of staff; technical systems, including electronic databases and various kinds of paper records; the management system; and the norms and values of the organizational culture. In this view, which is based on the key principles of the new institutionalism, memory is essentially an archive.

The problem with this definition is that it is hard to distinguish the concept from policy learning (see also here). If policy learning is in part about increasing knowledge about policy, including correcting for past mistakes, then we could perhaps conceive of a continuum from learning to memory with an inflection point where one starts and the other stops. But, this is easier to imagine than it is to measure empirically. It also doesn’t acknowledge the forms memories take and the ways memories are contested, suppressed and actively forgotten.

In our recent contribution to this debate (see here and here) we define memories as ‘representations of the past’ that actors draw on to narrate what has been learned when developing and implementing policy. When these narratives are embedded in processes they become ‘institutionalised’. It is this emphasis on embedded narratives that distinguishes institutional memory from policy learning. Institutional memory may facilitate policy learning, but equally some memories may prohibit genuine adaptation and innovation. As a result, while there is an obvious affinity between the two concepts, it is imperative that they remain distinct avenues of inquiry. Policy learning has unequivocally positive connotations that are echoed in some conceptualisations of institutional memory (i.e. Pollitt’s). But, equally, memory (at least in a ‘static’ form) can be said to give administrative agents an advantage over political principals (think of the satirical Sir Humphrey of Yes Minister fame). The table below seeks to distinguish between these two conceptualisations of institutional memory:

Key debates: Is institutional memory declining?

The scholar who has done the most to advance our understanding of institutional memory in government is Christopher Pollitt. His main contention is that institutional memory has declined over recent decades due to: the high rotation of staff in the civil service, changes in IT systems which prevent proper archiving, regular organisational restructuring, rewarding management skills above all others, and adopting new management ‘fads’ that favour constant change as they become popular. This combination of factors has proven to be a perfect recipe for the loss of institutional memory within organisations. The result is a contempt for the past that leads to repeated policy failure.

We came to a different view. Our argument is that one of the key reasons why institutional memory is said to have declined is that it has been conceptualised in a ‘static’ manner more in keeping with an older way of doing government. This practice has assumed that knowledge on a given topic is held centrally (by government departments) and can be made explicit for the purpose of archiving. But, if government doesn’t actually work this way (see relevant posts on networks here) then we shouldn’t expect it to remember this way either. Instead of static repositories of summative documents holding a singular ‘objective’ memory, we propose a more ‘dynamic’ people-centred conceptualisation that sees institutional memory as a composite of intersubjective memories open to change. This draws to the fore the role of actors as crucial interpreters of memory, combining the documentary record with their own perspectives to create a story about the past. In this view, institutional memory has not declined, it is simply being captured in a fundamentally different way.

[Table: Corbett et al’s ‘static’ and ‘dynamic’ conceptualisations of institutional memory]

Key debates: How can an institution improve how it remembers?

How an institution might improve its memory is intrinsically linked to how memory is defined and whether or not it is actually in decline. If we follow Pollitt’s view that memory is about the archive of accumulated knowledge that is being ignored or deliberately dismantled by managerialism then the answer involves returning to an older way of doing government that placed a higher value on experience. By putting a higher value on the past as a resource institutions would reduce staff turnover, stop regular restructures and changes in IT systems, etc. For those of us who work in an institution where restructuring and IT changes are the norm, this solution has obvious attractions. But, would it actually improve memory? Or would it simply make it easier to preserve the status quo (a process that involves actively forgetting disruptive but generative innovations)?

Our definition, relying as it does on a more dynamic conceptualisation of memory, is sceptical about the need to improve practices of remembering. But, if an institution did want to remember better, we would favour increasing the opportunity for actors within an institution to reflect on and narrate the past. One example of this might be a ‘Wikipedia’ model of memory, in which the story of a policy, its successes and failures, is constructed by those involved, highlighting points of consensus and conjecture.

Additional reading:

Corbett, J., Grube, D., Lovell, H. & Scott, R. 2018. “Singular memory or institutional memories? Toward a dynamic approach.” Governance: 1–19. https://doi.org/10.1111/gove.12340

Pollitt, C. 2009. “Bureaucracies Remember, Post-Bureaucratic Organizations Forget?” Public Administration 87 (2): 198-218.

Pollitt, C. 2000. “Institutional Amnesia: A Paradox of the ‘Information Age’?” Prometheus 18 (1): 5-16.

 


Filed under 1000 words, public policy, Uncategorized

POLU9RM Asking the right research question

When I supervise dissertation students, I try to get them to do things in a specific order:

  • get the research question right
  • write an abstract to see if you can answer it (and explain how you structure the dissertation to allow you to answer it)
  • write the introduction to see if you can explain the whole rationale for your dissertation before you do most of the research.

Now, I don’t want to get into a big debate with the deviants who want to write or rewrite their introductions at the end. You can do what you like, pal.

Instead, I want to emphasise the benefits of the early investment. If you get the research question spot-on, in relation to the introduction, you can do the following:

  • make your project manageable from the start, without learning the hard way that you’ve bitten off more than you can chew
  • save a remarkably hellish amount of time on your ‘literature review’ by producing a clear sense of what is relevant/ to be skipped over
  • boast to your friends that you finished on time.

There is some good advice out there on designing a question to speak to a big question and a narrow research project at the same time.

For example, most of my projects follow roughly the same format: what is policy, how much has it changed, and why?

We can then narrow it down in several ways:

  • Choose, say, tobacco policy (quite specific) versus health policy (very broad indeed)
  • Choose one political system or one region, or limit your comparison of systems
  • Choose one time period
  • Choose what aspect of change you want to explain.

The last of these is often the most important, because (in my case) it can make the difference between (a) feeling the need to explain many, many theories to give the whole picture (an impossible task) or (b) narrowing down theory selection by focusing on a small number of causes/ dynamics.

Ideally, the question should be super-important and sophisticated, but a dissertation also takes a lot of time and attention. So, my best advice is to choose a question to which you actually want to know the answer. If so, you should end up very satisfied with the result. If you don’t find the question interesting, you may come to resent your dissertation.

A final thought is that students often don’t know what question to ask, and they talk quite broadly about a very general topic. In such cases, it’s important to work with your supervisor until you’re both happy with the final result. My most memorable example is a student who, above all else, wanted to write about Beyoncé (and it worked out very well indeed).


Filed under Research design, Uncategorized