Category Archives: IMAJINE

Lessons from policy theories for the pursuit of equity in health, education and gender policy

By Paul Cairney, Emily St.Denny, Sean Kippin, Heather Mitchell

This post first appeared on the Policy & Politics blog. It summarizes an article published in Policy & Politics.

Could policy theories help to understand and facilitate the pursuit of equity (or reduction of unfair inequalities)?

We are producing a series of literature reviews to help answer that question, beginning with the study of equity policy and policymaking in health, education, and gender research.

Each field has a broadly similar focus. Most equity researchers challenge ‘neoliberal’ approaches to policy, which favour minimal state action and leave outcomes to individual responsibility and market forces. They seek ‘social justice’ approaches instead, favouring far greater state intervention to address the social and economic causes of unfair inequalities via redistributive or regulatory measures. They also seek policymaking reforms to reflect the fact that most determinants of inequalities are not contained within one policy sector and cannot be solved in policy ‘silos’. Rather, equity policy initiatives should be mainstreamed via collaboration across (and outside of) government. Each field also projects a profound sense of disenchantment with limited progress, including a tendency to describe an excessively large gap between their aspirations and actual policy outcomes. Researchers describe high certainty about what needs to happen, but low confidence that equity advocates have the means to achieve it (or to persuade powerful politicians to change course).

Policy theories could offer some practical insights for equity research, but they do not always offer the lessons that some advocates seek. In particular, health equity researchers seek to translate insights on policy processes into a playbook for action: to frame policy problems in ways that generate more attention to inequalities, secure high-level commitment to radical change, and improve the coherence of cross-cutting policy measures. Yet policy theories are more likely to identify the dominance of unhelpful policy frames, the rarity of radical change, and the strong rationale for uncoordinated policymaking across a large number of venues. Rather than fostering technical fixes via a playbook, they encourage more engagement with the inescapable dilemmas and trade-offs inherent to policy choice. This focus on contestation (such as when defining and addressing policy problems) is more of a feature of education and gender equity research.

While we ask what policy theories have to offer other disciplines, in fact the most useful lessons emerge from cross-disciplinary insights. They highlight two very different approaches to transformational political change. One offers the attractive but misleading option of radical change through non-radical action, by mainstreaming equity initiatives into current arrangements and using a toolbox to make continuous progress. Yet, each review highlights a tendency for radical aims to be co-opted and often used to bolster the rules and practices that protect the status quo. The other offers radical change through overtly political action, fostering continuous contestation to keep the issue high on the policy agenda and challenge co-option. There is no clear step-by-step playbook for this option, since political action in complex policymaking systems is necessarily uncertain and often unrewarding. Still, insights from policy theories and equity research show that grappling with these challenges is inescapable.

Ultimately, we conclude that advocates of profound social transformation are wasting each other’s time if they seek short-cuts and technical fixes to enduring political problems. Supporters of policy equity should be cautious about any attempt to turn a transformational political project into a technical process containing a ‘toolbox’ or ‘playbook’.

You can read the original research in Policy & Politics:

Paul Cairney, Emily St.Denny, Sean Kippin, and Heather Mitchell (2022) ‘Lessons from policy theories for the pursuit of equity in health, education, and gender policy’, Policy and Politics https://doi.org/10.1332/030557321X16487239616498

This article is an output of the IMAJINE project, which focuses on addressing inequalities across Europe.


Filed under 750 word policy analysis, agenda setting, education policy, IMAJINE, Policy learning and transfer, Prevention policy, Public health, public policy, Social change

Policy Analysis in 750 Words: Two approaches to policy learning and transfer

This post forms one part of the Policy Analysis in 750 words series. It draws on work for an in-progress book on learning to reduce inequalities. Some of the text will seem familiar if you have read other posts. Think of it as an adventure game in which the beginning is the same but you don’t know the end.

Policy learning is the use of new information to update policy-relevant knowledge. Policy transfer involves the use of knowledge about policy and policymaking in one government to inform policy and policymaking in another.

These processes may seem to relate primarily to research and expertise, but they require many kinds of political choices (explored in this series). They take place in complex policymaking systems over which no single government has full knowledge or control.

Therefore, while the agency of policy analysts and policymakers still matters, they engage with a policymaking context that constrains or facilitates their action.

Two approaches to policy learning: agency-driven and context-driven stories

Policy analysis textbooks focus on learning and transfer as an agent-driven process with well-established  guidance (often with five main steps). They form part of a functionalist analysis where analysts identify the steps required to turn comparative analysis into policy solutions, or part of a toolkit to manage stages of the policy process.

Agency is less central to policy process research, which describes learning and transfer as contingent on context. Key factors include:

  • Analysts compete to define problems and to determine the manner and sources of learning.
  • Learning takes place in a multi-centric environment, where different contexts constrain and facilitate action in different ways.
  • Structural factors, such as socioeconomic conditions, influence the feasibility of proposed policy change.
  • Each centre’s institutions provide different rules for gathering, interpreting, and using evidence.

The result is a mixture of processes in which:

  1.  Learning from experts is one of many possibilities. For example, Dunlop and Radaelli also describe ‘reflexive learning’, ‘learning through bargaining’, and ‘learning in the shadow of hierarchy’.
  2.  Transfer takes many forms.

How should analysts respond?

Think of two different ways to respond to each feature of the policy process summarised below. One is your agency-centred strategic response. The other is me telling you why it won’t be straightforward.

An image of the policy process (see 5 images)

There are many policy makers and influencers spread across many policymaking ‘centres’

  1. Find out where the action is and tailor your analysis to different audiences.
  2. There is no straightforward way to influence policymaking if multiple venues contribute to policy change and you don’t know who does what.

Each centre has its own ‘institutions’

  1. Learn the rules of evidence gathering in each centre: who takes the lead, how do they understand the problem, and how do they use evidence?
  2. There is no straightforward way to foster policy learning between political systems if each is unaware of each other’s unwritten rules. Researchers could try to learn their rules to facilitate mutual learning, but with no guarantee of success.

Each centre has its own networks

  1. Form alliances with policymakers and influencers in each relevant venue.
  2. The pervasiveness of policy communities complicates policy learning because the boundary between formal power and informal influence is not clear.

Well-established ‘ideas’ tend to dominate discussion

  1. Learn which ideas are in good currency. Tailor your advice to your audience’s beliefs.
  2. The dominance of different ideas precludes many forms of policy learning or transfer. A popular solution in one context may be unthinkable in another.

Many policy conditions (historic-geographic, technological, social and economic factors) command the attention of policymakers and are out of their control. Routine events and non-routine crises prompt policymaker attention to lurch unpredictably.

  1. Learn from studies of leadership in complex systems or the policy entrepreneurs who find the right time to exploit events and windows of opportunity to propose solutions.
  2. The policy conditions may be so different in each system that policy learning is limited and transfer would be inappropriate. Events can prompt policymakers to pay disproportionately low or high attention to lessons from elsewhere, and this attention relates weakly to evidence from analysts.

Feel free to choose one or both forms of advice. One is useful for people who see analysts and researchers as essential to major policy change. The other is useful if it serves as a source of cautionary tales rather than fatalistic responses.

See also:

Policy Concepts in 1000 Words: Policy Transfer and Learning

Teaching evidence based policy to fly: how to deal with the politics of policy learning and transfer

Policy Concepts in 1000 Words: the intersection between evidence and policy transfer

Policy learning to reduce inequalities: a practical framework

Three ways to encourage policy learning

Epistemic versus bargaining-driven policy learning

The ‘evidence-based policymaking’ page explores these issues in more depth.


Filed under 750 word policy analysis, IMAJINE, Policy learning and transfer, public policy

The COVID-19 exams fiasco across the UK: why did policymaking go so wrong?

This post first appeared on the LSE British Politics and Policy blog, and it summarises our new article: Sean Kippin and Paul Cairney (2021) ‘The COVID-19 exams fiasco across the UK: four nations and two windows of opportunity’, British Politics, PDF Annex. The focus on inequalities of attainment is part of the IMAJINE project on spatial justice and territorial inequalities.

In the summer of 2020, after cancelling exams, the UK and devolved governments sought teacher estimates of students’ grades, but supported an algorithm to standardise the results. When the results produced a public outcry over their unfair consequences, the governments initially defended their decision but quickly reverted to teacher assessment. These experiences, argue Sean Kippin and Paul Cairney, highlight the confluence of events and choices in which an imperfect and rejected policy solution became a ‘lifeline’ for four beleaguered governments.

In 2020, the UK and devolved governments performed a ‘U-turn’ on their COVID-19 exam replacement policies. The experience was embarrassing for education ministers and damaging to students. There are significant differences between (and often within) the four nations in the structure, timing, weighting, and relationship of the different examinations. In general, however, the A-level (England, Northern Ireland, Wales) and Higher/Advanced Higher (Scotland) examinations have similar policy implications, dictating entry to further and higher education and influencing employment opportunities. The Priestley review, commissioned by the Scottish Government after its U-turn, described the challenge of replacing exams as an ‘impossible task’.

Initially, each government defined the new policy problem in relation to the need to ‘credibly’ replicate the purpose of exams, allowing students to progress to tertiary education or employment. All four quickly announced their intention to allocate grades to students in some form, rather than replace the assessments with, for example, remote examinations. However, mindful of the long-term credibility of the examinations system and of ensuring fairness, each government opted to maintain the qualifications and seek a distribution of grades similar to previous years. A key consideration was that UK universities accept large numbers of students from across the UK.

One potential solution open to policymakers was to rely solely on teacher grading via centre assessed grades (CAGs). CAGs are ‘based on a range of evidence including mock exams, non-exam assessment, homework assignments and any other record of student performance over the course of study’. Potential problems included the risk of high variation and discrepancies between different centres, the potential overload of the higher education system, and the tendency for teacher-predicted grades to reward already privileged students and punish disabled, non-white, and economically deprived children.

A second option was to take CAGs as a starting point, then use an algorithm to ‘standardise’ them. This was potentially attractive to each government because it allowed students to complete secondary education and progress to the next level in similar ways to previous (and future) cohorts. Further, an emphasis on the technical nature of this standardisation – with qualifications agencies taking the lead in designing the process by which grades would be allocated, and opting not to share the details of the algorithm – was a key part of its (temporary) viability. Each government then made similar claims when defining the problem and selecting the solution. Yet this approach reduced both the debate on the unequal impact of the process on students, and the chance for other experts to examine whether the algorithm would produce the desired effect. Policymakers in all four governments assured students that the grading would be accurate and fair, with teacher discretion playing a large role in the calculation of grades.
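The post does not spell out the mechanics of standardisation, but the broad logic of this family of approaches can be sketched: rank each centre’s students by their teacher assessment, then fit them to the centre’s historical grade distribution. The Python sketch below is purely illustrative (the function, grade shares, and example data are invented, and the real 2020 processes differed across the four nations), but it shows why individual downgrades could track a school’s past results rather than a student’s own performance.

```python
# Purely illustrative sketch (not the actual 2020 algorithms): standardise
# teacher-assessed grades by mapping each centre's rank-ordered students onto
# that centre's historical grade distribution.

GRADES = ["A", "B", "C", "D", "E", "U"]  # best to worst

def standardise(cags, historical_share):
    """cags: teacher-assessed grades for one centre, one per student.
    historical_share: the centre's past share of each grade, e.g. {"A": 0.2}.
    Returns standardised grades in the same student order."""
    n = len(cags)
    # Rank students within the centre by their teacher-assessed grade (best first).
    ranked = sorted(range(n), key=lambda i: GRADES.index(cags[i]))
    # Build a quota of grades matching the centre's historical distribution,
    # regardless of how generous (or accurate) the teacher assessments were.
    quota = []
    for g in GRADES:
        quota += [g] * round(historical_share.get(g, 0) * n)
    quota = (quota + [GRADES[-1]] * n)[:n]  # absorb rounding drift
    result = [None] * n
    for position, student in enumerate(ranked):
        result[student] = quota[position]
    return result

# A strong student in a centre with a weak grade history is downgraded:
print(standardise(["A", "A", "B", "C", "C"],
                  {"A": 0.0, "B": 0.2, "C": 0.4, "D": 0.4}))
# -> ['B', 'C', 'C', 'D', 'D']: every CAG is pulled down to fit the history
```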

To these governments, it appeared at first that they had found a fair and efficient (or at least defensible) way to allocate grades, and public opinion did not respond negatively to the announcement. However, these appearances proved profoundly deceptive and vanished on the day each set of exam results was published. The Scottish national mood shifted so intensely that, after a few days, pursuing standardisation no longer seemed politically feasible. The intense criticism centred on the unequal pattern of grade reductions after standardisation, rather than on the unequal overall rise in grade performance after teacher assessment and standardisation (which advantaged poorer students).

Despite some recognition that similar problems were afoot elsewhere, this shift in problem definition did not happen in the rest of the UK until (a) their published exam results highlighted similar problems regarding the influence of previous school performance on standardised results, and (b) the Scottish Government had already changed course. Upon the release of grades outside Scotland, it became clear that downgrades were also concentrated in more deprived areas. For instance, in Wales, 42% of students saw their A-level results lowered from their centre assessed grades, with the figure close to a third for Northern Ireland.

Each government thus faced a similar choice between defending the original system by challenging the emerging consensus around its apparent unfairness; modifying the system by changing the appeals process; or abandoning it altogether and reverting to teacher assessed grades alone. Ultimately, all three remaining governments followed the same path. Initially, they opted to defend their original policy choice. However, by 17 August, the UK, Welsh, and Northern Irish education secretaries announced (separately) that examination grades would be based solely on CAGs – unless the standardisation process had generated a higher grade (students would receive whichever was higher).

Scotland’s initial experience was instructive to the rest of the UK, and its example provided the UK government with a blueprint to follow (eventually). It began with a new policy choice – reverting to teacher assessed grades – sold as fairer to the victims of the standardisation process. Once this precedent had been set, a different course became difficult for policymakers at the UK level to resist, particularly when faced with a similar backlash. The UK government’s decision in turn influenced the Welsh and Northern Irish governments.

In short, the particular ordering of choices created a cascading effect across the four governments, which initially produced one policy solution before triggering a U-turn. This focus on order and timing should not be lost in the inevitable inquiries and reports on the examinations systems. The take-home message is not to ignore the policy process when evaluating the long-term effect of these policies. A focus on why the standardisation processes went wrong is welcome, but we should also focus on why the policymaking process malfunctioned, producing a wildly inconsistent approach to the same policy choice in such a short space of time. Examining both aspects of this fiasco will be crucial to the grading process in 2021, given that governments will be seeking an alternative to exams for a second year.

__________________________

Note: the above draws on the authors’ published work in British Politics.


Filed under IMAJINE, Policy learning and transfer, public policy, UK politics and policy

Policy learning to reduce inequalities: a practical framework

This post first appeared on LSE BPP on 16.11.2020. It describes the authors’ published work in Territory, Politics, Governance (for IMAJINE).

While policymakers often want to learn how other governments have responded to certain policy problems, policy learning is characterized by contestation. Policymakers compete to define the problem, set the parameters for learning, and determine which governments should take the lead. Emily St.Denny, Paul Cairney, and Sean Kippin discuss a framework that would encourage policy learning in multilevel systems.

Governments face similar policy problems and there is great potential for mutual learning and policy transfer. Yet, most policy research highlights the political obstacles to learning and the weak link between research and transfer. One solution may be to combine academic insights from policy research with practical insights from people with experience of learning in political environments. In that context, our role is to work with policy actors to produce pragmatic strategies to encourage realistic research-informed learning.

Pragmatic policy learning

Producing concepts, research questions, and methods that are interesting to both academics and practitioners is challenging. It requires balancing different approaches to gathering and considering ‘evidence’ when seeking to solve a policy problem. Practitioners need to gather evidence quickly, focusing on ‘what works’ or positive experiences from a small number of relevant countries. Policy scholars may seek more comprehensive research and warn against simple solutions. Further, they may do so without offering a feasible alternative to their audience.

To bridge these differences and facilitate policy learning, we encourage a pragmatic approach to policy learning that requires:

  • Seeing policy learning through the eyes of participants, to understand how they define and seek to solve this problem;
  • Incorporating insights from policy research to construct a feasible approach;
  • Reflecting on this experience to inform research.

Our aim is not ‘evidence-based policymaking’. Rather, it is to incorporate the fact that researchers and evidence form only one small component of a policymaking system characterized by complexity. Additionally, policy actors enjoy less control over these systems than we might like to admit. Learning is therefore best understood as a contested process in which actors combine evidence and beliefs to define policy problems, identify technically and politically feasible solutions, and negotiate who should be responsible for their adoption and delivery in multilevel policymaking systems. Taking seriously the contested, context-specific, and political nature of policymaking is crucial for producing effective advice from which to learn.

Policy learning to reduce inequalities

We apply these insights as part of the EU Horizon 2020 project Integrative Mechanisms for Addressing Spatial Justice and Territorial Inequalities in Europe (IMAJINE). Its overall aim is to research how national and territorial governments across the European Union pursue ‘spatial justice’ and try to reduce inequalities.

Our role is to facilitate policy learning and consider the transfer of policy solutions from successful experiences. Yet, we are confronted by the usual challenges. They include the need to: identify appropriate exemplars from where to draw lessons; help policy practitioners control for differences in context; and translate between academic and practitioner communities.

Additionally, we work on an issue – inequality – which is notoriously ambiguous and contested. It involves not only scientific information about the lives and experiences of people, but also political disagreement about the legitimate role of the state in intervening in people’s lives or redistributing resources. Developing a policy learning framework that can generate practically useful insights for policy actors is difficult, but key to ensuring policy effectiveness and coherence.

Drawing on work we carried out for the Scottish Government’s National Advisory Council on Women and Girls on approaches to reducing inequalities in relation to gender mainstreaming, we apply the IMAJINE framework to support policy learning. The IMAJINE framework guides such academic–practitioner analysis in four steps:

Step 1: Define the nature of policy learning in political systems.

Preparing for learning requires taking into account the interaction between:

  • Politics, in which actors contest the nature of problems and the feasibility of solutions;
  • Bounded rationality, which requires actors to use organizational and cognitive shortcuts to gather and use evidence;
  • ‘Multi-centric’ policymaking systems, which limit a single central government’s control over choices and outcomes.

These dynamics play out in different ways in each territory, which means that the importers and exporters of lessons operate in different contexts and address inequalities in different ways. Therefore, we must ask how the importers and exporters of lessons define the problem, decide which policies are feasible, establish which level of government should be responsible for policy, and identify criteria to evaluate policy success.

Step 2: Map policymaking responsibilities for the selection of policy instruments.

The Council of Europe defines gender mainstreaming as ‘the (re)organisation, improvement, development and evaluation of policy processes, so that a gender equality perspective is incorporated in all policies at all levels and at all stages’.

Such definitions help explain why mainstreaming approaches often appear to be incoherent. To map the sheer weight of possible measures, and the spread of responsibility across many levels of government (such as local, Scottish, UK and EU), is to identify a potentially overwhelming scale of policymaking ambition. Further, governments tend to manage this complexity by breaking policymaking into manageable sectors. Each sector has its own rules and logics, producing coherent policymaking in each ‘silo’ but a sense of incoherence overall, particularly if the overarching aim is a low priority in government. Mapping these dynamics and responsibilities is necessary to ensure that lessons learned can be applied effectively in similarly complex domestic systems.

Step 3: Learn from experience.

Policy actors want to draw lessons from the most relevant exemplars. Often, they will have implicit or explicit ideas about which countries they would like to learn more from. Negotiating which cases to explore, so that the selection takes into consideration both policy actors’ interests and the need to generate appropriate and useful lessons, is vital.

In the case of mainstreaming, we focused on three exemplar approaches, selected by members of our audience according to perceived levels of ambition: maximal (Sweden), medial (Canada) and minimal (the UK, which controls aspects of Scottish policy). These cases were also justified with reference to the academic literature which often uses these countries as exemplars of different approaches to policy design and implementation.

Step 4: Deliberate and reflect.

Work directly with policy participants to reflect on the implications for policy in their context. Research has many important insights on the challenges to and limitations of policy learning in complex systems. In particular, it suggests that learning cannot be comprehensive and does not lead to the importation of a well-defined package of measures. Bringing these sorts of insights to bear on policy actors’ practical discussions of how lessons can be drawn and applied from elsewhere is necessary, though ultimately insufficient. In our experience so far, step 4 is the biggest obstacle to our impact.

___________________


Filed under agenda setting, Evidence Based Policymaking (EBPM), feminism, IMAJINE, Policy learning and transfer, public policy

Taking lessons from policy theory into practice: 3 examples

Notes for an ANZSOG/ANU Crawford School/UNSW Canberra workshop. Powerpoint here. The recording of the lecture (skip to 2m30) and Q&A is here (right click to download the mp3, or use the dropbox link).

The context for this workshop is the idea that policy theories could be more helpful to policymakers/ practitioners if we could all communicate more effectively with each other. Academics draw general and relatively abstract conclusions from multiple cases. Practitioners draw very similar conclusions from rich descriptions of direct experience in a smaller number of cases. How can we bring together their insights and use a language that we all understand? Or, more ambitiously, how can we use policy theory-based insights to inform the early career development training that civil servants and researchers receive?

The first step is to translate policy theories into a non-technical language by trying to speak with an audience beyond our immediate peers (see for example Practical Lessons from Policy Theories).

However, translation is not enough. A second crucial step is to consider how policymakers and practitioners are likely to make sense of theoretical insights when they apply them to particular aims or responsibilities. For example:

  1. Central government policymakers may accept the descriptive accuracy of policy theories emphasising limited central control, but not the recommendation that they should let go, share power, and describe their limits to the public.
  2. Scientists may accept key limitations to ‘evidence based policymaking’ but reject the idea that they should respond by becoming better storytellers or more manipulative operators.
  3. Researchers and practitioners struggle to resolve hard choices when combining evidence and ‘coproduction’ while ‘scaling up’ policy interventions. Evidence choice is political choice. Can we do more than merely encourage people to accept this point?

I discuss these examples below because they are closest to my heart (especially example 1). Note throughout that I am presenting one interpretation about: (1) the most promising insights, and (2) their implications for practice. Other interpretations of the literature and its implications are available. They are just a bit harder to find.

Example 1: the policy cycle endures despite its descriptive inaccuracy


The policy cycle does not describe and explain the policy process well:

  • If we insist on keeping the cycle metaphor, it is more accurate to see the process as a huge set of policy cycles that connect with each other in messy and unpredictable ways.
  • The cycle approach also links strongly to the idea of ‘comprehensive rationality’ in which a small group of policymakers and analysts are in full possession of the facts and full control of the policy process. They carry out their aims through a series of stages.

Policy theories provide more descriptive and explanatory usefulness. Their insights include:

  • Limited choice. Policymakers inherit organisations, rules, and choices. Most ‘new’ choice is a revision of the old.
  • Limited attention. Policymakers must ignore almost all of the policy problems for which they are formally responsible. They pay attention to some, and delegate most responsibility to civil servants. Bureaucrats rely on other actors for information and advice, and they build relationships on trust and information exchange.
  • Limited central control. Policy may appear to be made at the ‘top’ or in the ‘centre’, but in practice policymaking responsibility is spread across many levels and types of government (many ‘centres’). ‘Street level’ actors make policy as they deliver. Policy outcomes appear to ‘emerge’ locally despite central government attempts to control their fate.
  • Limited policy change. Most policy change is minor, made and influenced by actors who interpret new evidence through the lens of their beliefs. Well-established beliefs limit the opportunities for new solutions. Governments tend to rely on trial-and-error, based on previous agreements, rather than radical policy change based on a new agenda. New solutions succeed only during brief and infrequent windows of opportunity.

However, the cycle metaphor endures because:

  • It provides a simple model of policymaking with stages that map onto important policymaking functions.
  • It provides a way to project policymaking to the public. You know how we make policy, and that we are in charge, so you know who to hold to account.

In that context, we may want to be pragmatic about our advice:

  1. One option is via complexity theory, in which scholars generally encourage policymakers to accept and describe their limits:
  • Accept routine error, reduce short-term performance management, engage more in trial and error, and ‘let go’ to allow local actors the flexibility to adapt and respond to their context.
  • However, would a government in the Westminster tradition really embrace this advice? No. They need to balance (a) pragmatic policymaking, and (b) an image of governing competence.
  2. Another option is to try to help improve an existing approach.

Further reading (blog posts):

The language of complexity does not mix well with the language of Westminster-style accountability

Making Sense of Policymaking: why it’s always someone else’s fault and nothing ever changes

Two stories of British politics: the Westminster model versus Complex Government

Example 2: how to deal with a lack of ‘evidence based policymaking’

I used to read many papers on tobacco policy, with the same basic message: we have the evidence of tobacco harm, and evidence of which solutions work, but there is an evidence-policy gap caused by too-powerful tobacco companies, low political will, and pathological policymaking. These accounts are not informed by theories of policymaking.

I then read Oliver et al’s paper on the lack of policy theory in health/environmental scholarship on the ‘barriers’ to the use of evidence in policy. Very few articles rely on policy concepts, and most of those few rely on the policy cycle. This lack of policy theory is clear in their description of possible solutions – better communication, networking, timing, and more science literacy in government – a list that does not reflect the need to respond to policymaker psychology and a complex policymaking environment.

So, I wrote The Politics of Evidence-Based Policymaking and one zillion blog posts to help identify the ways in which policy theories could help explain the relationship between evidence and policy.

Since then, the highest demand to speak about the book has come from government/ public servant, NGO, and scientific audiences outside my discipline. The feedback is generally that: (a) the book’s description sums up their experience of engagement with the policy process, and (b) maybe it opens up discussion about how to engage more effectively.

But how exactly do we turn empirical descriptions of policymaking into practical advice?

For example, scientist/researcher audiences want to know the answer to a question like ‘Why don’t policymakers listen to your evidence?’, so I focus on three conversation starters:

  1. they have a broader view on what counts as good evidence (see ANZSOG description)
  2. they have to ignore almost all information (a nice way into bounded rationality and policymaker psychology)
  3. they do not understand or control the process in which they seek to use evidence (a way into ‘the policy process’)

Cairney 2017 image of the policy process

We can then consider many possible responses in the sequel What can you do when policymakers ignore your evidence?

Examples include:

  • ‘How to do it’ advice. I compare tips for individuals (from experienced practitioners) with tips based on policy concepts. They are quite similar-looking tips – e.g. find out where the action is, learn the rules, tell good stories, engage allies, seek windows of opportunity – but I describe mine as 5 impossible tasks!
  • Organisational reform. I describe work with the European Commission Joint Research Centre to identify 8 skills or functions of an organisation bringing together the supply of, and demand for, knowledge.
  • Ethical dilemmas. I use key policy theories to ask people how far they want to go to privilege evidence in policy. It’s fun to talk about these things with the type of scientist who sees any form of storytelling as manipulation.

Further reading:

Is Evidence-Based Policymaking the same as good policymaking?

A 5-step strategy to make evidence count

Political science improves our understanding of evidence-based policymaking, but does it produce better advice?

Principles of science advice to government: key problems and feasible solutions

Example 3: how to encourage realistic evidence-informed policy transfer

This focus on EBPM is useful context for discussions of ‘policy learning’ and ‘policy transfer’, and it was the focus of my ANZSOG talk entitled (rather ambitiously) ‘teaching evidence-based policy to fly’.

I’ve taken a personal interest in this one because I’m part of a project – called IMAJINE – in which we have to combine academic theory and practical responses. We are trying to share policy solutions across Europe rather than explain why few people share them!

For me, the context is potentially overwhelming.

So, when we start to focus on sharing lessons, we will have three things to discover:

  1. What is the evidence for success, and from where does it come? Governments often project success without backing it up.
  2. What story do policymakers tell about the problem they are trying to solve, the solutions they produced, and why? Two different governments may be framing and trying to solve the same problem in very different ways.
  3. Was the policy introduced in a comparable policymaking system? People tend to focus on political system comparability (e.g. is it unitary or federal?), but I think the key is in policymaking system comparability (e.g. what are the rules and dominant ideas?).

To be honest, when one of our external assessors asked me how well I thought I would do, we both smiled because the answer may be ‘not very’. In other words, the most practical lesson may be the hardest to take, although I find it comforting: the literature suggests that policymakers might ignore you for 20 years then suddenly become very (but briefly) interested in your work.

 

The slides are a bit wonky because I combined my old ppt for the Scottish Government with a new one for UNSW: Paul Cairney ANU Policy practical 22 October 2018

I wanted to compare how I describe things to (1) civil servants, (2) practitioners/researchers, and (3) me, but who has the time/desire to listen to 3 powerpoints in one go? If the answer is you, let me know and we’ll set up a Zoom call.


Filed under agenda setting, Evidence Based Policymaking (EBPM), IMAJINE, Policy learning and transfer

We are recruiting three lecturers in Politics at the University of Stirling

Senior lectureship/ Associate Professor in Comparative Politics

Lectureship in International Politics

Lectureship in Politics and Public Policy (4 years – Horizon 2020 programme IMAJINE)

I am the pre-interview contact point and these are my personal thoughts on that process, which blend background information and some helpful advice.

The first two posts provide ‘open ended’ contracts. We are also seeking a postdoctoral researcher/ lecturer to work with me for 4.25 years on a Horizon2020 project. So, I’ll give some general advice on each, then emphasise some differences with the third post.

The politics staff in our division will number 10 following these appointments, so you will have the chance to play an important part in a group which is small enough to act collectively – to, for example, influence its research direction.

Why do we make reference to ‘gender, sexuality, and race’ in the FPs?

5 of our 7 permanent lecturers are men and all 7 are white. We are not interested in simply reinforcing the imbalances that are already there. So, we worded the further particulars to ‘signal’ that we have realistic hopes of producing a more diverse and gender-balanced shortlist. Usually, job adverts have a pro-forma statement about equalities, but we are trying to go one step further to signal – albeit with rather subtle cues – that we have thought about this issue a bit more; that we’d like to expand our networks and the ways in which our staff approach the study of politics. We are trying to make sure that our current set-up does not put women or people of colour off applying, to recruit from a subject pool in which there is (I think) a relatively good gender balance, and to signal support for research topics that might help expand our current offering.

These notes are also there to address a potentially major imbalance in the informal side to recruitment: if you do not have the contacts and networks that help give you the confidence to seek information (on the things not mentioned in the further particulars), here is the next best thing: the information I’d give you on the phone. Still, if you reach interview stage, we really should talk.

We hope to make this kind of informal advice a routine part of the application process, as part of our commitment to innovative best practice and Athena SWAN.

Here are some tips on the application and interview processes.

The application process:

  • At this stage, the main documents are the CV and the cover letter.
  • You should keep the cover letter short to show your skills at concise writing. Focus on what you can offer the Division specifically, given the nature of our call and further particulars.
  • Shortlisted candidates at the SL/ Associate Professor level will likely be established lecturers with a strong record on publications, income, and leadership – so what makes you stand out? Lecturers will be competing with many people who have completed a PhD – so what makes your CV stand out?
  • Note that you will have the chance to play an important part in a group which is small enough (10 in Politics, as part of a larger Division with History) to act collectively – to, for example, influence its research direction (as a group, we hold regular 90-minute research workshops for that purpose).
  • Focus on what you have already done when discussing what you will promise to do over the next five years. Those plans seem more realistic if there is already some sort of track record.
  • We take teaching very seriously. Within our division, we plan an overall curriculum together, discuss regularly if it is working, and come to agreements about how to teach and assess work. We pride ourselves on being a small and friendly bunch of people, open to regular student contact and, for example, committed to meaningful and regular feedback. You might think about how you would contribute in that context. In particular, you should think about how you would deliver large undergraduate courses (in which you may only be an expert on some of the material) as well as the smaller, more specialist and advanced, courses closer to your expertise.

The interview process

By the interview stage, you should almost certainly have a conversation with me to make sure that you are well prepared. For example, here are the things that you really should know at that stage:

  • The teaching and research specialisms of the division and their links to cross-divisional research.
  • The kinds of courses that the division would expect you to teach.

Perhaps most importantly, you need to be able to articulate why you want to come and work at Stirling. ‘Why Stirling?’ or ‘Why this division?’ is usually the first question in an interview, so you should think about it in advance. We recommend doing some research on Stirling and the division/ faculty, to show in some detail that you have a considered reply (beyond ‘it is a beautiful campus’). We will see through a generic response in a heartbeat and, since it is the first question, your answer will set the tone for the rest of the interview. You might check, for example, who you might share interests with in the Division, and how you might  develop links beyond the division (for example, the Centre for Gender & Feminist Studies in our school) or faculty (such as the Faculty of Social Sciences) – since this is likely to be a featured question too.

  • Then you might think about what you would bring to the University in a wider sense, such as through well-established (domestic and international) links with other scholars in academic networks.
  • Further, since ‘impact’ is of rising importance, you might discuss your links with people and organisations outside of the University, and how you have pursued meaningful engagement with the public or practitioners to maximise the wider contribution of your research.

The presentation plus interview format

In our system there tend to be presentations to divisional (and other interested) staff in the morning, with interviews in the afternoon. The usual expectation is that if you can’t make the date, you can’t get the job (although we can make accommodations to help you apply or interview via Skype).

  • We recommend keeping the presentation compact, to show that you can present complex information in a concise and clear way. Presentations are usually a mix of what you do in research and what you will contribute in a wider sense to the University.
  • The usual interview panel format at this level is five members: one subject specialist from the Division, one other member of the Faculty (not necessarily from our division), the Head of Faculty of Arts and Humanities, a senior manager of the University (in the chair), and a senior academic in another Faculty (by the time of interview you should know what these terms mean at Stirling).
  • So, it is possible that only 1 member of your panel will be a subject specialist (in Politics). This means that (at the very least) you need to describe your success in a way that a wider audience will appreciate. For example, you would have to explain the significance of a single-author article in the APSR!

It sounds daunting, but we are a friendly bunch and want you to do well. You might struggle to retain all of our names (nerves), so focus on the types of question we ask – for example, the general question to get you started will be from the senior manager, and the research question from the divisional representative. There are often more men than women on the panel, and they are often all-white panels, but I hope that we are providing other more useful ‘signals’ about our commitment to equality and diversity.

I am happy to answer your questions. We can try email first – p.a.cairney@stir.ac.uk – and then phone or skype if you prefer.

The Horizon 2020 post

I have described some key concepts in two separate posts, to give you an idea of our part of the larger project:

The theory and practice of evidence-based policy transfer: can we learn how to reduce territorial inequalities?

‘Co-producing’ comparative policy research: how far should we go to secure policy impact?

Please also note why we are offering a 4.25 year post: we want it to be a platform for your long term success. A lot of applicants will know that our research funding system has some unintended consequences: some people get grants and are bought out of teaching, others get more teaching in return, and many research fellows compete for very short term contracts with limited job security. This post should reduce those consequences: you and I would share my full teaching load, you would have the chance to co-author a lot of research with me (and we can both single author other pieces), and we would seek more opportunities for funding throughout. By the end of 2021, I hope that your CV will be impressive enough for you to think about applying for senior lecturing positions.


Filed under Athena Swan, IMAJINE

‘Co-producing’ comparative policy research: how far should we go to secure policy impact?

See also our project website IMAJINE.

Two recent articles explore the role of academics in the ‘co-production’ of policy and/or knowledge.

Both papers suggest (I think) that academic engagement in the ‘real world’ is highly valuable, and that we should not pretend that we can remain aloof from politics when producing new knowledge (research production is political even if it is not overtly party political). They also suggest that it is fraught with difficulty and, perhaps, an often-thankless task with no guarantee of professional or policy payoffs (intrinsic motivation still trumps extrinsic motivation).

So, what should we do?

I plan to experiment a little bit while conducting some new research over the next 4 years. For example, I am part of a new project called IMAJINE, and plan to speak with policymakers, from the start to the end, about what they want from the research and how they’ll use it. My working assumption is that it will help boost the academic value and policy relevance of the research.

I have mocked up a paper abstract to describe this kind of work:

In this paper, we use policy theory to explain why the ‘co-production’ of comparative research with policymakers makes it more policy relevant: it allows researchers to frame their policy analysis with reference to the ways in which policymakers frame policy problems; and, it helps them identify which policymaking venues matter, and the rules of engagement within them.  In other words, theoretically-informed researchers can, to some extent, emulate the strategies of interest groups when they work out ‘where the action is’ and how to adapt to policy agendas to maximise their influence. Successful groups identify their audience and work out what it wants, rather than present their own fixed views to anyone who will listen.

Yet, when described so provocatively, our argument raises several practical and ethical dilemmas about the role of academic research. In abstract discussions, they include questions such as: should you engage this much with politics and policymakers, or maintain a critical distance; and, if you engage, should you simply reflect or seek to influence the policy agenda? In practice, such binary choices are artificial, prompting us to explore how to manage our engagement in politics and reflect on our potential influence.

We explore these issues with reference to a new Horizon 2020 funded project IMAJINE, which includes a work package – led by Cairney – on the use of evidence and learning from the many ways in which EU, national, and regional policymakers have tried to reduce territorial inequalities.

So, in the paper, we (my future research partner and I) would:

  • Outline the payoffs to this engage-early approach. Early engagement will inform the research questions you ask, how you ask them, and how you ‘frame’ the results. It should also help produce more academic publications (which is still the key consideration for many academics), partly because this early approach will help us speak with some authority about policy and policymaking in many countries.
  • Describe the complications of engaging with different policymakers in many ‘venues’ in different countries: you would expect very different questions to arise, and perhaps struggle to manage competing audience demands.
  • Raise practical questions about the research audience, including: should we interview key advocacy groups and private sources of funding for applied research, as well as policymakers, when refining questions? I ask this question partly because it can be more effective to communicate evidence via policy influencers rather than try to engage directly with policymakers.
  • Raise ethical questions, including: what if policymaker interviewees want the ‘wrong’ questions answered? What if they are only interested in policy solutions that we think are misguided, either because the evidence base is limited (and yet they seek a magic bullet) or because their aims are based primarily on ideology (an allegedly typical dilemma concerns left-wing academics providing research for right-wing governments)?

Overall, you can see the potential problems: you ‘enter’ the political arena to find that it is highly political! You find that policymakers are mostly interested in (what you believe are) ineffective or inappropriate solutions and/or they think about the problem in ways that make you, say, uncomfortable. So, should you engage in a critical way, risking exclusion from the ‘coproduction’ of policy, or in a pragmatic way, to ‘coproduce’ knowledge and maximise the chances of its impact in government?

The case study of territorial inequalities is a key source of such dilemmas …

…partly because it is difficult to tell how policymakers define and want to solve such policy problems. When defining ‘territorial inequalities’, they can refer broadly to geographical spread, such as within the EU Member States, or even within regions of states. They can focus on economic inequalities, inequalities linked strongly to gender, race or ethnicity, mental health, disability, and/ or inequalities spread across generations. They can focus on indicators of inequalities in areas such as health and education outcomes, housing tenure and quality, transport, and engagement with social work and criminal justice. While policymakers might want to address all such issues, they also prioritise the problems they want to solve and the policy instruments they are prepared to use.

When considering solutions, they can choose from three basic categories:

  1. Tax and spending to redistribute income and wealth, perhaps treating economic inequalities as the source of most others (such as health and education inequalities).
  2. The provision of public services to help mitigate the effects of economic and other inequalities (such as free healthcare and education, and public transport in urban and rural areas).
  3. The adoption of ‘prevention’ strategies to engage as early as possible in people’s lives, on the assumption that key inequalities are well-established by the time children are three years old.

Based on my previous work with Emily St Denny, I’d expect that many governments express a high commitment to reducing inequalities – and it is often sincere – but without wanting to use tax/spending as the primary means, and faced with limited evidence on the effectiveness of public services and prevention. Or, many will prefer to identify ‘evidence-based’ solutions for individuals rather than to address ‘structural’ factors such as gender, ethnicity, and class. This is when the production and use of evidence becomes overtly ‘political’, because at the heart of many of these discussions is the extent to which individuals or their environments are to blame for unequal outcomes, and whether richer regions should compensate poorer regions.

‘The evidence’ will not ‘win the day’ in such debates. Rather, the choice will be between, for example: (a) pragmatism, framing evidence so that it contributes to the well-established beliefs about policy problems and solutions held by the dominant actors in each political system; and (b) critical distance, producing what you feel to be the best evidence generated in the right way, and challenging policymakers to explain why they won’t use it. I suspect that (a) is more effective, but (b) better reflects what most academics thought they were signing up to.

For more on IMAJINE, see New EU study looks at gap between rich and poor and The theory and practice of evidence-based policy transfer: can we learn how to reduce territorial inequalities?

For more on evidence/ policy dilemmas, see Kathryn Oliver and I have just published an article on the relationship between evidence and policy

 


Filed under Evidence Based Policymaking (EBPM), IMAJINE, public policy

The theory and practice of evidence-based policy transfer: can we learn how to reduce territorial inequalities?

I am now part of a large EU-funded Horizon2020 project called IMAJINE (Integrative Mechanisms for Addressing Spatial Justice and Territorial Inequalities in Europe), which begins in January 2017. It is led by Professor Michael Woods at Aberystwyth University and has a dozen partners across the EU. I’ll be leading one work package in partnership with Professor Michael Keating.


The aim in our ‘work package’ is deceptively simple: generate evidence to identify how EU countries try to reduce territorial inequalities, see who is the most successful, and recommend the transfer of that success to other countries.

Life is not that simple, though, is it?! If it were, we’d know for sure what ‘territorial inequalities’ are, what causes them, what governments are willing to do to reduce them, and if they’ll succeed if they really try.

Instead, here are some of the problems you encounter along the way, including an inability to identify:

  • What policies are designed explicitly to reduce inequalities. Instead, we piece together many intentions, actions, instruments, and outputs, across many levels and types of government, and call it ‘policy’.
  • The link between ‘policy’ and policy outcomes, because many factors interact to produce those outcomes.
  • Success. Even if we could solve the methodological problems, to separate cause and effect, we face a political problem about choosing measures to evaluate and report success.
  • Good ways to transfer successful policies. A policy is not like a #gbbo cake, in which you can produce a great product and give out the recipe. In that scenario, you can assume that we all have the same aims (we all want cake, and of course chocolate is the best), starting point (basically the same shops and kitchens), and language to describe the task (use loads of sugar and cocoa). In policy, governments describe and seek to solve similar-looking problems in very different ways and, if they look elsewhere for lessons, those insights have to be relevant to their context (and the evidence-gathering process has to fit their idea of good governance). They also ‘transfer’ some policies while maintaining their own, and a key finding from our previous work is that governments simultaneously pursue policies to reduce inequalities and other policies that undermine them.

So, academics like me tend to spend their time highlighting problems, explaining why such processes are not ‘evidence-based’, and identifying all the things that will go wrong from your perspective if you think policymaking and policy transfer can ever be straightforward.

Yet, policymakers do not have the luxury of identifying problems, finding them interesting, and then going home. Instead, they have to make decisions in the face of ambiguity (what problem are they trying to solve?), uncertainty (evidence will help, but will always be limited), and limited time.

So, academics like me are now focused increasingly on trying to help address the problems we raise. On the plus side, it prompts us to speak with policymakers from start to finish, to try to understand what evidence they’re interested in and how they’ll use it. On the less positive side (at least if you are a purist about research), it might prompt all sorts of compromises about how to combine research and policy advice if you want policymakers to use your evidence (on, for example, the line between science and advice, and the blurry boundaries between evidence and advice). If you are interested, please let me know, or follow the IMAJINE category on this site (and #IMAJINE).

See also:

New EU study looks at gap between rich and poor

New research project examines regional inequalities in Europe

Understanding the transfer of policy failure: bricolage, experimentalism and translation by Diane Stone

 


Filed under Evidence Based Policymaking (EBPM), IMAJINE, public policy