Monthly Archives: September 2017

#EU4Facts: 3 take-home points from the JRC annual conference

See EU4FACTS: Evidence for policy in a post-fact world

The JRC’s annual conference has become a key forum in which to discuss the use of evidence in policy. At this scale, in which many hundreds of people attend plenary discussions, it feels like an annual mass rally for science: a ‘call to arms’ to protect the role of science in the production of evidence, and to protect the role of evidence in policy deliberation. There is not much discussion of storytelling, but we tell each other a fairly similar story about our fears for the future unless we act now.

Last year, the main story was of fear for the future of heroic scientists: the rise of Trump and the Brexit vote prompted many discussions of post-truth politics and reduced trust in experts. An immediate response was to describe attempts to come together, and stick together, to support each other’s scientific endeavours during a period of crisis. There was little call for self-analysis and reflection on the contribution of scientists and experts to barriers between evidence and policy.

This year was a bit different. There was the same concern about reduced trust in science, evidence, and/or expertise, and some references to post-truth politics and populism, but new voices described the positive value of politics, often when discussing the need for citizen engagement and the need to understand the relationship between facts, values, and politics.

For example, a panel on psychology opened up the possibility that we might consider our own politics and cognitive biases while we identify them in others, and one panellist spoke eloquently about the importance of narrative and storytelling in communicating to audiences such as citizens and policymakers.

A focus on narrative is not new, but it provides a challenging agenda when interacting with a sticky story of scientific objectivity. For the unusually self-reflective, it also reminds us that our annual discussions are not particularly scientific; the usual rules to assess our statements do not apply.

As in studies of policymaking, we can say that there is high support for such stories when they remain vague and driven more by emotion than the pursuit of precision. When individual speakers try to make sense of the same story, they do it in different – and possibly contradictory – ways. As in policymaking, the need to deliver something concrete helps focus the mind, and prompts us to make choices between competing priorities and solutions.

I describe these discussions in two ways: tables, in which I try to boil down each speaker’s speech into a sentence or two (you can get their full details in the programme and the speaker bios); and a synthetic discussion of the top 3 concerns, paraphrasing and combining arguments from many speakers:

1. What are facts?

The key distinction began as one between politics, values, and facts, which is impossible to maintain in practice.

Yet, subsequent discussion revealed a more straightforward distinction: between facts on the one hand, and opinion, ‘fake news’, and lies on the other. The latter sums up an ever-present fear of the diminishing role of science in an alleged ‘post-truth’ era.

2. What exactly is the problem, and what is its cause?

The tables below provide a range of concerns about the problem, from threats to democracy to the need to communicate science more effectively. A theme of growing importance is the need to deal with the cognitive biases and informational shortcuts of people receiving evidence: communicate with reference to values, beliefs, and emotions; build up trust in your evidence via transparency and reliability; and, be prepared to discuss science with citizens and to be accountable for your advice. There was less discussion of the cognitive biases of the suppliers of evidence.

3. What is the role of scientists in relation to this problem?

Not all speakers described scientists as the heroes of this story:

  • Some described scientists as the good people acting heroically to change minds with facts.
  • Some described their potential to co-produce important knowledge with citizens (although primarily with like-minded citizens who learn the value of scientific evidence?).
  • Some described the scientific ego as a key barrier to action.
  • Some identified their low confidence to engage, their uncertainty about what to do with their evidence, and/or their scientist identity, which involves defending science as a cause/profession and drawing the line between providing information and advocating for policy. This hope to be an ‘honest broker’ was pervasive in last year’s conference.
  • Some (rightly) rejected the idea of separating facts/values and science/politics, since evidence is never context free (and gathering evidence without thought to context is amoral).

Often in such discussions it is difficult to know if some scientists are naïve actors or sophisticated political strategists, because their public statements could be identical. For the former, an appeal to objective facts and the need to privilege science in evidence-based policymaking (EBPM) may be sincere: scientists are, and should be, separate from and above politics. For the latter, the same appeal – made again and again – may be designed to energise scientists and maximise the role of science in politics.

Yet, energy is only the starting point, and it remains unclear how exactly scientists should communicate and how to ‘know your audience’: would many scientists know who to speak to, in governments or the Commission, if they had something profoundly important to say?

Keynotes and introductory statements from panel chairs
Vladimír Šucha: We need to understand the relationship between politics, values, and facts. Facts are not enough. To make policy effectively, we need to combine facts and values.
Tibor Navracsics: Politics is swayed more by emotions than carefully considered arguments. When making policy, we need to be open and inclusive of all stakeholders (including citizens), communicate facts clearly and at the right time, and be aware of our own biases (such as groupthink).
Sir Peter Gluckman: ‘Post-truth’ politics is not new, but it is pervasive and easier to achieve via new forms of communication. People rely on like-minded peers, religion, and anecdote as forms of evidence underpinning their own truth. When describing the value of science, to inform policy and political debate, note that it is more than facts; it is a mode of thinking about the world, and a system of verification to reduce the effect of personal and group biases on evidence production. Scientific methods help us define problems (e.g. in discussion of cause/effect) and interpret data. Science advice involves expert interpretation, knowledge brokerage, a discussion of scientific consensus and uncertainty, and standing up for the scientific perspective.
Carlos Moedas: Safeguard trust in science by (1) explaining the process you use to come to your conclusions; (2) providing safe and reliable places for people to seek information (e.g. when they Google); and (3) making sure that science is robust and scientific bodies have integrity (such as when dealing with a small number of rogue scientists).
Pascal Lamy: 1. ‘Deep change or slow death’: we need to involve more citizens in the design of publicly financed projects such as major investments in science. Many scientists complain that there is already too much political interference, drowning scientists in extra work. However, we will face a major backlash – akin to the backlash against ‘globalisation’ – if we do not subject key debates on the future of science and technology-driven change (e.g. on AI, vaccines, drone weaponry) to democratic processes involving citizens. 2. The world changes rapidly, and evidence gathering is context-dependent, so we need to monitor regularly the fitness of our scientific measures (e.g. of trade).
Jyrki Katainen: ‘Wicked problems’ have no perfect solution, so we need the courage to choose the best imperfect solution. Technocratic policymaking is not the solution; it does not meet the democratic test. We need the language of science to be understandable to citizens: ‘a new age of reason reconciling the head and heart’.

Panel: Why should we trust science?
Jonathan Kimmelman: Some experts make outrageous and catastrophic claims. We need a toolbox to decide which experts are most reliable, by comparing their predictions with actual outcomes. Prompt them to make precise probability statements and test them (one simple way to score such statements is sketched after this panel). Only those who are willing to be held accountable should be involved in science advice.
Johannes Vogel: We should devote 15% of science funding to public dialogue. Scientific discourse, and a science-literate population, is crucial for democracy. EU Open Society Policy is a good model for stakeholder inclusiveness.
Tracey Brown: Create a more direct link between society and evidence production, to ensure discussions involve more than the ‘usual suspects’. An ‘evidence transparency framework’ helps create a space in which people can discuss facts and values. ‘Be open, speak human’ describes showing people how you make decisions. How can you expect the public to trust you if you don’t trust them enough to tell them the truth?
Francesco Campolongo: Jean-Claude Juncker’s starting point is that Commission proposals and activities should be ‘based on sound scientific evidence’. Evidence comes in many forms. For example, economic models provide simplified versions of reality to make decisions. Economic calculations inform profoundly important policy choices, so we need to make the methodology transparent, communicate probability, and be self-critical and open to change.
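To make Kimmelman’s point concrete: one simple, standard way to test precise probability statements against actual outcomes is a proper scoring rule such as the Brier score. The sketch below is illustrative only; the experts, the numbers, and the choice of scoring rule are my assumptions, not anything specified at the conference.

```python
# Minimal sketch: scoring experts' probability statements against
# observed 0/1 outcomes with the Brier score (lower = more reliable).
# All names and numbers are hypothetical.

def brier_score(predictions, outcomes):
    """Mean squared difference between stated probabilities and outcomes."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

expert_a = [0.9, 0.8, 0.2, 0.7]   # precise probability statements
expert_b = [0.6, 0.5, 0.5, 0.6]   # hedges close to 50-50
observed = [1, 1, 0, 1]           # what actually happened

print(brier_score(expert_a, observed))  # 0.045: tracks outcomes closely
print(brier_score(expert_b, observed))  # 0.205: vaguer, scores worse
```

An accountability rule could then be as simple as: only experts whose scores beat a naive 50-50 baseline (which scores 0.25) keep a seat at the advisory table.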

Panel: the politician’s perspective
Janez Potočnik: The shift of the JRC’s remit allowed it to focus on advocating science for policy rather than policy for science. Still, such arguments need to be backed by an economic argument (this policy will create growth and jobs). A narrow focus on facts and data ignores the context in which we gather facts, such as a system which undervalues human capital and the environment.
Máire Geoghegan-Quinn: Policy should be ‘solidly based on evidence’ and we need well-communicated science to change the hearts and minds of people who would otherwise rely on their beliefs. Part of the solution is to get, for example, kids to explain what science means to them.

Panel: Redesigning policymaking using behavioural and decision science
Steven Sloman: The world is complex. People overestimate their understanding of it, and this illusion is burst when they try to explain its mechanisms. People who know the least feel the strongest about issues, but if you ask them to explain the mechanisms their strength of feeling falls. Why? People confuse their knowledge with that of their community. The knowledge is not in their heads, but communicated across groups. If people around you feel they understand something, you feel like you understand, and people feel protective of the knowledge of their community. Implications? 1. Don’t rely on ‘bubbles’; generate more diverse and better coordinated communities of knowledge. 2. Don’t focus on giving people full information; focus on the information they need at the point of decision.
Stephan Lewandowsky: 97% of scientists agree that human-caused climate change is a problem, but the public thinks it’s roughly 50-50. We have a false-balance problem. One solution is to ‘inoculate’ people against its cause (science denial). We tell people the real figures and facts, warn them of the rhetorical techniques employed by science denialists (e.g. use of false experts on smoking), and mock the false balance argument. This allows you to reframe the problem as an investment in the future, not a cost now (and to find other ways to present facts in a non-threatening way). In our lab, it usually ‘neutralises’ misinformation, although with the risk that a ‘corrective message’ to challenge beliefs can entrench them.
Françoise Waintrop: It is difficult to experiment when public policy is handed down from on high. Or, experimentation is alien to established ways of thinking. However, our 12 new public innovation labs across France allow us to immerse ourselves in the problem (to define it well) and nudge people to action, working with their cognitive biases.
Simon Kuper: Stories combine facts and values. To change minds: persuade the people who are listening, not the sceptics; find go-betweens to link suppliers and recipients of evidence; speak in stories, not jargon; don’t overpromise the role of scientific evidence; and, never suggest science will side-line human beings (e.g. when technology costs jobs).

Panel: The way forward
Jean-Eric Paquet: We describe ‘fact-based evidence’ rather than ‘science-based’. A key aim is to generate ‘ownership’ of policy by citizens. Politicians are more aware of their cognitive biases than we technocrats are.
Anne Bucher: In the European Commission we used evidence initially to make the EU more accountable to the public, via systematic impact assessment and quality control. It was a key motivation for better regulation. We now focus more on generating inclusive and interactive ways to consult stakeholders.
Ann Mettler: Evidence-based policymaking is at the heart of democracy. How else can you legitimise your actions? How else can you prepare for the future? How else can you make things work better? Yet, a lot of our evidence presentation is so technical that it is difficult even for specialists to follow. The onus is on us to bring it to life, to make it clearer to the citizen and, in the process, defend scientists (and journalists) during a period in which Western democracies seem to be at risk from anti-democratic forces.
Mariana Kotzeva: Our facts are now considered from an emotional and perceptual point of view. The process does not just involve our comfortable circle of experts; we are now challenged to explain our numbers. Attention to our numbers can be unpredictable (e.g. on migration). We need to build up trust in our facts, partly to anticipate or respond to the quick spread of poor facts.
Rush Holt: In society we can see the erosion of the feeling that science is relevant to ‘my life’, and few US policymakers ask ‘what does science say about this?’, partly because scientists set themselves above politics. Politicians have had too many bad experiences with scientists who might say ‘let me explain this to you in a way you can understand’. Policy is not about ‘science-based evidence’; it is more about asking a question first, then asking what evidence you need. Then you collect evidence in an open way so that it can be verified.

Phew!

That was 10 hours of discussion condensed into one post. If you can handle more discussion from me, see:

Psychology and policymaking: Three ways to communicate more effectively with policymakers

The role of evidence in policy: EBPM and How to be heard  

Practical Lessons from Policy Theories

The generation of many perspectives to help us understand the use of evidence

How to be an ‘entrepreneur’ when presenting evidence

Filed under Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling

A 5-step strategy to make evidence count

Let’s imagine a heroic researcher, producing the best evidence and fearlessly ‘speaking truth to power’. Then, let’s place this person in four scenarios, each of which combines a discussion of evidence, policy, and politics in different ways.

  1. Imagine your hero presents to HM Treasury an evidence-based report concluding that a unitary UK state would be far more efficient than a union state guaranteeing Scottish devolution. The evidence is top quality and the reasoning is sound, but the research question is ridiculous. The result of political deliberation and electoral choice suggests that your hero is asking a research question that does not deserve to be funded in the current political climate. Your hero is a clown.
  2. Imagine your hero presents to the Department of Health a report based on the systematic review of multiple randomised control trials. It recommends that you roll out an almost-identical early years or public health intervention across the whole country: we need high ‘fidelity’ to the model to ensure the correct ‘dosage’ and to measure its effect scientifically. The evidence is of the highest quality, but the research question is not quite right. The government has decided to devolve this responsibility to local public bodies and/or encourage the co-production of public service design by local public bodies, communities, and service users. So, to focus narrowly on fidelity would be to ignore political choices (perhaps backed by different evidence) about how best to govern. If you don’t know the politics involved, you will ask the wrong questions or provide evidence with unclear relevance. Your hero is either a fool, naïve to the dynamics of governance, or a villain willing to ignore governance principles.
  3. Imagine two fundamentally different – but equally heroic – professions with their own ideas about evidence. One favours a hierarchy of evidence in which RCTs and their systematic review sit at the top, and service user and practitioner feedback sits near the bottom. The other rejects this hierarchy completely, identifying the unique, complex relationship between practitioner and service user, which requires high discretion to make choices in situations that will differ each time. Trying to resolve a debate between them with reference to ‘the evidence’ makes no sense. This is about a conflict between two heroes with opposing beliefs and preferences that can only be resolved through compromise or political choice. This is, oh I don’t know, Batman v Superman, saved by Wonder Woman.
  4. Imagine you want the evidence on hydraulic fracturing for shale oil and gas. We know that ‘the evidence’ follows the question: how much can we extract? How much revenue will it produce? Is it safe, from an engineering point of view? Is it safe, from a public health point of view? What will be its impact on climate change? What proportion of the public supports it? What proportion of the electorate supports it? Who will win and lose from the decision? It would be naïve to think that there is some kind of neutral way to produce an evidence-based analysis of such issues. The commissioning and integration of evidence has to be political. To pretend otherwise is a political strategy. Your hero may be another person’s villain.

Now, let’s use these scenarios to produce a 5-step way to ‘make evidence count’.

Step 1. Respect the positive role of politics

A narrow focus on making the supply of evidence count, via ‘evidence-based policymaking’, will always be dispiriting because it ignores politics or treats political choice as an inconvenience. If we:

  • begin with a focus on why we need political systems to make authoritative choices between conflicting preferences, and take governance principles seriously, we can
  • identify the demand for evidence in that context, then be more strategic and pragmatic about making evidence count, and
  • be less dispirited about the outcome.

In other words, think about the positive and necessary role of democratic politics before bemoaning post-truth politics and policy-based-evidence-making.

Step 2. Reject simple models of evidence-based policymaking

Policy is not made in a cycle containing a linear series of separate stages, and we won’t ‘make evidence count’ by using that image to inform our practices.

[Figure: the policy cycle]

You might not want to give up the cycle image because it presents a simple account of how you should make policy. It suggests that we elect policymakers who then: identify their aims; identify policies to achieve those aims; select a policy measure; ensure that the selection is legitimised by the population or its legislature; identify the necessary resources; implement; and then evaluate. Or, policymakers aided by expert policy analysts make and legitimise choices, skilful public servants carry them out, and policy analysts assess the results using evidence.

One compromise is to keep the cycle and then show how messy it is in practice.

However, there comes a point when there is too much mess, and the image no longer helps you explain (a) to the public what you are doing, or (b) to providers of evidence how they should engage in political systems. By this point, simple messages from more complicated policy theories may be more useful.

Or, we may no longer want a cycle to symbolise a single source of policymaking authority. In a multi-level system, with many ‘centres’ possessing their own sources of legitimate authority, a single and simple policy cycle seems too artificial to be useful.

Step 3. Tell a simple story about your evidence

People are ‘cognitive misers’ seeking ‘rational’ and ‘irrational’ shortcuts to gather information for action, so you won’t get far if you bombard them with too much evidence. Policymakers already have too much evidence and they seek ways to reduce their cognitive load, relying on: (a) trusted sources of concise evidence relevant to their aims, and (b) their own experience, gut instinct, beliefs, and emotions.

The implication of both shortcuts is that we need to tell simple and persuasive stories about the substance and implications of the evidence we present. To say that ‘the evidence does not speak for itself’ may seem trite, but I’ve met too many people who assume naively that it will somehow ‘win the day’. In contrast, civil servants know that the evidence-informed advice they give to ministers needs to relate to the story that government ministers tell to the public.


Step 4. Tailor your story to many audiences

In a complex or multi-level environment, one story to one audience (such as a minister) is not enough. If there are many key sources of policymaking authority – including public bodies with high autonomy, organisations and practitioners with the discretion to deliver services, and service users involved in designing services – there are many stories being told about what we should be doing and why. We may convince one audience and alienate (or fail to inspire) another with the same story.

Step 5. Clarify and address key dilemmas with political choice, not evidence

Let me give you one example of the dilemmas that must arise when you combine evidence and politics to produce policy: how do you produce a model of ‘evidence-based best practice’ which combines evidence and governance principles in a consistent way? Here are 3 ideal-type models which answer the question in very different ways:

Table 1: Three ideal-type models of evidence-based best practice (EBBP)

The table helps us think through the tensions between models, built on very different principles of good evidence and governance.

In practice, you may want to combine different elements, perhaps while arguing that the loss of consistency is lower than the gain from flexibility. Or, the dynamics of political systems limit such choice or prompt ad hoc and inconsistent choices.

I built a lot of this analysis on the experiences of the Scottish Government, which juggles all three models, including a key focus on the improvement method in its Early Years Collaborative.

However, Kathryn Oliver and I show that the UK government faces the same basic dilemma and addresses it in similar ways.

The example freshest in my mind is Sure Start. Its rationale was built on RCT evidence and systematic review. However, its roll-out was built more on local flexibility and service design than insistence on fidelity to a model. More recently, the Troubled Families programme initially set the policy agenda and criteria for inclusion, but increasingly invites local public bodies to select the most appropriate interventions, aided by the Early Intervention Foundation, which reviews the evidence but does not insist on one best way. Emily St Denny and I explore these issues further in our forthcoming book on prevention policy, an exemplar case study of a field in which it is difficult to know how to ‘make evidence count’.

If you prefer a 3-step take home message:

  1. I think we use phrases like ‘impact’ and ‘make evidence count’ to reflect a vague and general worry about a decline in respect for evidence and experts. Certainly, when I go to large conferences of scientists, they usually tell a story about ‘post-truth’ politics.
  2. Usually, these stories do not acknowledge the difference between two different explanations for an evidence-policy gap: (a) pathological policymaking and corrupt politicians, versus (b) complex policymaking and politicians having to make choices despite uncertainty.
  3. To produce evidence with ‘impact’, and know how to ‘make evidence count’, we need to understand the policy process and the demand for evidence within it.

*Background. This is a post for my talk at the Government Economic Service and Government Social Research Service Annual Training Conference (15th September 2017). This year’s theme is ‘Impact and Future-Proofing: Making Evidence Count’. My brief is to discuss evidence use in the Scottish Government, but it faces the same basic question as the UK Government: how do you combine principles of evidence quality with principles of governance? In other words, if you were in a position to design (a) an evidence-gathering system and (b) a political system, you’d soon find major points of tension between them. Resolving those tensions involves political choice, not more evidence. Of course, you are not in a position to design both systems, so the more complicated question is: how do you satisfy principles of evidence and governance in a complex policy process, often driven by policymaker psychology, over which you have little control? Here are 7 different ‘answers’.

Powerpoint Paul Cairney @ GES GSRS 2017


Filed under Evidence Based Policymaking (EBPM), public policy, Scottish politics, UK politics and policy

Policy Concepts in 500 Words: Social Construction and Policy Design

Why would a democratic political system produce ‘degenerative’ policy that undermines democracy? Social Construction and Policy Design (SCPD) describes two main ways in which policymaking alienates many citizens:

1. The Social Construction of Target Populations

High profile politics and electoral competition can cause alienation:

  1. Political actors compete to tell ‘stories’ that assign praise or blame to groups of people. For example, politicians make value judgements about who should be rewarded or punished by government, basing them on stereotypes of ‘target populations’, by (a) exploiting the ways in which many people think about groups, or (b) making emotional and superficial judgements, backed up with a selective use of facts.
  2. These judgements have a ‘feed-forward’ effect: they are reproduced in policies, practices, and institutions. Such ‘policy designs’ can endure for years or decades. The distribution of rewards and sanctions is cumulative and difficult to overcome.
  3. Policy design has an impact on citizens, who participate in politics according to how they are characterised by government. Many know they will be treated badly; their engagement will be dispiriting.

Some groups have the power to challenge the way they are described by policymakers (and the media and public), and receive benefits behind the scenes despite their poor image. However, many people feel powerless, become disenchanted with politics, and do not engage in the democratic process.

SCTP depicts this dynamic with a 2-by-2 table in which target populations are described positively or negatively, and are more or less able to respond:

[2-by-2 table: ‘Advantaged’ (positive construction, powerful), ‘Contenders’ (negative construction, powerful), ‘Dependents’ (positive construction, weak), ‘Deviants’ (negative construction, weak)]

2. Bureaucratic and expert politics

Most policy issues are not salient and politicised in this way. Yet, low salience can exacerbate problems of citizen exclusion. Policies dominated by bureaucratic interests often alienate citizens receiving services. Or a small elite dominates policymaking when there is high acceptance that (a) the best policy is ‘evidence based’, and (b) the evidence should come from experts.

Overall, SCPD describes a political system with major potential to diminish democracy, containing key actors (a) politicising issues to reward or punish populations or (b) depoliticising issues with reference to science and objectivity. In both cases, policy design is not informed by routine citizen participation.

Take home message for students: SCPD began as Schneider and Ingram’s description of the US political system’s failure to solve major problems including inequality, poverty, crime, racism, sexism, and effective universal healthcare and education. Think about how its key drivers apply elsewhere: (1) some people make and exploit quick and emotional judgements for political gain, and others refer to expertise to limit debate; (2) these judgements inform policy design; and, (3) policy design sends signals to citizens which can diminish or boost their incentive to engage in politics.

For more, see the 1000-word and 5000-word versions. The latter has a detailed guide to further reading.

Filed under 500 words, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling

How to design ‘maps’ for policymakers relying on their ‘internal compass’

This is a guest post by Dr Richard Simmons, discussing how to use insights from ‘cultural theory’ to think about how to design institutions that help policymakers think and make good decisions. The full paper has been submitted to the series for Policy and Politics called Practical Lessons from Policy Theories.

My reading of Richard’s argument is through the lens of debates on ‘evidence-based policymaking’ and policymaker psychology. Policymakers can only pay attention to a tiny proportion of the world, and a small proportion of their responsibilities. They combine ‘rational’ and ‘irrational’ informational shortcuts to act quickly and make ‘good enough’ decisions despite high uncertainty. You can choose to criticize their reliance on ‘cognitive frailties’ (and perhaps design institutions to limit their autonomous powers) or praise their remarkable ability to develop ‘fast and frugal’ heuristics (and perhaps work with them to refine such techniques). I think Richard takes a relatively sympathetic approach to describing ‘thinking, fast and slow’:

  1. His focus on an ‘internal compass’ describes aspects of fast thinking (using gut or instinct, emotion, habit, familiarity) but without necessarily equating a compass with negative cognitive ‘biases’ that get in the way of ‘rationality’.
  2. Instead, an internal compass can be remarkably useful, particularly if combined with a ‘map’ of the ‘policymaking terrain’. Terrain can describe the organisations, rules, and other sources of direction and learning in a policymaking system.
  3. Both compass and map are necessary; they reinforce the value of each other.
  4. However, perhaps unlike a literal map, we cannot simply design one-size-fits-all advice for policymakers. We need to speak with them in some depth, to help them work out what they think the policy problem is and probe how they would like to solve it.
  5. In that context, the role of policy analysis is as much to help policymakers think about and ask the right questions as it is to provide tailor-made answers.

It is a paradox that in a world where there are often more questions than answers, policymakers more often seek to establish and then narrow the range of possible answers than to establish and then narrow the range of possible questions. There are different explanations for this:

  • One is that policymakers occupy a ‘rational’, ‘technical’ space, in which everything from real-time data to scientific evidence can be balanced in ‘problem-solving’. This means doing the background work to support authoritative choice between policy alternatives, perhaps via ‘structured interactions’, as a way to bring order to the weight of evidence and expertise.
  • Another is that policymakers occupy a ‘formally structured’, ‘political’ space, in which the contest to have ‘agenda-setting’ power has already been decided. For policy actors, this means learning not to ‘question why’ – accepting the legitimacy, if not the substantive nature, of their political masters’ concerns and (outwardly, at least) directing their attention accordingly.
  • A third explanation, however, is that policymakers occupy a ‘complex’ and ‘uncertain’ space, in which ‘What is a good question?’ is itself a good question. Yet often we lack good ways to ask questions about questions – at least, without encountering accusations of either ‘avoiding the problem’ or ‘re-politicising technical concerns’.

Given that questions are logically prior to a technical search for the ‘best answer’, it seems sensible that the search for the ‘best question’ should start away from the realm of ‘the technical’ (cf. Explanation 1). As a result, two possible options remain in response to ‘What is a good question?’:

  1. That it is a subjectively-normative question that depends on the eyes of the beholder, best aggregated and understood in the realm of ‘the political’ (which returns us to Explanation 2).
  2. That it is an objectively-normative question that depends on ‘social construction’ in policy work, best aggregated and understood in the realm of ‘the institutional’ (which returns us to Explanation 3).

Option 1 is the stuff of basic politics; it will not be explored further here. This leaves the ‘objectively-normative’ Option 2, which is less often explored. This option is ‘normative’ in the sense that it gives space to framing a problem in ways that acknowledge different sets of values and beliefs, which may be socially constructed in different ways. It is ‘objective’ in the sense that it seeks to resolve tensions between these different sets of values and beliefs without re-opening the kind of explicit competition normally reserved for the realm of politics. Yet, given its basis in the realm of institutions, some might ask: how is ‘objective’ analysis even possible?

Step Forward, Cultural Theory (CT)?

There are still, perhaps, a few ‘flat-earth’ policy actors who doubt the importance of institutions. Yet even for those who do not deny their influence, the prospect of ‘objective’ institutional analysis seems remote. By their very nature hard to define and intangible to the eye, institutions can seem esoteric, ephemeral, and resistant to meaningful measurement. However, new developments in Cultural Theory (CT) can help policymakers get a grip. Now well-established in policy circles, CT arranges institutions along two dimensions into four (and only four) rival ‘cultural biases’: hierarchy, individualism, egalitarianism, and fatalism:

[Figure: the four cultural biases, arrayed along cultural theory’s ‘grid’ and ‘group’ dimensions]

Importantly, biases combine in different institutional patterns – and the mix matters. Dominant patterns tend to structure policy problems and guide policymakers’ response in different ways. Through exposure and experience, institutional patterns can become internalised in their ‘thought styles’; as an ‘internal compass’ that directs ‘fast-thinking’. No bad thing, perhaps – unless and until this sends them off course. Faulty compass readings arise when narrow thought-styles become ‘cultural blinkers’. As ‘practical wisdom’ may be present in more than one location, navigation risks arise if a course ahead is plotted that blocks out other constructions of the problem.

How would policymakers realise when they have led themselves astray? One way might be ‘slower thinking’ – reflection on their actions to question their constructions and promote dynamic learning. CT provides a parsimonious way of framing such reflection. By simplifying complex criteria into just four cultural categories, it helps skilled ‘reflective policymakers’ to ask ‘good’ questions more quickly. However, space for such ‘slow thinking’ is often limited in practical policy work. When this is closed out by constraints of time and attention, what more has Cultural Theory to offer?

Recent work operationalises CT to both map institutions and chart ‘internal compass’ bearings. Using stakeholder surveys to ‘materialise the intangible’, it maps institutions by visually overlaying policy actors’ perceptions of how policy problems ‘actually are’ governed with perceptions of how they ‘should be’ governed:

[Figure: policy actors’ perceptions of how the problem ‘actually is’ governed, overlaid with perceptions of how it ‘should be’ governed]

Meanwhile, as points of congruence and dissonance emerge in this institutional environment, policymakers’ internal compass bearings show the likelihood that they might actually see them. Together, these tools raise the odds of asking ‘good’ questions even further than reflection alone. Actors learn to navigate both change and the obstacles to change.
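To make the survey-overlay idea more concrete, here is a minimal sketch under loud assumptions: the 0-10 scales, the scores, and the simple averaging are invented for illustration and are not the instrument used in the paper.

```python
# Minimal sketch: overlaying stakeholders' perceptions of how a policy
# problem 'actually is' governed with how it 'should be' governed,
# as average positions on cultural theory's grid and group dimensions.
# Scales and scores are hypothetical.

from statistics import mean

# Each tuple is one respondent's (grid, group) score on a 0-10 scale.
perceived_is = [(8, 7), (7, 6), (9, 8)]      # how it actually is governed
perceived_should = [(4, 8), (5, 9), (3, 7)]  # how it should be governed

def centroid(points):
    """Average (grid, group) position across respondents."""
    return (mean(p[0] for p in points), mean(p[1] for p in points))

is_pos = centroid(perceived_is)          # (8.0, 7.0): hierarchical corner
should_pos = centroid(perceived_should)  # (4.0, 8.0): more egalitarian
gap = (should_pos[0] - is_pos[0], should_pos[1] - is_pos[1])
print(is_pos, should_pos, gap)           # the dissonance to be navigated
```

The gap between the two centroids is the kind of congruence or dissonance the map is meant to expose, whether or not a policymaker’s own compass would otherwise register it.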

But is this not still too slow? This process may indeed seem slow, but intelligent investment in institutional analysis potentially has payoffs that make it worth the wait. The ‘map’ immediately provides a provocation for more valid and reliable policy practice – definitively directing policy attention, no matter where the compass is pointing. Speed and accuracy increase. Not only this; over time, such purposive action serves to maintain, create, and disrupt institutions. As new patterns emerge that subconsciously subvert existing thought-styles, the compass itself is recalibrated. There are fewer faulty readings to direct ‘fast thinking’. Speed and accuracy increase again…

For some, the tools provided by CT may seem blunt; for others, as esoteric and ephemeral as the institutions this theory purports to portray. The recent work reported here certainly requires further refinement to reinforce its validity and reliability. But the effort of doing so may be a small price to pay. The practical potential of CT’s meaningful measurement makes further progress a beguiling prospect.

Filed under Uncategorized

Three ways to encourage policy learning

This is a guest post by Claire A. Dunlop and Claudio M. Radaelli, discussing how to use insights from the policy learning literature to think about how to learn effectively, or how to adapt to processes of ‘learning’ in policymaking that are more about politics than education. The full paper has been submitted to the series for Policy and Politics called Practical Lessons from Policy Theories.

We often hear that university researchers are ‘all brains but no common sense’. There is often some truth to this stereotype. The literature on policy learning is an archetypal example: high in IQ but low on street smarts. Researchers have generated a huge number of ‘policy learning’ taxonomies, concepts, and methods without showing what learning can offer policymakers, citizens, and societies.

This is odd because there is a substantive demand and need for practical insights on how to learn. Issues include economic growth, the control of corruption, and improvement in schools and health. Learning organisations range from ‘street-level bureaucracies’ to international regulators like the European Union and the World Trade Organization.

To help develop a more practical agenda, we distil three major lessons from the policy learning literature.

1. Learning is often the by-product of politics, not the primary goal of policymakers

There is usually no clear incentive for political actors to learn how to improve public policy. Learning is often the by-product of bargaining, the effort to secure compliance with laws and rules, social participation, or problem-solving when there is radical uncertainty. This means that in politics we should not assume that politicians, bureaucrats, civil society organizations, and experts interact to improve public policy. Consensus, participation, formal procedures, and social certification are more important.

Therefore, we have to learn how to design incentives so that the by-product of learning is actually generated. Otherwise, few actors will play the game of the policy-making process with learning as their first goal. Learning is all around us, but it appears in different forms, depending on whether the context is (a) bargaining, (b) compliance, (c) participation or (d) problem-solving under conditions of high uncertainty.

2. Each mode of learning has its triggers or hindrances

(a) Bargaining requires repeated interaction, low barriers to contract and mechanisms of preference aggregation.

(b) Compliance is stymied without trust in institutions.

(c) Participation needs its own deliberative spaces and a type of participant willing to go beyond the ‘dialogue of the deaf’. Without these two triggers, participation is chaotic, highly conflictual and inefficient.

(d) Expertise is key to problem-solving, but governments should design their advisory committees and special commissions of inquiry by recruiting a broad range of experts. The risk of excluding the next Galileo Galilei in a Ptolemaic committee is always there.

At the same time, there are specific hindrances:

(a) Bargaining stops when the winners are always the same (if you are thinking of Germany and Greece in the European Union you are spot-on).

(b) Hierarchy does not produce efficient compliance unless those at the top know exactly the solution to enforce.

(c) Incommensurable beliefs spoil participatory policy processes. If so, it’s better to switch to open democratic conflict, by counting votes in elections and referenda for example.

(d) Scientific scepticism and low policy capacity mar the work of experts in governmental bodies.

These triggers and hindrances have important lessons for design, perhaps prompting authorities (governments, regulators, public bodies) to switch from one context to another. For example, one can re-design the work of expert committees by including producer and consumer organizations, or by allowing bargaining on the implementation of budgetary rules.

3. Beware the limitations of learning

We may get this precious by-product and avoid hindrances and traps, but still… learn the wrong lessons.

Latin America and Africa offer too many examples of diligent pupils who did exactly what they were supposed to do but, in the end, implemented the wrong policies. Perfect compliance does not provide breathing space for a policy and impairs the quality of innovation. We have to balance lay and professional knowledge. Bargaining does not allow us to learn about radical innovations; in some cases only a new participant can really change the nature of the game being played by the usual suspects.

So, whether the problem is learning how to fight organized crime and corruption, or to re-launch growth in Europe and development in Africa, the design of the policy process is crucial. For social actors, our analysis shows when and how they should try to change the nature of the game, or lobby for a re-design of the process. This lesson is often forgotten because social actors fight for a given policy objective, not for the parameters that define who does what and how in the policy process.


Filed under Evidence Based Policymaking (EBPM), public policy