Category Archives: Uncategorized

IMAJINE

I’m now working with Professor Michael Keating and Dr Emily St Denny on a work package for IMAJINE (Integrative Mechanisms for Addressing Spatial Justice and Territorial Inequalities in Europe).

We are leading WP6: Multilevel Policy-making and Inequalities, which “will explore how states design fiscal regimes and public services to mitigate the effects of socio-spatial inequalities”, through case studies in areas including Greece, Poland, Scotland and Wales.

The website is about to grow. In the meantime, here are two posts with preliminary thoughts:

 

The theory and practice of evidence-based policy transfer: can we learn how to reduce territorial inequalities?

 

‘Co-producing’ comparative policy research: how far should we go to secure policy impact?

 

And here is a picture of many of us. One of our work plans is to look less menacing by year 4.

[Image: IMAJINE launch photo, January 2017]

2 Comments

Filed under Uncategorized

How to design ‘maps’ for policymakers relying on their ‘internal compass’

This is a guest post by Dr Richard Simmons (below, in between Profs Alex Marsh and Catherine Farrell), discussing how to use insights from ‘cultural theory’ to think about how to design institutions to help policymakers think and make good decisions. The full paper has been submitted to the series for Policy and Politics called Practical Lessons from Policy Theories.

My reading of Richard’s argument is through the lens of debates on ‘evidence-based policymaking’ and policymaker psychology. Policymakers can only pay attention to a tiny proportion of the world, and a small proportion of their responsibilities. They combine ‘rational’ and ‘irrational’ informational shortcuts to act quickly and make ‘good enough’ decisions despite high uncertainty. You can choose to criticize their reliance on ‘cognitive frailties’ (and perhaps design institutions to limit their autonomous powers) or praise their remarkable ability to develop ‘fast and frugal’ heuristics (and perhaps work with them to refine such techniques). I think Richard takes a relatively sympathetic approach to describing ‘thinking, fast and slow’:

  1. His focus on an ‘internal compass’ describes aspects of fast thinking (using gut or instinct, emotion, habit, familiarity) but without necessarily equating a compass with negative cognitive ‘biases’ that get in the way of ‘rationality’.
  2. Instead, an internal compass can be remarkably useful, particularly if combined with a ‘map’ of the ‘policymaking terrain’. Terrain can describe the organisations, rules, and other sources of direction and learning in a policymaking system.
  3. Both compass and map are necessary; they reinforce the value of each other.
  4. However, perhaps unlike a literal map, we cannot simply design one-size-fits-all advice for policymakers. We need to speak with them in some depth, to help them work out what they think the policy problem is and probe how they would like to solve it.
  5. In that context, the role of policy analysis is as much to help policymakers think about and ask the right questions as it is to provide tailor-made answers.

It is a paradox that in a world where there are often more questions than answers, policymakers more often seek to establish and then narrow the range of possible answers than to establish and then narrow the range of possible questions. There are different explanations for this:

  • One is that policymakers occupy a ‘rational’, ‘technical’ space, in which everything from real-time data to scientific evidence can be balanced in ‘problem-solving’. This means doing the background work to support authoritative choice between policy alternatives, perhaps via ‘structured interactions’, as a way to bring order to the weight of evidence and expertise.
  • Another is that policymakers occupy a ‘formally structured’, ‘political’ space, in which the contest to have ‘agenda-setting’ power has already been decided. For policy actors, this means learning not to ‘question why’ – accepting the legitimacy, if not the substantive nature, of their political masters’ concerns and (outwardly, at least) directing their attention accordingly.
  • A third explanation, however, is that policymakers occupy a ‘complex’ and ‘uncertain’ space, in which ‘What is a good question?’ is itself a good question. Yet often we lack good ways to ask questions about questions – at least, without encountering accusations of either ‘avoiding the problem’ or ‘re-politicising technical concerns’.

Given that questions are logically prior to a technical search for the ‘best answer’, it seems sensible that the search for the ‘best question’ should start away from the realm of ‘the technical’ (cf. Explanation 1). As a result, two possible options remain in response to ‘What is a good question?’:

  1. That it is a subjectively-normative question that depends on the eyes of the beholder, best aggregated and understood in the realm of ‘the political’ (which returns us to Explanation 2).
  2. That it is an objectively-normative question that depends on ‘social construction’ in policy work, best aggregated and understood in the realm of ‘the institutional’ (which returns us to Explanation 3).

Option 1 is the stuff of basic politics; it will not be explored further here. This leaves the ‘objectively-normative’ Option 2, which is less often explored. This option is ‘normative’ in the sense that it gives space to framing a problem in ways that acknowledge different sets of values and beliefs, which may be socially constructed in different ways. It is ‘objective’ in the sense that it seeks to resolve tensions between these different sets of values and beliefs without re-opening the kind of explicit competition normally reserved for the realm of politics. Yet, given its basis in the realm of institutions, some might ask: how is ‘objective’ analysis even possible?

Step Forward, Cultural Theory (CT)?

There are still, perhaps, a few ‘flat-earth’ policy actors who doubt the importance of institutions. Yet even for those who do not deny their influence, the prospect of ‘objective’ institutional analysis seems remote. Hard to define and intangible by their very nature, institutions can seem esoteric, ephemeral, and resistant to meaningful measurement. However, new developments in Cultural Theory (CT) can help policymakers get a grip. Now well-established in policy circles, CT organises institutions along two dimensions into four (and only four) rival ‘cultural biases’ – hierarchy, individualism, egalitarianism and fatalism:

[Figure: Simmons 1]

Importantly, biases combine in different institutional patterns – and the mix matters. Dominant patterns tend to structure policy problems and guide policymakers’ responses in different ways. Through exposure and experience, institutional patterns can become internalised in policymakers’ ‘thought styles’, acting as an ‘internal compass’ that directs ‘fast thinking’. No bad thing, perhaps – unless and until this sends them off course. Faulty compass readings arise when narrow thought-styles become ‘cultural blinkers’. As ‘practical wisdom’ may be present in more than one location, navigation risks arise if a course ahead is plotted that blocks out other constructions of the problem.

How would policymakers realise when they have led themselves astray? One way might be ‘slower thinking’ – reflection on their actions to question their constructions and promote dynamic learning. CT provides a parsimonious way of framing such reflection: by simplifying complex criteria into just four cultural categories, it helps skilled ‘reflective policymakers’ ask ‘good’ questions more quickly. However, space for such ‘slow thinking’ is often limited in practical policy work. When it is closed out by constraints of time and attention, what more has Cultural Theory to offer?

Recent work operationalises CT to both map institutions and chart ‘internal compass’ bearings. Using stakeholder surveys to ‘materialise the intangible’, this work maps institutions by visually overlaying policy actors’ perceptions of how policy problems ‘actually are’ governed with their perceptions of how they ‘should be’ governed:

[Figure: Simmons 2]
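To make the ‘overlay’ idea a little more concrete, here is a minimal, purely illustrative sketch in Python (not taken from Richard’s paper): it assumes hypothetical 1–5 survey ratings for each of the four cultural biases, and simply compares mean ‘is’ and ‘should be’ scores to flag points of congruence and dissonance.

```python
# Illustrative sketch only: a minimal way to overlay 'is' vs 'should be'
# survey scores for the four cultural biases. The names, scale, and data are
# hypothetical; the published work may operationalise this quite differently.
from statistics import mean

BIASES = ["hierarchy", "individualism", "egalitarianism", "fatalism"]

# Hypothetical stakeholder responses: for each bias, a list of 1-5 ratings of
# how the policy problem *is* governed and how it *should be* governed.
survey = {
    "is":     {"hierarchy": [4, 5, 4], "individualism": [3, 2, 3],
               "egalitarianism": [2, 2, 1], "fatalism": [3, 4, 3]},
    "should": {"hierarchy": [3, 3, 2], "individualism": [3, 3, 4],
               "egalitarianism": [4, 5, 4], "fatalism": [1, 2, 1]},
}

def overlay(survey):
    """Return mean 'is' and 'should' scores per bias, plus the gap between them."""
    rows = []
    for bias in BIASES:
        is_score = mean(survey["is"][bias])
        should_score = mean(survey["should"][bias])
        rows.append((bias, is_score, should_score, should_score - is_score))
    return rows

if __name__ == "__main__":
    print(f"{'bias':<15}{'is':>6}{'should':>8}{'gap':>7}")
    for bias, is_s, should_s, gap in overlay(survey):
        # Large positive or negative gaps flag points of dissonance between
        # perceived and preferred governance; small gaps suggest congruence.
        print(f"{bias:<15}{is_s:>6.1f}{should_s:>8.1f}{gap:>+7.1f}")
```

In this toy example, the large gaps on egalitarianism and fatalism would be the ‘points of congruence and dissonance’ a policymaker might otherwise miss.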

Meanwhile, as points of congruence and dissonance emerge in this institutional environment, policymakers’ internal compass bearings show the likelihood that they might actually see them. Together, these tools raise the odds of asking ‘good’ questions even further than reflection alone. Actors learn to navigate both change and the obstacles to change.

But is this not still too slow? This process may indeed seem slow, but intelligent investment in institutional analysis potentially has payoffs that can make it worth the wait. The ‘map’ immediately provides a provocation for more valid and reliable policy practice – definitively directing policy attention, no matter where the compass is pointing. Speed and accuracy increase. Not only this: over time, such purposive action serves to maintain, create and disrupt institutions. As new patterns emerge that subconsciously subvert existing thought-styles, the compass itself is recalibrated. There are fewer faulty readings to direct ‘fast thinking’. Speed and accuracy increase again…

For some, the tools provided by CT may seem blunt; for others, as esoteric and ephemeral as the institutions this theory purports to portray. The recent work reported here certainly requires further refinement to reinforce its validity and reliability. But the effort of doing so may be a small price to pay. The practical potential of CT’s meaningful measurement makes further progress a beguiling prospect.

 

 

1 Comment

Filed under Uncategorized

Stop Treating Cognitive Science Like a Disease

[Photo: Professor Steven Sloman]

At the beginning is a guest post by Professor Steven Sloman, responding to Professor Daniel Sarewitz’s post in the Guardian called Stop treating science denial like a disease.  At the end is Dan Sarewitz’s reply. If you are wondering why this debate is now playing out on my website, there is a connection of sorts, in: (a) the work of the European Commission’s JRC, with Sloman speaking at its annual conference EU4Facts, and (b) the work of INGSA on government-science advice, in which Sarewitz plays a key role.  

Modern science has its problems. As reviewed in a recent editorial by Daniel Sarewitz, many branches of science have been suffering from a replication crisis. Scientists are under tremendous pressure to publish, and cutting scientific corners has, in some fields, become normal. This, he thinks, justifies a kind of science denialism, one that recognizes that not every word expressed by a scientist should be taken on faith.

Sarewitz is right on a couple of counts: Not every branch of science has equal authority. And in many areas, too much of too little value is being published. Some of it does not pass even weak tests of scientific care and rigor. But his wild claim in favor of denialism is bluster: Science is making faster progress today than at any time in history.

Sarewitz’s intended victim in his piece is cognitive science. He argues that cognitive science appeals to a deficit model (my term) to explain science denialism. People are ignorant, in Sarewitz’s parody of cognitive science, and therefore they fail to understand science. If only they were smarter, or taught the truth about science, they wouldn’t deny it, but rather use it as a guide to truth, justice, and all things good.

This is a position in cognitive science, especially the cognitive science of the 1970s and 1980s. But even cognitive science makes progress, and today it is a minority view. What does modern cognitive science actually suggest about our understanding of science denial? The answer is detailed in our book, The Knowledge Illusion, with which Sarewitz takes issue. He would have done well to read it before reviewing it, because what we say is diametrically opposed to his report, and largely consistent with his view, though a whole lot more nuanced.

The deficit model applies to one form of reasoning, what we call intuition. The human brain generates beliefs based on naïve causal models about how the world works. These are often sketchy and flawed (consider racists’ understanding of people of other races). Individuals are quite ignorant about how the world works, not because people are stupid, but because the world is so complex. The chaotic, uncertain nature of the universe means that everything we encounter is a tangle of enormous numbers of elements and interactions, far more than any individual could comprehend, never mind retain. As we show in our book, even the lowly ballpoint pen represents untold complexity. The source of ignorance is not so much about the biology of the individual; it’s about the complexity of the world that the individual lives in.

Despite their ignorance, humans have accomplished amazing things, from creating symphonies to laptops. How? In large part by relying on a second form of human reasoning, deliberation. Deliberation is not constrained wholly by biology because it extends beyond the individual. Deliberative thought uses the body to remember for us and even to compute. That’s why emotions are critical for good decision making and why children use their fingers to count. Thinking also uses the world. We compute whether it’s safe to cross the street by looking to see if a car is coming, and we use the presence of dirty dishes on the counter to tell us whether the dishes need doing.

But more than anything, deliberation uses other people. Whether we’re getting our dishwasher fixed, our spiritual lives developed, or our political opinions formed, we are guided by those we deem experts and those we respect in our communities. To a large extent, people are not the rational processors of information that some enlightenment philosophers dreamed about; we are shills for our communities.

The positive side of this is that people are built to collaborate; we are social entities in the most fundamental way, as thinkers. The negative side is that we can subscribe to ideologies that are perpetuated to pursue the self-interest of community leaders, ideologies that have no rational basis. Indeed, the most fervent adherents of a view tend to know the least about it. Fortunately, we have found (not just assumed, as Sarewitz says) that when people are asked to explain the consequences of the policies they adhere to, they become less extreme as they discover they don’t really understand.

Scientists live in communities too, and science is certainly vulnerable to these same social forces. That’s why the scientific method was developed: to put ideas to the test and let the cream rise to the top. This takes time, but because science answers, eventually, to the truth inherent in nature, human foible and peer review can steer it off course only temporarily.

Cognitive science has historically bought into the deficit model, treating failures of science literacy as a kind of disease. But Sarewitz should practice the care and rigor that he preaches by reporting correctly: Cognitive science, like many forms of science, is slowly getting it right.

Reply by Dan Sarewitz

Normally I don’t respond to this kind of thing, but a couple of points demand rebuttal.

First: I actually did read their book, cover to cover. Neither the Guardian piece nor the longer talk from which it draws is a book review; they are critiques of the larger intellectual program that The Knowledge Illusion positions itself within.

Second, the idea that deliberative and collaborative activities are powerful sources of human creativity that overcome the cognitive limits of the individual is an entirely familiar one that has been well recognized for centuries. As Professor Sloman indicates, it occupies much of his book, and much of his comment above. But it was not relevant to my concerns, which were how Sloman and co-author Fernbach position human cognitive limits as a source of so much difficulty in today’s world. They write: “Because we confuse the knowledge in our heads with the knowledge we have access to, we are largely unaware of how little we understand. We live with the belief that we understand more than we do. As we will explore in the rest of the book, many of society’s most pressing problems stem from this illusion.” [p. 129, my italics] They wrote this, I didn’t.

Third, having read their book carefully, I am indeed well aware that Sloman and Fernbach understand the limits of the deficit model. But as they make clear in a subsection of chapter 8, entitled “Filling the Deficit,” they still believe that IF ONLY people understood more about science, then “many of society’s most pressing problems” could be more effectively addressed: “And the knowledge illusion means that we don’t check our understanding often or deeply enough. This is a recipe for antiscientific thinking.” [p. 169] A page later they write: “[P]erhaps it is too soon to give up on the deficit model.”

This is where my posting sought to engage with Professor Sloman’s book.  I don’t think that people’s understanding of scientific knowledge has much at all to do with “many of society’s most pressing problems,” for reasons that I point toward in the Guardian piece, and have written extensively about in many other forums.  Professor Sloman may not agree with this position, but his comments above fail to indicate that he actually recognizes or understands it.

Finally, Professor Sloman writes, both tendentiously and with an apparently tone-deaf ear, that my “wild claim in favor of denialism is bluster: Science is making faster progress today than at any time in history.”  The progress of science is irrelevant to my argument, which addresses the intersection of politics and a scientific enterprise that is always pushing into the realm of the uncertain, the unknown, the unknowable, the contestable, the contingent—even as it also, sometimes and in some directions, makes magnificent progress.

Perhaps there is a valuable discussion to be had about whether poor understanding of science by the public is relevant to “many of society’s most pressing problems.”  My view is that this is an overblown, distracting, and to some extent dangerous belief on the part of some scientists, as I indicate in the Guardian post and in many other writings.  Professor Sloman may disagree, but his complaints here are about something else entirely, something that I didn’t write.

1 Comment

Filed under Psychology Based Policy Studies, Uncategorized

How to Navigate Complex Policy Designs

[Photo: Professors Tanya Heikkila and Krister Andersson]

This is a guest post by Professor Tanya Heikkila (left) and  Professor Krister Andersson (right), discussing how to use insights from the institutional analysis and development (IAD) framework to think about how to design policy effectively. The full paper has been submitted to the series for Policy and Politics called Practical Lessons from Policy Theories.

Policy design is hard work. Attempts in the United States Congress to repeal and replace, or even revise, the Affordable Care Act (ACA) in the spring and summer of 2017 are good examples of the challenges. Even setting aside issues of congressional partisanship, key lawmakers and President Trump seemed taken aback by how complex both the ACA and the underlying health care insurance issue are. Lawmakers struggled for several months, and failed, to come up with viable policy options that could make health insurance in the U.S. more cost-efficient, while providing flexibility to states, private firms, and individuals who must comply with the policy.

The recent experience in the US with revising the ACA is illustrative of a larger question:

How do policymakers or analysts navigate and design effective policies around complex collective problems?

Tips from the IAD framework

In general, these tips encourage a policymaker or analyst to take a step back and start with the basics—what we might call the research lessons for sound policy design. Below we offer a summary of three basic lessons.

1. People are capable of solving collective problems from the bottom up, both outside and within government settings.

Some conventional wisdom has suggested that it’s best to leave policy design up to the “experts,” which might include technocrats, senior elected officials, or even benevolent dictators.

Institutional analysis research has shown, however, that people who are most closely tied to or affected by a policy issue are not only capable of designing policies but are often best at it. Excluding these groups from the design stage is likely to weaken legitimacy and, in turn, reduce compliance.

One might assume that this idea only applies to localized issues, or problems that are small in scope.  Yet, in many formal policy settings or government venues, decision-makers similarly must learn to embrace the wisdom of their collectives, and of the actors affected by the policy issue.

Of course this takes time, because to do this well requires the development of trust, experience, and adequate information gathering. And in the case of a complex decision-making body like the US Congress, policymakers may believe it’s more important to take advantage of a political window of opportunity and push through decisions quickly. Additionally, individual elected officials may believe that their interests are best served by taking a position on an issue that seems to be most politically palatable to their constituents, without thinking in the interest of the whole country (or even their states).

This line of thinking backfired in the summer of 2017 with the ACA repeal efforts. A more robust approach, as supported by the IAD logic, would be for elected officials to take the time to build an open dialogue, work directly with key constituents in thinking about the best approach to health care for the country, and spend time with each other, in Congress and in state legislatures, to design and/or adapt policy toward more productive ends.

2. Use a framework to navigate the complexity of policy design

We can’t devise, amend, or adapt policy effectively without understanding it. Yet, people have a natural tendency to engage in reactionary and emotional reasoning when they are passionate about an issue. Even when not colored by emotional reasoning, policymakers and analysts also come with their own professional and cultural biases that can lead to ‘blind spots’.

Frameworks can help us guard against the tendency toward biased analyses or a focus on the features of a policy that are most obvious. A good framework provides a toolkit for identifying the general factors that policymakers and other stakeholders should consider when developing new policies or trying to understand existing policies.

The IAD framework, for instance, helps identify:

  • the relevant actors for devising and implementing policy
  • their information, knowledge, motivations, and interactions
  • the various types of rules these actors already use
  • the biophysical and community context surrounding the actors
  • the evaluative criteria appropriate for assessing the policy in question.

It also helps us understand what external or broader rules can constrain or enable particular actions.  In other words, it makes us aware of our ‘blind spots’ and enables a deeper understanding of the factors that are important for effective policy design.
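As a loose illustration only (and not something proposed by the authors), the IAD-style checklist above could be treated as a simple data structure that an analyst fills in, so that empty categories become visible as potential ‘blind spots’. The field names below are hypothetical paraphrases of the bullets, and the example data is invented.

```python
# Purely illustrative sketch: an IAD-style checklist as a data structure that
# an analyst could fill in, making untouched categories visible as blind spots.
from dataclasses import dataclass, field, fields
from typing import List

@dataclass
class IADChecklist:
    actors: List[str] = field(default_factory=list)                  # who devises/implements the policy
    information_and_motivations: List[str] = field(default_factory=list)
    rules_in_use: List[str] = field(default_factory=list)            # rules actors already use
    biophysical_community_context: List[str] = field(default_factory=list)
    evaluative_criteria: List[str] = field(default_factory=list)

    def blind_spots(self) -> List[str]:
        """Return the checklist categories the analyst has left empty."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Hypothetical usage for a health-insurance reform analysis:
analysis = IADChecklist(
    actors=["state regulators", "insurers", "enrollees"],
    rules_in_use=["exchange participation rules", "subsidy eligibility rules"],
)
print(analysis.blind_spots())
# -> ['information_and_motivations', 'biophysical_community_context', 'evaluative_criteria']
```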

3. Stop looking for panaceas. Instead, understand the nuance of policy design

There are no silver bullets in policy design. General blueprint solutions rarely work; it is important to design contextually appropriate policy interventions.

This requires scrutinizing the design elements of policies (e.g., the types of rules embedded in policies), and how they interact with the incentives and information that different actors use in devising or implementing a policy.

It also involves deep knowledge of the factors that can structure their choices in light of the local context where policies are used.  Policy designs ultimately are more likely to be successful when they acknowledge the autonomy and problem-solving capabilities of people whose behavior the interventions are trying to change.

In the context of the ACA, for instance, policymakers need to be tuned into the ways in which different rules for participating in health exchanges can affect the incentives for participation, which can be critical for keeping insurance costs down.

While these types of recommendations may seem intuitive, we often fail to follow them. Perhaps this is because they require extra time, which may mean we miss our windows of opportunity.

Ultimately, we produce more successful policies when we resist the temptation to rush policy analysis and design and instead engage with people who understand the policy, use an analytical framework to mitigate our biases, and pay close attention to the nuances of policy design.

3 Comments

Filed under Uncategorized

Policy concepts in 1000 or 500 words

Imagine that your audience is a group of scientists who have read everything and are only interested in something new. You need a new theory, method, study, or set of results to get their attention.

Let’s say that audience is a few hundred people, or half a dozen in each subfield. It would be nice to impress them, perhaps with some lovely jargon and in-jokes, but almost no-one else will know or care what you are talking about.

Imagine that your audience is a group of budding scientists, researchers, students, practitioners, or knowledge-aware citizens who are new to the field and only interested in what they can pick up and use (without devoting their life to each subfield). Novelty is no longer your friend. Instead, your best friends are communication, clarity, synthesis, and a constant reminder not to take your knowledge and frame of reference for granted.

Let’s say that audience is a few gazillion people. If you want to impress them, imagine that you are giving them one of the first – if not the first – ways of understanding your topic. Reduce the jargon. Explain your problem and why people should care about how you try to solve it. Clear and descriptive titles. No more in-jokes (just stick with the equivalent of ‘I went to the doctor because a strawberry was growing in my arse, and she gave me some cream for it’).

At least, that’s what I’ve been telling myself lately. As things stand, my most-read post of all time is destined to be on the policy cycle, and most people read it because it’s the first entry on a Google search. Most readers of that post may never read anything else I’ve written (over a million words, if I cheat a bit with the calculation). They won’t care that there are a dozen better ways to understand the policy process. I have one shot to make it interesting, to encourage people to read more. The same goes for the half-dozen other concepts (including multiple streams, punctuated equilibrium theory, the Advocacy Coalition Framework) which I explain to students first because I now do well in Google search (go on, give it a try!).

I also say this because I didn’t anticipate this outcome when I wrote those posts. Now, a few years on, I’m worried that they are not very good. They were summaries of chapters from Understanding Public Policy, rather than first-principles discussions, and lots of people have told me that UPP is a little bit complicated for the casual reader. So, when revising it, I hope to make it better, and by better I mean to appeal to a wider audience without dumping the insights. I have begun by trying to write 500-word posts as, I hope, improvements on the 1000-word versions. However, I am also open to advice on the originals. Which ones work, and which ones don’t? Where are the gaps in exposition? Where are the gaps in content?

This post is 500 words.

https://paulcairney.wordpress.com/1000-words/

https://paulcairney.wordpress.com/500-words/

Leave a comment

Filed under 1000 words, 500 words, Uncategorized

The role of ‘standards for evidence’ in ‘evidence informed policymaking’

Key points:

  • Maintaining strict adherence to evidence standards is like tying your hands behind your back
  • There is an inescapable trade-off between maintaining scientific distance for integrity and using evidence pragmatically to ensure its impact
  • So, we should not divorce discussions of evidence standards from evidence use

I once spoke with a policymaker from a health unit who described the unintended consequences of their self-imposed evidence standards. They held themselves to such a high standard of evidence that very few studies met their requirements. So, they often had a very strong sense of ‘what works’ but, by their own standards, could not express much confidence in their evidence base.

As a result, their policy recommendations were tentative and equivocal, and directed at a policymaker audience looking for strong and unequivocal support for (often controversial) policy solutions before putting their weight behind them. Even if evidence advocates had (what they thought to be) the best available evidence, they would not make enough of it. Instead, they valued their reputations, based on their scientific integrity, producing the best evidence, and not making inflated claims about the policy implications. Let’s wait for more evidence, just to be sure. Let’s not use suboptimal evidence, even if it’s all we have.

Your competitors do not tie their own hands behind their backs in this way

I say this because I have attended many workshops, in the last year, in which we discuss principles for science advice and guidelines or standards for the evidence part of ‘evidence-based’ or ‘evidence-informed’ policymaking.

During such discussions, it is common for people to articulate the equivalent of crossing their fingers and hoping that they can produce rules for the highest evidence standards without the unintended consequences. If you are a fan of Field of Dreams, we can modify the slogan: if you build it (the evidence base), they will come (policymakers will use it sincerely, and we’ll all be happy).*

[Image: ‘If you build it’]

Or, if you are more of a fan of Roger Pielke Jr, you can build the evidence base while remaining an ‘honest broker’, providing evidence without advocacy. Ideally, we’d want to maintain scientific integrity and have a major impact on policy (akin to me wanting to eat chips all day and lose weight) but, in the real world, we may settle for the former.

If so, perhaps a more realistic way of phrasing the question would be: what rules for evidence should a small group of often-not-very-influential people agree among themselves? In doing so, we recognise that very few policy actors will follow these rules.

What happens when we don’t divorce a discussion of (a) standards of evidence from (b) the use of evidence for policy impact?

The latter depends on far more than evidence, such as the usual factors we discuss in these workshops, including trust in the messenger, and providing a ‘timely’ message. Perhaps a high-standard evidence base helps the former (providing a Kite Mark for evidence) and one aspect of the latter (the evidence is there when you demand it). However, policy studies-inspired messages go much further, such as in Three habits of successful entrepreneurs, which describes the strategies people use for impact:

  1. They tell simple and persuasive stories to generate demand for their evidence
  2. They have a technically and politically feasible (evidence-based) policy solution ready to chase policy problems
  3. They adapt their strategies to the scale of their policy environments, akin to surfers in large and competitive political systems, but more like Poseidon in less competitive ‘policy communities’ or subnational venues.

In such cases, the availability of evidence becomes secondary to:

  1. the way you use evidence to frame a policy problem, which is often more about the way you connect information to policymaker demand than the quality of the evidence.

[Table 1]

  2. your skills in being able to spot the right time to present evidence-based solutions, which is not about a mythical policy cycle, and not really about the availability of evidence or speed of delivery.

[Table 2]

So, when we talk about any guidance for evidence advocates, such as that pursued by INGSA, I think you will always find these tensions between evidence quality and scientific integrity on the one hand, and ‘timeliness’ or impact on the other. You don’t address the need for timely evidence simply by making sure that the evidence exists in a database.

I discuss these tensions further on the INGSA website: Principles of science advice to government: key problems and feasible solutions


*Perhaps you’d like to point out that when Ray Kinsella built it (the baseball field in his cornfield), he did come (the ghost of Shoeless Joe Jackson appeared to play baseball there). I’m sorry to have to tell you this, but actually that was Ray Liotta pretending to be Jackson.

 

2 Comments

Filed under Evidence Based Policymaking (EBPM), Uncategorized

Professor Grant Jordan

Grant died on Friday. He was my friend and mentor for a long time, and I’m glad that I knew him for so long.

Let me share four short stories with you, to give you a sense of Grant.

The first is of an internationally respected scholar. When Grant and I visited Hokkaido University in 2004, his talk was preceded by a warm and glowing reference by our hosts. One host held aloft Governing Under Pressure and described it as one of the most influential books of his time. It was certainly one of the biggest influences on me and many of my colleagues (as I describe below), and his work was valued by as many colleagues in the US as in the UK.

[Image: Governing Under Pressure]

The second is of his continuous help to students and colleagues. I associate Grant’s working life with one image: his open office door. He kept an office directly across from the departmental office, ensuring that if any student came to us with a problem, he’d be the first to try to solve it. It might not sound like much to say that he knew Aberdeen University’s regulations inside-out, but it symbolised his continuous efforts to make sure that students benefited from them. He offered the same continuous help to many of our colleagues.

The third is of a quietly influential mentor. Some of Grant’s advice was cryptic, but all of it was useful. At key points of my career and intellectual development, he was there to offer pointers and challenge incomplete thought. For example, his favourite approach was often to quote Mandy Rice-Davies (‘he would say that’) to challenge any claims I made from elite interviews, and you can see the effect of such caution on much of my published work.

The fourth is of a funny person. There are odd-sounding times that I’ll remember, such as when Darren Halpin and I met up with Grant and his partner Andrea in Toronto (during an APSA conference), and they were already quite merry when we arrived. Or, when at the football together (Grant was a big Aberdeen FC fan), he would often offer me and my children some suspiciously warm toffees from his pocket. Maybe one of his funniest lines is now one of the most poignant: when I tried to do a decent speech on his career at his retirement dinner, he described it as a speech well suited to his funeral (I guess you had to be there to appreciate the humour!).

Jordan and Richardson’s intellectual legacy

Grant will leave an intellectual legacy. His work with Jeremy Richardson is still at the heart of my understanding of politics and policy. In my first undergraduate year at Strathclyde, Jeremy lectured on British politics and public policy. He presented an image of politics that drew me in (partly via Yes Minister) and an argument – made in partnership with Grant – that I still use most frequently to this day:

  • The size and scope of the state is so large that it is in danger of becoming unmanageable. The same can be said of the crowded environment in which huge numbers of actors seek policy influence. Consequently, the state’s component parts are broken down into policy sectors and sub-sectors, with power spread across government.
  • Elected policymakers can only pay attention to a tiny proportion of issues for which they are responsible. So, they pay attention to a small number and ignore the rest. In effect, they delegate policymaking responsibility to other actors such as bureaucrats, often at low levels of government.
  • At this level of government and specialisation, bureaucrats rely on specialist organisations for information and advice.
  • Those organisations trade that information/advice and other resources for access to, and influence within, the government (other resources may relate to who groups represent – such as a large, paying membership, an important profession, or a high status donor or corporation).
  • Therefore, most public policy is conducted primarily through small and specialist policy communities that process issues at a level of government not particularly visible to the public, and with minimal senior policymaker involvement.

They initially made this argument in the late 1970s and early 1980s, before the rise of ‘Thatcherism’ in the UK. Then, they used it to challenge the idea that Thatcherism marked a radical departure in policymaking. Of course, this new phase of policymaking was important and distinctive, but the same basic argument outlined above still applies, and Jordan went on to do further empirical studies, with colleagues such as William Maloney, to highlight a surprising amount of policymaking stability and continuity. In other words, Jordan and Richardson have shown, and continue to show, that the UK generally does not live up to its ‘majoritarian’ image. It’s an argument that I use to this day.

Overall, I was very lucky to have Grant in my life for so long, and I hope he knew how many people shared this combination of admiration for his work and fondness of him.

 

 

7 Comments

Filed under Uncategorized