Category Archives: Academic innovation or navel gazing

Why the pollsters got it wrong

We have a new tradition in politics in which some people glory in the fact that the polls got it wrong. It might begin with ‘all these handsome experts with all their fancy laptops and they can’t even tell us exactly how an election will turn out’, and sometimes it ends with, ‘yet, I knew it all along’. I think that the people who say it most are the ones that are pleased with the result and want to stick it to the people who didn’t predict it: ‘if, like me, they’d looked up from their laptops and spoken to real people, they’d have seen what would happen’.

To my mind, it’s always surprising when so many polls seem to do so well. Think for a second about what ‘pollsters’ do: they know they can’t ask everyone how they will vote (and why), so they take a small sample and use it as a proxy for the real world. To make sure the sample isn’t biased by selection, they develop methods to generate respondents randomly. To try to make the most of their resources, and make sure that their knowledge is cumulative, they use what they think they know about the population to make sure that they get enough responses from a ‘representative’ sample of the population. In many cases, that knowledge comes from things like focus groups or one-to-one interviews to get richer (qualitative) information than we can achieve from asking everyone the same question, often super-quickly, in a larger survey.
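The weighting step described above can be sketched in a few lines. This is a hypothetical illustration, not anything from the post: the groups, population shares and support figures are all invented, and real pollsters weight on many variables at once.

```python
# Post-stratification in miniature: if a group makes up 50% of the population
# but only 40% of the people who responded, each of its answers counts for more.
# All figures here are invented for illustration.
population_share = {"under_45": 0.50, "45_plus": 0.50}  # what the census says
sample_share     = {"under_45": 0.40, "45_plus": 0.60}  # who actually responded

# Weight for each group = population share / sample share
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Raw (hypothetical) support for a candidate within each group
support = {"under_45": 0.60, "45_plus": 0.40}

unweighted = sum(support[g] * sample_share[g] for g in sample_share)
weighted = sum(support[g] * sample_share[g] * weights[g] for g in sample_share)
print(f"unweighted: {unweighted:.0%}, weighted: {weighted:.0%}")
# unweighted: 48%, weighted: 50%
```

The point of the toy numbers: a two-point swing from a single under-sampled group is enough to flip the story in a tight race, and the correction is only as good as the assumed population shares.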

This process involves all sorts of compromises and unintended consequences when we have a huge population but limited resources: we’d like to ask everyone in person, but it’s cheaper to (say) get a 4-figure response online or on the phone; and, if we need to do it quickly, our sample will be biased towards people willing to talk to us.* So, on top of a profound problem – the possibility of people not telling the truth in polls – we have a potentially less profound but more important problem: the people we need to talk to us aren’t talking to us. So, we get a misleading read because we’re asking an unrepresentative sample (although it is nothing like as unrepresentative as proxy polls from social media, the ‘word on the doorstep’, or asking your half-drunk mates how they’ll vote).

Sensible ‘pollsters’ deal with such problems by admitting that they might be a bit off: highlighting their estimated ‘margin of error’ from the size of their sample, then maybe crossing their fingers behind their backs if asked about the likelihood of more errors based on non-random sampling. So, ignore this possibility for error at your peril. Yet, people do ignore it despite the peril! Here are two reasons why.

  1. Being sensible is boring.

In a really tight-looking two-horse race, the margin of error alone might suggest that either horse might win. So, a sensible interpretation of a poll might be (say), ‘either Clinton or Trump will get the most votes’. Who wants to hear or talk about that?! You can’t fill a 24-hour news cycle and keep up shite Twitter conversations by saying ‘who knows?’ and then being quiet. Nor will anyone pay much attention to a quietly sensible ‘pollster’ or academic telling them about the importance of embracing uncertainty. You’re in the studio to tell us what will happen, pal. Otherwise, get lost.

  2. Recognising complexity and uncertainty is boring.

You can heroically/ stupidly break down the social scientific project into two competing ideas: (1) the world contains general and predictable patterns of behaviour that we can identify with the right tools; or (2) the world is too complex and unpredictable to produce general laws of behaviour, and maybe your best hope is to try to make sense of how other people try to make sense of it. Then, maybe (1) sounds quite exciting and comforting while (2) sounds like it is the mantra of a sandal-wearing beansprout-munching hippy academic. People seem to want a short, confidently stated, message that is easy to understand. You can stick your caveats.
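For what it’s worth, the ‘margin of error’ quoted with a poll (mentioned above) is usually just a function of the sample size, assuming a perfectly random sample. A minimal sketch of the standard calculation – the function name and figures below are my illustration, not from the post:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a proportion from a simple random sample.

    p: the observed share (e.g. 0.48 for 48%)
    n: the sample size
    z: critical value (1.96 gives the usual 95% confidence level)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical '4-figure' poll: 48% support among 1,000 respondents
moe = margin_of_error(0.48, 1000)
print(f"48% +/- {moe:.1%}")  # about +/- 3 percentage points
```

Note that the margin only halves when you quadruple the sample, and it says nothing about the non-random sampling error discussed above – which is precisely the part pollsters cross their fingers over.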

Can we take life advice from this process?

These days I’m using almost every topic as a poorly-constructed segue into a discussion about the role of evidence in politics and policy. This time, the lesson is about using evidence correctly for the correct purpose. In our example, we can use polls effectively for their entertainment value. Or, campaigners can use them as the best-possible proxies during their campaigns: if their polls tell them they are lagging in one area, give it more attention; if they seem to have a big lead in another area, give it less attention. The evidence won’t be totally accurate, but it gives you enough to generate a simple campaigning strategy. Academics can also use the evidence before and after a campaign to talk about how it’s all going. Really, the only thing you don’t expect poll evidence to do is predict the result. For that, you need the Observers from Fringe.

The same goes for evidence in policymaking: people use rough and ready evidence because they need to act on what they think is going on. There will never be enough evidence to make the decision for you, or let you know exactly what will happen next. Instead, you combine good judgement with your values, sprinkle in some evidence, and off you go. It would be silly to expect a small sample of evidence – a snapshot of one part of the world – to tell you exactly what will happen in the much larger world. So, let’s not kid ourselves about the ability of science to tell us what’s what and what to do. It’s better, I think, to recognise life’s uncertainties and act accordingly. It’s better than blaming other people for not knowing what will happen next.


*I say ‘we’ and ‘us’ but I’ve never conducted a poll in my life. I interview elites in secret and promise them anonymity.


Filed under Academic innovation or navel gazing, Folksy wisdom, Uncategorized

A Stern review for everyone?

The Stern review was commissioned by the government to carry out a review of the Research Excellence Framework (REF) ‘to ensure future university research funding is allocated more efficiently, offers greater rewards for excellent research and reduces the administrative burden on institutions’. In this post, I explain why no single policy can solve these problems uniformly: they affect scholars of different seniority, and different disciplines, very differently. The punchline is at the end.

My initial impression of the Stern review is that it has gone to great lengths to address the unintended consequences of the previous Research Excellence Framework. One of its key aims now is to try to anticipate the potential unintended consequences of its reduction of other unintended consequences! This is remarkably common in policymaking, perhaps summed up by Aaron Wildavsky’s phrase ‘policy as its own cause’: we enter a never-ending process of causing ripple effects when trying to fix previous problems.

The example of non-portability (#sternreview portable)

Take the example of one of the biggest problems:

Problem: there was a large incentive for Universities to ‘game’ the REF towards the end of the cycle: paying for 20% of the time of big name academics, or appointing them at huge salaries, to gain access to their 4 best publications. A policy of rewarding research excellence became a policy to (a) reward big transfers, undermining the efforts of other Universities and reducing their incentive to invest for the long term, and (b) boost the salaries of senior scholars (many of whom were already on 6-figure salaries), often to ridiculous levels.

Solution: non-portability. The idea is that you can move but your former employer gets to use the texts that you published while in your last job. So, there may now be less incentive to buy up the big names in the run up to the next REF.

Unintended consequence: uncertainty for early career researchers (ECRs) or scholars without permanent (open-ended) contracts. Many ECRs have expressed the concern that their incentives may suddenly change, from generating a portfolio of up to 4 excellent publications to secure a permanent post, to perhaps holding back publications and promising their delivery when in post. This could be addressed by the present review (ECRs could be included in the REF but be under no obligation to submit any publications), or perhaps by exempting ECRs since the policy is aimed at senior scholars (but the exemption would also have unintended consequences!).

Interpreting the problem through the lens of precarious positions

We will now enter a phase of debate driven by uncertainty and anxiety about the end result, and a lot of the discussion will be emotionally charged because many people will have spent maybe 8 years in education (and several years in low paid posts after it) and not know what to do next. It is relatively easy for people like me to say that the proposed new system is better, and for senior scholars to look on Stern as a big improvement, because people like me will be rewarded in either system (I will leave it to you to decide what I mean by ‘people like me’). It is more difficult for ECRs who are genuinely uncertain about their prospects.

The punchline: how does this look through the eyes of scholars in different disciplines?

The disciplinary lens is the factor that can often be most important but least discussed, for two key reasons:

  1. The general differences. Scholars operate in different ways. The ‘STEM’ (science, technology, engineering and mathematics) subjects are often described in these terms: you have large teams headed by a senior scholar; there is a hierarchy; you all work on the same research question; you publish many short articles as a team (with senior authors listed at the very start and end of the list of authors); you are increasingly driven by key metrics, such as personal citations and the ‘impact factor’ of journals. The humanities are often described like this: you are a lone scholar; you work on your own research question; you publish single-author work, and the big status symbol is the research monograph (book); these journal or other metrics do not work as well in your discipline. I am verging on caricature here (many ‘STEM’ scholars will work alone, and some humanities or social science scholars will operate laboratory-style teams), but you get the idea.

These differences feed into other practices: only in some subjects can it make sense for a University to ‘poach’ a whole team or unit; only in some subjects do ECRs need to develop their own portfolio of work; in some subjects, a PhD student or ECR effectively works for a senior scholar, while in others the PhD student has a supervisor but can set their own research agenda; in some subjects, it is automatic to include the senior scholar in an article you wrote, in others it would be seen as exploitation of the work of a PhD student or ECR. In some subjects, a CV with your name on team publications is the norm, while in others it would look like you do not have your own ideas.

  2. The REF reinforces these differences. You often find the impression that research exercises and metrics are there for the STEM subjects (or ‘hard sciences’) and not the humanities or social sciences: a process of review for Universities does not take into account the differences in incentives and practices across the disciplines, and some disciplines might lose out.

So, if you follow this debate on Twitter, I recommend that you look at the bios of each participant to check their level of seniority and discipline, because the Stern review is for all but it will affect us all in very different ways.

See also: James Wilsdon ‘The road to REF 2021: why I welcome Lord Stern’s blueprint for research assessment’



Filed under Academic innovation or navel gazing, UK politics and policy

Q. Should PhD students blog? A. Yes.

I wish I could go back and rewrite everything I have published, including my PhD. If I knew then what I know now, I would get to the point quicker and describe its importance to a far wider audience than my supervisor and a few dedicated journal readers. To do so, I would exhibit the skills you develop when you write frequently for an ‘intelligent lay’ audience.

These are the writing traits that I think you develop when just writing for academics:

  1. You assume a specialist audience, familiar with key terms. So, you use jargon as shorthand without explaining its meaning. The downside is that the jargon often doesn’t have a particularly clear meaning. When you blog, you assume a non-specialist audience. You use less jargon, or you explain its meaning and value.
  2. You treat the exercise as a detective novel with a big reveal: a nice, vague opening discussion (passive voice optional), a main body of text to build up the suspense, and finally the big twist at the end. Ta da! Wow, I didn’t see that coming. When you blog, you assume that people will not read your work unless you front-load the reveal. You have a catchy and tweetable title, you provide a hook in the first sentence, and you only have a few hundred words in which to show your work (and encourage people to read the longer report).
  3. Or, you describe your hypotheses in a way that suggests that even you don’t know what will happen. Wow – I confirmed that hypothesis! Who knew? When you blog, it seems more sensible to use the language of hypotheses (or an equivalent) more simply, to explain what factors are most important to your explanation.

You can develop this skill by using a personal blog to describe your research progress and the value of your findings. However, it is also worth blogging in at least two other venues:

  1. Somewhere like the LSE blog, or Democratic Audit, in which the editors will try to summarise your argument in a short opening statement. This is very handy for you: did they summarise the main argument? If so, good. If not, look again to see if you explained it well.
  2. Somewhere like The Conversation, in which the editors will try to mess around with the title (to encourage more traffic) and wording (to make it punchier and quotable). This is a good exercise in which you can think about how far you want to go. Are you confident enough in your research to make such stark statements? Or, do you want to obfuscate and fill the argument with caveats? If the latter, you can think about the extent to which your argument is clear and defendable (it may well be – sometimes caveats and humility can be good!).

I also encourage advanced undergraduates and taught postgraduates to produce a blog post (albeit unpublished) alongside an essay or policy paper, because it is difficult to be concise, and the exercise helps develop a good life skill. Even without the blog exercise, I’d still encourage dissertation students (at the start of their research) to write up their argument/ plan/ work in a half-page document, so that we can see if it adds up to a coherent argument. You can do the same thing with a blog post, with the added (potential) benefit of some feedback from outside sources.

See also: there are resource sites which go into far more aspects of the writing process.


Filed under Academic innovation or navel gazing, PhD

The Art and Skill of Academic Translation: it’s harder when you move beyond English

I have been writing about the idea of ‘translation’ in terms of ‘knowledge transfer’ or ‘diffusion’, which often suggests that there is a linear process of knowledge production and dissemination: knowledge is held by one profession which has to find the right language to pass it on to another. This approach has often been reflected in the strategies of academic and government bodies. Yet, the process is two-way. Both groups offer knowledge and the potential to have a meaningful conversation that suits both parties. If so, ‘translation’ becomes a way for them to engage in a meaningful way, to produce a common language that they can both ‘own’ and use. Examples include: the need for scientists to speak with policymakers about how the policy process works; the need for ‘complexity’ theorists to understand the limits to policymaker action in Westminster systems; the separate languages of institutions which struggle to come together during public service integration (key to local partnership and ‘joined up government’); and, the difference in the language used by service providers and users. We might also worry about the language we use to maintain interdisciplinary discussion (such as when ‘first order change’ means something totally different in physics and politics).

It’s not the same thing, but translating into another language, such as when conversing in English and Japanese, reinforces the point in an immediately visible way. In both directions, English to Japanese, and vice versa, it is clear that the recipient only receives a version of the original statement – even when people use a highly skilled interpreter. Further, if the statement is quite technical, or designed to pass on knowledge, the gap between original intention and the relayed message is wider still.

This point can be made more strongly in a short lecture using interpretation. As academics, many of us have been to conferences in English, and witnessed a presenter trying to cram in too much information in 15 minutes. They give a long introduction for 10, then race through the slides without explaining them, simply say that they can’t explain what they hoped, or keep going until the chair insists they stop. You don’t really get a good sense of the key arguments.

In another language, you have to reduce your time to less than half, to speak slowly and account for translation (simultaneous translation is quicker, but you still have to speak very slowly). You have to minimise the jargon (and the idioms) to allow effective translation. Or, you need to find the time to explain each specialist word. For example, while I would often provide an 8000 word paper to accompany a lecture/ workshop, this one is 1500. There is no visible theory, although theory tends to underpin what you focus on and how you explain it. It took 40 minutes to present, largely because I left a lot of topics for Q&A. I still had a hard time explaining some things. I predicted some (such as the difference between ‘federalism’ and ‘federacy’, and the meaning of ‘poll tax’ and ‘bedroom tax’) but realised, late on, that I’d struggle to explain others (such as ‘fracking’, or the unconventional drilling used to access and extract shale gas).

This sort of exercise is fantastically useful, to force you to think about the essential points in an argument, keep it short without referring to shorthand jargon, and explain them without assuming much prior knowledge in the audience, in the knowledge that things will just mean different things to different audiences. It is a skill like any other, and it forces on you a sense of discipline (one might develop a comparable skill when explaining complex issues to pre-University students).

Indeed, I have now done it so much, alongside writing short blog posts, that I find it hard to go back from Tokyo to jargon city. Each time I read something dense (on, for example, ‘meta-governance’), I ask myself if I could explain it to an audience whose first language is not English. If not, I wonder how useful it is, or if it is ever translated outside of a very small group.

This is increasingly important in the field of policy theory, when we consider the use of theories, developed in English and applied to places such as the US and UK, and applied to countries around the globe (see Using Traditional Policy Theories and Concepts in Untraditional Ways). If you can’t explain them well, how can you work out if the same basic concepts are being used to explain things in different countries?

Further, we don’t know, until we listen to our audience, what they want to know and how they will understand what we say. Let me give you simple examples from my Hokkaido lecture. One panellist was a journalist from Okinawa. He used what I said to argue that we should learn from the Scots: to develop a national identity-based social movement, and to be like Adam Smith (persevering with a regional accent, and a specific view of the world, in the face of snobbishness and initial scepticism; note that I hadn’t mentioned Adam Smith). Another panellist, a journalist from Hokkaido, argued that the main lesson from Scotland is that you have to be tenacious; the Scots faced many obstacles to self-determination, but they persevered and saw the results, and still persevere despite the setback (for some) of the referendum result (I pointed out that ‘the 45%’ are not always described as tenacious!). Another contributor wondered why Thatcherism was so unpopular in Scotland when we can see that, for example, it couldn’t have saved Scottish manufacturing and was perhaps proved correct after not trying to do so. Others used the Scottish experience to highlight a similar sense of central government imposition or aloofness in Japan (from the perspective of the periphery).

In general, this problem of academic translation is difficult enough when you share a common language, but the need to translate, in two ways, brings it to the top of the agenda. In short, if we take the idea of translation seriously, it is not just about a technical process in which words are turned into a direct equivalent in another language and you expect the audience to be informed or do the work to become informed. It is about thinking again about what we think we know, and how much of that knowledge we can share with other people.


Filed under Academic innovation or navel gazing, Japan, public policy, Scottish independence, Scottish politics

Reviews of My Books

A review of Understanding Public Policy and Global Tobacco Control in Public Administration: Painter review of 2 Cairney books 2013

A review of Global Tobacco Control in Governance: Kurzer review of GTC in Governance 2014

A review of Understanding Public Policy from an early career academic:

Two reviews of Understanding Public Policy in Political Studies Review:

Richards review in PSR


Kihiko review in PSR

(they are both here)

From someone keeping it succinct and numeric:


Filed under Academic innovation or navel gazing

The Shami Chakrabarti Greeting Someone at a Political Studies Conference Rule

One of the few things funnier than Shami Chakrabarti’s speech (scroll down here) at the Political Studies Association annual dinner was the sight of a succession of men kissing her, politely but clumsily, on each cheek, as they received awards for excellent scholarship. Women received awards too, but they generally had the greeting down to a fine art. It raised, by far, the most important issue of the annual conference for me: how should I greet female colleagues? Men are easy. You shake their hands. In some cases, you get a bone cruncher, but that’s just physical rather than social discomfort. The same goes, almost always, for women I meet as colleagues. However, on a small number of occasions, we hug. I thought I had solved this problem by simply hugging the same people each time. As long as I know what we’re doing, I’d happily greet someone in any way they like. I’d even high 10 someone, up high and down low, and then the bit round the back, if I knew it was always going to be that way. Yet, things change: you sometimes miss your window to hug some people (awkward) and, on very rare occasions (at least for me) you hug someone spontaneously. It’s a fraught situation. So, drawing on historical institutionalism, I propose the Shami Chakrabarti Greeting Someone at a Political Studies Conference Rule, which comes in two parts:

1. The default rule should be handshakes all round, at a Dutch (not Swedish) level of strength and eye contact. Or, if you’re Scottish, it’s OK to say ‘alright?’ in a slightly too loud voice.

2. If, at a critical juncture, you’ve hugged in the past, the default should be that you hug each time you meet, for the rest of your lives. Back slap optional.


Filed under Academic innovation or navel gazing

Journal Article Acceptance(s) After 5 Rejections and 25 months

Update: the title is now less catchy but more accurate. See the italicised bits for the update. I have also added this poster:


You might have to be a glass-is-half-full kind of person to take something positive from this story of publication success after a long run of failure. After 18 months, 5 rejections, 4 substantial redrafts, 2-3 changes of journal direction, and minus 8000 words, we had it accepted (update: add another 7 months, and three substantial redrafts and additions, for the 2nd article acceptance).

It began with our submission to World Politics, which is a high status journal, in politics and international relations, with a high rejection rate, so this was a gamble. I thought we had done the double: produced something interesting to say about ‘evolutionary’ policymaking, building on work I began for Understanding Public Policy; and, produced a wealth of new information on global tobacco policy, built on work led by Donley Studlar and Hadii Mamudu, and informing Global Tobacco Control. So, HM and I put both together to produce this paper, submitted 11th September 2012:

World Politics Evolutionary Theory International Agreements 11September2012

It was rejected on the 4th December (not a bad turnaround). The rejection came with substantial reviewer comments – World Politics decision letter – which we used to revise the next version substantially. My impression, from this review, was that the combination of evolutionary theory and the case study was not working. In fact, I may have been pushing us into a position that I advise PhD students and early career researchers to avoid: a paper suggesting a new theoretical angle, reinforced by a single case. In my defence, I wasn’t proposing a new theory. Instead, I was trying to present the approach as a reflection of accumulated knowledge, in both theory and case.

Still, it wasn’t working, so we separated the two elements somewhat. I chopped about 3000 words of theory – something made easier by the fact that I had submitted (February 2012) a separate paper on evolutionary theory to Policy and Politics, which was reviewed (July 2012) and accepted after a minor revise-and-resubmit (23 October) then published early 2013 – ‘What is Evolutionary Theory and How Does it Inform Policy Studies?’ Policy and Politics, 41, 2, 279-98 Paywall Green

We hummed and hawed about policy journals before I made the mistake of sending it to Public Administration and Development, partly because we were focusing on contrasts in implementation based on the simple developed/ developing country distinction, partly because it was interdisciplinary, and partly because its description seemed really close to our topic.

Cairney Mamudu Evolution Tobacco Control PAD submission 5Feb2013

It was rejected without going to review, described by the editor as ‘out of scope’.

So, we sent it, almost immediately (21 Feb 2013), to Governance, which had been HM’s (more sensible) preference. Again, this is a high status political/ policy science journal with a high rejection rate, so we were still confident enough to take the usual gamble.

Anonymous Evolution Tobacco Control Governance submission 21Feb2013

It was rejected on 26th May after substantial review (which seemed more critical than the World Politics reviews, so we were no further forward) – governance rejection

We figured that we had to do two things based on the reviews: (1) strip out the discussion of evolutionary theory more and focus on the basic political science concepts (implementation, networks, agendas, etc.), shifting back the focus to the case study and evidence so far (particularly since I had now published an article separately on evolutionary theory); (2) be super-clear on key terms (leading/ laggard; developed/ developing) to anticipate future concerns, and clarify the narrative on the origins and role of the FCTC.

By this time, my University had made available some funds for Open Access, and I was keen to go this route, partly because OA seems good, and partly because I had recently co-authored an article in the OA journal Implementation Science and it was a very positive experience.

We chose Globalization and Health – based at the LSE, interdisciplinary, covering our topic and focus – and submitted on 12th September 2013. It was rejected on 29th October, which is a good turnaround, but the reviews were too brief to be useful – except it is still clear that our attempts to address the developed/ developing distinction are still needling our referee audience.

GH rejection letter GH referee 1 GH referee 2

Our solution was twofold: (1) to check with the editor of the next journal if there would be a problem with our approach, and (2) to get away from the developed/ developing sticking point by presenting an even more nuanced account, taking every opportunity to show that we weren’t providing naïve caricatures, and going super-conceptual to describe an ideal-type of a leading implementing country rather than identifying ‘leaders’ and ‘laggards’.

I emailed the editors of the Journal of Public Health Policy in November and got a good assurance on the developed/ developing point. The only problem is that the word limit is 4000, which is about one-third of the length of our original paper. Still, we revised the paper again.

By then, HM reckoned that Tobacco Control was a better fit, since they had begun to publish a series of papers on the ‘endgame’. We submitted there on the 20th December.

TCJ-Endgame_CoverLetter-14Dec2013 2 Cairney Mamudu Checklist cover letter 3 Cairney-Mamudu_22Dec2013

They rejected it on the 8th January 2014 without sending it to review – TC rejection

We sent it to the JPHP on the 10th January – 1 CAirney Mamuducover letter JPHP 10JAn14 2 CAirney Mamudu Submitted article JPHP 10JAn14

We got a revise and resubmit on the 17th February – a very decent turnaround indeed. We got the classic binary response: one thought it was great, and one thought it was mince – JPHP reviews 17.2.14

We resubmitted on March 13 – 1 cover and rebuttal letter 2 resubmitted JPHP    – and got the thumbs up by the 27th.

Update, November 2014. We submitted a much better paper on the same theme (more developed theoretical argument, more data, a better refined argument) to Public Administration (special issue on global public policy) in June. After two resubmissions (and, unusually, a referral to a member of the editorial board – to deal with comments made by the third reviewer), we had it accepted in November.

So what did we learn?

    1. It is natural to blame journals, editors and reviewers for these long, drawn out processes – but I need to take some responsibility for the journal choices and the quality of submissions.
    2. Even a rejection can give you useful material for a redraft, as long as it actually goes to review.
    3. It is worth persevering. This is a very unusual case of 5 rejections, but it seems fairly normal to get 1 or 2 before success. For a while, I went on a good run of acceptances-after-revision, then a run of acceptances after rejection. I have almost always published each paper by the end.
    4. I think the article is, in many ways, a far better paper than when it began – but it also changed so much that we reckon we can go back and submit some of the chopped material (the new data) elsewhere.
    5. Final lesson – you need a thick skin for this process, particularly when you get one or two cranky anonymous reviewers, and particularly when you go interdisciplinary and invite comment from people who often don’t respect your discipline.
    6. Final, final, updated lesson: don’t lose your confidence and settle for a second-best result. Our first acceptance was for an article that stripped away a lot of what was good in the original idea (partly to meet the 4000 word limit), and it was rewritten for a public health audience in a way that I don’t entirely like. The Public Administration article (9000 words) is the one I’ll send to people and be proud of. It was accepted more than two years after we first made the mistake of sending it to a different journal.





Filed under Academic innovation or navel gazing, Public health, tobacco policy