From the PhD Chat page
Giving conference papers is a key part of the PhD process, allowing you to: write up your ideas in a shorter format than the PhD; generate useful feedback; and meet people with similar interests. There is also an assumption in the PhD evaluation that a large part of the PhD thesis is publishable.
Realistically, this ‘publishable’ criterion is quite a low bar if you want to continue in the profession as a postdoctoral researcher or lecturer. The people considering proposals or job applications may be sitting with over 100 applications and may only glance quickly at each one before focusing on the publication record. In my recent experience, people weren’t being shortlisted for entry-level lectureships without 4 publications (usually in recognised peer-reviewed journals), and the successful candidate would often have more.
In other words, it is increasingly unlikely that people will get lectureships straight out of PhDs, and my own experience (PhD completed in 1999, followed by various temporary research and teaching contracts, then a first permanent lectureship in 2004) is now beginning to look like a normal, or even a small, gap between PhD and lectureship. Further, getting into post-doc research and teaching positions is very competitive, and you may already need a conference paper/ article record to get something immediately on completion of the PhD.
So how do you do it? Here are some questions that arose:
- What is the difference between a conference paper and an article? The simple answer is that the stakes are lower at the conference, and the paper is often the first of many drafts of an eventually accepted article. Generally, in my field, once your paper is accepted in principle onto a big annual conference, people don’t really monitor its quality – and many of the people that turn up to your panel will not have read it. However, a workshop is a different matter: you don’t want to annoy people when everyone is expected to read everyone’s papers. My advice in that case is to make the paper shorter and punchier, since many of your colleagues will be reading the whole set on the train/ plane to the workshop (the same goes, perhaps, for a panel discussant at an annual conference). You might even get a pat on the back, since people will have read at least one rotten effort from a more senior colleague. It also comes in handy later, when you have to meet the short word limits of major journals.
- Article writing is a skill that develops with practice – but is it part of the PhD training? In some fields, it is taken for granted that the data you produce will be used by your supervisor, perhaps within a research team, and that your name might go in the middle of a heavily co-authored paper. This seems less common in social science, and in my field you claim ownership of the material, often publish it on your own, and might end up learning by doing. If you are inexperienced, you may want to work with a more experienced colleague to help you through the often-tough process – but should it be your supervisor? I honestly don’t know the answer to that question. It is fraught with difficulty, since there is clearly an imbalance of power in the supervisor/student relationship. I know of supervisors who do it routinely, and sometimes it is bound to look like the supervisor is getting some free research. In my case, I simply offer the possibility of co-authorship to PhD students – suggesting that, if they want 4 articles from their PhD, I could contribute to two (perhaps with each of us taking the lead on one paper). It seems to me to be part of the training process and a way to help PhD students get a leg up. However, it is not for everyone and I wouldn’t push it too hard.
- Work on the hook, structure and coherence. We talked about how to get started, discussing the idea that you should just start writing what you know (to stop you worrying over every word) and edit later. I come across this advice quite a lot, but it’s not the advice that I tend to give. Instead, I recommend starting at the start: producing the 150-word abstract, and seeing if you can describe the whole thing in a short, concise way; then, producing the introduction to see if you can ‘hook’ people in with the opening rationale for the study (focusing on the theory or the case), articulate a research question in a concise way, and present a coherent structure in which you will lay out the argument. In my view, the danger with the ‘just write’ advice is that you end up with 12000 words before you try to work out how it all fits together in 8000. The advantage in starting from the start is that you become immediately aware of the need to present a short and punchy piece of work and describe it to people who don’t share your knowledge of the topic. For me, even taking a whole day to write the introduction is worth the effort, since the rest may only take a few days to draft (if you know exactly what you are doing). This is the same sort of advice that I’d give before doing a literature review: it will take half the time, and far fewer words, to explain something if you have a clear research question and a small number of objectives from the start.
- Quantity or quality? I also have my own views on this topic, and they are not shared by all of my colleagues. It’s not too controversial to say that skills develop with practice, but there is a big quality/ quantity question when you decide how many times to develop your skills. The ‘quality’ argument is that you should take your time to get the data and the argument right, even if it means a smaller number of outputs. Later in life, this approach will come in handy if you are submitted to the research assessment exercise (which will require 1-4 of your best outputs), but I’m not sure you’ll ever get to a point where you think that the submission is perfect (and then the reviewers will let you know it’s not). The ‘quantity’ arguments are maybe harder to justify: keep it simple and go for one-paper-one-argument; tie the same data to multiple research questions and relevant literatures; and, recognise that the quantity of publications on someone’s CV may have a bewitching effect (particularly if people haven’t read the articles and are not specialist enough to know the status of the publications). I also like the argument that quantity helps produce quality.
- Develop a thick skin – and know the meaning of ‘major revision’. In my experience, conferences are generally OK and people are generally polite and helpful (and often more so for PhD students), but presenting a paper can be a bad experience if you get some arsey colleague determined to make a point. To some extent, this is useful practice for when you submit something to a journal – at least when the journal uses anonymous peer review. It is possible that the comments will be scathing, or will appear to be scathing when you read them for the first time. It is difficult not to take reviews personally, but it can be done if you read them once (immediately after receiving the editor’s email) and then wait for a while and read them again. Or, better still, read the comments on someone else’s work (if they let you) and see if they seem so critical. In some cases, you will get a desk rejection or a post-review rejection, which is just a part of normal academic life. In others, you will be asked to make a major revision. This really just means that the revision is not minor (an extra sentence or two, and fix some typos – that decision is not common). It requires you to look at how the editor boils down the comments and construct a response which addresses most of the reviewers’ concerns. You don’t have to make all of the suggested changes, but you should say why you chose to respond in a particular way (e.g. if reviewers give you contradictory advice). This takes a lot of work, but it is now standard stuff – a substantial post-referee revision is something that you should plan to do each time.
- Choose the journal wisely. It’s an obvious point, but it is easier said than done. In my field, you might consider these factors: country (e.g. some US journals have a reputation for publishing more quantitative work); approach (e.g. some journals will expect a ‘critical’ approach or to make reference to well-established theories or concepts); history (in other words, look through some back issues to see if yours would fit, or if you would even read the articles); status (which is hard to gauge, but some journals have higher rejection rates and/ or it takes longer and more investment to get something in there); and, theory or case (for example, you can tie any case study to some general journals if you use an established theory, or just use a theory to help explain a case in a specialist journal).
See also: 3 Common Reasons Editors Reject Articles Without Review
Most of us are using qualitative methods, so some of our discussion of common themes won’t be as directly useful to everyone – but many points relate to many PhDs:
- The role of external ‘gatekeepers’ to your research. In some cases, a small number of people may control your access to information and useful people. You might have to build a relationship with them to get that information. In others, you may need to work hard to fill out forms and meet the rules of an organisation. I don’t think we had a case where that wasn’t a potential obstacle. The partial solution is to make sure that you are in a position to cope when some of it goes wrong, and to give yourself enough time to work out how to get past the gates. In some cases, the process gets easier with experience, but in my last project it took me over a year to set up some of the interviews.
- ‘Rapport’. We talked about how you might build up an understanding with an interviewee, but that there are trade-offs in each strategy. For example, you might be able to refer to a common background, such as gender, ethnicity or schooling (or, in Joanne McEvoy’s case, often the opposite) – but this might lead to your interviewee making the assumption that you know certain things, which means that they won’t explain them. For example, I knew someone from the US whose accent was often an advantage in UK interviews, since interviewees assumed they had to explain far more. There are also important ethical/ political questions about how detached you should be, to gain information, when participants make statements counter to your beliefs or they talk about issues that make them vulnerable: are you a detached scientific observer merely recording the exchange, a participant seeking to influence the exchange, and/or expected to engage in some way with your interviewee (discussed here or here)?
- The bad interview. Most people I have spoken to have had a bad interview: the interviewee has agreed to be interviewed, but says very little (of use) and appears defensive. It becomes easier to deal with these interviews (I think) when you have the experience to work out when to persevere and when to finish the interview quickly. If we are talking about 1 or 2 out of 30, it’s fine to get very little information from some people. We might also learn things from the experience, including: if they sneer at the questions, it might reveal what they think; if they are nervous, it may be because they are not senior enough in an organisation to be confident about giving answers on behalf of it; or, they may simply not want to talk while recorded. Who knows? It’s OK to leave an interview and not know what went wrong, as long as it represents an outlier.
- How many interviews is enough? If you seek a false sense of certainty, my answer is 30. If you want the standard answer, people will say ‘it depends’ (there is a great NCRM discussion here in which the answer can be 1). Specifically, it depends on what you are doing and your approach: some talk about the idea of ‘saturation’, when you are confident that any more interviews won’t get you any more information (or it won’t be worth the extra effort); others, in (say) elite research might discuss the idea of getting enough of a proportion of an identified population (it might be 40, and you might get access to 20). The answer ‘30’ is handy since the number of interviews may have a bewitching effect on people, but you should really reflect on what information you have gathered, how you can demonstrate an adequate amount, and how you relate it to the other kinds of information available to you.
- The idea that you are a ‘gatekeeper’ for other people. In discussions of science we often talk about conducting research in a way that makes it replicable: if someone followed your methods could they produce the same results? Yet, this does not mean that people actually do the replication – which can be rare or non-existent in some fields. In particular, case study research, in which you are piecing together disparate information from limited sources, is difficult to replicate – and people will generally take your results on trust. Similarly, with anonymised interviews, people generally have to trust that you conducted them and reproduced them faithfully. In some cases, you do that with reference to established methods (such as audio recorded interviews, transcribed in their entirety and analysed using something like NVivo). In others, you might take written notes and agree to keep everything hush hush (which tends to be my approach). In such cases, all I can recommend is that, when you present the information, you acknowledge that the outcome is not simply an ‘objective’ account that would likely be produced by someone else (I discuss this issue in relation to policy theories, methods and science in this article and post).
- Thinking about how you fit into the research. A related issue is about the need to reflect on why you are doing the research and what part you play in it. In some cases, this issue is right up front: for example, some feminist studies may have an ‘emancipatory’ aim and be tied up in the identity of the researcher. In some cases, the student may be researching something that relates to their identity, social background, or profession (in the case of people combining a PhD with employment). In others, the link is not obvious, but the issues are similar: there is a need to think about how your aims, assumptions and biases will affect the ways in which you gather and analyse information – and if you can demonstrate that you are anticipating any problems. In some cases, there may be an open process to consider the ethics of the research, when (for example) it involves using an aspect of one’s background to access information not available to others. In others, it is a more straightforward and brief process of reflection, just in case it comes up in the viva.
- The difficulty of saying what ‘mixed methods’ are. The chances are that, if you are doing a PhD now, you have been trained in various qualitative and quantitative methods. You may also have done a course in the philosophy underpinning methods. If you put those two kinds of training together, you may pause before providing the now-standard answer: ‘I am totally doing mixed methods’. Many examiners may not leave it at that. They might want to know how you can marry methods which, potentially, are underpinned by two very different ways of understanding the world. In my view, these problems can be overblown when people claim a particular method for their philosophy when, really, the methods are more flexible. For example, interviews can be used alongside other methods to generate meaning in interpretive research, or simply to generate information to give more depth to surveys. All you really need to do (in my view) is to provide a clear and defensible description of what methods you use and why.
Probably the most important issue is about how to anticipate problems and incorporate them within the research design. You may find it odd when, at the very beginning of your design process, your supervisor is asking you what you would do if all or part of the process goes wrong – but it is an essential question. Can you salvage a PhD if you end up with far less data than you anticipated? There is an important balance to be struck between being ambitious and realistic. I think it’s good to make ambitious plans to, say, engage with multiple and disparate literatures, interview 50 people or do a significant piece of survey work. It’s also good to consider the end result if, say, only 20 will speak to you. In most cases, I think a good supervisor would ask you to prepare to mitigate risk.
Let’s take a hypothetical example. You want to answer the ‘what is policy?’ question, and you set up three parts: (1) documentary/ textual analysis to identify what policymakers define as policy; (2) interviews with 30-50 practitioners to determine policy in practice; and, (3) a survey to explore policy outcomes from the perspective of service users. This is super-ambitious and, while task 1 is relatively straightforward (at least if the documents are in the public domain), a lot could go wrong with 2 (it takes ages to set up, conduct, transcribe and analyse interviews, and people may not be forthcoming) and 3 (it takes a long time to design and conduct, few may respond to the survey, or data analysis becomes unmanageable). In that case, a supervisor may advise you to focus on either 2 or 3 – or to accept that, when you engage in both, only one of them may work out. The big question is: if one of them goes wrong, can you still get a PhD from what you have? Or, from the beginning, should you put your efforts into fewer tasks? This is not an easy question to answer, but it is important to ask it at the beginning.