From the PhD Chat page
Most of us are using qualitative methods, so some of our discussion of common themes won’t be directly useful to everyone – but many of the points apply across PhDs:
- The role of external ‘gatekeepers’ to your research. In some cases, a small number of people may control your access to information and useful people, and you might have to build a relationship with them to get that information. In others, you may need to work hard to fill out forms and meet the rules of an organisation. I don’t think we had a case where this wasn’t a potential obstacle. The partial solution is to make sure that you are in a position to cope if some of it goes wrong, and to give yourself enough time to work out how to get past the gates. In some cases, the process gets easier with experience, but in my last project it took me over a year to set up some of the interviews.
- ‘Rapport’. We talked about how you might build up an understanding with an interviewee, but that there are trade-offs in each strategy. For example, you might be able to refer to a common background, such as gender, ethnicity or schooling (or, in Joanne McEvoy’s case, often the opposite) – but this might lead your interviewee to assume that you already know certain things, which means that they won’t explain them. For example, I knew someone from the US whose accent was often an advantage in UK interviews, since interviewees assumed they had to explain far more. There are also important ethical/political questions about how detached you should be, to gain information, when participants make statements counter to your beliefs or they talk about issues that make them vulnerable: are you a detached scientific observer merely recording the exchange, a participant seeking to influence the exchange, and/or expected to engage in some way with your interviewee (discussed here or here)?
- The bad interview. Most people I have spoken to have had a bad interview: the interviewee has agreed to be interviewed, but says very little (of use) and appears defensive. It becomes easier to deal with them (I think) when you have the experience to work out when to persevere and when to finish the interview quickly. If we are talking about 1 or 2 out of 30, it’s fine to get very little information from some people. We might also learn things from the experience, including: if they sneer at the questions, it might reveal what they think; if they are nervous, it may be because they are not senior enough in an organisation to be confident about giving answers on behalf of it; or, they may simply not want to talk while recorded. Who knows? It’s OK to leave an interview and not know what went wrong, as long as it represents an outlier.
- How many interviews is enough? If you seek a false sense of certainty, my answer is 30. If you want the standard answer, people will say ‘it depends’ (there is a great NCRM discussion here in which the answer can be 1). Specifically, it depends on what you are doing and your approach: some talk about the idea of ‘saturation’, when you are confident that any more interviews won’t get you any more information (or won’t be worth the extra effort); others, in (say) elite research, might discuss the idea of covering a sufficient proportion of an identified population (it might be 40, and you might get access to 20). The answer ‘30’ is handy since the number of interviews may have a bewitching effect on people, but you should really reflect on what information you have gathered, how you can demonstrate an adequate amount, and how you relate it to the other kinds of information available to you.
- The idea that you are a ‘gatekeeper’ for other people. In discussions of science we often talk about conducting research in a way that makes it replicable: if someone followed your methods, could they produce the same results? Yet, this does not mean that people actually do the replication – which can be rare or non-existent in some fields. In particular, case study research, in which you are piecing together disparate information from limited sources, is difficult to replicate – and people will generally take your results on trust. Similarly, with anonymised interviews, people generally have to trust that you conducted them and reproduced them faithfully. In some cases, you do that with reference to established methods (such as audio-recorded interviews, transcribed in their entirety and analysed using something like NVivo). In others, you might take written notes and agree to keep everything hush-hush (which tends to be my approach). In such cases, all I can recommend is that, when you present the information, you acknowledge that the outcome is not simply an ‘objective’ account that would likely be produced by someone else (I discuss this issue in relation to policy theories, methods and science in this article and post).
- Thinking about how you fit into the research. A related issue is the need to reflect on why you are doing the research and what part you play in it. In some cases, this issue is right up front: for example, some feminist studies may have an ‘emancipatory’ aim and be tied up in the identity of the researcher. In some cases, the student may be researching something that relates to their identity, social background, or profession (in the case of people combining a PhD with employment). In others, the link is not obvious, but the issues are similar: there is a need to think about how your aims, assumptions and biases will affect the ways in which you gather and analyse information – and whether you can demonstrate that you are anticipating any problems. In some cases, there may be an open process to consider the ethics of the research, when (for example) it involves using an aspect of your background to access information not available to others. In others, it is a more straightforward and brief process of reflection, just in case it comes up in the viva.
- The difficulty of saying what ‘mixed methods’ are. The chances are that, if you are doing a PhD now, you have been trained in various qualitative and quantitative methods. You may also have done a course in the philosophy underpinning methods. If you put those two kinds of training together, you may pause before providing the now-standard answer: ‘I am totally doing mixed methods’. Many examiners will not leave it at that. They might want to know how you can marry methods which, potentially, are underpinned by two very different ways of understanding the world. In my view, these problems can be overblown when people claim a particular method for their philosophy when, really, the methods are more flexible. For example, interviews can be used alongside other methods to generate meaning in interpretive research, or simply to generate information to give more depth to surveys. All you really need to do (in my view) is to provide a clear and defensible description of what methods you use and why.