Please see the Policy Analysis in 750 words series overview before reading the summary. This post started off as 750 words before growing.
‘If the claims of philosophy to a special kind of knowledge can be shown to be without foundation, if they are at best dogmatic or else incoherent, then methodology is an empty and futile pursuit and its prescriptions are vacuous’ (Hindess, 1977: 4).
This book may seem like a weird addition to a series on policy analysis.
One important answer is that the status of research and the framing of the problem result from the exercise of power, rather than the objectivity of analysts and natural superiority of some forms of knowledge.
In other posts on ‘the politics of evidence based policymaking’, I describe some frustrations among many scientists that their views on a hierarchy of knowledge based on superior methods are not shared by many policymakers. These posts can satisfy different audiences: if you have a narrow view of what counts as good evidence, you can focus on the barriers between evidence and policy; if you have a broader view, you can wonder why those barriers seem higher for other forms of knowledge (e.g. Linda Tuhiwai Smith on the marginalisation of indigenous knowledge).
In this post, I encourage you to go a bit further down this path by asking how people accumulate knowledge in the first place. For example, see introductory accounts by Chalmers, entertaining debates involving Feyerabend, and Hindess’ book to explore your assumptions about how we know what we know.
My take-home point from these texts is that the only argument we can really describe convincingly is the argument that we are not accumulating knowledge!
The simple insight from Chalmers’ introduction is that inductive (observational) methods to generate knowledge are circular:
- we engage inductively to produce theory (to generalise from individual cases), but
- we use theory to engage in any induction, such as to decide what is important to study, and what observations are relevant/irrelevant, and why.
In other words, we need theories of the world to identify the small number of things to observe (to allow us to filter out an almost unlimited number of signals from our environments), but we need our observations to generate those theories!
Hindess shows that all claims to knowledge involve such circularity: we employ philosophy to identify the nature of the world (ontology) and how humans can generate valid knowledge of it (epistemology) to inform methodology, to state that scientific knowledge is only valid if it lives up to a prescribed method, then argue that the scientific knowledge validates the methodology and its underlying philosophy (1977: 3-22). If so, we are describing something that makes sense according to the rules and practices of its proponents, not an objective scientific method to help us accumulate knowledge.
Further, different social/professional groups support different forms of working knowledge that they value for different reasons (such as to establish ‘reliability’ or ‘meaning’). To do so, they invent frameworks to help them theorise the world, such as to describe the relationship between concepts (including key concepts such as cause and effect). These frameworks represent a useful language to communicate about our world, rather than simply existing independently of it and corresponding to it.
Hindess’ subsequent work explored the context in which we exercise power to establish the status of some forms of knowledge over others, to pursue political ends rather than simply the ‘objective’ goals of science. As described, it is as relevant now as it was then.
How do these ideas inform policy analysis?
Perhaps, by this stage, you are thinking: isn’t this a relativist argument, concluding that we should never assert the relative value of some forms of knowledge over others (like astronomy versus astrology)?
I don’t think so. Rather, it invites us to do two more sensible things:
- Accept that different approaches to knowledge may be ‘incommensurable’.
- They may not share ‘a common set of perceptions’ (or even a set of comparable questions) ‘which would allow scientists to choose between one paradigm and the other . . . there will be disputes between them that cannot all be settled by an appeal to the facts’ (Hindess, 1988: 74)
- If so, “there is no possibility of an extratheoretical court of appeal which can ‘validate’ the claims of one position against those of another” (Hindess, 1977: 226).
- Reject the sense of self-importance, and hubris, which often seems to accompany discussions of superior forms of knowledge. Don’t be dogmatic. Live by the maxim ‘don’t be an arse’. Reflect on the production, purpose, value, and limitations of our knowledge in different contexts (which Spiegelhalter does well).
On that basis, we can have honest discussions about why we should exercise power in a political system to favour some forms of knowledge over others in policy analysis, reflecting on:
- The relatively straightforward issue of internal consistency: is an approach coherent, and does it succeed on its own terms?
- For example, do its users share a clear language, pursue consistent aims with systematic methods, find ways to compare and reinforce the value of each other’s findings, while contributing to a thriving research agenda (as discussed in box 13.3 below)?
- Or, do they express their aims in other ways, such as to connect research to emancipation, or value respect for a community over the scientific study of that community?
- The less straightforward issue of overall consistency: how can we compare different forms of knowledge when they do not follow each other’s rules or standards?
- E.g. what if one approach is (said to be) more rigorous and the other more coherent?
- E.g. what if one produces more data but another produces more ownership?
In each case, the choice of criteria for comparison involves political choice (as part of a series of political choices), without the ability – described in relation to ‘cost benefit analysis’ – to translate all relevant factors into a single unit.
- The imperative to ‘synthesise’ knowledge.
Spiegelhalter provides a convincing description of the benefits of systematic review and ‘meta-analysis’ within a single, clearly defined, scientific approach containing high agreement on methods and standards for comparison.
However, this approach is not applicable directly to the review of multiple forms of knowledge.
So, what do people do?
- E.g. some systematic reviewers apply the standards of their own field to all others, which (a) tends to produce the argument that very little high-quality evidence exists because other people are doing it wrongly, and (b) perhaps exacerbates a tendency for policymakers to attach relatively low value to such evaluations.
- E.g. policy analysts are more likely to apply different criteria: is it available, understandable, ‘usable’, and policy relevant (e.g. see ‘knowledge management for policy’)?
Each approach is a political choice to include/exclude certain forms of knowledge according to professional norms or policymaking imperatives, not a technical process to identify the most objective information. If you are going to do it, you should at least be aware of what you are doing.