Deirdre Niamh Duffy (2017) Evaluation and Governing in the 21st Century (Palgrave Pivot)
Duffy’s new book engages with the many uses and abuses of evaluation in UK politics. This variation in evaluation practices relates partly to the vagueness of the term. Put simply, evaluation is about measuring the success of public policies. However, such a simple definition is so broad that it can mean everything and therefore nothing.
In that context, Duffy compares the many ways in which people could, do, and should use evaluation to inform policy and policymaking.
In terms of what evaluation could mean, Duffy identifies common definitions of evaluation as:
- ‘assessment of value/merit’, involving some combination of the values of the people involved and the methods they use to gather evidence of success
- ‘a realist(ic) science’, focusing primarily on the allegedly appropriate use of scientific methods to measure success
- ‘actionable science’, using measures of success to help improve policy.
In terms of what UK governments actually do, Duffy identifies key phases including:
- A pre-New Labour reluctance among governments to allow outside actors to opine on the success of their policies, combined with a relative lack of sophisticated methodological tools to do so.
- A New Labour era (from 1997), in which ministers were keen to stress a reliance on evidence-based policymaking: focusing on ‘what works’ when identifying promising policies, and evaluating their success primarily with reference to technical scientific measures rather than, say, their ideological underpinnings.
One can infer from Duffy’s analysis that there is something to be said for the relative honesty of the pre-New Labour position, in which governments pretty much told people where to go and staked their claim as the arbiters of their own success.
In contrast, New Labour had a tendency to use the language of evidence to try to depoliticise issues when there should have been more debate on values and the rationale for policies. It also used this approach to put pressure on delivery organisations – and, by extension, the recipients of policy measures – to do what it wanted. In particular, Duffy focuses on evaluation as the setting of benchmarks, and the use of league tables based on proxies of success, to put major pressure on organisations not doing so well. In part, the government was able to do so by encouraging fear and shame among the actors leading or working for organisations delivering public services. In this context, ‘evaluation becomes less about EBPM and more about influencing and manipulating behaviour’ (p147).
In terms of what a government should do, Duffy focuses on the potential for evaluation practices to produce positive transformations in policy and practice: ‘evaluation can be reclaimed as part of a transformative project using critical theory’ (p148). Since such transformation would be encouraged via open discussion without a fixed agenda, and without a focus on one best way to use methods to evaluate, it must remain a very general aspiration. Consequently:
> For some – particularly those seeking an instrumental guidebook on how to do ‘good evaluation’ – this may seem highly problematic. However, from a critical sociological perspective, it is only through remaining open to potential, as yet unknown emergent transformations that the disciplinary and controlling governing effects of knowledge production processes can be unsettled.
As such, Duffy’s book stands out as a critical theoretical take on the role of evidence in policymaking and evaluation. I commend it to people who want to broaden their horizons.