Potted History of Needs Assessment and Barnett

This is an excerpt from a forthcoming book (Cairney, P. and McGarvey, N. (2013) Scottish Politics, 2nd ed (Basingstoke: Palgrave)), used to answer a question (about the Needs Assessment exercise conducted by the Treasury in the 1970s) posed by Peter Matthews @urbaneprofessor in a recent Twitter conversation (I think the 1st ed. has a longer discussion).  The real knowledge on this is held by David Heald http://www.davidheald.com/

….

“Life before Barnett

The modern history of funding settlements demonstrates the incremental and almost accidental side of Scottish politics. This began in 1888 with the Goschen formula, named after the then Chancellor of the Exchequer. The formula is a by-product of the attempts by Goschen to link local revenue to local spending and separate it from funding designated for Imperial finance. Although this overall project failed, the formula itself lasted over 70 years as a means of determining Scottish entitlement from the UK exchequer (Mitchell and Bell, 2002). The figure of 11/80 of England and Wales was a rough estimate of Scotland’s population share at that time, based loosely on Scotland’s contribution to probate duties (taxes levied on the estate of the deceased), but was never recalculated to take Scotland’s (relative) falling population into account.

As the size of the UK state grew, so did the size of the Scottish Office, with the Goschen formula more or less at the heart of its budget settlement. Indeed, although the formula was not used formally from 1959, the culture of accepting Scotland’s existing share as a starting point and adjusting at the margins became well-established. Therefore, what began as a formula which initially advantaged per capita spending in England and received minimal Scottish support, eventually became a system redistributing money to Scotland as its share of the UK population fell (McLean and McMillan, 2003: 50).

The long-term use of the Goschen formula reinforces the idea of incrementalism and inertia in politics: the existing or default position is difficult to shift. Fundamental change is expensive and likely to undermine a well-established negotiated settlement between competing interests. While the Goschen formula is not something that would have been chosen from scratch by a comprehensively rational decision-maker or a more open process of decision-making, as a default position it was difficult to challenge. We may then ask why this process was eventually replaced. The answer is that a ‘window of opportunity’ (see Kingdon, 1984) came in the 1970s with the prospect of political devolution which drew attention to Scotland’s share of public expenditure.

Barnett and needs assessment 

The high level of UK attention to Scotland’s financial status (particularly among English MPs representing constituencies with ‘comparable needs’) was such that it prompted governmental action. The ‘window of opportunity’ was opened by the prospect of a referendum on devolution. This contributed to the ‘reframing’ of the policy problem – from a technical process to ensure Scotland’s share of resources to a political process providing advantage to Scotland. The Treasury’s response was to commission a Needs Assessment Study to establish the share that each UK territory was ‘entitled’ to (based on indicators of need such as proportions of schoolchildren and older people and population sparsity). This would be used in negotiations with the newly-formed Scottish Assembly, perhaps allowing the issue to return, eventually, to its low-salience status (although Barnett himself disputes this motivation – see Twigger, 1998: 8).

In retrospect we may say that the needs-assessment exercise was doomed to failure (in that it was not officially adopted) for three reasons. First, there is no common definition or consensus on the concept of need. More money spent on one ‘need’ means less on another; it is a political issue involving winners and losers, not a technical issue in which everyone’s problems can be solved. Second, there were problems with the quality of information and its implications.  For example, even when ‘objective factors’ (e.g. population sparsity or age) were taken into account it was never clear if any extra spending would refer to inputs (e.g. number of doctors), outputs (number of operations) or outcomes (equality in levels of health). Third, the outcomes from a needs assessment will always require a political decision which takes into account not only the ‘facts’ but also factors such as the public reaction.  The report itself represented only one aspect of that process. In particular, while the Treasury report in 1979 suggested that Scotland’s greater need was 16% (when at that time the level of extra spending was 22%) there was no rush to close this perceived gap.

Instead, the Barnett formula was introduced on an interim basis. Then, following the negative referendum vote, the needs-assessment agenda was dropped. The Treasury was not inclined to impose a system with little more benefit than the Barnett formula in the immediate aftermath of a referendum process seen by many in Scotland as an attempt by the UK Government to thwart home rule. Effectively, the end result was the replacement of the Goschen formula with a very similar Barnett formula.

This formula remains in place today in large part because the existing process has several political advantages. First, it satisfies broad coalitions in Scotland and England. In Scotland, it maintains (at least in the short term) historic levels of spending. In England, the ‘Barnett squeeze’ gives the impression that, over time, this advantage will be eroded. Second, it satisfies many governmental interests. For the Scottish Government it traditionally provided a guaranteed baseline and a chance to negotiate extra funding. It allows Scottish control over domestic spending, with limited Treasury interference. For the Treasury, it provides an automatic mechanism to calculate territorial shares which represent a small part of its overall budget.

The adoption of the formula therefore represented successful agenda-setting – establishing the principle in fairly secret negotiations and then revealing the details only when the annual process could be presented as a humdrum and automatic process (allocating funding at the margins) which was efficient and had support from all sides within government. Indeed, the level of implicit support for Barnett was so high that there was no serious, sustained challenge to this formula either before or after political devolution in 1999 (perhaps aided by the perception that the Barnett ‘squeeze’ was working – Cairney, 2011a: 208). In fact, the value of Barnett has been reinforced since 1999; the trend is towards determining a greater proportion of Scottish Government spend from this process.”


Standing on the Shoulders of Giants?

My new year’s resolution was to write a blog entry for every academic article I publish from 2013, since the article may be behind a paywall (although if you contact me, I will see you right) and the article’s ideas may be expressed in a relatively inaccessible way (although we don’t all spew jargon-filled group-closure nonsense).  The aim is to get people interested enough to go from a short tweet to a larger blog to the high bar of reading (or the holy grail of citing) the article itself.
This article is called ‘Standing on the Shoulders of Giants’ because I wanted to give the impression that we are discussing the accumulation of scientific knowledge; our aim is to build on the insights and knowledge produced by others rather than start from scratch each time.  As stated, this is fairly uncontroversial and we might find that most people can get behind the project (in fact, they are already doing so, implicitly or explicitly).  The more problematic and debatable part of this task relates to the details: *how* do we do it?
The article focuses on this task in the policy literature, but the themes extend to political, social and, in most cases, the so-called ‘hard’ sciences.  In fact, for many of us, it may be reminiscent of postgraduate discussions of the philosophy of science, in which we consider the inadequacy of most explanations of how knowledge is accumulated (from the ‘strawman’ of inductivism to the often-caricatured position of Popper (on falsification), to the idea of paradigm shift made famous by Kuhn and the rather-misleading ‘anything goes’ description of the approach by Feyerabend – a discussion captured neatly by Chalmers).  Many of us will have concluded two things: (1) we believe that we are in the business of accumulating knowledge/ we know much more about the world now than we did in the past, and we have acted accordingly; but, (2) we have no idea *how* that has happened because all of the explanations of knowledge accumulation are problematic, while some suggest that one body of knowledge *replaces* another rather than building on it.
In that broad context, the article (a) outlines three main ways in which scholars address this issue in policy studies and political science; and, (b) highlights the problems that may arise in each case:

1. Synthesis – we combine the insights of multiple theories, concepts or models to produce a single theory (in fact, the article discusses the difference between ‘synthesis’ and ‘super-synthesis’, but I don’t want to undermine my “we don’t all spew jargon-filled group-closure nonsense”).  One key problem is that when we produce a synthetic theory, from a range of other theories or concepts, we have to assume that the component parts of this new hybrid are consistent with each other. Yet, if you scratch the surface of many concepts – such as ‘new institutionalism’ or ‘policy networks’ – you find all sorts of disagreement about the nature of the world, how our concepts relate to it and how we gather knowledge of it. There are also practical problems regarding our assumption that the authors of these concepts have the same thing in mind when they describe things like ‘punctuated equilibrium’.  In other words, imagine that you have constructed a new theory based on the wisdom of five other people.  Then, get those people in the same room and you will find that they have all sorts of – often intractable – disagreements with each other.  In that scenario, could you honestly state that your theory was based on accumulated knowledge?

2. The ‘Complementary’ Approach. In this case, you accept that people have these differences and so you accommodate them – you entertain a range of theories/ concepts and explore the extent to which they explain the same thing in different ways.  This is a popular approach associated with people like Allison (who compared three different explanations of the Cuban missile crisis) and used by several others to compare policy events.  One key problem with this approach is that it is difficult to do full justice to each theory.  Most theories have associated methods which are labour intensive and costly, putting few in the position to make meaningful comparisons.  Instead, the comparisons tend to be desktop exercises based on a case study and the authors’ ability to consider how each theory would explain it.

3. The ‘Contradictory’ Approach.  In that context, another option is to encourage the independence of such theories. You watch as different research teams produce their own studies and you simply try to find some way to compare and combine their insights.  Of course, it is impossible to entertain an infinite number of theories, so we also need some way to compare them; to select some and reject others.  This is the approach that we may be most familiar with, since it involves coming up with a set of rules or criteria to make sure that each theory can be accepted (at least initially) by the scientific community.  You may see such rules described as follows:

  • A theory’s methods should be explained so that they can be replicated by others.
  • Its concepts should be clearly defined, logically consistent, and give rise to empirically falsifiable hypotheses.
  • Its propositions should be as general as possible.
  • It should set out clearly what the causal processes are.
  • It should be subject to empirical testing and revision.
For me, this is where the task becomes very interesting because, on the one hand, most of us will find these aims to be intuitively appealing – but, on the other, they are incredibly problematic for the following reasons:
  • Few, if any, theories or research projects live up to these expectations.
  • The principles give a misleading impression of most (social?) scientific research which is largely built on trust rather than constant replication by others.
  • Many of the most famous proponents of this approach do something a bit different – such as when they subject their ‘secondary hypotheses’ to rigorous testing but insulate their ‘hard core’ from falsification.
  • The study of complex phenomena may not allow us to falsify, since we can interpret our findings in very different ways.
  • Few theories are currently popular simply because they adhere to these principles.  In fact, science is much more of a social enterprise than the principles suggest.
Of course, by now you may have identified a key problem with this argument: it is all beginning to sound a bit ‘postpositivist’ (which, in my mind, is still more of a term of abuse than ‘you, my friend, are a positivist’).  However, it does not need to be taken this way.  It is OK to highlight problems with scientific principles and admit that science is about the methods and beliefs accepted by a particular scientific community because, if you like, you can still assert that those principles and beliefs are *correct*. Many, many people do.  In fact, perhaps we all do it, because we have to find a way to accept some theories, approaches and evidence and reject others.  We seek a way to produce some knowledge ourselves and find a common language and set of principles to make sure that we can compare our knowledge with the knowledge of others.  We seek a way to sift through an almost infinite number of ‘signals’ from our environment, to pay attention to very few and ignore most.  That task requires rules which are problematic but necessary.
All I suggest we do (which is a bit of a bland recommendation) is to reject the unthinking and too-rigid application of rules that hold us all up to a standard that no-one will meet.  Rather, people in different disciplines might discuss and negotiate those rules with each other. This is more of an art than a science.
I also argue that (a) if we are serious about these rules, and the need to submit theories and evidence to rigorous testing; but (b) we accept that most of this is done on trust rather than replication; then (c) we should take on some of that burden ourselves by subjecting our own evidence to a form of testing, in which we consider the extent to which our findings can be interpreted in different, and equally plausible, ways.  The article talks about producing different ‘narratives’ of the same evidence, but I won’t talk about that too much in case you confuse me with the presenter of Jackanory.
Full reference: Cairney, P. (2013) ‘Standing on the Shoulders of Giants: How Do We Combine the Insights of Multiple Theories in Public Policy Studies?’ Policy Studies Journal, 41, 1, 1-21
It is often free here – http://onlinelibrary.wiley.com/doi/10.1111/psj.12000/abstract – or we can both jump through some green access hoops and you’ll get it eventually here https://dspace.stir.ac.uk/handle/1893/15902?mode=full
