Monthly Archives: August 2016

The Scottish Parliament would be a bit less crap in an independent Scotland and some people care

See also: The Scottish Parliament would be crap in an independent Scotland and almost no-one cares

The Scottish Government recently amended the Scottish Ministerial Code to restrict the role of MSPs who serve as ‘Parliamentary Liaison Officers’ (PLOs) in the Scottish Parliament. PLOs are not members of the Scottish Government, but they work closely with ministers and sit on committees scrutinising ministers, which blurs the boundary between policymaking and scrutiny.

While previous Labour-led governments made a decent effort to deny that this is a problem (1999-2007), the SNP (from 2007) perfected that denial by allowing PLOs to sit on the very committees scrutinising their ministers.

Now, after some pressure from (social and traditional) media and opposition parties, the revised guidelines in the 2016 Scottish Ministerial Code remove a large part of the problem:

PLOs may serve on Parliamentary Committees, but they should not serve on Committees with a substantial direct link to their Cabinet Secretary’s portfolio … At the beginning of each Parliamentary session, or when changes to PLO appointments are made, the Minister for Parliamentary Business will advise Parliament which MSPs have been appointed as PLOs. The Minister for Parliamentary Business will also ensure that PLO appointments are brought to the attention of Committee Conveners. PLOs should ensure that they declare their appointment as a PLO on the first occasion they are participating in Parliamentary business related to the portfolio of their Cabinet Secretary.

The only thing that (I think) remains missing is the stipulation in the 2003 code that PLOs ‘should not table oral Parliamentary Questions on issues for which their minister is responsible’. So, we should still expect the odd question along the lines of, ‘Minister, why are you so great?’.

PLOs in 2016 ministerial code


Filed under Scottish politics

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

Imagine this as your ‘early intervention’ policy choice: (a) a universal and non-stigmatising programme for all parents/ children, with minimal evidence of effectiveness, high cost, and potential public opposition about the state intervening in family life; or (b) a targeted, stigmatising programme for a small number, with more evidence, less cost, but the sense that you are not really intervening ‘early’ (instead, you are waiting for problems to arise before you intervene). What would you do, and how would you sell your choice to the public?

I ask this question because ‘early intervention’ seems to be the classic valence issue with a twist. Most people seem to want it in the abstract: isn’t it best to intervene as early as possible in a child’s life to protect them or improve their life chances?

However, profound problems or controversies arise when governments try to pursue it. There are many more choices than I presented, but the same basic trade-offs arise in each case. So, at the start, it looks like you have lucked onto a policy that almost everyone loves. At the end, you realise that you can’t win. There is no such thing as a valence issue at the point of policy choice and delivery.

To expand on these dilemmas in more depth, I compare cases of Scottish and UK Government ‘families policies’. In previous posts, I portrayed their differences – at least in the field of prevention and early intervention policies – as more difficult to pin down than you might think. Often, they either say the same things but ‘operationalise’ them in very different ways, or describe very different problems then select very similar solutions.

This basic description sums up very similar waves of key ‘families policies’ since devolution: an initial focus on social inclusion, then anti-social behaviour, followed by a contemporary focus on ‘whole family’ approaches and early intervention. I will show how they often go their own ways, but note the same basic context for choice, and similar choices, which help qualify that picture.

Early intervention & prevention policies are valence issues …

A valence (or ‘motherhood and apple pie’) issue is one in which you can generate huge support because the aim seems, to most people, to be obviously good. Broad aims include ‘freedom’ and ‘democracy’. In the UK, specific aims include a national health service free at the point of use. We often focus on valence issues to highlight the importance of a political party’s or leader’s image of governing competence: it is not so much what you want (when the main parties support very similar things), but who you trust to get it.

Early intervention seems to fit the bill: who would want to intervene late, or too late, in someone’s life when you can intervene early, to boost their life chances at as early a stage as possible? All we have to do is work out how to do it well, with reference to some good evidence. Yet, as I discuss below, things get complicated as soon as we consider the types of early intervention available, generally described, roughly, as a spectrum from primary (stop a problem occurring, with a focus on the whole population – like a virus inoculation), to secondary (address a problem at an early stage, using proxy indicators to identify high-risk groups), and tertiary (stop a problem getting worse in already affected groups).

Similarly, look at how Emily St Denny and I describe prevention policy. Would many people object to the basic principles?

“In the name of prevention, the UK and Scottish Governments propose to radically change policy and policymaking across the whole of government. Their deceptively simple definition of ‘prevention policy’ is: a major shift in resources, from the delivery of reactive public services to solve acute problems, to the prevention of those problems before they occur. The results they promise are transformative, to address three crises in politics simultaneously: a major reduction in socioeconomic inequalities by focusing on their ‘root causes’; a solution to unsustainable public spending which is pushing public services to breaking point; and, new forms of localised policymaking, built on community and service user engagement, to restore trust in politics”.

… but the evidence on their effectiveness is inconvenient …

A good simple rule about ‘evidence-based policymaking’ is that there is never a ‘magic bullet’ to tell you what to do or take the place of judgement. Politics is about making choices which benefit some people while others lose out. You can use evidence to help clarify those choices, but not produce a ‘technical’ solution. A further rule with ‘wicked’ problems is that the evidence is not good enough even to generate clarity about the cause of the problem. Or, you simply find out things you don’t want to hear.

Early intervention seems to be a good candidate for the latter, for three main reasons:

  1. Very few interventions live up to high evidence standards

There are two main types of relevant ‘evidence based’ interventions in this field. The first are ‘family intervention projects’ (FIPs). They generally focus on low income, often lone parent, families at risk of eviction linked to factors such as antisocial behaviour, and provide two forms of intervention: intensive 24/7 support, including after school clubs for children and parenting skills classes, and treatment for addiction or depression in some cases, in dedicated core accommodation with strict rules on access and behaviour; and an outreach model of support and training. The evidence of success comes from evaluation and a counterfactual: this intervention is expensive, but we think that it would have cost far more money and heartache if we had not intervened to prevent (for example) family homelessness. There is generally no randomised control trial (RCT) to establish the cause of improved outcomes, or demonstrate that those outcomes would not have happened without an intervention of this sort.

The second are projects imported from other countries (primarily the US and Australia) based on their reputation for success. This reputation has been generated according to evidential rules associated with ‘evidence based medicine’ (EBM), in which there is relatively strong adherence to a hierarchy of evidence, with RCTs and their systematic review at the top, and the belief that there should be ‘fidelity’ to programmes to make sure that the ‘dosage’ of the intervention is delivered properly and its effect measured. Key examples include the Family Nurse Partnership (although its first UK RCT evaluation was not promising), Triple P (although James Coyne has his doubts!), and Incredible Years (but note the importance of ‘indicated’ versus ‘selective’ programmes, below). In this approach, there may be more quantitative evidence of success, but it is still difficult to know if the project can be transferred effectively and if its success can be replicated in another country with very different political drivers, problems, and levels of existing services. We know that some interventions are associated with positive outcomes, but we struggle to establish definitively that they caused them (solely, separate from their context).

  2. The evidence on ‘scaling up’ for primary prevention is relatively weak

Kenneth Dodge (2009) sums up a general problem with primary prevention in this field. It is difficult to see much evidence of success because: there are few examples of taking effective specialist projects ‘to scale’; there are major issues around ‘fidelity’ to the original project when you scale up (including the need to oversee a major expansion in well-trained practitioners); and, it is difficult to predict the effect of a programme, which showed promise when applied to one population, to a new and different population.

  3. The evidence on secondary early intervention is also weak

This point about different populations with different motivations is demonstrated in a more recent (published 2014) study by Stephen Scott et al of two Incredible Years interventions – to address ‘oppositional defiant disorder symptoms and antisocial personality character traits’ in children aged 3-7 (for a wider discussion of such programmes see the Early Intervention Foundation’s Foundations for life: what works to support parent child interaction in the early years?).

They highlight a classic dilemma in early intervention: the evidence of effectiveness is only clear when children have been clinically referred (‘indicated approach’), but unclear when children have been identified as high risk using socioeconomic predictors (‘selective approach’):

An indicated approach is simpler to administer, as there are fewer children with severe problems, they are easier to identify, and their parents are usually prepared to engage in treatment; however, the problems may already be too entrenched to treat. In contrast, a selective approach targets milder cases, but because problems are less established, whole populations have to be screened and fewer cases will go on to develop serious problems.

For our purposes, this may represent the most inconvenient form of evidence on early intervention: you can intervene early on the back of very limited evidence of likely success, or have a far higher likelihood of success when you intervene later, when you are running out of time to call it ‘early intervention’.

… so governments have to make and defend highly ‘political’ choices …

I think this is the key context in which we can try to understand the often-different choices of the UK and Scottish Governments. Faced with the same broad aim, to intervene early to prevent poor outcomes, the same uncertainty and lack of evidence that their interventions will produce the desired effect, and the same need to DO SOMETHING rather than wait for the evidence that may never arise, what do they do?

Both governments often did remarkably similar things before they did different things

From the late 1990s, both governments placed primary emphasis initially on a positive social inclusion agenda, followed by a relatively negative focus on anti-social behaviour (ASB), before a renewed focus on the social determinants of inequalities and the use of early intervention to prevent poor outcomes.

Both governments link families policies strongly to parenting skills, reinforcing the idea that parents are primarily responsible for the life chances of their children.

Both governments talk about getting away from deficit models of intervention (the Scottish Government in particular focuses on the ‘assets’ of individuals, families, and communities) but use deficit-model proxies to identify families in need of support, including: lone parenthood, debt problems, ill health (including disability and depression), and at least one member subject to domestic abuse or intergenerational violence, as well as professional judgements on the ‘chaotic’ or ‘dysfunctional’ nature of family life and on the likelihood of ‘family breakdown’ when, for example, a child is taken into care.

So, when we consider their headline-grabbing differences, note this common set of problems and drivers, and similar responses.

… and selling their early intervention choices is remarkably difficult …

Although our starting point was valence politics, prevention and early intervention policies are incredibly hard to get off the ground. As Emily St Denny and I describe elsewhere, when policymakers ‘make a sincere commitment to prevention, they do not know what it means or appreciate the scale of their task. They soon find a set of policymaking constraints that will always be present. When they ‘operationalise’ prevention, they face several fundamental problems, including: the identification of ‘wicked’ problems which are difficult to define and seem impossible to solve; inescapable choices on how far they should go to redistribute income, distribute public resources, and intervene in people’s lives; major competition from more salient policy aims which prompt them to maintain existing public services; and, a democratic system which limits their ability to reform the ways in which they make policy. These problems may never be overcome. More importantly, policymakers soon think that their task is impossible. Therefore, there is high potential for an initial period of enthusiasm and activity to be replaced by disenchantment and inactivity, and for this cycle to be repeated without resolution’.

These constraints refer to the broad idea of prevention policy, while specific policies can involve different drivers and constraints. With general prevention policy, it is difficult to know what government policy is and how you measure its success. ‘Prevention’ is vague, plus governments encourage local discretion to adapt the evidence of ‘what works’ to local circumstances.

Governments don’t get away with this regarding specific policies. Instead, Westminster politics is built on a simple idea of accountability in which you know who is in charge and therefore to blame. UK central governments have to maintain some semblance of control because they know that people will try to hold them to account in elections and general debate. This ‘top down’ perspective has an enduring effect, particularly in the UK, but also the Scottish, government.

… so the UK Government goes for it and faces the consequences …

‘Troubled Families’ in England: the massive expansion of secondary prevention?

So, although prevention policy is vague, individual programmes such as ‘troubled families’ contain enough detail to generate intense debate on central government policy and performance and contain elements which emphasise high central direction, including sustained ministerial commitment, a determination to demonstrate early success to justify a further rollout of policy, and performance management geared towards specific measurable short term outcomes – even if the broader aim is to encourage local discretion and successful long term outcomes.

In the absence of unequivocally supportive evidence (which may never appear), the UK government relied on a crisis (the London riots in 2011) to sell policy, and ridiculous processes of estimation of the size of the problem and performance measurement to sell the success of its solution. In this system, ministers perceive the need to display strength, show certainty that they have correctly diagnosed a problem and its solution, and claim success using the ‘currency’ of Westminster politics – and to do these things far more quickly than the people gathering evidence of more substantive success. There is a lot of criticism of the programme in terms of its lack, or cynical use, of evidence but little of it considers policy from an elected government’s perspective.

…while the Scottish Government is more careful, but faces unintended consequences

This particular UK Government response has no parallel in Scotland. The UK Government is far more likely than its Scottish counterpart to link families policies to a moral agenda in response to crisis, and there is no Scottish Government equivalent to ‘payment by results’ and massive programme expansion. Instead, the Scottish Government continued more modest roll-outs in partnership with local public bodies. Indeed, if we ‘zoom in’ to this one example, at this point in time, the comparison confirms the idea of a ‘Scottish Approach’ to policy and policymaking.

Yet, the Scottish Government has not solved the problems I describe in this post: it has not found an alternative ‘evidence based’ way to ‘scale up’ early intervention significantly and move from secondary/ tertiary forms of prevention to the more universal/ primary initiatives that you might associate intuitively with prevention policy.

Instead, its different experiences have highlighted different issues. For example, its key vehicle for early intervention and prevention is the ‘collaborative’ approach, such as in the Early Years Collaborative. Possibly, it represents the opposite of the UK’s attempt to centralise and performance-manage-the-hell-out-of the direction of major expansion.

Table 1: Three ideal types of ‘evidence-based best practice’ (EBBP)

Certainly, with this approach, your main aim is not to generate evidence of the success of interventions – at least not in the way we associate with ‘evidence based medicine’, randomised control trials, and the star ratings developed by the Early Intervention Foundation. Rather, the aim is to train local practitioners to use existing evidence and adapt it to local circumstances, experimenting as you go, and gathering/using data on progress in ways not associated with, for example, the family nurse partnership.

So, in terms of the discussion so far, perhaps its main advantage is that a government does not have to sell its political choices (it is more of a delivery system than a specific intervention) or back them up with evidence of success elsewhere. In the absence of much public, media, or political party attention, maybe it’s a nice pragmatic political solution built more on governance principles than specific evidence.

Yet, despite our fixation with the constitution, some policy issues do occasionally get discussed. For our purposes, the most relevant is the ‘named person’ scheme because it looks like a way to ‘scale up’ an initiative to support a universal or primary prevention approach and avoid stigmatising some groups by offering a service to everyone (in this respect, it is the antithesis to ‘troubled families’). In this case, all children in Scotland (and their parents or guardians) get access to a senior member of a public service, and that person acts as a way to ‘join up’ a public sector response to a child’s problems.

Interestingly, this universal approach has its own problems. ‘Troubled families’ sets up a distinction between troubled/ untroubled to limit its proposed intervention in family life. Its problem is the potential to stigmatise and demoralise ‘troubled’ families. ‘Named person’ shows the potential for greater outcry when governments try not to identify and stigmatise specific families. The scheme is largely a response to the continuous suggestion – made after high profile cases of child abuse or neglect – that children can suffer when no agency takes overall responsibility for their care, but it has been opposed as an excessive infringement on normal family life and data protection, successfully enough to delay its implementation.

The punchline to early intervention as a valence issue

Problems arise almost instantly when you try to turn a valence issue into something concrete. A vague and widely-supported policy, to intervene early to prevent bad outcomes, becomes a set of policy choices based on how governments frame the balance between ideology, stigma, and the evidence of the impact and cost-effectiveness of key interventions (which is often very limited).

Their experiences are not always directly comparable, but the UK and Scottish Governments have helped show us the pitfalls of concrete approaches to prevention and early intervention. They help show that your basic policy choices include: (a) targeted programmes which increase stigma; (b) ‘indicated’ approaches which don’t always look like early intervention; (c) ‘selective’ approaches which seem to be less effective despite intervening at an earlier stage; (d) universal programmes which might cross a notional line between the state and the family; and (e) approaches which focus primarily on local experimentation with uncertain outcomes.

None of these approaches provide a solution to the early intervention dilemmas that all governments face, and there is no easy way to choose between approaches. We can make these choices more informed and systematic, by highlighting how all of the pieces of the jigsaw fit together, and somehow comparing their intended and unintended consequences. However, this process does not replace political judgement – and quite right too – because there is no such thing as a valence issue at the point of policy choice and delivery.


Filed under agenda setting, Evidence Based Policymaking (EBPM), Prevention policy, public policy, Scottish politics, UK politics and policy

Governments think it’s OK to use bad evidence to make good policy: the case of the UK Government’s ‘troubled families’

The UK Government’s ‘troubled families’ policy appears to be a classic top-down, evidence-free, and quick emotional reaction to crisis. It developed after riots in England (primarily in London) in August 2011. Within one week, and before announcing an inquiry into them, then Prime Minister David Cameron made a speech linking behaviour directly to ‘thugs’ and immorality – ‘people showing indifference to right and wrong…people with a twisted moral code…people with a complete absence of self-restraint’ – before identifying a breakdown in family life as a major factor (Cameron, 2011a).

Although the development of parenting programmes was already government policy, Cameron used the riots to raise parenting to the top of the agenda:

We are working on ways to help improve parenting – well now I want that work accelerated, expanded and implemented as quickly as possible. This has got to be right at the top of our priority list. And we need more urgent action, too, on the families that some people call ‘problem’, others call ‘troubled’. The ones that everyone in their neighbourhood knows and often avoids …Now that the riots have happened I will make sure that we clear away the red tape and the bureaucratic wrangling, and put rocket boosters under this programme …with a clear ambition that within the lifetime of this Parliament we will turn around the lives of the 120,000 most troubled families in the country.

Cameron reinforced this agenda in December 2011 by stressing the need for individuals and families to take moral responsibility for their actions, and for the state to intervene earlier in their lives to reduce public spending in the long term:

Officialdom might call them ‘families with multiple disadvantages’. Some in the press might call them ‘neighbours from hell’. Whatever you call them, we’ve known for years that a relatively small number of families are the source of a large proportion of the problems in society. Drug addiction. Alcohol abuse. Crime. A culture of disruption and irresponsibility that cascades through generations. We’ve always known that these families cost an extraordinary amount of money…but now we’ve come up with the actual figures. Last year the state spent an estimated £9 billion on just 120,000 families…that is around £75,000 per family.

The policy – primarily of expanding the provision of ‘family intervention’ approaches – is often described as a ‘classic case of policy based evidence’: policymakers cherry pick or tell tall tales about evidence to justify action. It is a great case study for two reasons:

  1. Within this one programme are many different kinds of evidence-use which attract the ire of academic commentators, from an obviously dodgy estimate and performance management system to a more-sincere-but-still-criticised use of evaluations and neuroscience.
  2. It is easy to criticise the UK government’s actions but more difficult to say – when viewing the policy problem from its perspective – what the government should do instead.

In other words, it is useful to note that the UK government is not winning awards for ‘evidence-based policymaking’ (EBPM) in this area, but less useful to deny the politics of EBPM and hold it up to a standard that no government can meet.

The UK Government’s problematic use of evidence

Take your pick from the following ways in which the UK Government has been criticised for its use of evidence to make and defend ‘troubled families’ policy.

Its identification of the most troubled families: cherry picking or inventing evidence

At the heart of the programme is the assertion that we know who the ‘troubled families’ are, what causes their behaviour, and how to stop it. Yet, much of the programme is built on value judgements about feckless parents, on tipping the balance from support to sanctions, and on unsubstantiated anecdotes about key aspects such as the tendency of ‘worklessness’ or ‘welfare dependency’ to pass from one generation to another.

The UK government’s target of almost 120,000 families was based speculatively on previous Cabinet Office estimates in 2006 that about ‘2% of families in England experience multiple and complex difficulties’. This estimate was based on limited survey data and modelling to identify families who met five of seven criteria relating to unemployment, poor housing, parental education, the mental health of the mother, the chronic illness or disability of either parent, an income below 60% of the median, and an inability to buy certain items of food or clothing.

It then gave locally specific estimates to each local authority and asked them to find that number of families, identifying households with: (1) at least one under-18-year-old who has committed an offence in the last year, or is subject to an ASBO; and/or (2) at least one child who has been excluded from school permanently, suspended in three consecutive terms, placed in a Pupil Referral Unit, taken off the school roll, or recorded as having over 15% unauthorised absences over three consecutive terms; and (3) an adult on out-of-work benefits.

If the household met all three criteria, it would automatically be included. Otherwise, local authorities had the discretion to identify further troubled families meeting two of the criteria plus other indicators of concern about the ‘high costs’ of late intervention, such as ‘a child who is on a Child Protection Plan’, ‘Families subject to frequent police call-outs or arrests’, and ‘Families with health problems’ linked to mental health, addiction, chronic conditions, domestic abuse, and teenage pregnancy.
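Put crudely, this identification process reads like a small decision procedure. The sketch below (in Python) shows one reading of that logic; the criterion names and the function are my own illustration, not official DCLG code, and they compress the discretionary indicators into a simple count:

```python
# Hypothetical sketch of the stated identification rule; criterion names
# and the structure are my own illustration, not official DCLG code.

CORE_CRITERIA = {
    "youth_crime_or_asbo",         # (1) youth offending or an ASBO
    "school_exclusion",            # (2) serious school exclusion/absence
    "adult_out_of_work_benefits",  # (3) an adult on out-of-work benefits
}

def is_troubled_family(criteria_met, discretionary_flags):
    """Return True if a household is included under the programme's stated rules.

    criteria_met: set of core criteria the household meets.
    discretionary_flags: count of local 'high cost' indicators of concern
    (e.g. a child on a Child Protection Plan, frequent police call-outs).
    """
    met = set(criteria_met) & CORE_CRITERIA
    if met == CORE_CRITERIA:
        return True  # all three criteria: automatic inclusion
    # Otherwise, local discretion: two criteria plus at least one
    # further indicator of concern.
    return len(met) >= 2 and discretionary_flags >= 1
```

The sketch makes the key point visible: the second route into the programme depends entirely on local judgement about what counts as an ‘indicator of concern’, which is what allowed local authorities to ‘find’ exactly the number of families they were given.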

Its measure of success: ‘turning around’ troubled families

The UK government declared almost-complete success without convincing evidence. Success ‘in the last 6 months’ to identify a ‘turned around family’ is measured in two main ways: (1) the child no longer having three exclusions in a row, a reduction in the child offending rate of 33% or the ASB rate of 60%, and/or the adult entering a relevant ‘progress to work’ programme; or (2) at least one adult moving from out-of-work benefits to continuous employment. Success was self-declared by local authorities, and both parties had a high incentive to declare it: local authorities received payments of £4,000 per family, and the UK government received a temporary way to declare progress without long-term evidence.
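The ‘and/or’ phrasing leaves the exact combination of conditions open, which is itself part of the measurement problem. Here is a sketch (in Python) of one plausible reading; all field names are illustrative, not the official payment-by-results schema:

```python
# Hypothetical sketch of one reading of the 'turned around' test; the
# field names are illustrative, not the official payment-by-results schema.

def turned_around(family):
    """Return True if a family counts as 'turned around' in the last 6 months."""
    education_ok = not family.get("three_exclusions_in_a_row", False)
    crime_ok = (family.get("offending_reduction", 0.0) >= 0.33
                or family.get("asb_reduction", 0.0) >= 0.60)
    progress_to_work = family.get("adult_on_progress_to_work_programme", False)
    continuous_work = family.get("adult_moved_into_continuous_work", False)
    # Route 1: education and crime/ASB improvement, and/or the adult
    # entering a 'progress to work' programme.
    route_1 = (education_ok and crime_ok) or progress_to_work
    # Route 2: at least one adult moving off out-of-work benefits
    # into continuous employment.
    return route_1 or continuous_work
```

Note how easy it is, on this reading, for a family to qualify via route 1 on the back of a single proxy improvement – which is the incentive problem described above.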

The declaration is in stark contrast to an allegedly suppressed report to the government which stated that the programme had ‘no discernible effect on unemployment, truancy or criminality’. This lack of impact was partly confirmed by FOI requests by The Guardian – demonstrating that at least 8,000 families received no intervention, but showed improvement anyway – and analysis by Levitas and Crossley which suggests that local authorities could only identify families by departing from the DCLG’s initial criteria.

Its investment in programmes with limited evidence of success

The UK government’s massive expansion of ‘family intervention projects’, and related initiatives, is based on limited evidence of success from a small sample of people from a small number of pilots. The ‘evidence for the effectiveness of family intervention projects is weak’ and a government-commissioned systematic review suggests that there are no good quality evaluations to demonstrate (well) the effectiveness or value-for-money of key processes such as coordinated service provision. The impact of other interventions, previously with good reputations, has been unclear, such as the Family Nurse Partnership imported from the US which so far has produced ‘no additional short-term benefit’. Overall, Crossley and Lambert suggest that “the weight of evidence surrounding ‘family intervention’ and similar approaches, over the longue durée, actually suggests that the approach doesn’t work”. There is also no evidence to support its heroic claim that spending £10,000 per family will save £65,000.

Its faith in sketchy neuroscientific evidence on the benefits of early intervention

The government is driven partly by a belief in the benefits of early intervention in the lives of children (from 0-3, or even before birth), which is based partly on the ‘now or never’ argument found in key reviews by Munro and Allen (one and two).

[Image: ‘normal brain’ comparison of child brain development, as reproduced in Allen’s reports]

Policymakers take liberties with neuroscientific evidence to emphasise the profound effect of stress on early brain development (measured, for example, by levels of cortisol found in hair samples). These accounts, which underpin the urgency of early intervention, are received far more critically in fields such as social science, neuroscience, and psychology. For example, Wastell and White find no good quality scientific evidence behind the comparison of child brain development reproduced in Allen’s reports.

Now let’s try to interpret and explain these points partly from a government perspective

Westminster politics necessitates this presentation of ‘prevention’ policies

If you strip away the rhetoric, the troubled families programme is a classic attempt at early intervention to prevent poor outcomes. In this general field, it is difficult to know what government policy is – what it stands for and how you measure its success. ‘Prevention’ is vague, plus governments make a commitment to meaningful local discretion and the sense that local actors should be guided by a combination of the evidence of ‘what works’ and its applicability to local circumstances.

This approach is not tolerated in Westminster politics, built on the simple idea of accountability in which you know who is in charge and therefore to blame! UK central governments have to maintain some semblance of control because they know that people will try to hold them to account in elections and general debate. This ‘top down’ perspective has an enduring effect: although prevention policy is vague, individual programmes such as ‘troubled families’ contain enough detail to generate intense debate on central government policy and performance and contain elements which emphasise high central direction, including sustained ministerial commitment, a determination to demonstrate early success to justify a further rollout of policy, and performance management geared towards specific measurable outcomes – even if the broader aim is to encourage local discretion.

This context helps explain why governments appear to exploit crises to sell existing policies, and pursue ridiculous processes of estimation and performance measurement. They need to display strength, show certainty that they have correctly diagnosed a problem and its solution, and claim success using the ‘currency’ of Westminster politics – and they have to do these things very quickly.

Consequently, for example, they will not worry about some academics complaining about policy based evidence – they are more concerned about their media and public reception and the ability of the opposition to exploit their failures – and few people in politics have the time (that many academics take for granted) to wait for research. This is the lens through which we should view all discussions of the use of evidence in politics and policy.

Unequivocal evidence is impossible to produce and we can’t wait forever

The argument for evidence-based policy rather than policy-based evidence suggests that we know what the evidence is. Yet, in this field in particular, there is potential for major disagreement about the ‘bar’ we set for evidence.

Table 1: Three ideal types of EBPM

For some, it relates to a hierarchy of evidence in which randomised control trials (RCTs) and their systematic review are at the top: the aim is to demonstrate that an intervention’s effect was positive, and more positive than another intervention or non-intervention. This requires experiments: to compare the effects of interventions in controlled settings, in ways that are directly comparable with other experiments.

As table 1 suggests, some other academics do not adhere to – and some reject – this hierarchy. This context highlights three major issues for policymakers:

  1. In general, when they seek evidence, they find this debate about how to gather and analyse it (and the implications for policy delivery).
  2. When seeking evidence on interventions, they find some academics using the hierarchy to argue that the ‘evidence for the effectiveness of family intervention projects is weak’. This adherence to a hierarchy to determine research value also doomed a government-commissioned systematic review to failure: the review applied a hierarchy of evidence to its analysis of reports by authors who did not adhere to the same model. The latter tend to be more pragmatic in their research design (and often more positive about their findings), and their government audience rarely adheres to the same evidential standard built on a hierarchy. Unless someone gives ground, some researchers will never be satisfied with the available evidence, and elected policymakers are unlikely to listen to them.
  3. The evidence generated from RCTs is often disappointing. The so-far-discouraging experience of the Family Nurse Partnership has a particularly symbolic impact, and policymakers can easily pick up a general sense of uncertainty about the best policies in which to invest.

So, if your main viewpoint is academic, you can easily conclude that the available evidence does not yet justify massive expansion in the troubled families programme (perhaps you might prefer the Scottish approach of smaller scale piloting, or for the government to abandon certain interventions altogether).

However, if you are a UK government policymaker feeling the need to act – and knowing that you always have to make decisions despite uncertainty – you may also feel that there will never be enough evidence on which to draw. Given the problems outlined above, you may as well act now rather than wait for years for little to change.

The ends justify the means

Policymakers may feel that the ends of such policies – investment in early intervention by shifting funds from late intervention – may justify the means, which can include a ridiculous oversimplification of evidence. It may seem almost impossible for governments to find other ways to secure the shift, given the multiple factors which undermine its progress.

Governments sometimes hint at this approach when simplifying key figures – effectively to argue that late intervention costs £9bn while early intervention will only cost £448m – to reinforce policy change: ‘the critical point for the Government was not necessarily the precise figure, but whether a sufficiently compelling case for a new approach was made’.

Similarly, the vivid comparison of healthy versus neglected brains provides shocking reference points to justify early intervention. Their rhetorical value far outweighs their evidential value. As in all EBPM, the choice for policymakers is to play the game, to generate some influence in less-than-ideal circumstances, or to hope that science and reason will save the day (and the latter tends to be based on hope rather than evidence). So, the UK appeared to follow the example of the US, in which neuroscience ‘was chosen as the scientific vehicle for the public relations campaign to promote early childhood programs more for rhetorical, than scientific reasons’, partly because a focus on, for example, permanent damage to brain circuitry is less abstract than a focus on behaviour.

Overall, policymakers seem willing to build their case on major simplifications and partial truths to secure what they believe to be a worthy programme (although it would be interesting to find out which policymakers actually believe the things they say). If so, pointing out their mistakes or alleging lies can often have a minimal impact (or worse, if policymakers ‘double down’ in the face of criticism).

Implications for academics, practitioners, and ‘policy based evidence’

I have been writing on ‘troubled families’ while encouraging academics and practitioners to describe pragmatic strategies to increase the use of evidence in policy.


Our starting point is relevant to this discussion – since it asks what we should do if policymakers don’t think like academics:

  • They worry more about Westminster politics – their media and public reception and the ability of the opposition party to exploit their failures – than what academics think of their actions.
  • They do not follow the same rules of evidence generation and analysis.
  • They do not have the luxury of uncertainty and time.

Generally, this is a useful lens through which to view discussions of the realistic use of evidence in politics and policy. Without being pragmatic – recognising that policymakers will never think like scientists, and always face different pressures – we might simply declare ‘policy based evidence’ in all cases. Although a commitment to pragmatism does not solve these problems, at least it prompts us to be more specific about categories of PBE, the criteria we use to identify it, whether our colleagues share a commitment to those criteria, what we can reasonably expect of policymakers, and how we might respond.

In disciplines like social policy we might identify a further issue, linked to:

  1. A tradition of providing critical accounts of government policy to help hold elected policymakers to account. If so, the primary aim may be to publicise key flaws without engaging directly with policymakers to help fix them – and perhaps even to criticise other scholars for doing so – because effective criticism requires critical distance.
  2. A tendency of many other social policy scholars to engage directly in evaluations of government policy, with the potential to influence and be influenced by policymakers.

This dynamic highlights the difficulty of separating empirical and normative evaluations when critics point to the inappropriate nature of the programmes as they interrogate the evidence for their effectiveness. The difficulty is often more hidden in other fields, but it is always a factor.

For example, Parr noted in 2009 that ‘despite ostensibly favourable evidence … it has been argued that the apparent benign-welfarism of family and parenting-based antisocial behaviour interventions hide a growing punitive authoritarianism’. The latter’s most extreme telling is by Garrett in 2007, who compares residential FIPs (‘sin bins’) to post-war Dutch programmes resembling Nazi social engineering and criticises social policy scholars for giving them favourable evaluations – an argument criticised in turn by Nixon and Bennister et al.

For present purposes, note Nixon’s identification of ‘an unusual case of policy being directly informed by independent research’, referring to the possible impact of favourable evaluations of FIPs on the UK Government’s move away from (a) an intense focus on anti-social behaviour and sanctions towards (b) greater support. While it would be a stretch to suggest that academics can set government agendas, they can at least enhance their impact by framing their analysis in a way that secures policymaker interest. If academics seek influence, rather than critical distance, they may need to get their hands dirty: seeking to understand policymakers, to find alternative policies that still give them what they want.

5 Comments

Filed under Prevention policy, public policy, UK politics and policy

The Scottish Parliament would be crap in an independent Scotland and almost no-one cares

Here is a four-step plan to avoid having to talk about how powerless the Scottish Parliament tends to be, in comparison to the old idea of ‘power sharing’ with the Scottish Government:

  1. Find something the SNP Government is doing and point out how wrong it is.
  2. Have the opposition parties pile in, taking their chance to bemoan the SNP’s power hoarding.
  3. Have the SNP point out that Labour used to do this sort of thing, so it’s hypocritical to complain now.
  4. Convince the public that it’s OK as long as all of the parties would have done it, or if they have been doing it for a long time.

This pretty much sums up the reaction to the SNP’s use of Parliamentary Liaison Officers (PLOs) on Scottish parliamentary committees: the MSP works closely with a minister and sits on the committee that is supposed to hold the minister to account. The practice ensures that there is no meaningful dividing line between government and parliament, and reinforces the sense that the parliament is not there to provide effective scrutiny and robust challenge to the government. Instead, plenary is there for pantomime discussion, and committees are there to provide humdrum scrutiny with minimal effect on ministers.

The use of PLOs on parliamentary committees has become yet another example in which the political parties – or, at least, any party with a chance of being in government – put themselves first before the principles of the Scottish Parliament (set out in the run up to devolution). Since devolution, the party of government has gone further than you might expect to establish its influence on parliament: controlling who convenes (its share of) committees and which of its MSPs sit on committees, and moving them around if they get too good at holding ministers to account or asking too-difficult questions. An MSP on the side of government might get a name for themselves if they ask a follow-up question to a minister in a committee instead of nodding appreciatively – and you don’t want that sort of thing to develop. Better to keep it safe and ask your MSPs not to rock the boat, or move them on if they cause a ripple.

So, maybe the early founders of devolution wanted MSPs to sit on the same committees for long periods, to help them develop expertise, build up a good relationship with MSPs from other parties, and therefore work effectively to hold the government to account. Yet, no Scottish government has been willing to let go, to allow that independent role to develop. Instead, they make sure that they have at least one key MSP on each committee to help them agree the party line that all their MSPs are expected to follow. So, this development, of parliamentary aides to ministers corresponding almost exactly with committee membership, might look new, but it is really an extension of longstanding practices to curb the independent power of parliaments and their committees – and the party in government has generally resisted any reforms (including those proposed by the former Presiding Officer Tricia Marwick) to challenge its position.

Maybe the only surprise is that ‘new politics’ seems worse than old Westminster. In Westminster committees, some MPs can make a career as a chair, and their independence from government is far clearer – something that the House of Commons has been keen to reinforce with initiatives such as the election of committee chairs by secret ballot. In comparison, the Scottish Parliament seems like a far poorer relation to its Scottish Government counterpart – partly because of complacency and a lack of continuous reform.

Almost no-one cares about this sort of thing

What is not surprising is the general reaction to the Herald piece on the 15th August – and the follow up on the 16th – which pointed out that the SNP was going further than the use of PLOs it criticised while in opposition.

So, future Scottish Cabinet Secretary Fiona Hyslop – quite rightly – criticised this practice in 2002, arguing that it went against the government’s Scottish Ministerial Code. Note the Labour-led government’s ridiculous defence, which it got away with because (a) almost no-one cares, and (b) the governing parties dominate the parliament.


Then, in 2007, the SNP government’s solution was to remove the offending section from that Code. Problem solved!

MPAs to PLOs 2003 and 2007

Now, its defence is that Labour used to do it and the SNP has been doing it for 9 years, so why complain now? It can get away with it because almost no-one cares. Of those who might care, most only care if it embarrasses one of the parties at the expense of another. When it looks like they might all be at it, it’s OK. Almost no-one pays attention to the principle that the Scottish Parliament should have a strong role independent of government, and that this role should not be subject to the whims of self-interested political parties.

So, I feel the need to provide a reason for SNP and independence supporters to care more about this, and here goes:

  1. Most people voted No in the first referendum on Scottish independence.
  2. There might be a second referendum, but it would be silly to expect a Yes vote this time without new and better arguments, built more on actual plans than on the generation of positivity and hope. For a political project to work, you really need to tell people what you will do if you win.
  3. One of those arguments needs to be about political reform. The ‘architects of devolution’ recognised this need to offer political reform alongside constitutional reform, producing the sense of ‘new politics’ that we now use to show that Scottish politics fell quite short of expectations. The mistake was to assume that they had cracked it in 1999 and never needed to reform again. Instead, institutions need to change continuously in light of experience. So, the previous SNP White Paper (p355) was rubbish on this issue, because it pretty much said that it would keep things as they were because they were working OK.

p355 Scotland's Future

It is complacent nonsense, treating the Scottish political system as an afterthought, and it might just come back to bite the SNP in the bum. The implicit argument – that the Scottish Parliament would be just as crap in an independent Scotland as it is now, and almost no-one cares – is a poor one. Or, to put it in terms of the standard of partisan debate on twitter: shitey whataboutery might make you feel good on twitter, but it won’t win you any votes in the next referendum.

 

See also: Lucy Hunter Blackburn’s ‘Patrick Harvie highlights close links between ministerial aides and parliamentary committees’

5 Comments

Filed under Scottish independence, Scottish politics

The Politics of Evidence-based Policymaking in 2500 words

Here is a 2500-word draft of an entry on EBPM for the Oxford Research Encyclopaedia (public administration and policy). It brings together some thoughts from previous posts and articles.

Evidence-based Policymaking (EBPM) has become one of many valence terms that seem difficult to oppose: who would not want policy to be evidence based? It appears to be the most recent incarnation of a focus on ‘rational’ policymaking, in which we could ask the same question in a more classic way: who would not want policymaking to be based on reason and collecting all of the facts necessary to make good decisions?

Yet, as we know from classic discussions, there are three main issues with such an optimistic starting point. The first is definitional: valence terms only seem so appealing because they are vague. When we define key terms, and produce one definition at the expense of others, we see differences of approach and unresolved issues. The second is descriptive: ‘rational’ policymaking does not exist in the real world. Instead, we treat ‘comprehensive’ or ‘synoptic’ rationality as an ideal-type, to help us think about the consequences of ‘bounded rationality’ (Simon, 1976). Most contemporary policy theories have bounded rationality as a key starting point for explanation (Cairney and Heikkila, 2014). The third is prescriptive. Like EBPM, comprehensive rationality seems – initially – to be unequivocally good. Yet, when we identify its necessary conditions, or what we would have to do to secure this aim, we begin to question EBPM and comprehensive rationality as an ideal scenario.

‘What is evidence-based policymaking?’ is a lot like ‘what is policy?’, but more so!

Trying to define EBPM is like magnifying the problem of defining policy. As the entries in this encyclopaedia suggest, it is difficult to say what policy is and measure how much it has changed. I use the working definition, ‘the sum total of government action, from signals of intent to the final outcomes’ (Cairney, 2012: 5) not to provide something definitive, but to raise important qualifications, including: there is a difference between what people say they will do, what they actually do, and the outcome; and, policymaking is also about the power not to do something.

So, the idea of a ‘sum total’ of policy sounds intuitively appealing, but masks the difficulty of identifying the many policy instruments that make up ‘policy’ (and the absence of others), including: the level of spending; the use of economic incentives/ penalties; regulations and laws; the use of voluntary agreements and codes of conduct; the provision of public services; education campaigns; funding for scientific studies or advocacy; organisational change; and, the levels of resources/ methods dedicated to policy implementation and evaluation (2012: 26). In that context, we are trying to capture a process in which actors make and deliver ‘policy’ continuously, not identify a set-piece event providing a single opportunity to use a piece of scientific evidence to prompt a policymaker response.

Similarly, for the sake of simplicity, we refer to ‘policymakers’, but in the knowledge that it leads to further qualifications and distinctions, such as: (1) between elected and unelected participants, since people such as civil servants also make important decisions; and (2) between people and organisations, with the latter used as a shorthand to refer to a group of people making decisions collectively and subject to rules of collective engagement (see ‘institutions’). There are blurry dividing lines between the people who make and influence policy, and decisions are made by a collection of people with formal responsibility and informal influence (see ‘networks’). Consequently, we need to make clear what we mean by ‘policymakers’ when we identify how they use evidence.

A reference to EBPM provides two further definitional problems (Cairney, 2016: 3-4). The first is to define evidence beyond the vague idea of an argument backed by information. Advocates of EBPM are often talking about scientific evidence which describes information produced in a particular way. Some describe ‘scientific’ broadly, to refer to information gathered systematically using recognised methods, while others refer to a specific hierarchy of methods. The latter has an important reference point – evidence based medicine (EBM) – in which the aim is to generate the best evidence of the best interventions and exhort clinicians to use it. At the top of the methodological hierarchy are randomized control trials (RCTs) to determine the evidence, and the systematic review of RCTs to demonstrate the replicated success of interventions in multiple contexts, published in the top scientific journals (Oliver et al, 2014a; 2014b).

This reference to EBM is crucial in two main ways. First, it highlights a basic difference in attitude between the scientists proposing a hierarchy and the policymakers using a wider range of sources from a far less exclusive list of publications: ‘The tools and programs of evidence-based medicine … are of little relevance to civil servants trying to incorporate evidence in policy advice’ (Lomas and Brown 2009: 906). Instead, their focus is on finding as much information as possible in a short space of time – including from the ‘grey’ or unpublished/non-peer-reviewed literature, and incorporating evidence on factors such as public opinion – to generate policy analysis and make policy quickly. Therefore, second, EBM provides an ideal that is difficult to match in politics, proposing: “that policymakers adhere to the same hierarchy of scientific evidence; that ‘the evidence’ has a direct effect on policy and practice; and that the scientific profession, which identifies problems, is in the best place to identify the most appropriate solutions, based on scientific and professionally driven criteria” (Cairney, 2016: 52; Stoker 2010: 53).

These differences are summed up in the metaphor ‘evidence-based’ which, for proponents of EBM suggests that scientific evidence comes first and acts as the primary reference point for a decision: how do we translate this evidence of a problem into a proportionate response, or how do we make sure that the evidence of an intervention’s success is reflected in policy? The more pragmatic phrase ‘evidence-informed’ sums up a more rounded view of scientific evidence, in which policymakers know that they have to take into account a wider range of factors (Nutley et al, 2007).

Overall, the phrases ‘evidence-based policy’ and ‘evidence-based policymaking’ are less clear than ‘policy’. This problem puts an onus on advocates of EBPM to state what they mean, and to clarify if they are referring to an ideal-type to aid description of the real world, or advocating a process that, to all intents and purposes, would be devoid of politics (see below). The latter tends to accompany often fruitless discussions about ‘policy based evidence’, which seems to describe a range of mistakes by policymakers – including ignoring evidence, using the wrong kinds, ‘cherry picking’ evidence to suit their agendas, and/ or producing a disproportionate response to evidence – without describing a realistic standard to which to hold them.

For example, Haskins and Margolis (2015) provide a pie chart of ‘factors that influence legislation’ in the US, to suggest that research contributes 1% to a final decision compared to, for example, ‘the public’ (16%), the ‘administration’ (11%), political parties (8%) and the budget (8%). Theirs is a ‘whimsical’ exercise to lampoon the lack of EBPM in government (compare with Prewitt et al’s 2012 account built more on social science studies), but it sums up a sense in some scientific circles about their frustrations with the inability of the policymaking world to keep up with science.

Indeed, there is an extensive literature in health science (Oliver et al, 2014a; 2014b), emulated largely in environmental studies (Cairney, 2016: 85; Cairney et al, 2016), which bemoans the ‘barriers’ between evidence and policy. Some identify problems with the supply of evidence, recommending the need to simplify reports and key messages. Others note the difficulties in providing timely evidence in a chaotic-looking process in which the demand for information is unpredictable and fleeting. A final main category relates to a sense of different ‘cultures’ in science and policymaking, which can be addressed in academic–practitioner workshops (to learn about each other’s perspectives) and more scientific training for policymakers. The latter recommendation is often based on practitioner experiences and a superficial analysis of policy studies (Oliver et al, 2014b; Embrett and Randall, 2014).

EBPM as a misleading description

Consequently, such analysis tends to introduce reference points that policy scholars would describe as ideal-types. Many accounts refer to the notion of a policy cycle, in which there is a core group of policymakers at the ‘centre’, making policy from the ‘top down’, breaking down their task into clearly defined and well-ordered stages (Cairney, 2016: 16-18). The hope may be that scientists can help policymakers make good decisions by getting them as close as possible to ‘comprehensive rationality’ in which they have the best information available to inform all options and consequences. In that context, policy studies provides two key insights (2016; Cairney et al, 2016).

  1. The role of multi-level policymaking environments, not cycles

Policymaking takes place in less ordered and predictable policy environments, exhibiting:

  • a wide range of actors (individuals and organisations) influencing policy in many levels and types of government
  • a proliferation of rules and norms followed in different venues
  • close relationships (‘networks’) between policymakers and powerful actors
  • a tendency for certain beliefs or ‘paradigms’ to dominate discussion
  • policy conditions and events that can prompt policymaker attention to lurch at short notice.

A focus on this bigger picture shifts our attention from the use of scientific evidence by an elite group of elected policymakers at the ‘top’ to its use by a wide range of influential actors in a multilevel policy process. It shows scientists that they are competing with many actors to present evidence in a particular way to secure a policymaker audience. Support for particular solutions varies according to which organisation takes the lead and how it understands the problem. Some networks are close-knit and difficult to access because bureaucracies have operating procedures that favour particular sources of evidence and some participants over others, and there is a language – indicating what ways of thinking are in good ‘currency’ – that takes time to learn. Well-established beliefs provide the context for policymaking: new evidence on the effectiveness of a policy solution has to be accompanied by a shift of attention and successful persuasion. In some cases, social or economic ‘crises’ can prompt lurches of attention from one issue to another, and some forms of evidence can be used to encourage that shift – but major policy change is rare.

  2. Policymakers use two ‘shortcuts’ to deal with bounded rationality and make decisions

Policymakers deal with ‘bounded rationality’ by employing two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritizing certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, beliefs, habits, and familiar reference points to make decisions quickly. Consequently, the focus of policy theories is on the links between evidence, persuasion, and framing.

Framing refers to the ways in which we understand, portray, and categorise issues. Problems are multi-faceted, but bounded rationality limits the attention of policymakers, and actors compete to highlight one image at the expense of others. The outcome of this process determines who is involved in, and responsible for, policy (for example, portraying an issue as technical limits involvement to experts), how much attention they pay, and what kind of solution they favour. Scientific evidence plays a part in this process, but we should not exaggerate the ability of scientists to win the day with evidence. Rather, policy theories signal the strategies that actors use to increase demand for their evidence:

  • to combine facts with emotional appeals, to prompt lurches of policymaker attention from one policy image to another (True, Jones, and Baumgartner 2007)
  • to tell simple stories which are easy to understand, help manipulate people’s biases, apportion praise and blame, and highlight the moral and political value of solutions (Jones, Shanahan, and McBeth 2014)
  • to interpret new evidence through the lens of the pre-existing beliefs of actors within coalitions, some of which dominate policy networks (Weible, Heikkila, and Sabatier 2012)
  • to produce a policy solution that is feasible and exploit a time when policymakers have the opportunity to adopt it (Kingdon 1984).

Further, the impact of a framing strategy may not be immediate, even if it appears to be successful. Scientific evidence may prompt a lurch of attention to a policy problem, prompting a shift of views in one venue or the new involvement of actors from other venues. However, it can take years to produce support for an ‘evidence-based’ policy solution, built on its technical and political feasibility (will it work as intended, and do policymakers have the motive and opportunity to select it?).

EBPM as a problematic prescription

A pragmatic solution to the policy process would involve: identifying the key venues in which the ‘action’ takes place; learning the ‘rules of the game’ within key networks and institutions; developing framing and persuasion techniques; forming coalitions with allies; and engaging for the long term (Cairney, 2016: 124; Weible et al, 2012: 9-15). The alternative is to seek reforms to make EBPM in practice more like the EBM ideal.

Yet, EBM is defensible because the actors involved agree to make primary reference to scientific evidence and to be guided by what works (combined with their clinical expertise and judgement). In politics, there are other – and generally more defensible – principles of ‘good’ policymaking (Cairney, 2016: 125-6). They include the need to legitimise policy: to be accountable to the public in free and fair elections, to consult far and wide to generate evidence from multiple perspectives, and to negotiate policy across political parties and multiple venues with a legitimate role in policymaking. In that context, we may want scientific evidence to play a major role in policy and policymaking, but pause to reflect on how far we would go to secure a primary role for unelected experts and evidence that few can understand.

Conclusion: the inescapable and desirable politics of evidence-informed policymaking

Many contemporary discussions of policymaking begin with the naïve belief in the possibility and desirability of an evidence-based policy process free from the pathologies of politics. The buzz phrase for any complaint about politicians not living up to this ideal is ‘policy based evidence’: biased politicians decide first what they want to do, then cherry pick any evidence that backs up their case. Yet, without additional thought, they put in its place a technocratic process in which unelected experts are in charge, deciding on the best evidence of a problem and its best solution.

In other words, new discussions of EBPM raise old discussions of rationality that have occupied policy scholars for many decades. The difference since the days of Simon (1976) and Lindblom (1959) is that we now have the scientific technology and methods to gather information in ways beyond the dreams of our predecessors. Yet, such advances in technology and knowledge have only increased our ability to reduce, not eradicate, uncertainty about the details of a problem. They do not remove ambiguity, which describes the ways in which people understand problems in the first place, then seek information to help them understand those problems further and to solve them. Nor do they reduce the need to meet important principles in politics, such as to sell or justify policies to the public (to respond to democratic elections) and to address the fact that there are many venues of policymaking at multiple levels (partly to uphold a principled commitment, in many political systems, to devolve or share power). Policy theories do not tell us what to do about these limits to EBPM, but they help us to separate pragmatism from often-misplaced idealism.

References

Cairney, Paul (2012) Understanding Public Policy (Basingstoke: Palgrave)

Cairney, Paul (2016) The Politics of Evidence-based Policy Making (Basingstoke: Palgrave)

Cairney, Paul and Heikkila, Tanya (2014) ‘A Comparison of Theories of the Policy Process’ in Sabatier, P. and Weible, C. (eds.) Theories of the Policy Process 3rd edition (Chicago: Westview Press)

Cairney, Paul, Oliver, Kathryn and Wellstead, Adam (2016) ‘To Bridge the Divide between Evidence and Policy: Reduce Ambiguity as Much as Uncertainty’, Public Administration Review, Early View, DOI: 10.1111/puar.12555

Embrett, M. and Randall, G. (2014) ‘Social determinants of health and health equity policy research: Exploring the use, misuse, and nonuse of policy analysis theory’, Social Science and Medicine, 108, 147-55

Haskins, Ron and Margolis, Greg (2015) Show Me the Evidence: Obama’s fight for rigor and results in social policy (Washington DC: Brookings Institution Press)

Kingdon, J. (1984) Agendas, Alternatives and Public Policies 1st ed. (New York, NY: Harper Collins)

Lindblom, C. (1959) ‘The Science of Muddling Through’, Public Administration Review, 19: 79–88

Lomas J. and Brown A. (2009) ‘Research and advice giving: a functional view of evidence-informed policy advice in a Canadian ministry of health’, Milbank Quarterly, 87, 4, 903–926

McBeth, M., Jones, M. and Shanahan, E. (2014) ‘The Narrative Policy Framework’ in Sabatier, P. and Weible, C. (eds.) Theories of the Policy Process 3rd edition (Chicago: Westview Press)

Nutley, S., Walter, I. and Davies, H. (2007) Using evidence: how research can inform public services (Bristol: The Policy Press)

Oliver, K., Innvær, S., Lorenc, T., Woodman, J. and Thomas, J. (2014a) ‘A systematic review of barriers to and facilitators of the use of evidence by policymakers’, BMC Health Services Research, 14, 1, 2 http://www.biomedcentral.com/1472-6963/14/2

Oliver, K., Lorenc, T., & Innvær, S. (2014b) ‘New directions in evidence-based policy research: a critical analysis of the literature’, Health Research Policy and Systems, 12, 34 http://www.biomedcentral.com/content/pdf/1478-4505-12-34.pdf

Prewitt, Kenneth, Schwandt, Thomas A. and Straf, Miron L. (eds.) (2012) Using Science as Evidence in Public Policy (Washington, DC: National Academies Press) http://www.nap.edu/catalog.php?record_id=13460

Simon, H. (1976) Administrative Behavior, 3rd ed. (London: Macmillan)

Stoker, G. (2010) ‘Translating experiments into policy’, The ANNALS of the American Academy of Political and Social Science, 628, 1, 47-58

True, J. L., Jones, B. D. and Baumgartner, F. R. (2007) ‘Punctuated Equilibrium Theory’ in P. Sabatier (ed.) Theories of the Policy Process, 2nd ed (Cambridge, MA: Westview Press)

Weible, C., Heikkila, T., deLeon, P. and Sabatier, P. (2012) ‘Understanding and influencing the policy process’, Policy Sciences, 45, 1, 1–21
