Policy in 750 Words: Implementation science and policy implementation studies

What is the difference between the study of policy implementation and ‘implementation science’? Here are some simple distinctions and their implications:

1. The starting point for discussion, or the problem to address.

In policy studies, we tend to start with a focus on government policy. Classic studies in the 1970s/80s focused on a gap between a central government’s policy and its outcomes: the ‘implementation gap’ from a ‘top-down’ perspective (part of the focus on a policy cycle).

In implementation science, the starting point is research evidence and a commitment to see it reflected in policy and practice. Here, the ‘gap’ is between evidence and action from a researcher perspective. Someone might ask: if we have all of this evidence on what the policy problem is, why does it not receive enough attention? If we have evidence on ‘what works’ to solve the problem, why is there such a gap between the evidence and policy or practice?

2. The next step: solutions to pursue and debates on those solutions.

In policy studies, top-down approaches sought to understand or reduce implementation gaps (see perfect implementation). The relevant conditions include the need for: clear policy aims; a policy that would work as intended if implemented; a skilful and compliant delivery body or bureaucracy; sufficient political, financial, and organisational resources to deliver; minimal reliance on the support of others; and no external factors to undermine the policy.

This perspective was challenged somewhat by ‘bottom-up’ perspectives. Some studies challenged the idea that the ‘top’ could ensure that its policies were carried out in this way. Rather, implementation takes place in networks of governmental and non-governmental actors, or people make policy as they carry it out, and the outcomes are not in the control of the centre (compare the latter with complex policymaking systems, but don’t confuse them with evidence-policy systems). Some also argued that a top-down approach was inappropriate because other ‘centres’ have a democratic mandate, while key services or professions need the autonomy to deliver according to their principles and training without facing the sporadic unintended consequences of top-down initiatives (compare with similar points about multi-centric policymaking or multi-level governance).

In implementation science, people might also focus on top-down ways to close implementation gaps in policy or the practices of specific organisations (e.g. to change how healthcare works), as well as how to encourage the influential actors who can foster the diffusion of evidence-informed innovations, or topics such as knowledge brokerage to improve the supply and demand for evidence, as part of a focus on ‘research on research use’. If so, challenges to these perspectives could relate to topics such as how to establish what we mean by ‘the evidence’, in the context of debates on the appropriateness of a hierarchy of evidence based on research methods (e.g. see What Works Now).

3. The potentially useful overlaps between policy implementation studies and implementation science.

Perhaps the most obvious connection is that IS researchers might be concerned about a policy implementation gap if they like the policy, and may therefore wonder how the better provision of evidence might help to close an implementation gap. Efforts might include updating policymaker knowledge on the size and urgency of the problem, or evaluating what works in practice. The latter may help to address a government’s reluctance to act without enough certainty about the likely outcome. More generally, the IS story can contribute to a government’s story of its rationalist or ‘evidence based’ approach. In each activity there may be a common focus on a ‘theory of change’, albeit raising different questions (e.g. How does policy change? How does the provision of evidence impact policy?).

There are also overlaps in relation to the idea of ‘scaling up’ policy innovations. I argue that there is high congruence between a top-down approach to policy and a hierarchy-of-evidence approach to implementation science, in which the aim is to foster the uniform spread of policy solutions that ‘work’. Alternatives include a more bottom-up and co-productive approach to policy based more on governance principles than evidential hierarchies, or hybrid approaches that encourage policy actors to learn from the delivery of policy and update their practices.

Both activities also raise normative debates and dilemmas. In policy studies, we see debates about how to make things happen. For example, to reduce policy ambiguity should one central authority decide what the policy means? Or should we foster more collaborative policymaking in which many actors work together to make sense of policy aims in their context? Should governments drive a mandate from the centre, or does that imperative undermine the meaningful delivery of locally appropriate outcomes? In implementation science, we see some recognition that policy and practice should not simply be driven by the knowledge of an elite profession of researchers. The meaningful co-production of knowledge requires us to rethink what it means to identify a knowledge-policy gap (whose knowledge should we value during collaboration?). The meaningful co-production of policy entails the need to incorporate far more than scientific evidence into the design of policy and practice.

Finally, they both use the word ‘implementation’, which – I reckon – is one of the biggest barriers to initial understanding of what they both seek to do (and why).

See also: 500, 1000, 750. You will rarely see the word ‘implementation’ there. However, these concepts focus on the limits to policymaker resources and the complexity of policymaking environments, and therefore signal that the ‘implementation gap’ makes limited sense in the real world, or is a feature rather than a bug in policymaking systems. They also suggest that the life of an implementation scientist is tricky if they seek to connect evidence to a policy process that no-one really understands (although we know that it does not correspond to simple models of comprehensive rationality and the policy cycle).

What does policymaking look like? (look for the image on implementation)

Policy in 500 words: uncertainty versus ambiguity (they are very different things)

Policy Analysis in 750 Words: How to deal with ambiguity (should you impose meaning from the top down?)

Ansell, C., Sørensen, E., & Torfing, J. (2017). Improving policy implementation through collaborative policymaking. Policy & Politics, 45(3), 467-486. https://doi.org/10.1332/030557317X14972799760260 (skip to the focus on collaboration)
