Tag Archives: Evaluation

The Politics of Evidence

This is a draft of my review of Justin Parkhurst (2017) The Politics of Evidence (Routledge, Open Access)

Justin Parkhurst’s aim is to identify key principles to take forward the ‘good governance of evidence’. The good governance of scientific evidence in policy and policymaking requires us to address two fundamentally important ‘biases’:

  1. Technical bias. Some organisations produce bad evidence, some parts of government cherry-pick, manipulate, or ignore evidence, and some politicians misinterpret the implications of evidence when calculating risk. Sometimes, these things are done deliberately for political gain. Sometimes they result from cognitive biases that lead us to interpret evidence in problematic ways. For example, you can seek evidence that confirms your position, and/or only believe the evidence that confirms it.
  2. Issue bias. Some evidence advocates use the mantra of ‘evidence based policy’ to depoliticise issues or downplay the need to resolve conflicts over values. They also focus on the problems most conducive to study via their most respected methods, such as randomised control trials (RCTs). Methodological rigour trumps policy relevance, and simple experiments trump the exploration of complex solutions. So, we lose sight of the unintended consequences of producing the ‘best’ evidence to address a small number of problems, and of the choices we make about the allocation of research resources and attention. Again, this can be deliberate or caused by cognitive biases, such as the tendency to seek simpler, more answerable questions rather than complex questions with no obvious answer.

To address both problems, Parkhurst seeks pragmatic ways to identify principles to decide what counts as ‘good evidence to inform policy’ and ‘what constitutes the good use of evidence within a policy process’:

‘it is necessary to consider how to establish evidence advisory systems that promote the good governance of evidence – working to ensure that rigorous, systematic and technically valid pieces of evidence are used within decision-making processes that are inclusive of, representative of and accountable to the multiple social interests of the population served’ (p8).

Parkhurst identifies some ways in which to bring evidence and policy closer together. First, to produce evidence more appropriate for, or relevant to, policymaking (‘good evidence for policy’):

  1. Relate evidence more closely to policy goals.
  2. Modify research approaches and methods to answer policy relevant questions.
  3. Ensure that the evidence relates to the local or relevant context.

Second, to produce the ‘good use of evidence’, combine three forms of ‘legitimacy’:

  1. Input, to ensure democratic representative bodies have the final say.
  2. Throughput, to ensure widespread deliberation.
  3. Output, to ensure proper consideration of the use of the most systematic, unbiased and rigorously produced scientific evidence relevant to the problem.

In the final chapter, Parkhurst suggests that these aims can be pursued in many ways depending on how governments want to design evidence advisory systems, but that it’s worth drawing on the examples of good practice he identifies. Parkhurst also explores the role for Academies of science, or initiatives such as the Cochrane Collaboration, to provide independent advice. He then outlines the good governance of evidence built on key principles: appropriate evidence, accountability in evidence use, transparency, and contestability (to ensure sufficient debate).

The overall result is a book full of interesting discussion and very sensible, general advice for people new to the topic of evidence and policy. This is no mean feat: most readers will seek a clearly explained and articulate account of the subject, and they get it here.

For me, the most interesting thing about Parkhurst’s book is the untold story, or the often-implicit reasoning behind the way in which it is framed. We can infer that it is not a study aimed primarily at a political science or social science audience, because most of that audience would take its starting point for granted: the use of evidence is political, and politics involves values. Yet, Parkhurst feels the need to remind the reader of this point, in specific (‘it is worth noting that the US presidency is a decidedly political role’, p43) and general circumstances (‘the nature of policymaking is inherently political’, p65). Throughout, the audience appears to be academics who begin with a desire for ‘evidence based policy’ without fully thinking through the implications: the lack of a magic bullet of evidence to solve a policy problem, how we might maintain a political system conducive to democratic principles and good evidence use, how we might design a system to reduce key ‘barriers’ between the supply of evidence by scientists and its demand by policymakers, and why few such designs have taken off.

In other words, the book appeals primarily to scientists trained outside social science, some of whom think about politics in their spare time, or experience it in dispiriting encounters with policymakers. It appeals to that audience with a statement on the crucial role of high quality evidence in policymaking, highlights barriers to its use, tells scientists that they might be part of the problem, but then provides them with the comforting assurance that we can design better systems to overcome at least some of those barriers. For people trained in policy studies, this concluding discussion seems like a tall order, and I think most would read it with great scepticism.

Policy scientists might also be sceptical about the extent to which scientists from other fields think this way about hierarchies of scientific evidence and the desire to depoliticise politics with a primary focus on ‘what works’. Yet, I too hear this language regularly in interdisciplinary workshops (often while standing next to Justin!), and it is usually accompanied by descriptions of the pathology of policymaking, the rise of post-truth politics and rejection of experts, and the need to focus on the role of objective facts in deciding what policy solutions work best. Indeed, I was impressed recently by the skilled way in which another colleague prepared this audience for some provocative remarks when he suggested that the production and use of evidence is about power, not objectivity. OMG: who knew that policymaking was political and about power?!

So, the insights from this book are useful to a large audience of scientists while, for a smaller audience of policy scientists, they remind us that there is an audience out there for many of the statements that most of us would take for granted. Some evidence advocates use the language of ‘evidence based policymaking’ strategically, to get what they want. Others appear to use it because they believe it can exist. Keep this in mind when you read the book.


Filed under Evidence Based Policymaking (EBPM)

I know my audience, but does my other audience know I know my audience?

‘Know your audience’ is a key phrase for anyone trying to convey a message successfully. To ‘know your audience’ is to understand the rules they use to make sense of your message, and therefore the adjustments you have to make to produce an effective message. Simple examples include:

  • The sarcasm rules. The first rule is fairly explicit. If you want to insult someone’s shirt, you (a) say ‘nice shirt, pal’, but also (b) use facial expressions or unusual speech patterns to signal that you mean the opposite of what you are saying. Otherwise, you’ve inadvertently paid someone a compliment, which is just not on. The second rule is implicit. Sarcasm is sometimes OK – as a joke or as some nice passive aggression – while a direct insult (‘that shirt is shite, pal’) as a joke is harder to pull off.
  • The joke rule. If you say that you went to the doctor because a strawberry was growing out of your arse and the doctor gave you some cream for it, you’d expect your audience to know you were joking because it’s such a ridiculous scenario and there’s a pun. Still, there’s a chance that, if you say it quickly and with a straight face, if your audience is not expecting a joke, and/or if your audience’s first language is not English, your audience will take you seriously, if only for a second. It’s hilarious if your audience goes along with you, and a bit awkward if your audience asks kindly about your welfare.
  • Keep it simple stupid. If someone says KISS, or some modern equivalent – ‘it’s the economy, stupid’ – the rule is that, generally, they are not calling you stupid (even though the insertion of the comma, in modern phrases, makes it look like they are). They are referring to the value of a simple design or explanation that as many people as possible can understand. If your audience doesn’t know the phrase, they may think you’re calling them stupid, stupid.

These rules can be analysed from various perspectives: linguistics, focusing on how and why rules of language develop; and philosophy, to help articulate how and why rules matter in sense making.

There is also a key role for psychological insights, since – for example – a lot of these rules relate to the routine ways in which people engage emotionally with the ‘signals’ or information they receive.

Think of the simple example of Twitter engagement, in which people with emotional attachments to one position over another (say, pro- or anti-Brexit) respond instantly to a message (say, pro- or anti-Brexit). While some really let themselves down when they reply with their own tweet, and others don’t say a word, neither audience is immune from that emotional engagement with information. So, to ‘know your audience’ is to anticipate and adapt to the ways in which they will inevitably engage ‘rationally’ and ‘irrationally’ with your message.

I say this partly because I’ve been messing around with some simple ‘heuristics’ built on insights from psychology, including Psychology Based Policy Studies: 5 heuristics to maximise the use of evidence in policymaking.

Two audiences in the study of ‘evidence based policymaking’

I also say it because I’ve started to notice a big unintended consequence of knowing my audience: one audience doesn’t like the message I’m giving the other. It’s a bit like gossip: maybe you only get away with it if only one audience is listening. If both are listening, one audience seems to appreciate some new insights, while the other wonders if I’ve ever read a political science book.

The problem here is that two audiences have different rules to understand the messages that I help send. Let’s call them ‘science’ and ‘political science’ (please humour me – you’ve come this far). Then, let’s make some heroic binary distinctions in the rules each audience would use to interpret similar issues in a very different way.

I could go on with these provocative distinctions, but you get the idea. A belief taken for granted in one field will be treated as controversial in another. In one day, you can go to one workshop and hear the story of objective evidence, post-truth politics, and irrational politicians with low political will to select evidence-based policies, then go to another workshop and hear the story of subjective knowledge claims.

Or, I can give the same presentation and get two very different reactions. If these are the expectations of each audience, they will interpret and respond to my messages in very different ways.

So, imagine I use some psychology insights to appeal to the ‘science’ audience. I know that, to keep it on side and receptive to my ideas, I should begin by being sympathetic to its aims. So, my implicit story is along the lines of, ‘if you believe in the primacy of science and seek evidence-based policy, here is what you need to do: adapt to irrational policymaking and find out where the action is in a complex policymaking system’. Then, if I’m feeling energetic and provocative, I’ll slip in some discussion about knowledge claims by saying something like, ‘politicians (and, by the way, some other scholars) don’t share your views on the hierarchy of evidence’, or by inviting my audience to reflect on how far they’d go to override the beliefs of other people (such as the local communities or service users most affected by the evidence-based policies that seem most effective).

The problem with this story is that key parts are implicit and, by appearing to go along with my audience, I provoke a reaction in another audience: don’t you know that many people have valid knowledge claims? Politics is about values and power, don’t you know?

So, that’s where I am right now. I feel like I ‘know my audience’ but I am struggling to explain to my original political science audience that I need to describe its insights in a very particular way to have any traction in my other science audience. ‘Know your audience’ can only take you so far unless your other audience knows that you are engaged in knowing your audience.

If you want to know more, see:

Kathryn Oliver and I have just published an article on the relationship between evidence and policy

How far should you go to secure academic ‘impact’ in policymaking? From ‘honest brokers’ to ‘research purists’ and Machiavellian manipulators

Why doesn’t evidence win the day in policy and policymaking?

The Science of Evidence-based Policymaking: How to Be Heard

When presenting evidence to policymakers, engage with the policy process that exists, not the process you wish existed

Filed under Academic innovation or navel gazing, agenda setting, Evidence Based Policymaking (EBPM), Psychology Based Policy Studies, public policy, Storytelling

We need better descriptions than ‘evidence-based policy’ and ‘policy-based evidence’: the case of UK government ‘troubled families’ policy

Here is the dilemma for ‘evidence-based’ ‘troubled families’ policy: there are many indicators of ‘policy based evidence’ but few (if any) feasible and ‘evidence based’ alternatives.

Viewed from the outside, ‘troubled families’ (TF) policy looks like a cynical attempt to produce a quick fix to the London riots, stigmatise vulnerable populations, and hoodwink the public into thinking that the central government is controlling local outcomes and generating success.

Viewed from the inside, it is a pragmatic policy solution, informed by promising evidence, which needs to be sold in the right way. For the UK government, there may seem to be little alternative to this policy, given the available evidence, the need to do something for the long term, and the need to account for itself in a Westminster system in the short term.

So, in this draft paper, I outline this disconnect between interpretations of ‘evidence based policy’ and ‘policy based evidence’ to help provide some clarity on the pragmatic use of evidence in politics:

cairney-offshoot-troubled-families-ebpm-5-9-16

See also:

Governments think it’s OK to use bad evidence to make good policy: the case of the UK Government’s ‘troubled families’

Early intervention policy, from ‘troubled families’ to ‘named persons’: problems with evidence and framing ‘valence’ issues

In each of these posts, I note that it is difficult to know how, for example, social policy scholars should respond to these issues – but that policy studies help us identify a choice between strategies. In general, pragmatic strategies to influence the use of evidence in policy include: framing issues to catch the attention of policymakers or manipulate their biases, identifying where the ‘action’ is in multi-level policymaking systems, and forming coalitions with like-minded and well-connected actors. In other words, to influence rather than just comment on policy, we need to understand how policymakers would respond to external evaluation. So, a greater understanding of the routine motives of policymakers can help us produce more effective criticism of their problematic use of evidence. In social policy, there is an acute dilemma about the choice between engagement, to influence and be influenced by policymakers, and detachment, to ensure critical distance. If choosing the latter, we need to think harder about how criticism of ‘policy-based evidence’ makes a difference.


Filed under agenda setting, Evidence Based Policymaking (EBPM), Prevention policy, public policy, UK politics and policy

The politics of implementing evidence-based policies

This post by me and Kathryn Oliver appeared in the Guardian political science blog on 27.4.16: If scientists want to influence policymaking, they need to understand it. It builds on this discussion of ‘evidence based best practice’ in Evidence and Policy. There is further reading at the end of the post.

Three things to remember when you are trying to close the ‘evidence-policy gap’

Last week, a major new report on The Science of Using Science: Researching the Use of Research Evidence in Decision-Making suggested that there is very limited evidence of ‘what works’ in turning scientific evidence into policy. There are many publications out there on how to influence policy, but few are proven to work.

This is because scientists think about how to produce the best possible evidence rather than how different policymakers use evidence differently in complex policymaking systems (what the report describes as the ‘capability, motivation, and opportunity’ to use evidence). For example, scientists identify, from their perspective, a cultural gap between them and policymakers. This story tells us that we need to overcome differences in the languages used to communicate findings, the timescales to produce recommendations, and the incentives to engage.

This scientist perspective tends to assume that there is one arena in which policymakers and scientists might engage. Yet, the action takes place in many venues, at many levels, involving many types of policymaker. So, if we view the process from many different perspectives, we see new ways in which to understand the use of evidence.

Examples from the delivery of health and social care interventions show us why we need to understand policymaker perspectives. We identify three main issues to bear in mind.

First, we must choose what counts as ‘the evidence’. In some academic disciplines there is a strong belief that some kinds of evidence are better than others: the best evidence is gathered using randomised control trials and accumulated in systematic reviews. In others, these ideas have limited appeal or are rejected outright, in favour of (say) practitioner experience and service user-based feedback as the knowledge on which to base policies. Most importantly, policymakers may not care about these debates; they tend to beg, borrow, or steal information from readily available sources.

Second, we must choose the lengths to which we are prepared to go to ensure that scientific evidence is the primary influence on policy delivery. When we open up the ‘black box’ of policymaking, we find a tendency of central governments to juggle many models of government – sometimes directing policy from the centre but often delegating delivery to public, third, and private sector bodies. Those bodies can retain some degree of autonomy during service delivery, often based on governance principles such as ‘localism’ and the need to include service users in the design of public services.

This presents a major dilemma for scientists because policy solutions based on RCTs are likely to come with conditions that limit local discretion. For example, a condition of the UK government’s licence for the ‘family nurse partnership’ is that there is ‘fidelity’ to the model, to ensure the correct ‘dosage’ and that an RCT can establish its effect. It contrasts with approaches that focus on governance principles, such as ‘my home life’, in which evidence – as practitioner stories – may or may not be used by new audiences. Policymakers may not care about the profound differences underpinning these approaches, preferring to use a variety of models in different settings rather than use scientific principles to choose between them.

Third, scientists must recognise that these choices are not ours to make. We have our own ideas about the balance between maintaining evidential hierarchies and governance principles, but have no ability to impose these choices on policymakers.

This point has profound consequences for the ways in which we engage in strategies to create impact. A research design that combines scientific evidence and governance principles seems like a good idea that few pragmatic scientists would oppose. However, this decision does not come close to settling the matter, because these compromises look very different when designed by scientists or policymakers.

Take, for example, the case of ‘improvement science’, in which local practitioners are trained to use evidence to experiment with local pilots and to learn from and adapt to their experiences. Improvement science-inspired approaches have become very common in health sciences, but in many examples the research agenda is set by research leads and it focuses on how to optimise the delivery of evidence-based practice.

In contrast, models such as the Early Years Collaborative reverse this emphasis, using scholarship as one of many sources of information (based partly on scepticism about the practical value of RCTs) and focusing primarily on the assets of practitioners and service users.

Consequently, improvement science appears to offer pragmatic solutions to the gap between divergent approaches, but only because it means different things to different people. Its adoption is only one step towards negotiating the trade-offs between RCT-driven and story-telling approaches.

These examples help explain why we know so little about how to influence policy. They take us beyond the bland statement – there is a gap between evidence and policy – trotted out whenever scientists try to maximise their own impact. The alternative is to try to understand the policy process, and the likely demand for and uptake of evidence, before working out how to produce evidence that would fit into the process. This different mind-set requires a far more sophisticated knowledge of the policy process than we see in most studies of the evidence-policy gap. Before trying to influence policymaking, we should try to understand it.

Further reading

The initial further reading uses this table to explore three ways in which policymakers, scientists, and other groups have tried to resolve the problems we discuss:

Table 1 Three ideal types EBBP

  1. This academic journal article (in Evidence and Policy) highlights the dilemmas faced by policymakers when they have to make two choices at once, to decide: (1) what is the best evidence, and (2) how strongly they should insist that local policymakers use it. It uses the case study of the ‘Scottish Approach’ to show that it often seems to favour one approach (‘approach 3’) but actually maintains three approaches. What interests me is the extent to which each approach contradicts the other. We might then consider the cause: is it an explicit decision to ‘let a thousand flowers bloom’ or an unintended outcome of complex government?
  2. I explore some of the scientific issues in more depth in posts on: the political significance of the family nurse partnership (as a symbol of the value of randomised control trials in government), and the assumptions we make about levels of control in the use of RCTs in policy.
  3. For local governments, I outline three ways to gather and use evidence of best practice (for example, on interventions to support prevention policy).
  4. For students and fans of policy theory, I show the links between the use of evidence and policy transfer.

You can also explore these links to discussions of EBPM, policy theory, and specific policy fields such as prevention:

  1. My academic articles on these topics
  2. The Politics of Evidence Based Policymaking
  3. Key policy theories and concepts in 1000 words
  4. Prevention policy

Filed under ESRC Scottish Centre for Constitutional Change, Evidence Based Policymaking (EBPM), public policy

The politics of evidence-based best practice: 4 messages

Well, it’s really a set of messages, geared towards slightly different audiences, and summed up by this table:

Table 1 Three ideal types EBBP

  1. This academic journal article (in Evidence and Policy) highlights the dilemmas faced by policymakers when they have to make two choices at once, to decide: (1) what is the best evidence, and (2) how strongly they should insist that local policymakers use it. It uses the case study of the ‘Scottish Approach’ to show that it often seems to favour one approach (‘approach 3’) but actually maintains three approaches. What interests me is the extent to which each approach contradicts the other. We might then consider the cause: is it an explicit decision to ‘let a thousand flowers bloom’ or an unintended outcome of complex government?
  2. I explore some of the scientific issues in more depth in posts on: the political significance of the family nurse partnership (as a symbol of the value of randomised control trials in government), and the assumptions we make about levels of control in the use of RCTs in policy.
  3. For local governments, I outline three ways to gather and use evidence of best practice (for example, on interventions to support prevention policy).
  4. For students and fans of policy theory, I show the links between the use of evidence and policy transfer.

Further reading (links):

My academic articles on these topics

The Politics of Evidence Based Policymaking

Key policy theories and concepts in 1000 words

Prevention policy


Filed under 1000 words, ESRC Scottish Centre for Constitutional Change, Evidence Based Policymaking (EBPM), Prevention policy, Scottish politics, UK politics and policy

How can you tell the difference between policy-based-evidence and evidence-based-policymaking?

‘Policy-based evidence’ (PBE) is a great phrase because you instantly know what it means: a policymaker decided what they wanted to do, then sought any old evidence to back up their policy.

‘Evidence-based policymaking’ (EBPM) is a rotten phrase, largely because no one knows or agrees what it means. So, you can never be sure if a policy was ‘evidence-based’, even when policymakers support apparently well-evidenced programmes.

The binary distinction only works politically, to call something you don’t like PBE when it doesn’t meet a standard of EBPM we can’t properly describe.

To give you a sense of what happens when you try to move away from the binary distinction, consider the following 13 examples. Which relate to PBE and which to EBPM?

  1. The evidence on problems and solutions comes first, and policymakers select the best interventions according to narrow scientific criteria (e.g. on the effectiveness of an intervention).
  2. The evidence comes first, but policymakers do not select the best interventions according to narrow scientific criteria. For example, their decision may be based primarily on economic factors such as value for money (VFM).
  3. The evidence comes first, but policymakers do not select the best interventions according to narrow scientific criteria or VFM. They have ideological or party political/ electoral reasons for rejecting evidence-based interventions.
  4. The evidence comes first, then policymakers inflate the likely success of interventions. They get ahead of the limited evidence when pushing a programme.
  5. Policymakers first decide what they want to do, then seek evidence to back up their decisions.
  6. Policymakers recognise a problem, the evidence base is not well developed, but policymakers act quickly anyway.
  7. Policymakers recognise a problem, the evidence is highly contested, and policymakers select the recommendations of one group of experts and reject those of another.
  8. Policymakers recognise a problem after it is demonstrated by scientific evidence, then select a solution built on evidence (on its effectiveness) from randomised control trials.
  9. Policymakers recognise a problem after it is demonstrated by scientific evidence, then select a solution built on evidence (on its effectiveness) from qualitative data (feedback from service users and professional experience).
  10. Policymakers recognise a problem after it is demonstrated by scientific evidence, then select a solution built on their personal experience and assessment of what is politically feasible.
  11. Policymakers recognise a problem without the use of scientific evidence, then select a solution built on evidence (on its effectiveness) from randomised control trials.
  12. Policymakers recognise a problem without the use of scientific evidence, then select a solution built on evidence (on its effectiveness) from qualitative data (feedback from service users and professional experience).
  13. Policymakers recognise a problem without the use of scientific evidence, then select a solution built on their personal experience and assessment of what is politically feasible.

I could go on, but you get the idea. Things get more confusing when you find combinations of these descriptions in fluid policies, such as when a programme is built partially on promising evidence (e.g. from pilots), only to be rejected or supported completely when events prompt policymakers to act more quickly than expected. Or, an evidence-based programme may be imported without enough evaluation to check if it works as intended in a new arena.

In a nutshell, the problem is that neither description captures these processes well. So, people use the phrase ‘evidence informed’ policymaking – but does this phrase help us much more? Have a look at the 13 examples and see which ones you’d describe as evidence informed.

Is it 12?

If so, when you use the phrase ‘evidence-informed’ policy or policymaking, which example do you mean?

For more reading on EBPM see: https://paulcairney.wordpress.com/ebpm/


Filed under Evidence Based Policymaking (EBPM), public policy

The politics of evidence-based policymaking: focus on ambiguity as much as uncertainty

There is now a large literature on the gaps between the production of scientific evidence and a policy or policymaking response. However, the literature in key fields – such as health and environmental sciences – does not use policy theory to help explain the gap. In this book, and in work that I am developing with Kathryn Oliver and Adam Wellstead, I explain why this matters by identifying the difference between empirical uncertainty and policy ambiguity. Both concepts relate to ‘bounded rationality’: policymakers do not have the ability to consider all evidence relevant to policy problems. Instead, they employ two kinds of shortcut: ‘rational’, by pursuing clear goals and prioritising certain kinds and sources of information, and ‘irrational’, by drawing on emotions, gut feelings, deeply held beliefs, and habits to make decisions quickly. This takes place in a complex policymaking system in which policymaker attention can lurch from issue to issue, policy is made routinely in subsystems, and the ‘rules of the game’ take time to learn.

The key problem in the health and environmental sciences is that studies focus only on the first shortcut. They identify the problem of uncertainty that arises when policymakers have incomplete information, and seek to solve it by improving the supply of information and encouraging academic-practitioner networks and workshops. They ignore the importance of a wider process of debate, coalition formation, lobbying, and manipulation, to reduce ambiguity and establish a dominant way to frame policy problems. Further, while scientific evidence cannot solve the problem of ambiguity, persuasion and framing can help determine the demand for scientific evidence.

Therefore, the second solution is to engage in a process of framing and persuasion by, for example, forming coalitions with actors with the same aims or beliefs, and accompanying scientific information with simple stories to exploit or adapt to the emotional and ideological biases of policymakers. This is less about packaging information to make it simpler to understand, and more about responding to the ways in which policymakers think – in general, and in relation to emerging issues – and, therefore, how they demand information.

In the book, I present this argument in three steps. First, I bring together a range of insights from policy theory, to show the huge amount of accumulated knowledge of policymaking on which other scientists and evidence advocates should draw. Second, I discuss two systematic reviews – one by Oliver et al., and one that Wellstead and I developed – of the literature on ‘barriers’ between evidence and policy in health and environmental studies. They show that the vast majority of studies in each field employ minimal policy theory and present solutions which focus only on uncertainty. Third, I identify the practical consequences for actors trying to maximise the uptake of scientific evidence within government.

My conclusion has profound implications for the role of science and scientific experts in policymaking. Scientists have a stark choice: to produce information and accept that it will have a limited impact (but maintain an often-useful image of objectivity), or to go beyond their comfort zone, and expertise, to engage in a normative enterprise that can increase impact at the expense of objectivity.


Filed under Evidence Based Policymaking (EBPM), Public health, public policy