
I know my audience, but does my other audience know I know my audience?

‘Know your audience’ is a key phrase for anyone trying to convey a message successfully. To ‘know your audience’ is to understand the rules they use to make sense of your message, and therefore the adjustments you have to make to produce an effective message. Simple examples include:

  • The sarcasm rules. The first rule is fairly explicit: if you want to insult someone’s shirt, you (a) say ‘nice shirt, pal’, but also (b) use facial expressions or unusual speech patterns to signal that you mean the opposite of what you are saying. Otherwise, you’ve inadvertently paid someone a compliment, which is just not on. The second rule is implicit: sarcasm is sometimes OK – as a joke, or as some nice passive aggression – whereas a direct insult (‘that shirt is shite, pal’) is harder to pull off as a joke.
  • The joke rule. If you say that you went to the doctor because a strawberry was growing out of your arse, and the doctor gave you some cream for it, you’d expect your audience to know you were joking because it’s such a ridiculous scenario and there’s a pun. Still, there’s a chance that your audience will take you seriously, if only for a second – if you say it quickly, with a straight face, if your audience is not expecting a joke, and/or if your audience’s first language is not English. It’s hilarious if your audience goes along with you, and a bit awkward if your audience kindly asks about your welfare.
  • Keep it simple stupid. If someone says KISS, or some modern equivalent – ‘it’s the economy, stupid’ – the rule is that, generally, they are not calling you stupid (even though the insertion of the comma, in modern phrases, makes it look like they are). They are referring to the value of a simple design or explanation that as many people as possible can understand. If your audience doesn’t know the phrase, they may think you’re calling them stupid, stupid.

These rules can be analysed from various perspectives: linguistics, focusing on how and why rules of language develop; and philosophy, to help articulate how and why rules matter in sense-making.

There is also a key role for psychological insights, since – for example – a lot of these rules relate to the routine ways in which people engage emotionally with the ‘signals’ or information they receive.

Think of the simple example of Twitter engagement, in which people with emotional attachments to one position over another (say, pro- or anti-Brexit) respond instantly to a message (say, pro- or anti-Brexit). While some really let themselves down when they reply with their own tweet, and others don’t say a word, neither audience is immune from that emotional engagement with information. So, to ‘know your audience’ is to anticipate and adapt to the ways in which they will inevitably engage ‘rationally’ and ‘irrationally’ with your message.

I say this partly because I’ve been messing around with some simple ‘heuristics’ built on insights from psychology, including Psychology Based Policy Studies: 5 heuristics to maximise the use of evidence in policymaking.

Two audiences in the study of ‘evidence-based policymaking’

I also say it because I’ve started to notice a big unintended consequence of knowing my audience: my one audience doesn’t like the message I’m giving the other. It’s a bit like gossip: maybe you only get away with it if only one audience is listening. If they are both listening, one audience seems to appreciate some new insights, while the other wonders if I’ve ever read a political science book.

The problem here is that two audiences have different rules to understand the messages that I help send. Let’s call them ‘science’ and ‘political science’ (please humour me – you’ve come this far). Then, let’s make some heroic binary distinctions in the rules each audience would use to interpret similar issues in a very different way. For example:

  • ‘Science’: there is objective evidence, and a clear hierarchy of evidence. ‘Political science’: there are many valid knowledge claims, and the hierarchy itself is contested.
  • ‘Science’: we live in an era of ‘post-truth’ politics and irrational politicians with low political will to select evidence-based policies. ‘Political science’: politics is about values and power, not a failure of rationality.

I could go on with these provocative distinctions, but you get the idea. A belief taken for granted in one field will be treated as controversial in another. In one day, you can go to one workshop and hear the story of objective evidence, post-truth politics, and irrational politicians with low political will to select evidence-based policies, then go to another workshop and hear the story of subjective knowledge claims.

Or, I can give the same presentation and get two very different reactions. If these are the expectations of each audience, they will interpret and respond to my messages in very different ways.

So, imagine I use some psychology insights to appeal to the ‘science’ audience. I know that, to keep it onside and receptive to my ideas, I should begin by being sympathetic to its aims. So, my implicit story is along the lines of, ‘if you believe in the primacy of science and seek evidence-based policy, here is what you need to do: adapt to irrational policymaking and find out where the action is in a complex policymaking system’. Then, if I’m feeling energetic and provocative, I’ll slip in some discussion about knowledge claims by saying something like, ‘politicians (and, by the way, some other scholars) don’t share your views on the hierarchy of evidence’, or inviting my audience to reflect on how far they’d go to override the beliefs of other people (such as the local communities or service users most affected by the evidence-based policies that seem most effective).

The problem with this story is that key parts are implicit and, by appearing to go along with my audience, I provoke a reaction in the other audience: don’t you know that many people have valid knowledge claims? Politics is about values and power, don’t you know?

So, that’s where I am right now. I feel like I ‘know my audience’ but I am struggling to explain to my original political science audience that I need to describe its insights in a very particular way to have any traction in my other science audience. ‘Know your audience’ can only take you so far unless your other audience knows that you are engaged in knowing your audience.

If you want to know more, see:

Kathryn Oliver and I have just published an article on the relationship between evidence and policy

How far should you go to secure academic ‘impact’ in policymaking? From ‘honest brokers’ to ‘research purists’ and Machiavellian manipulators

Why doesn’t evidence win the day in policy and policymaking?

The Science of Evidence-based Policymaking: How to Be Heard

When presenting evidence to policymakers, engage with the policy process that exists, not the process you wish existed

Why the pollsters got it wrong

We have a new tradition in politics in which some people glory in the fact that the polls got it wrong. It might begin with ‘all these handsome experts with all their fancy laptops and they can’t even tell us exactly how an election will turn out’, and sometimes it ends with ‘yet, I knew it all along’. I think that the people who say it most are the ones who are pleased with the result and want to stick it to the people who didn’t predict it: ‘if, like me, they’d looked up from their laptops and spoken to real people, they’d have seen what would happen’.

To my mind, it’s always surprising when so many polls seem to do so well. Think for a second about what ‘pollsters’ do: they know they can’t ask everyone how they will vote (and why), so they take a small sample and use it as a proxy for the real world. To make sure the sample isn’t biased by selection, they develop methods to generate respondents randomly. To make the most of their resources, and to make their knowledge cumulative, they use what they think they know about the population to make sure that they get enough responses from a ‘representative’ sample. In many cases, that knowledge comes from things like focus groups or one-to-one interviews, which give richer (qualitative) information than we can get from asking everyone the same question, often super-quickly, in a larger survey.
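To make that idea concrete, here is a minimal Python sketch of one standard adjustment, post-stratification weighting, in which responses are re-weighted so that each group counts in proportion to its known share of the population. The age groups, population shares, and responses are all invented for illustration, and real pollsters use far more refined versions of this:

```python
# A toy example of post-stratification weighting: adjust a lopsided
# sample so each group counts in proportion to the real population.
# All figures are invented for illustration.

population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Raw responses: (age group, 1 = supports candidate A, 0 = does not).
sample = [("18-34", 1), ("18-34", 1), ("35-54", 0),
          ("55+", 0), ("55+", 0), ("55+", 1)]

n = len(sample)

# How over- or under-represented is each group in the sample?
sample_share = {g: sum(1 for grp, _ in sample if grp == g) / n
                for g in population_share}

# Weight each respondent by population share / sample share
# (real implementations must also handle groups with no respondents).
weight = {g: population_share[g] / sample_share[g] for g in population_share}

raw = sum(vote for _, vote in sample) / n
weighted = sum(weight[g] * vote for g, vote in sample) / n

print(f"Raw support: {raw:.0%}")            # 50%
print(f"Weighted support: {weighted:.0%}")  # about 42%
```

In this toy sample, young respondents are over-represented and keener on candidate A, so the weighted estimate drops from 50% to about 42% – exactly the kind of correction that goes wrong when the assumed population shares (or turnout) are off.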

This process involves all sorts of compromises and unintended consequences when we have a huge population but limited resources: we’d like to ask everyone in person, but it’s cheaper to (say) get a four-figure number of responses online or on the phone; and, if we need to do it quickly, our sample will be biased towards people willing to talk to us.* So, on top of a profound problem – the possibility of people not telling the truth in polls – we have a potentially less profound but more important problem: the people we need to talk to us aren’t talking to us. So, we get a misleading read because we’re asking an unrepresentative sample (although it is nothing like as unrepresentative as proxy polls from social media, ‘the word on the doorstep’, or asking your half-drunk mates how they’ll vote).

Sensible ‘pollsters’ deal with such problems by admitting that they might be a bit off: highlighting the ‘margin of error’ estimated from the size of their sample, then maybe crossing their fingers behind their backs if asked about the likelihood of further errors from non-random sampling. So, ignore this possibility of error at your peril. Yet, people do ignore it despite the peril! Here are two reasons why.

  1. Being sensible is boring.

In a really tight-looking two-horse race, the margin of error alone might suggest that either horse could win (see the sketch after this list). So, a sensible interpretation of a poll might be (say), ‘either Clinton or Trump will get the most votes’. Who wants to hear or talk about that?! You can’t fill a 24-hour news cycle and keep up shite Twitter conversations by saying ‘who knows?’ and then being quiet. Nor will anyone pay much attention to a quietly sensible ‘pollster’ or academic telling them about the importance of embracing uncertainty. You’re in the studio to tell us what will happen, pal. Otherwise, get lost.

  2. Recognising complexity and uncertainty is boring.

You can heroically/stupidly break down the social scientific project into two competing ideas: (1) the world contains general and predictable patterns of behaviour that we can identify with the right tools; or (2) the world is too complex and unpredictable to produce general laws of behaviour, and maybe your best hope is to try to make sense of how other people try to make sense of it. Then, maybe (1) sounds quite exciting and comforting while (2) sounds like the mantra of a sandal-wearing, beansprout-munching hippy academic. People seem to want a short, confidently stated message that is easy to understand. You can stick your caveats.
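To put rough numbers on that two-horse-race point, here is a minimal sketch of the textbook margin-of-error calculation for a proportion. It assumes a perfect simple random sample – exactly the assumption real polls struggle to meet – and the poll figures are illustrative:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for an estimated proportion p from a simple
    random sample of size n (z = 1.96 gives roughly 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# An illustrative poll of 1,000 people putting one candidate on 52%:
moe = margin_of_error(0.52, 1000)
print(f"52% +/- {moe:.1%}")  # about +/- 3.1 points: anywhere from 48.9% to 55.1%
```

On those numbers, a 52–48 ‘lead’ sits inside the overlapping intervals, so the sensible but boring headline really is ‘either could win’ – and that is before we add any of the non-random sampling errors described above.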

Can we take life advice from this process?

These days I’m using almost every topic as a poorly constructed segue into a discussion about the role of evidence in politics and policy. This time, the lesson is about using evidence for the correct purpose. In our example, we can use polls for their entertainment value. Or, campaigners can use them as the best possible proxies during a campaign: if their polls tell them they are lagging in one area, give it more attention; if they seem to have a big lead in another area, give it less attention. The evidence won’t be totally accurate, but it gives you enough to generate a simple campaigning strategy. Academics can also use the evidence, before and after a campaign, to talk about how it’s all going. Really, the only thing you don’t expect poll evidence to do is predict the result. For that, you need the Observers from Fringe.

The same goes for evidence in policymaking: people use rough and ready evidence because they need to act on what they think is going on. There will never be enough evidence to make the decision for you, or let you know exactly what will happen next. Instead, you combine good judgement with your values, sprinkle in some evidence, and off you go. It would be silly to expect a small sample of evidence – a snapshot of one part of the world – to tell you exactly what will happen in the much larger world. So, let’s not kid ourselves about the ability of science to tell us what’s what and what to do. It’s better, I think, to recognise life’s uncertainties and act accordingly – and certainly better than blaming other people for not knowing what will happen next.

*I say ‘we’ and ‘us’ but I’ve never conducted a poll in my life. I interview elites in secret and promise them anonymity.
