Can Behavioral Science Predict Election Outcomes? We’re About To Find Out

Lessons learned: 1) Pollsters need a really thick skin to put their predictions out there, 2) trust the data, and 3) the best questions to answer are the ones you write yourself.

 

 

Editor’s Note: Anyone who has the dubious honor of being my friend on Facebook knows I am a political junkie. Elections are like sports to me, and the Presidential election is the Super Bowl. RealClearPolitics, FiveThirtyEight, The Hill, Politico and The Drudge Report are not just daily reading for me, I check them several times a day. I post things I think are interesting on Facebook and enjoy debating with my friends about them and taking a stab at being an amateur political prognosticator. I’m not very good at it (I should probably stick to market research industry trend analysis), but we all have our hobbies and trying my hand at predicting election outcomes is mine.

As we all know, this election is unique in many ways, and many folks far smarter and more experienced than I have struggled to get a handle on it via traditional polling. This difficulty isn't new: for several years now there have been major failures to accurately predict outcomes, both in the US and in other nations. One of the most unlikely pundits to emerge this year (and another daily read for me!) is Scott Adams, the creator of the Dilbert comic strip. Scott is a student of behavioral science (or persuasion, as he terms it) and so far his insights, especially around the Trump phenomenon, have been pretty accurate. Others have also started to look at political races through the lens of behavioral economics, but to date no one has combined that with quantitative research. Well, leave it to BrainJuicer to change that.

They just launched System1 Politics, their new weekly tracker of the US Presidential Election built on their "3 Fs" framework, and it's intensely interesting. Like Scott Adams, the BrainJuicer crew believe that the dynamics of the race, and even its outcome, can best be measured by understanding how voters are responding to the candidates on an emotional, System 1 level. Now they are putting skin in the game and are going to publicly experiment to see if they can call the horse race. It's a bold move.

Tom Ewing of BrainJuicer reached out to see if I wanted to post something on the blog about it. Since I didn’t have time to do an interview, Tom came up with the idea of interviewing himself. I think that is pretty fun, and I think you will too. Below is Tom Ewing interviewing Tom Ewing on the launch of System1 Politics. Enjoy!    

 

By Tom Ewing

What is System1 Politics? Is it a polling company?

No. We have the same goal as opinion polls – predicting what’s going to happen in elections – but we’re taking a very different route to it. In a nutshell, opinion polls use claimed behavior – how people say they will vote – to predict actual behavior – how people actually will vote. We measure the basic mental shortcuts guiding a decision to predict the outcome.

Does that work?

It certainly works for consumer decisions. Fame, Feeling and Fluency – which are the measures we use – are what we use in our branding work to assess current strength and predict future growth. What we believe is that these factors are likely to lie behind political decisions, too. As with every BrainJuicer project it’s all about finding the best and most predictive proxy for people’s fast “System 1” decision-making.

But choosing someone for the most powerful job in the world is a bit different from choosing a brand of soap!

It’s a more important decision, but that doesn’t mean the factors behind it are all that different.  And this isn’t because we live in an age of “post-fact politics” – that’s about the media’s responsibility to check facts and challenge politicians. As far as decision making goes, we live, and we always have lived, in a world of pre-fact politics.

Even with highly considered System 2 decisions, we tend to be guided initially by a set of mental shortcuts which let us know whether something is a good choice or not. And those are Fame, Feeling and Fluency. Fame is the Availability Heuristic – how readily a particular option comes to mind. Feeling is the Affect Heuristic – how happy you feel with an option. Fluency is the trickiest – it’s Processing Fluency, basically how easy an option is to recognize. And we follow the Ehrenberg-Bass school of thought, which is that Fluency is all down to what Professor Byron Sharp calls distinctive assets – things like logos, colors, images, words and phrases. The more distinctive they are, and the more embedded in your memory they are, the quicker they are to process.
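
To make the idea concrete, here is a purely illustrative sketch of how three such measures might be folded into one composite score. The 0-100 scales, the equal weighting, and the candidate numbers are all my own assumptions for illustration; BrainJuicer has not published its actual model.

```python
# Purely illustrative sketch: combining Fame, Feeling and Fluency
# measures into one composite score. The 0-100 scales, equal weights
# and candidate numbers are invented assumptions, not BrainJuicer's
# actual (unpublished) model.

def composite_score(fame, feeling, fluency, weights=(1.0, 1.0, 1.0)):
    """Weighted average of the three heuristic measures (each 0-100)."""
    w_fame, w_feel, w_flu = weights
    total = w_fame + w_feel + w_flu
    return (w_fame * fame + w_feel * feeling + w_flu * fluency) / total

# Invented example: one candidate leads on Feeling, the other on
# Fluency, and with equal weights the advantages cancel out.
a = composite_score(fame=95, feeling=40, fluency=70)
b = composite_score(fame=95, feeling=55, fluency=55)
print(round(a, 1), round(b, 1))  # prints: 68.3 68.3
```

Note how, under these invented numbers, a Feeling lead for one candidate and a Fluency lead for the other produce identical composites, the "neck and neck" situation described below.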

With Donald Trump, for instance, we realized he was pretty much a cert for the Republican nomination when we realized that just his hair was more recognizable than any of the other candidates in the race.

Let’s talk about Trump. You’re running a wave of data each week measuring Fame, Feeling and Fluency and updating your prediction accordingly. Is he going to win?

He has a very good chance. Right now he and Hillary Clinton are absolutely neck and neck on our measures. They each have an advantage, and as it happens those advantages cancel one another out.

On Fame they are completely level. Fame was a massive advantage for Trump in the Republican Primary – it was the main reason we called it for Trump in January, back when commentators were still assuming the party would unite behind Rubio or even Jeb Bush. Trump just took up a lot more mental space. But Clinton is just as well-known, and of course now that it's a two-horse race they're mentioned in the same breath anyway. We've stopped measuring Fame because they've both maxed it out.

Feeling is a different matter. Hillary Clinton's scores on Feeling have generally been bad, but Trump's have been awful, and the gap is wide enough that it counts as an advantage for her. Fame doesn't change much, but Feeling moves around quite a lot – and back in June, just after her brush with the FBI, Clinton's Feeling took a nasty dip. She's recovered since then and is currently well ahead of Trump, but it shows that it can change.

As for Fluency, that changes more slowly than Feeling, and Donald Trump has led on it in every wave we’ve done. He’s been a master of simple phrases and simple images – like “crooked Hillary” and the border wall – that have been big distinctive assets for him, and done a lot to define the election on his terms.

So Clinton wins on Feeling, Trump wins on Fluency. The polls are still showing a narrow Clinton lead, but on these real fundamentals, it’s neck and neck.

What does Trump need to do?

Trump needs to either make people feel better about him, or worse about Clinton, or a bit of both. At the moment – despite being very unpopular – they’re both on a Feeling high, so it might be he hasn’t got much room to make people like him better, and that’s why he’s focusing on damaging her reputation.

What does Clinton need to do?

She is also at the top of her range on Feeling, so may not have much room to improve there. She needs more Fluency, but so far Trump has been much better at coming up with memorable imagery and phrases. There’s one class of distinctive asset where Clinton has an advantage, though – she’s more strongly associated with former presidents and with the trappings of office. If being “presidential” helps when it comes to decision day, that’s good for her. You can see that play out in her tactics too – trying to reinforce the idea that Trump just isn’t presidential.

Why doesn’t your prediction match the polls? Do you think there are “shy Trumpies” who want to support him but don’t dare say so in public?

It’s possible – though remember this is a weird election in that Clinton is also hugely disliked. There are likely to be shy voters on both sides. But this is one reason we don’t ask voting intention questions. Our whole raison d’être is: can we get to the outcome without asking people what they’ll do? So there’s no question where you might moderate your answer to be more socially acceptable.

What I will say is that our model moved from Clinton to Neck-And-Neck two weeks ago now, when she was several points ahead in the polls and before they began to tighten. One hypothesis we have is that emotion is a little ahead of polling response, because on a gut, System 1 level you feel you want to vote for or against someone, and then you wait until something happens that gives you a System 2 justification to publicly declare that. So we’re looking to test that.

Isn’t this far too simple a model for the US electoral system, where the Electoral College is what really matters, not national opinion?

With an unlimited budget we’d definitely do 50 different state-wide studies of Fame, Feeling and Fluency. (We’d also buy an office unicorn.) So I know how complex the electoral system is. But that’s one reason we didn’t want to do polling. Polling is all about a behavioral fiction – how would you vote if the election was tomorrow? But the election isn’t tomorrow! (Unless it’s November 7th) And you’re not in the voting booth. It’s claimed future behavior, which is exactly what we try to avoid asking in every other study we do. We think that by understanding the fundamentals behind the decision you can avoid that. In the long run that would hopefully let you predict elections several weeks or even months before they actually happen.

Why get involved in political work?

System1 Politics was born at 10 PM on May 7th, 2015, when Big Ben chimed and the BBC’s UK election exit poll revealed that the pollsters had called the UK general election completely wrong. John Kearon was watching and thought, look, there has to be a better way of doing this. System1 Politics is us trying to work one out in public. Also, let’s face it, these are exciting topics!

What have you learned?

I’ve learned three things.

Firstly, I have massively enhanced respect for pollsters. You need a really thick skin to put your predictions out there, week in week out, and take the flak from people who think you’ve skewed them or made them up or are just angry because their side is losing. It’s a hell of a job.

Secondly, I have to trust the data. I am a left-leaning British guy so it’s fairly obvious that I have a strong preference in this election. But you have to put that completely aside, and you also have to put hunches aside. We used behavioral methods to assess the Brexit vote, and it came up as Leave several weeks before the referendum. But we didn’t make a big thing about it because our hunch was that we’d got it wrong. We hadn’t, and we’ve learned from that: stand by the data and don’t make a prediction unless it backs you up.

And thirdly, I’ve learned from politicians that the best questions to answer are the ones you write yourself.

Thanks, Tom.

It was a pleasure, Tom.

 

Editor’s Post Script: 

As you can see, the System1 Politics analysis is roughly in line with the average of the latest traditional polls. It will be interesting to see how things trend from here on in!

 

[Image: RealClearPolitics polling average chart]


7 responses to “Can Behavioral Science Predict Election Outcomes? We’re About To Find Out”

  1. Interesting. We measured with our Cognitive Analytics (System 1 & System 2) what people said in Hillary’s and Trump’s Facebook commentaries by modeling them as brands. The emotional promise and emotional attractiveness of Hillary’s brand persona were higher than Trump’s. Interestingly, all the other metrics we have were very close (neck and neck). So Hillary seems to sway the hearts over the mind… This was done a week ago, so it does not account for her recent health concerns and the resulting sentiment.

  2. We agree that “claimed future behaviour” collected by pollsters is indeed highly problematic. However, the behavioural science problem identified here can be resolved in another way, not only via the three constructs described in the article.

    The other way (which we chose) is to flip the implicit question and ask how “the others” will vote. It has been shown over and over that people are better predictors of others’ behaviour than of their own.

    To the Editor: Evidently this is quant, too, as the answer to the question is a percentage. We believe this version avoids the “wrong” System 1 mentioned above, but not by polling another System 1 response. Our approach activates participants’ System 2 for more thoughtfulness and foresight.

    It will be interesting to see how two different behavioural quant approaches will work out.

    As shown in the link below, Prediki’s political market, “Wahlfieber” (using only German-speaking participants), actually saw Trump catch up to Hillary in the past few weeks. He is now slightly ahead, though everything is happening in the “too close to call” range.

    So here is where we stick our neck out, Prediki’s 24×7 live forecast:
    http://www.wahlfieber.com/en/market/USA-2016-PW–prosidentschaftswahl-in-den-usa-2016/

  3. Why “called wrong” – even in the naive sense of the word? As I wrote above, the prediction market did say on 14th September that Trump was slightly ahead of Hillary.

    BTW, on “naive”: in talking about a proper prediction market forecast, we should never refer to outright “right” or “wrong”, as results are stated as likelihoods for certain outcomes. Researchers need to be precise with their language, not falsely insinuate the 100% certainty expressed by 0/1. It is self-evidently impossible to predict any future human action with absolute certainty, as there is no such thing as predestination. Right or wrong, correctly defined as methodological reliability, is falsifiable by comparing forecast probabilities against the empirical frequency with which the forecast outcomes occur.

    Admittedly not a light fare but necessary for the progress of the art.

  4. I think Andrea was referring to our model, not yours!

    And yes, we called it wrong. We used a market share model to predict vote share, but ignored the very real possibility that vote share would not match the Electoral College, and that was a big mistake.

    As for what next? I think Andrea’s question was rhetorical, but I’m going to answer it anyway! We’re running some investigations to see if swing state FFF outcomes would have made for a better prediction. We’ve run a post-election wave to see if the “shy Trump” voters had an impact on the model. And we’re going to get to work on the French and German elections in 2017.
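
The calibration idea raised in comment 3, judging probabilistic forecasts by whether events given p% probability happen about p% of the time rather than by single right/wrong calls, can be sketched in a few lines. The nearest-10% bucketing scheme and the toy forecast data below are my own invented illustration, not Prediki’s actual method.

```python
# Calibration check for probabilistic forecasts: an event assigned p%
# probability should occur roughly p% of the time. The bucketing to
# the nearest 10% and the toy data are invented for illustration.
from collections import defaultdict

def calibration(forecasts, outcomes):
    """Bucket forecasts to the nearest 10% and compare each bucket's
    mean stated probability with the observed event frequency."""
    buckets = defaultdict(list)
    for p, happened in zip(forecasts, outcomes):
        buckets[round(p, 1)].append((p, happened))
    return {
        key: (round(sum(p for p, _ in items) / len(items), 2),
              round(sum(h for _, h in items) / len(items), 2))
        for key, items in sorted(buckets.items())
    }

# Toy data: forecasts near 70% came true 4 times out of 6;
# forecasts near 30% came true 1 time out of 3.
forecasts = [0.70, 0.72, 0.68, 0.71, 0.69, 0.70, 0.30, 0.32, 0.28]
outcomes  = [1,    1,    0,    1,    0,    1,    0,    1,    0]
print(calibration(forecasts, outcomes))
```

A well-calibrated forecaster is one where each bucket’s two numbers roughly agree; here the ~70% forecasts verified 67% of the time and the ~30% forecasts 33% of the time, which is about as good as nine toy data points allow.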
