
December 5, 2016

A Debrief on “Predicting Election 2016”: What Worked and What Didn’t

Tony Jarvis reviews the seminal ARF/GreenBook sponsored forum on what the 2016 election polling experience means for research.


by Tony Jarvis


“We got it wrong.” – Cliff Young, President, Ipsos Public Affairs. “I respectfully disagree!” – Raghavan Mayur, TechnoMetrica Market Intelligence (The IBD/TIPP Poll). These were just two takeaways from a two-hour forum in which a blue-ribbon group of researchers evaluated the election polling in the most disruptive US election in recent history.

Four insightful and thought-provoking formal presentations from four commercial research companies set the scene for an even more wide-ranging discussion by an expert panel on the merits and failings of the surveys, the resulting data quality, the subsequent predictive modeling, and the implications for product and brand research. The event’s producer, Lenny Murphy – Executive Editor & Producer, GreenBook – challenged the presenters to take a hard look at improving predictive accuracy.

The presentations were delivered in webinar format, moderated by Dana Stanley, Director of Operations for GreenBook (and a former pollster himself), and focused on alternative approaches to survey-based polling.

 

Jared Schreiber – Co-Founder & CEO, InfoScout – focused on the critical importance of the undecideds (~14%), who were primarily “economy”-driven and who chose Trump by a wide margin (~50% to ~38%) according to a massive exit poll. Research has consistently demonstrated that false pessimism exists among many product users as well as voters, and it often confounds an expected result (or vote). This was clearly a factor in the final election results.

Dr. Aaron Reid – Founder and Chief Behavioral Scientist, Sentient Decision Science, Inc. – reminded the audience that pollsters cannot rely heavily on what people say to truly understand how they feel and, consequently, what they will do. He suggested that traditional research methods are no longer at the level of predictability required, especially when there is an accentuated “social desirability” factor in a fractious political campaign. Sentient examined the differences between Hillary Clinton (HRC) and Trump in the swing states in relation to “what they said” (pre-election) versus “what they did” (post-vote). This underlined the classic respondent “lying factor”, which pervades all research to varying degrees. Research during the primaries showed that Bernie Sanders earned “conviction” and “genuine” attributes among conservative voters, raising the question: could Bernie have captured many more disaffected Republicans?

Tom Anderson – Founder, OdinText – whose company uses unaided text analytics to understand messaging, suggested that this method is a key to making “better” predictions, given the difficulty structured surveys have in predicting actual behaviour in the face of “social desirability” responses. His company identified that HRC was in trouble when evaluating the major differences on candidate attributes and issues, as well as potential turn-out and rural-versus-urban divides. Similar to some other research, Trump showed more consistent messaging and more resonance among those surveyed.

Taylor Schreiner – VP, Research, TubeMogul – posed the question, “Why Nate Silver’s Problems are Advertisers’ Problems” (Nate Silver being the master poll interpreter, founder and editor-in-chief of FiveThirtyEight). The fundamental questions Taylor raised:

  • What do the confidence intervals offered really mean? To which could be added: did they actually have any mathematical foundation? (Rhetorical question! A worked sketch follows this list.)
  • Which polls/predictive models are better? (Notably in an Electoral College environment where most were wrong!)
  • Causality – events will always have a sales effect; how were they included in the polling models? E.g., the importance of including the weather as a variable in predicting consumption of soup.
  • Importance of repeated tactics trials: systematic repetition and variation of marketing dimensions to show what works (conversions or sales) and what does not.
  • The expense of precision that is never that precise: “Precision is not free.”
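
On Taylor’s first question, it helps to remember what a reported confidence interval does and does not cover. Below is a minimal sketch (mine, not from the presentation) with hypothetical poll numbers, computing the sampling-only margin of error behind a typical topline:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion, sampling error only."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical national poll: n = 1,000, candidate at 48%.
moe = margin_of_error(0.48, 1000)
print(f"48% +/- {moe:.1%}")  # about +/- 3.1 points

# A 48%-46% "lead" sits well inside that interval, so the poll
# alone cannot distinguish the leader from a dead heat -- and the
# interval says nothing about non-sampling error at all.
```

The interval covers sampling error alone; frame coverage, non-response and weighting errors – the likely culprits in 2016 – sit entirely outside it.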

 

The panel that followed was live at ARF headquarters while still being webcast for the hundreds of virtual attendees. It was moderated by Chris Bacon – EVP, Research & Innovation: Global Research, Quality & Innovation, The ARF – and the audience participation added significant dimensions to the presentations.

Gary Langer – President, Langer Research Associates and former Director of Polling, ABC News – noted several fundamental principles and issues, including:

  • Polling – it’s an estimate!
  • ABC never had a “leader” within the sampling error or design effects and ultimately predicted a Trump win. (A sketch of how design effects widen the error follows this list.)
  • Likely Electoral College results are derived by forecasters, not pollsters!
  • The universe of actual voters is unknown, unlike the universes for many products and services.
  • Clinton did win the popular vote by ~2%.
  • The vast share of the public thinks the country is broken.
  • There were many gaps across the geo-demographics of the samples, affecting representativeness.
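
To make the “sampling error or design effects” point concrete: weighting a sample to patch coverage gaps inflates its real margin of error. A minimal sketch using Kish’s approximation, with invented weights (not anything Gary presented):

```python
import math

def kish_design_effect(weights):
    """Kish's approximation: deff = n * sum(w^2) / (sum(w))^2."""
    n = len(weights)
    return n * sum(w * w for w in weights) / sum(weights) ** 2

# Hypothetical weights: a quarter of the sample up-weighted 2x to
# patch a coverage gap (e.g., an under-sampled demographic group).
weights = [2.0] * 250 + [1.0] * 750
deff = kish_design_effect(weights)
n_eff = len(weights) / deff          # effective sample size

moe_raw = 1.96 * math.sqrt(0.5 * 0.5 / len(weights))
moe_adj = moe_raw * math.sqrt(deff)  # weighting inflates the error

print(f"deff = {deff:.2f}, effective n = {n_eff:.0f}")
print(f"MOE: {moe_raw:.1%} unweighted -> {moe_adj:.1%} weighted")
```

The heavier the corrective weighting, the larger the design effect and the smaller the effective sample – error that a bare “n = 1,000” topline hides.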

Raghavan Mayur – President of TechnoMetrica Market Intelligence (The IBD/TIPP Poll) – emphasized the myth of looking at poll averages (the Brexit case!). He also reminded the audience that great analytics cannot overcome bad data, and pointed to a possible Democratic bias among those executing the polls. In the end he suggested that Trump’s win was primarily driven by the “enthusiasm” of both Republicans and independents versus HRC’s “lack of inspiration”. Unlike many of the panelists, Raghavan did believe there was value in perspectives from 2004, 2008 and 2012.
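
Raghavan’s caution about poll averages is easy to demonstrate: averaging shrinks the random error that differs across polls, but leaves any error the polls share untouched. A short simulation of my own, with a hypothetical 3-point shared bias:

```python
import random, statistics

random.seed(1)
TRUE_SHARE = 0.52    # hypothetical true vote share
SHARED_BIAS = -0.03  # every poll under-measures one side by 3 points

def one_poll(n=1000):
    p = TRUE_SHARE + SHARED_BIAS  # the bias is common to all polls
    yes = sum(random.random() < p for _ in range(n))
    return yes / n

polls = [one_poll() for _ in range(20)]
print(f"poll average: {statistics.mean(polls):.3f} vs truth {TRUE_SHARE}")
# Averaging 20 polls tightens the estimate around 0.49, not 0.52:
# a shared bias survives any amount of averaging.
```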

Cliff Young – President, Ipsos Public Affairs – opined that limited resources needed to be focused on the battleground states, and that ultimately turn-out measurement and modelling were as important as measuring voter intentions. “Were we using a hammer when a screwdriver was needed?” he asked, in relation to using the right research tools.

Matthew Oczkowski – Head of Product, Cambridge Analytica, the analytics team for the Trump campaign – echoed the vital importance of turn-out modelling in swing states. He mentioned that internally they had run thousands of highly localized surveys, synthesizing that data with social media analytics and other data streams to create highly targeted models at the MSA level. These models allowed them to focus on delivering the right message to the right person at the right time (sound familiar, marketers?) and thus drive voters to the polls where they were needed – hence the “flips” of Wisconsin, Pennsylvania and Michigan.
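
No methodology was shared beyond that description, but turn-out modelling of this general kind is usually built as a propensity score: a model of each individual’s probability of voting. Purely as illustration – a toy logistic regression on invented voter-file features, not Cambridge Analytica’s actual approach:

```python
import math, random

random.seed(7)

# Hypothetical voter-file rows: (elections voted in of last 4, age/100).
def make_voter():
    past = random.randint(0, 4)
    age = random.uniform(0.18, 0.90)
    # Ground-truth turnout probability, used only to simulate the label.
    p = 1 / (1 + math.exp(-(0.9 * past + 2.0 * age - 2.5)))
    return (past, age), 1 if random.random() < p else 0

data = [make_voter() for _ in range(5000)]

# Fit a logistic regression by plain batch gradient descent.
w1 = w2 = b = 0.0
lr = 0.5
for _ in range(200):
    g1 = g2 = gb = 0.0
    for (x1, x2), y in data:
        p = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))
        err = p - y
        g1 += err * x1
        g2 += err * x2
        gb += err
    n = len(data)
    w1 -= lr * g1 / n
    w2 -= lr * g2 / n
    b -= lr * gb / n

# Score one hypothetical voter: voted in 3 of the last 4, age 62.
score = 1 / (1 + math.exp(-(w1 * 3 + w2 * 0.62 + b)))
print(f"turn-out propensity: {score:.0%}")
```

Rank voters by such scores and a campaign knows where contact budget actually moves turnout; the real models obviously draw on far richer features than these two.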

Rick Bruner – VP, Research & Analytics, Viant Inc. – emphasized the huge challenge of “getting it right”, especially when 2016 was so different from the past, which is traditionally the basis for predicting the future.

Melanie Courtright – EVP, Global Client Services, Research Now – strongly pointed out that samples suffer from both sampling error and non-sampling error. With representative probability samples virtually impossible to achieve, notably online, such surveys report completion rates, not response rates – another fundamental issue. For an election poll, knowing the critical dimensions needed for a representative sample is extremely difficult, and those dimensions likely go well beyond geo-demographics.
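
The completion-rate/response-rate distinction is worth pinning down with numbers. A quick sketch with invented counts (definitions simplified, in the spirit of AAPOR’s standards):

```python
# Hypothetical outreach for one survey wave.
invited   = 20000   # sample drawn from the frame
started   = 1200    # clicked through and began the survey
completed = 900     # finished it

response_rate   = completed / invited   # denominator: everyone sampled
completion_rate = completed / started   # denominator: starters only

print(f"response rate:   {response_rate:.1%}")   # 4.5%
print(f"completion rate: {completion_rate:.1%}") # 75.0%
# A panel can truthfully report 75% while over 95% of the sampled
# frame was never heard from -- the non-response Melanie flagged.
```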

The value and importance of combining different data streams (qualitative and quantitative), implicit models, and methods in a “hybrid” approach to better predictions for future election polling was generally agreed. However, there is a need to establish best practices for each element of the art of polling. Establishing representative probability samples based on all the key respondent attributes for an election (very tough) and measuring response rates (versus co-operation rates from non-representative samples) were also highlighted as meaningful ways to understand the real range of error and achieve quality data.

 

Conclusion?

There was certainly confusion in the marketplace over two potentially different outcomes: the Electoral College result (state by state) versus the overall popular vote. Is it unrealistic to expect pin-point accuracy from an election poll or prediction? Clearly there was a cacophony of compounding, complex factors and errors. Most polls indicated a difference of roughly 2% – with most in the wrong direction.

Stay tuned for 2020!!

Editor’s Note: Want to watch the whole event online? Here you go!
