Research Technology (ResTech)

May 17, 2013

Pew Research Discusses Their Experiments With Google Consumer Surveys

Scott Keeter of Pew Research Center talked about Pew’s experiments with Google Consumer Surveys at the AAPOR annual conference in Boston.

by Leonard Murphy

Chief Advisor for Insights and Development at Greenbook


Scott Keeter of Pew Research Center discussed Pew’s experiments with Google Consumer Surveys at the AAPOR annual conference in Boston today. Scott took pains to point out that while Pew Research remains committed to rigorous, probability-based sampling for all major work, the organization wants to explore non-probability sampling for particular purposes. Working with Google was a partnership rather than a blind test, because Pew wanted to understand the methodology “under the hood”.

Google Consumer Surveys (GCS) samples respondents from online publisher websites that use a “survey wall” for access to content instead of a paywall. GCS uses quota sampling and then weights the results on inferred demographics.
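The talk didn’t go into the mechanics, but the general pattern of quota sampling followed by weighting on inferred demographics can be sketched roughly as follows. This is an illustrative Python sketch with invented quota cells, targets, and responses, not Google’s actual implementation:

```python
from collections import defaultdict

# Hypothetical age quota cells and population targets (illustrative only).
population_share = {
    "18-34": 0.30,
    "35-54": 0.35,
    "55+":   0.35,
}

# Respondents with demographics *inferred* from browsing signals,
# plus their answer to a single yes/no question (1 = yes).
respondents = [
    {"age": "18-34", "answer": 1},
    {"age": "18-34", "answer": 0},
    {"age": "35-54", "answer": 1},
    {"age": "55+",   "answer": 1},
    {"age": "55+",   "answer": 0},
]

# Observed sample share per cell.
counts = defaultdict(int)
for r in respondents:
    counts[r["age"]] += 1
sample_share = {cell: n / len(respondents) for cell, n in counts.items()}

# Post-stratification weight = population share / sample share.
weights = {cell: population_share[cell] / sample_share[cell] for cell in counts}

# Weighted estimate of the proportion answering "yes".
num = sum(weights[r["age"]] * r["answer"] for r in respondents)
den = sum(weights[r["age"]] for r in respondents)
print(f"unweighted: {sum(r['answer'] for r in respondents) / len(respondents):.3f}")
print(f"weighted:   {num / den:.3f}")
```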

Pew is not ready to make a bottom-line judgment on GCS but discussed “fit for purpose”, with different purposes including national point estimates, tracking change over time, quick-reaction measurement, pretesting question wording, open-end testing, diverse question formats, and associations between variables. If GCS is biased in a systematic way, can you still use it for trending? GCS is weakest at associations between variables, given the limit of two-question surveys.
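The trending question comes down to simple arithmetic: if the bias is roughly constant from wave to wave, it cancels when you look at change over time rather than levels. A toy illustration with invented numbers:

```python
# Toy illustration (invented numbers): a constant bias inflates both waves'
# point estimates but cancels in the wave-to-wave change.
true_wave1, true_wave2 = 0.40, 0.46   # true population proportions
bias = 0.05                           # systematic overstatement in the sample

obs_wave1 = true_wave1 + bias         # 0.45
obs_wave2 = true_wave2 + bias         # 0.51

print(f"point-estimate error: {obs_wave1 - true_wave1:.2f}")  # off by 0.05
print(f"observed change:      {obs_wave2 - obs_wave1:.2f}")   # 0.06
print(f"true change:          {true_wave2 - true_wave1:.2f}") # 0.06 -> trend preserved
```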

Looking at 52 comparisons with RDD samples, the point differences average 6.5 points and the median difference is 3.5 points. Some of these are mode differences, and some are population differences. The GCS sample is older and more highly educated than the RDD sample. Correlations by age for the inferred demographics line up with the RDD estimates, even though there is slippage in the inferred demographics: individuals might not be who Google thinks they are, but overall the demographic breakdowns worked for the questions tested.
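For readers who want to produce this kind of summary on their own comparisons, the 6.5-point average and 3.5-point median are simply the mean and median of the absolute differences between paired estimates. The figures below are placeholders, not Pew’s data:

```python
from statistics import mean, median

# Placeholder paired estimates (percentage points) for the same questions
# asked on GCS and on an RDD phone survey -- not Pew's actual comparisons.
gcs = [52, 34, 71, 18, 45]
rdd = [48, 37, 60, 17, 44]

abs_diffs = [abs(g - r) for g, r in zip(gcs, rdd)]
print(f"mean absolute difference:   {mean(abs_diffs):.1f} points")
print(f"median absolute difference: {median(abs_diffs):.1f} points")
```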

Pew has asked people about volunteering; GCS yields considerably lower estimates than RDD, yet the GCS figure is closer to the Current Population Survey benchmark, perhaps because the phone interview is more susceptible to social desirability bias. The GCS sample is also less religious, a demonstrated Internet mode effect.

Some observations:

  • In the past month, Pew ran three surveys that showed reliability over time, with stable estimates across waves.
  • For quick-reaction surveys, results on the first presidential debate didn’t line up with the RDD sample as well as results on the second debate did.
  • Pew tested an open-ended question on GCS before fielding it as an open-ended question on the phone, giving interviewers coding categories identified from the most common GCS responses (see the sketch after this list).
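That open-end workflow, taking the most common GCS responses and turning them into a coding frame for phone interviewers, can be approximated in a few lines. The responses below are hypothetical, and real open ends would need cleaning and grouping first:

```python
from collections import Counter

# Hypothetical open-ended GCS responses (illustrative only) used to derive
# closed-ended coding categories for phone interviewers.
open_ends = [
    "the economy", "jobs", "health care", "the economy", "immigration",
    "jobs", "the economy", "education", "health care", "jobs",
]

# The most frequent responses become candidate coding categories;
# everything else falls into an "other" bucket.
top_categories = [resp for resp, _ in Counter(open_ends).most_common(3)]
coding_frame = top_categories + ["other (specify)"]
print(coding_frame)
```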

Google Consumer Surveys produces results quickly and cheaply, and in time to react to specific events. It allows for the use of multiple question types. Unfortunately, because of its reliance on non-probability sampling, it is difficult to predict when it works well and when it doesn’t. Google Consumer Surveys is a work in progress.

Pew Research plans to continue using GCS for quick-reaction polls; for testing survey questions, including wording, order, and format; and for testing open-ended questions to help inform the development of closed-ended questions.

Pew is interested in exploring how well GCS can measure media use at various times of day, and it hopes to explore other non-probability methods to see how they might supplement traditional probability-based surveys, even though it won’t be using GCS for national point estimates.

In the question-and-answer session, Pew was asked whether the national point estimates were off in any specific way. Scott said that some were off by a little and some by a lot, and those that did worse tended to suffer from mode effects or vague questions. For instance, a question about looking online for health information in the past several months showed an enormous disparity, but posing the question differently produced closer estimates. Pew was unable to come up with a good theory for why certain questions had big differences.

Another question asked, “Have you shown here that we can apply weighting and modeling to non-probability panel surveys and get somewhat similar estimates to probability surveys? Is that a sign that we are good modelers even though we don’t know anything about validity?”

Scott answered that the GCS modeling is very light and optional: in the interface you can turn the weights off, and doing so makes only a point or two of difference. GCS does use quota-based sampling to build the sample, and that can’t be changed.
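To make the “turn the weights off” remark concrete, here is a minimal weighted-versus-unweighted comparison. The responses and weights are invented; the point is only that light weights tend to move an estimate by a point or two:

```python
# Invented responses (1 = yes) and light post-stratification weights.
answers = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
weights = [1.0, 0.9, 1.0, 1.1, 0.9, 1.0, 0.9, 1.1, 1.0, 1.0]

unweighted = sum(answers) / len(answers)
weighted = sum(w * a for w, a in zip(weights, answers)) / sum(weights)

print(f"weights off: {unweighted:.3f}")
print(f"weights on:  {weighted:.3f}")   # typically only a point or two apart
```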

A panelist said, “I came away from this experience with non-probability modeling realizing that if you are sampling from a relatively large and heterogeneous frame of people visiting websites, then your need for modeling may be greatly reduced, but if you have an opt-in panel you have to do more modeling.”

 

Editor’s Note: Reg Baker has his take on this topic on TheSurveyGeek blog as well. It’s a bit more skeptical, as you’d fully expect from Reg. Check it out.
