CEO Series

March 22, 2019

Online Sample Quality – 4 years later

Four years after a provocative article of observations from inside the online sample machinery.


by Scott Weinberg


Editor’s Note: Keen-eyed readers of this blog will have noticed several recent articles about sample quality, or sometimes the lack thereof, and what to do about it. Here, Scott Weinberg adds his POV on the pervasiveness of bad actors and the actions needed to help researchers deal with the issue. Good researchers must always be on their guard. The good news is that there are solutions, based on technologies like blockchain, that could help significantly; they need to become more pervasively employed.


In January 2015, GreenBook published my observations from several years with a front-row seat inside the online sample business. That article broke some glass, opened doors, generated more articles, gave me my 15 minutes, etc. Life moved on.

Except… a few times a year, still, someone reaches out to me, either to thank me or to commiserate. Moreover, three panel firms reached out to me along the way to show how they’re tackling the quality issue(s).

So, what triggered this follow-up? Three months ago, the head of a (non-USA) MR firm reached out to me. Their firm is experiencing issues finding quality sample sources. In fact, they were running their own in-house set of parallel tests on a tracker across various sources (Google, Survata, traditional, programmatic) to approach the issue with rigor. We got to talking.

Sidebar: they were at an industry conference last year and said a sales rep was proudly offering ‘$1 CPIs.’ I guess it had to happen. Maybe it’s been happening for a long time; I don’t monitor CPIs anymore.

At that point, during their RoR tracker source scrutiny, the numerical scores came in consistently. So that’s good. On a subsequent regular study soon after, the supplier (a major one) presented a data set infested with ridiculousness: at least one-third of the records were bogus. The bad apples were spotted only via open-end comments, by the way. I was forwarded the supplier’s apology-slash-explanation. An excerpt is below:

“Our sincere apologies for the fraud that occurred on these projects. We have gone ahead and blocked those users on our end so they won’t be able to complete any more surveys.

We can confirm that some of those completes are from the same person creating multiple accounts on our panel, using different IP addresses and completing the study over and over again. The user has different information on file for each account and the user passed all of our security checks as well as a third parties security checks that we use for additional insurance against this type of behavior. While other cases are of users who were casual in their responses.

We have gone ahead and blocked those users on our end so they won’t be able to complete any more surveys. Also, we are currently researching the recruitment source this user came in from and trying to find more information there to make sure this individual or similar other ones can no longer enter and create multiple accounts without being detected.”

A Few Points

  • Note the issue was spotted ONLY via open-end comment questions. Most MR surveys don’t include even one. I’ve long advocated having at least one OE, for this very reason. I also received the raw data file of the bad apples, including IP addresses: the IPs differ, but the gibberish repeats.
  • The “user has different information on file for each account” and “passed security checks / 3rd party checks.” Yes. Well. At least they’re trying? I’ve talked to the folks involved with identity verification and related checks, and they’re as passionate and committed to this topic as anyone, and their tech works. I absolutely advocate their use, if this stuff matters to you too. One bad apple finding their way in will spoil the bunch, so add an OE or something similar to identify them, and use these verification firms to support your efforts. At the very least, have a chat with them and let them make their case to you.
  • Good quality costs. The downward spiral of CPI pricing ensures you’ll receive the inverse. So pay the extra buck or three to your panel provider(s), and stress why you’re making this budgetary choice to your customers, be they internal or external. Otherwise, this is like going to a restaurant, ordering the cheapest item on the menu, and expecting it to also be the best item on the menu. Why do we do this?
  • It’s far too easy to blame sample suppliers for this larger issue. Too easy, and misguided. Are you managing upward communication with your buyers, with the simple message that ‘quality costs here, just like everything else in life’? I know we do when discussing the project plan in general. Kudos to those who emphasize to their buyers that care in opinion sourcing may mean paying a premium for it. Failing that, deliverables may be delayed for manual QA along the way. Automation is great in many ways, including QA, but the old-school eyeball QA method is still hard to beat.
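The gibberish-repeat pattern described above lends itself to a simple automated first pass before the eyeball QA: flag any records whose open-end text repeats across different respondents, regardless of IP. A minimal sketch, assuming hypothetical field names (`respondent_id`, `oe_comment`) that your data file may name differently:

```python
import re
from collections import defaultdict

def normalize(text):
    """Lowercase and strip punctuation/whitespace so trivially
    varied copies of the same gibberish collide."""
    return re.sub(r"[^a-z0-9]+", "", text.lower())

def flag_duplicate_open_ends(records, oe_field="oe_comment", min_dupes=2):
    """Return the set of respondent IDs whose normalized open-end
    text appears under min_dupes or more different respondents."""
    seen = defaultdict(list)
    for rec in records:
        seen[normalize(rec[oe_field])].append(rec["respondent_id"])
    flagged = set()
    for ids in seen.values():
        if len(ids) >= min_dupes:
            flagged.update(ids)
    return flagged

records = [
    {"respondent_id": "r1", "ip": "1.2.3.4", "oe_comment": "Great product!!"},
    {"respondent_id": "r2", "ip": "5.6.7.8", "oe_comment": "great product"},
    {"respondent_id": "r3", "ip": "9.9.9.9", "oe_comment": "Too expensive for me."},
]
print(sorted(flag_duplicate_open_ends(records)))  # ['r1', 'r2']
```

This only catches verbatim or near-verbatim repeats; it won’t replace a human read of the open ends, but it shrinks the pile that the human has to read.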

Thank you for reading this follow-up. I look forward to your comments, and I’m always happy to chat with others interested in this topic. Lately, I’ve been getting up to speed on what may be a new methodology for the MR/Insights space; I hope to write about that later this year. Thank you!
