Editor’s Note: Keen-eyed readers of this blog will have noticed several articles recently about sample quality, or sometimes the lack thereof, and what to do about it. Here, Scott Weinberg adds his POV on the pervasiveness of bad actors and actions that need to be taken to help researchers deal with the issue. Good researchers must always be on their guard. The good news is that there are solutions based on things like blockchain that could help significantly; they need to become more pervasively employed.
In January 2015, GreenBook published my observations of several years of a front row seat within the online sample business. That article broke some glass, opened doors, generated more articles, gave me my 15 minutes, etc…life moved on.
Except that, a few times a year, someone still reaches out to me, either to thank me or to commiserate. Moreover, three panel firms have reached out along the way to show me how they’re tackling the quality issue(s).
So, what triggered this follow-up? Three months ago, the head of a (non-USA) MR firm reached out to me. Their firm has been struggling to find quality sample sources. In fact, they were running their own in-house parallel test on a tracker across various sources (Google, Survata, trad, programmatic) to approach the issue with rigor. We got to talking.
Sidebar: they were at an industry conference last year where, they said, a sales rep was proudly offering ‘$1 CPIs.’ I guess it had to happen. Maybe it’s been happening for a long time; I don’t monitor CPIs anymore.
During that RoR tracker source scrutiny, the numerical scores came in consistently, which is good. On a subsequent regular study soon after, though, the supplier (a major one) presented a data set infested with ridiculousness: at least one-third of the records were bogus. The bad apples were spotted only via open-end comments, by the way. I was forwarded the supplier’s apology-slash-explanation; an excerpt is below:
“Our sincere apologies for the fraud that occurred on these projects. We have gone ahead and blocked those users on our end so they won’t be able to complete any more surveys.
We can confirm that some of those completes are from the same person creating multiple accounts on our panel, using different IP addresses and completing the study over and over again. The user has different information on file for each account and the user passed all of our security checks as well as a third parties security checks that we use for additional insurance against this type of behavior. While other cases are of users who were casual in their responses.
We have gone ahead and blocked those users on our end so they won’t be able to complete any more surveys. Also, we are currently researching the recruitment source this user came in from and trying to find more information there to make sure this individual or similar other ones can no longer enter and create multiple accounts without being detected.”
A Few Points
- Note the issue was spotted ONLY via open-end comment questions. Most MR surveys don’t have even one. I’ve advocated forever: include at least one OE, for this very reason. Also, I received the raw data file of the bad apples, including IP addresses, which are all different (the gibberish repeats, though).
- Note too: the “user has different information on file for each account” and “passed all of our security checks” as well as third-party checks. Yes. Well. At least they’re trying? I’ve talked to the folks involved with identity verification and related checks, and they’re as passionate and committed to this topic as anyone, or more so. And their tech works. I absolutely advocate their usage, if this stuff matters to you too. One bad apple finding its way in will spoil the bunch, yes; ergo, add an OE or something similar to ID them, and use these firms to support your efforts. At least have a chat with them and let them make their case to you.
- Good quality costs. The downward spiral of CPI pricing ensures you’ll receive the inverse. So pay the extra buck or three to your panel provider(s)! Stress why you’re doing this budgetary alignment to your customers, be they internal or external. Otherwise, this is like going to a restaurant and ordering the cheapest item on the menu, expecting it to also be the best item on the menu. Why do we do this?
- It’s far too easy to blame sample suppliers for this larger issue. Too easy, and misguided. Are you managing upward communication with your buyers, with the simple message that ‘quality costs here, just like everything else in life’? I know we do when discussing the project plan in general. Kudos to those who emphasize to their buyers that care in opinion sourcing may mean paying a premium for it. Failing that, deliverables may be delayed for manual QA along the way. Automation is great in many ways, including QA, but the old-school eyeball QA method is still hard to beat.
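The open-end check described above lends itself to a simple automated first pass. Below is a minimal sketch, assuming survey records with hypothetical `respondent_id`, `ip`, and `open_end` fields (all names are illustrative, not from any real panel platform): it normalizes each open-end verbatim and flags responses that repeat across records, even when the IP addresses differ, as in the case described in this article. It is a complement to, not a substitute for, eyeball QA.

```python
from collections import defaultdict

def flag_duplicate_open_ends(records, min_repeats=2):
    """Flag respondent IDs whose open-end text repeats across records.

    Trivial edits (case, extra spaces) are normalized away so near-identical
    gibberish still groups together. Different IPs do NOT exonerate a record:
    the fraud case above used a distinct IP for every duplicate account.
    """
    groups = defaultdict(list)
    for rec in records:
        # Normalize: lowercase and collapse whitespace.
        key = " ".join(rec["open_end"].lower().split())
        groups[key].append(rec)

    flagged = []
    for text, recs in groups.items():
        # The same verbatim text appearing min_repeats+ times is suspect.
        if len(recs) >= min_repeats:
            flagged.extend(r["respondent_id"] for r in recs)
    return sorted(flagged)

# Illustrative data: two records share the same gibberish despite distinct IPs.
sample = [
    {"respondent_id": "r1", "ip": "10.0.0.1", "open_end": "asdf qwer zxcv"},
    {"respondent_id": "r2", "ip": "10.0.0.2", "open_end": "Asdf  qwer zxcv"},
    {"respondent_id": "r3", "ip": "10.0.0.3", "open_end": "I liked the taste."},
]
print(flag_duplicate_open_ends(sample))  # → ['r1', 'r2']
```

A real deployment would want fuzzier matching (shingling, edit distance) to catch lightly varied copy-paste answers, but even this exact-match pass would have caught the repeated gibberish in the case above.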
Thank you for reading this follow-up. I look forward to reading your comments, and I am always happy to chat with others interested in this topic. Lately, I’ve been getting up to speed on what may be a new methodology for the MR/Insights space; I hope to write about that later this year. Thank you!