
January 20, 2015

Is Online Sample Quality A Pure Oxymoron?

Why is nobody here addressing the elephant in the room? It’s not just sample quality. It’s survey quality.

by Scott Weinberg

Editor’s Note: It must be something in the air, because panels, online sample, and the interface of technology and quality have been hot topics lately. So far this year alone I have engaged in four different advisory conversations with investors on this topic, which has never happened before. It’s no surprise, though: online sampling is now the backbone of market research globally. Whether we are engaging respondents on mobile devices or PCs, the same principles apply: personal online access is ubiquitous globally, and programmatic buying for ad delivery, predictive analytics, and online panels/sampling are BIG business. REALLY BIG business, and it’s only going to get bigger.

That being the case, issues around quality, and how we ensure it remains the primary factor as the industry continues to maximize the mix of speed, cost, and value, will only grow in importance over the next few years. And that brings us to Scott Weinberg’s call-to-action post today. Scott doesn’t pull any punches, and his concerns hark back to Ron Sellers’ post a few years ago on the “Dirty Little Secrets” of online panels. I believe we have made progress in this area and that some suppliers remain clear leaders in the quality arena, but this is an issue we shouldn’t take our eyes off, and Scott reminds us why.

By Scott Weinberg

I attended a CASRO conference in New Orleans back in late ’08 or early ’09. The topic was ‘Online Panel Quality.’ I’ve often thought about that conference: the speakers, the various sessions I attended. I recall attending the session about ‘satisficing,’ which at the time was newly being introduced into the MR space (the word itself goes back decades); I thought it was an interesting expression for a routine occurrence. Mostly, however, I remember the hand-wringing over recruitment techniques, removing duplicates, digital fingerprinting measures, and related topics du jour. And I remember thinking to myself, for two days non-stop: ‘Are you kidding me?’ Why is nobody here addressing the elephant in the room? It’s not just sample quality. It’s survey quality.

Allow me to explain where I’m coming from. My academic training is in I/O Psychology. Part of that training involves deep dives into survey design. Taking a 700-level testing & measurements course for a semester is a soupçon more rigorous than hearing ‘write good questions.’ For example, we spent weeks examining predictive validity, both as a measurement construct and in terms of how it has held up in courtrooms. More to the point, when you’re administering written IQ tests, or psych evals, or (in particular) any written test used for employment selection, you are skating on thin ice, legally speaking. You open yourself up to all kinds of discrimination claims. Compare writing a selection instrument that will withstand a courtroom challenge with writing a CSAT or ‘loyalty’ survey. Different animals, perhaps, but both are Q&A formats: a question is presented, and a reply is requested. Yet the gulf in training for constructing MR-type surveys is visible to anyone viewing the forest in addition to the trees.

An MR leader in a huge tech company said something interesting on a call I remember vividly. He asked: ‘When is the last time you washed your rental car?’ The context here pertained to online sample. And he was one of the few, very few really, that I’ve encountered in the 12 years I’ve been in that space who openly expressed the problem. The problem is this: why would you ever wash your rental car? Why change the oil? Why care for it at all? You use it for a day, or a week, and you return it. Online respondents are no different. You use them for 5 minutes, or 20, and return them. If we actually cared about them, the surveys we offer them wouldn’t be so stupefyingly poorly written.

I’ve seen literally hundreds of surveys that have been presented to online panelists, and I’ve been a member of numerous panels as well. Half of these surveys are flat-out laughable: filled with errors, missing a ‘none of the above’ option, requiring one to evaluate a hotel or a restaurant they’ve never been to. Around a quarter consist of nothing but pages of matrices. Matrices are the laziest type of survey writing. Sure, we can run data reductions on them and get our eigenvalues to the decimal point. Good for us. And the remaining quarter? If you’re an online panelist, they’re simply boring. Do I really want to answer 30 questions about my laundry detergent? For a dollar? Ever think about who is really taking these surveys?

Sidebar: do you know who writes good surveys? Marketing people using DIY survey software. Short, to-the-point surveys. Three minutes. MR practitioners hate to hear it, or even think about it, but that’s reality. I’ve seen plenty of these surveys by ‘non-experts.’ They’re not only fine, but they get good, useful data from their quick-hit surveys.

Since you’ve made it this far, it’s time to bring up the bad news. I’ve been accumulating a lot of stories over the last 12 years. I’ll share a few. These all happened, and I’m not identifying any person or firm, so please don’t ask.

  • Having admin rights to a live commercial panel, I found a person with 37 accounts (there was a $1 ‘tell a friend’ recruitment carrot). I also found people with multiple accounts and a staggering number of points, to the point of impossibility.
  • The sales rep who claimed to be able to offer a ‘bipolar panel’ and sold a project requiring thousands of completes from respondents with a bipolar or schizophrenia diagnosis.
  • The other sales reps I know personally (at least five) who make $20,000-$30,000 per month selling sample projects. Hey, Godspeed, right? Thing is, not a one could tell you what a standard deviation is, let alone the rudimentary aspects of sampling theory. Don’t believe me? Ask them. Clearly, knowing these things is not a barrier to success in this space. Just a pet peeve of mine.
  • Basically, this entire system runs on highly paid deli-counter employees. ‘We can offer you 2 lbs of sliced turkey, a pound and a half of potato salad, and an augment of coleslaw, for this CPI.’ Slinging sample by the pound, and letting the overworked and underappreciated sample managers handle the cleanup and backroom top-offs.
  • The top-10 global MR firm that finally realized its years-long giant tracker was being filled largely with river sample, which was strictly prohibited.
  • Chinese hacker farms have infiltrated several major panels. I know this for a fact (as do many others). You can digitally fingerprint and whatnot all day long; they get around it. They get around encrypted URLs. Identity corroboration. You name it, they get around it.
  • The needle-in-a-haystack B2B project that was magically filled overnight, the day before it was due.
  • Biting my tongue when senior MR execs explained to me that their research team insists on 60-minute online surveys, and they’re powerless to flush out their headgear.
  • Biting my tongue when receiving 64-cell sampling plans: the myopic obsession with filling demographic cells to the exclusion of any other attributes, such as: who are these respondents? Are you projecting them out to non-panelists as if they’re one and the same?
  • A team of interns inside every major panel, taking the surveys, guessing the end client, and sharing that with the sales team in a weekly update.
  • Watching two big global panels merge and scrutinize for overlap/duplicates across 12 countries. The USA had 18% overlap; the rest (mostly Europe) had 10%. Is this bad? No idea. Maybe it’s normal.
  • Most online studies are at least partially filled with river sample (is anyone surprised by this?).
  • Infiltration of physician panels by non-physicians.
  • The origin of the original ‘Survey Police’ service.
  • Visiting the big end client for the annual supplier review and watching them (literally) high-five each other over who wrote the longest online survey. The ‘winner’s’ was 84 questions. We had performed a drop-off analysis (a rough sketch of that kind of analysis follows this list), which fell on deaf ears.
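
For what it’s worth, a drop-off analysis is cheap to run. Here is a minimal sketch in Python, assuming a respondent-level session export with hypothetical column names (last_question_reached, completed); it is not any particular platform’s format, but most fieldwork systems can produce something equivalent.

    # Minimal drop-off (abandonment) analysis sketch. Column names
    # ("last_question_reached", "completed") are illustrative
    # assumptions, not a real platform's export schema.
    import csv
    from collections import Counter

    def dropoff_table(path: str, total_questions: int) -> None:
        """Print, for each question, how many respondents quit there."""
        quit_at = Counter()
        starters = 0
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                starters += 1
                if row["completed"] != "1":  # count abandoners only
                    quit_at[int(row["last_question_reached"])] += 1

        remaining = starters
        for q in range(1, total_questions + 1):
            dropped = quit_at[q]
            pct = 100.0 * dropped / remaining if remaining else 0.0
            print(f"Q{q:>2}: {remaining:>6} still in, {dropped:>5} quit here ({pct:.1f}%)")
            remaining -= dropped

    dropoff_table("sessions.csv", total_questions=84)

A table like this makes the cost of question 60-and-beyond painfully visible, which is exactly why it fell on deaf ears.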

Lastly, and for me the saddest of my observations, are the new mechanics of sample purchasing. The heat and light on sample quality that peaked about four years ago has been in steady decline. In the last couple of years, sample quality is simply assumed. End-client project sponsors assume their suppliers have it covered. The MR firms assume their suppliers have it covered. And the sad part? The sample buyers at MR firms, and I’ve seen this countless times, do not receive trickle-down executive support for paying a bit more for the sample supplier who actually is making an effort and an investment to boost sample quality, via validation measures for example. There are exceptions to this, or were, in the form of CPI premiums, but there is no widespread market acceptance of paying a buck or three more. In fact, the buying mechanics are simple: get 3-4 bids, line them up, and go with the cheapest CPI, assuming the feasibility is there. This happens daily, and has for years. And by cheaper, I’m talking 25 cents cheaper. Or 3 cents. That’s what this comes down to. So chew on this: why would a sample supplier pour money down the quality rabbit hole? Quality is not winning them orders. Margin is. Anyone working behind the scenes has seen this movie, many times. Incidentally, there’s nothing wrong with buying on price; we all do it in our daily lives. The point is this: if you’re going to enforce, or even expect, rigorous sample quality protocols from your suppliers, then give your in-house sample buyers the latitude to reduce your project margins. I won’t hold my breath on this, but that’s what it takes.

I could go on, but more is not necessarily better. This is the monster we’ve created: $2 and $3 CPIs have a ripple effect. How can a firm possibly invest in decent security architecture at prices like this? How can we expect them to? If you’re buying $2 sample, why not go to the source and spend 50 cents?

Now that I’ve thoroughly depressed you, one may wonder: is there any good news? I remember telling a colleague five years ago, ‘If a firm with a bunch of legitimate web traffic, like Google, ever got into this racket, they would upend this space.’ I didn’t think that would actually happen, but there you go (that one may still be depressing to some). I also believe that ‘invite-only’ panels give the best shot at good, clean sample. When you open your front door to anyone with a web connection and tell them there’s money to be made, well, see above. More recently I’ve become a convert to smartphone-powered research. Many problems are removed. It has its own peculiarities, but from a data integrity perspective, it’s hard to beat. Lastly, and I could do a whole other riff on this: when we design surveys with no open-end comment capture, we’re hoisting an ‘open for business’ sign to fraudulent activity. Yes, you can add the ‘please indicate the 4th option in this question’ trick, but both bots and human pros spot red herrings like that. It’s much more difficult to fake good, in-context open-ended verbiage. Yes, it takes a bit more work on the back end, and there are many solutions that can assist with this, one in particular. And the insights you can now share via this qual(ish) add-on are a nice change of pace relative to the presentation of trendlines and decimal points.
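
To make that open-end point concrete: even a crude automated screen catches a lot of the junk before a human ever reads it. Here is a minimal sketch; the thresholds are illustrative assumptions, not tuned values, and real screening would add cross-respondent duplicate detection plus relevance checks against the question asked.

    import re

    def flag_open_end(text: str, min_words: int = 4) -> list:
        """Return reasons an open-ended verbatim looks suspicious.
        Crude heuristics only: length, keyboard mashing, and
        copy-paste repetition."""
        flags = []
        words = text.split()
        if len(words) < min_words:
            flags.append("too short")
        if re.search(r"(.)\1{4,}", text):  # e.g. "aaaaa"
            flags.append("repeated characters")
        if words and len(set(w.lower() for w in words)) <= len(words) // 2:
            flags.append("highly repetitive wording")
        return flags

    for verbatim in ["asdf asdf asdf", "good good good good",
                     "The checkout page kept timing out on my phone."]:
        print(verbatim, "->", flag_open_end(verbatim) or "looks OK")

None of this replaces reading the verbatims; it just ranks which ones deserve a human look first.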

That’s all for now. Thank you for reading.


Tags: online research, sample quality, survey design, surveys

