Editor’s Note: When I was just starting this adventure in social media, Ron Sellers sent me a copy of the first Dirty Little Secrets of Online Panels report. It was chock full of the type of information that I felt all in the MR community should know and I did my best to help it get visibility. That was also one of the reasons that I asked Ron early on to join me as a contributing author on the GreenBook Blog; he has a real gift for cutting through to the heart of issues and taking a pragmatic approach to addressing them. When Ron told me he was going to revisit the issue of panel quality I thought it was a great idea and support the effort 100%.
That said, I need to post a disclaimer here: GreenBook has no official position on this project nor have we endorsed it. We support having an open and honest dialogue about all issues that impact our industry and believe that only through total transparency can we achieve that. In that spirit we support Ron’s efforts to share what he has found through his research and hope that this can be used to further the conversation about the future of market research. If you’re an online sample provider you might not like what you read here, but I hope that you’ll use it as an opportunity to engage with the industry on these issues and help us all develop new practices that support our collective vision of a dynamic and growing field.
By Ron Sellers
Online access panels. Love them or hate them, the reality is that if you're in quantitative research, you'll probably use them sooner or later.
In 2009, Grey Matter Research ran a little internal test on a few panels we had used or were considering. We arranged for a selection of mystery shoppers to sign up for each panel and be typical respondents for a month.
What we found encouraged us to expand our test to include 12 major panels, and take the findings public. The result was the report Dirty Little Secrets of Online Panels, which burned up Twitter feeds and LinkedIn comments, and was requested by researchers from as far away as Finland, Japan, and South Africa.
Well, we're at it again. After a few panel mergers, plus requests about panels we didn't include the first time, it's time for More Dirty Little Secrets of Online Panel Research. e-Rewards. Toluna. Clear Voice. Surveyhead. Opinion Outpost. MySurvey. These and six more were evaluated from the perspective of the typical panel member.
Why should researchers care much what panel members are experiencing? We pay a panel provider or a panel broker, get our N size, toss out the obvious cheaters, and use the data. Right? Well…
Imagine you’ve crafted a relatively short, engaging questionnaire that respects my time as a respondent. However, yours is the tenth questionnaire in a row that I’ve completed that morning, and many of the others were long, boring, and irrelevant. I’m tired and inattentive. Now just how reliable is your data?
Or let's say that I've attempted 12 different questionnaires this morning before trying yours. One of them asked me ten minutes' worth of questions before telling me I wasn't qualified (and tossing me out with no reward). One of them froze when I was mostly done. Another told me I wasn't qualified and kicked me out before I could answer a single question. Two more were called "surveys" but were actually attempts to get me to compare car insurance rates. Five were already closed by the time I tried to respond, even though the invitations were all sent yesterday or today. I was disqualified from two more because I don't own a pet, even though I stated in my panel profile that I have no pets. I'm tired, I'm frustrated, I'm annoyed, and now I'm evaluating a new product concept that you really hope I'll like. Now just how reliable is your data?
These aren’t just hypothetical situations – these are real situations we found in our work with these panels. Plus, multiple other problems:
- The panel that gave us opportunities to complete 50 to 60 questionnaires in a row, non-stop
- The panel on which over four out of ten studies were closed less than 24 hours after invitations were sent, and which closed some studies in as little as one or two hours
- The panel that sent two of our panelists 61 survey invitations in just one month
- The panel that pays its respondents the equivalent of $2.67 per hour
- The panel that sent one of our panelists 15 survey invitations over a two-day period
- The panel that carries advertising on its website – are panelists seeing your competitors’ ads before they answer your surveys?
Of course, there were also much better situations, such as the panel that actually prevents panelists from completing more than one questionnaire per week…the panel that actually pays an average of over $8 an hour to respondents as incentives…and panels that invite people to eight or ten surveys a month, rather than 50 or 60. It’s not all bad news.
It’s very easy to gloss over fieldwork or let someone else worry about it. Let’s face it, finding and interviewing respondents is not the most exciting element of research, no matter whether it’s RDD dialing, focus group recruiting, or access panel interviewing.
But always keep in mind that you are depending on these people to give you input that you will use in critical business decisions. Paying them pennies, giving them boring, lengthy, or irrelevant surveys, frustrating them with multiple closed studies, and bombarding them with opportunity after opportunity is most definitely not how you want to treat people upon whom you are depending for your success. And if you or your research vendor is not paying attention, this is exactly what may be happening in your research.
This post has addressed some of the problems that exist in panels. In my next blog post, I’ll focus on what we as researchers can do to avoid some of these pitfalls and give our research a better opportunity for success.
And if you’d like a copy of More Dirty Little Secrets of Online Panel Research, shoot me an e-mail at ron AT greymatterresearch.com (in the normal e-mail format, of course).