By Phil Moyer, Senior Director, Crowd Operations, InCrowd
The latest Greenbook Research Industry Trends (GRIT) Report delves into the topic of sample quality, and whether technology is improving the situation or making matters worse. Many new vendors and methods are now available for accessing physician samples, and Greenbook raises an important and timely issue facing the life sciences market research industry.
“Is sample getting better or worse?” the GRIT report asks.
Perhaps unsurprisingly, the researchers found that answers to “sample quality” depended at least in part on professional affiliation.
“If you are in a full service firm, focus facility or a corporate researcher, you are far more likely (42% and 43% of respondents) to say ‘worse’,” the report said. “If you work for a data collection provider or sample provider, then it’s the opposite, with 56% and 46% saying ‘better’.” The divide likely reflects how effectively each organization is leveraging technology to improve sample quality.
The “worse” camp is right to worry. Many who responded to GRIT “have a strong sense that there are only professional survey takers and fraudulent bots that are taking all the surveys because there is a race to the bottom in terms of cost.”
Those who see sample quality improving point out that technological advances don’t just cut costs, they can directly address many of the new and longstanding challenges to gathering reliable, high quality data.
With so many new entrants into the sample industry, and all vying for the attention of some of the busiest professionals we know – doctors, if you are in the life sciences – it is always wise to ask tough questions about your sample. Here are a few things to watch for:
- Fraudulent Respondents: Are they really physicians? Technology allows people to pose as doctors or to write code that infiltrates online recruitment channels. Yet technology can also be used to stop them. Professional databases of medical licenses and NPI numbers can be accessed for instant validation, ensuring that the physicians joining your panel are who they say they are. That’s just the first step. The second step is personal validation, or knowledge-based authentication, drawing on public records. The respondent is asked questions like, “What car were you driving in 2011?” or, “What road did you live on in Boston?” Answering these questions is easy for a real person but quickly exposes bots and human impostors.
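As a small concrete illustration of the validation idea: NPI numbers carry a built-in check digit, computed with the Luhn algorithm over the 10-digit number prefixed with the constant 80840, so a panel platform can reject malformed IDs before ever querying a registry. A minimal Python sketch; note this verifies only the checksum, not that the NPI actually belongs to the person registering:

```python
def npi_checksum_valid(npi: str) -> bool:
    """Validate the check digit of a 10-digit NPI using the Luhn
    algorithm over the number prefixed with '80840', per the CMS
    NPI specification. Format check only; not proof of identity."""
    if len(npi) != 10 or not npi.isdigit():
        return False
    digits = [int(d) for d in "80840" + npi]
    total = 0
    # Walk right to left, doubling every second digit.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(npi_checksum_valid("1234567893"))  # True: CMS example NPI
print(npi_checksum_valid("1234567890"))  # False: bad check digit
```

A passing checksum is only a gate; full validation still requires a lookup against the NPPES registry and the license databases described above, followed by knowledge-based authentication.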
- Repeat Respondents: One fear many market researchers have is that, with a highly specialized target audience, they are getting the same individuals responding to their surveys. In this case, technology can be the market researcher’s best friend. For example, there are many physician databases, sample exchanges, and online communities that can be connected for wide and deep reach, and market researchers should make sure their program is taking advantage of the most basic technology for doing this: the API. In addition, platforms can use powerful algorithms that allow a randomized yet controlled survey invitation flow. Traditional online surveys typically send one mass invitation to all eligible respondents, even though the desired n size is significantly smaller. This means that a large portion of the sample will have a bad experience (arriving at a survey that has already closed) or receive invitation after invitation. A good sampling algorithm, on the other hand, sends a small, targeted batch of invitations, monitors the responses, and recalculates every 15 to 60 minutes how many more to send, based on a wide variety of factors, including when each panelist last responded and how quickly. This allows for very precise sampling and response rates, which keeps respondents happy and reliable.
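The batching logic described above can be sketched in a few lines. This is a simplified illustration under stated assumptions, not any vendor's actual algorithm: the function names, the 2% floor on the response rate, and the 10% over-invite margin are all hypothetical choices:

```python
import math
import random

def next_batch_size(target_n, completes, invites_out,
                    floor_rate=0.02, over_invite=1.10):
    """Size the next invitation batch from the response rate observed
    so far. floor_rate avoids divide-by-huge-batch on the first cycle;
    over_invite adds a small safety margin. Both values are assumptions."""
    remaining = target_n - completes
    if remaining <= 0:
        return 0  # quota filled: close the survey rather than invite more
    rate = max(completes / invites_out, floor_rate) if invites_out else floor_rate
    return math.ceil(remaining * over_invite / rate)

def pick_batch(eligible, k, seed=None):
    """Randomly sample k panelists from the eligible pool, so each
    person has an equal chance of being invited this cycle."""
    rng = random.Random(seed)
    return rng.sample(eligible, min(k, len(eligible)))

# e.g. target of 50 completes; 200 invites out, 12 completes so far
# observed rate 6%, 38 still needed -> invite 697 more this cycle
print(next_batch_size(50, 12, 200))  # 697
```

In a real platform the recalculation would also weight per-panelist history (recency and speed of past responses, as noted above) rather than sampling uniformly.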
- Lazy Respondents: Every survey has outliers: members who aren’t engaged or are providing nonsense answers. This is where automated and manual survey-level sample validation is critical. The right software can easily spot speeding (moving through the survey too quickly), straight-lining (selecting the same answer option all the way down), suspicious IP addresses (multiple respondents from the same address), and garbage open-ends (random strings of letters). No single signal may be sufficient to disqualify a response set, but software and human quality assurance specialists work hand in hand to flag suspicious respondents, compare the data to that respondent’s history, and either remove them or set them aside for further review. Our firm believes strongly in doing this in real time, which greatly reduces the chance that organizations will receive questionable responses.
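The automated checks above can be expressed as a simple rule set. A sketch in Python; the field names and thresholds are illustrative assumptions that a real platform would tune per study, and the flags feed a human review queue rather than triggering automatic removal:

```python
from collections import Counter

def quality_flags(resp, median_seconds, ip_counts):
    """Return advisory quality flags for one response. Assumed inputs:
    resp dict with 'seconds', 'grid_answers', 'ip', 'open_end';
    median_seconds is the study's median completion time;
    ip_counts is a Counter of completes per IP address."""
    flags = []
    # Speeding: finished in under a third of the median completion time.
    if resp["seconds"] < median_seconds / 3:
        flags.append("speeding")
    # Straight-lining: every grid answer identical across a long grid.
    grid = resp.get("grid_answers", [])
    if len(grid) >= 5 and len(set(grid)) == 1:
        flags.append("straight_lining")
    # Duplicate IP: several completes from the same address.
    if ip_counts[resp["ip"]] > 2:
        flags.append("duplicate_ip")
    # Garbage open-end: too short, or no vowels (e.g. "sdfgh").
    text = resp.get("open_end", "").strip()
    if text and (len(text) < 5 or not any(c in "aeiouAEIOU" for c in text)):
        flags.append("garbage_open_end")
    return flags

resp = {"seconds": 60, "grid_answers": [3, 3, 3, 3, 3, 3],
        "ip": "1.2.3.4", "open_end": "sdfgh"}
print(quality_flags(resp, 300, Counter({"1.2.3.4": 3})))
# ['speeding', 'straight_lining', 'duplicate_ip', 'garbage_open_end']
```

Keeping each check independent, and returning a list of flags instead of a verdict, matches the point above: one signal alone rarely disqualifies a respondent, but the combination routes the case to a QA specialist.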
Sample quality is the fundamental building block for actionable insights. Any doubts about the validity of your respondents or their answers will compromise your ability to make solid decisions and strategic recommendations, and undermine your entire market research project. With technology, market research is making huge strides in all aspects, including sample quality. Just make sure you know what questions to ask.