
6 “Back to Basics” Steps Researchers Should Practice

With all the current buzz topics in market research, it's also important to focus on strong fundamentals, including sample and data quality. With over 20 years in the research industry, Brian Lamar provides some practical advice for maintaining data and sample quality.

I’ve been fortunate to work in research for over 20 years and have seen the industry evolve in so many ways. I started at a small market research company in Lexington, Kentucky, as a telephone interviewer while I was an undergrad at the University of Kentucky. Each day I came in to work, my manager would brief me on the studies I was going to work on that day, making sure I knew the questionnaire as well as possible. We’d go through the questionnaire thoroughly, we’d role play, and she’d point out areas where the client wanted additional focus. The amount of preparation seemed like overkill to me, but I played along and occasionally had my feedback incorporated into the questionnaire. These meetings went on for a couple of years – every single day, every single study. And not just with me – all of us interviewers had to go through this process. I think I still have most of a utility questionnaire memorized.

Later, at a different company in New York, I managed a telephone tracking study as a project manager. Like those of most project managers, my days were hectic with all of the different tasks you do to support clients. About once a month I would go over to the phone center and monitor the telephone interviews. I would sit in a briefing similar to the ones from my initial role as a telephone interviewer. This briefing was on an entirely different level, though. The supervisor would have an entire room full of interviewers, and they’d review the questionnaire(s) much as I had, only these interviewers were far more detail-oriented and critical, and they did a very thorough QC of every study before it launched. They wouldn’t just offer suggestions; they’d campaign for changes and explain how important it would be to make them. Each month I would receive a lengthy list of changes, and it was frustrating to go through them and determine which suggestions were important enough to bring to the client’s attention. Typically one or two changes would be made – making the language more consumer (not research) friendly, improving the logical flow, and other refinements. Looking back on it, that process, created long before me, added a lot of value to the research.

In 2001, like most clients, this client decided to transition their research from telephone to online, including the tracking study I managed. When we initially moved the work online, we had an entire team of people review the questionnaire and offer insights on its design, keeping the new technologies in mind as well as improving the respondent experience. We had internal experts discuss the advantages and disadvantages of online research, and we implemented their recommendations. We ran a side-by-side test for over a year and were in constant communication with the client from a questionnaire and design standpoint. Rest assured, we made a lot of mistakes back then and were far from perfect. Sweepstakes as an incentive seems ridiculous nowadays. We transitioned phone surveys to online without pushing back on interview length and didn’t think long-term as much as we should have. But we had a large, diverse group of people focused on the quality of the research who were also advocates for the respondents. Nearly all companies did back then, and the client was very involved in these discussions and decisions. They had transparency throughout the entire process, which made the research more successful.

From 2001 to 2013 (when online research moved from infancy to maturity) I held a variety of positions almost exclusively in online research, from project management to sales to analysis, and was somewhat removed from the quality assurance processes. I know the processes still existed and were important, but I wasn’t as involved. One of my current roles is to assist clients with data quality review. I review data; I review questionnaires; I review research designs at a much broader level than I did early in my career. Instead of managing or seeing 5-10 studies per week, I have the opportunity to review many more than that, across a wide range of objectives and topics. Perhaps it’s the nature of this role, but I feel like the systems we, and lots of other companies, put in place back in my telephone and early online research days are now non-existent. I also take a lot of surveys from non-clients, as I’m a member of numerous panels, just to see what new types of innovations and research are in the marketplace. According to a recent GRIT report, about half of all surveys are not designed for mobile devices, which is completely unacceptable. I can personally testify to how frustrating these surveys are. Online research has made a lot of technology investments in the past few years, and many of these innovations have brought improvements. But we certainly haven’t figured out how to best use this technology to improve survey design and the respondent experience – at least not yet.

Unfortunately, I see a lot of bad research, both in my day job evaluating data quality and when I take surveys in my spare time. I see screeners so obvious that any respondent can get into the survey, and even surveys without screeners entirely. I’m not sure all researchers understand the importance of a “none of these” option any longer. Respondents routinely answer the same question over and over as they’re routed from sample provider to sample provider. And this bad research isn’t just from the companies you would expect – it comes from names all of you have heard of: big brands and big market research companies, along with small businesses and individuals using DIY tools.

At some point along the way, I feel we’ve lost scrutiny over questionnaire and research design, and that is the point of my writing this. A lot of other people have written similar blogs, and while nothing I say may be unique, it needs to be said over and over again until things improve. I recently heard someone say that “the market has spoken” when discussing sample and data quality, meaning that clients, market research firms, and researchers have accepted lower quality in so many areas. Perhaps as an industry we have, but I feel that a lot of the driving principles of research I described above are now non-existent. Do companies still have a thorough QC process? Do clients review online surveys? How many people are involved in questionnaire design? Just last week I led a round-table discussion on data quality, and multiple brands admitted to not reviewing surveys and not looking at respondent-level data. Honestly, it makes me sad, and if you’ve read this far, you should be sad or angry as well.

Perhaps these data quality controls exist at some companies – I bet at the successful ones they do. I’d love to hear from you, as data quality – and, ultimately, clients making better business decisions because of survey research – is a goal of mine.

Having said all of this, I can’t discuss these challenges without a few words of advice for researchers:

  1. I urge you to take your online surveys. All of them. Have your team take them as well. Have someone not associated with the study – or even with market research – test them too. I think you’ll be amazed at what you find.
  2. Use technology to assist. Programming companies have done a great job of implementing techniques to help with the data quality process. They can identify and flag speeders. They can summarize data quality questions. They can provide respondent scores based upon open-ended questions. Become familiar with these tools and utilize them!
  3. Everyone else in the process should take your survey. The client shouldn’t be the only person expected to take it; so should the market research firm, the sample team, the analyst, the QC team – everyone involved. Believe me, you’ll make a lot of recommendations around LOI and mobile design if you do this. Join a panel and take a few surveys each week, and odds are you’ll want to write a blog like this as well.
  4. Know where your sample is coming from and demand transparency. Most sample providers are transparent and will answer any question, but you have to ask. How do they recruit? How does the survey get to the respondent? Are respondents ever routed? Do they prescreen? These are just a few of the things you should understand about the respondents to your survey.
  5. Ask for respondent satisfaction and feedback. Are you getting feedback from respondents about the survey design? Insights can be obtained this way as well.
  6. Don’t remove yourself from the quality assurance role like I did for so many years. Regardless of where you are in the market research process, make sure you understand the quality steps throughout the entire process and ensure there are no gaps.
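As a small illustration of step 2 above, a basic speeder flag can be sketched in a few lines. This is a minimal sketch under stated assumptions: interview durations are in seconds, and the cutoff is one-third of the median duration, which is just one common heuristic rather than an industry standard. The function name, the sample durations, and the cutoff fraction are all hypothetical; fielding platforms implement more sophisticated versions of this check.

```python
# Minimal speeder-flag sketch. Assumes interview durations in seconds.
# The one-third-of-median cutoff is a common heuristic, not a standard.

from statistics import median

def flag_speeders(durations, fraction=1/3):
    """Return a list of booleans marking respondents who finished
    faster than `fraction` of the median interview duration."""
    cutoff = median(durations) * fraction
    return [d < cutoff for d in durations]

# Hypothetical LOIs in seconds for seven respondents.
durations = [612, 540, 587, 95, 630, 118, 601]
print(flag_speeders(durations))
# → [False, False, False, True, False, True, False]
```

A relative cutoff like this adapts to each study's length, unlike a fixed "under N minutes" rule; in practice you would combine it with other signals such as straightlining and open-end quality before removing anyone.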



5 responses to “6 ‘Back to Basics’ Steps Researchers Should Practice”

  1. Good advice, Brian. I also advocate for the Mom test – have your mom take the survey to see if she can understand it. That usually makes it clear where the survey “fails” – either boredom, bad wording of a question, redundant, etc.

  2. Couldn’t agree with you more, Brian. Proud that we played a small part in launching your MR career in those early days and that you have developed such a passion for high quality standards. Client decision-makers depend on our due diligence to get it right. And while times and technology have changed, the principles of ensuring data quality have not. Keep spreading the word!

  3. 100% agree. My concern is…is the battle lost? Because all big MR companies (especially in the US) consider survey data a commodity (super fast and cheap). Most of them use automated tools and APIs to launch studies without any testing at all. But the reality is that designing a good questionnaire is an art, and it is key to understand that respondents are not robots.
