Research on Research Respondents: What Have We Learned?

As always, IIeX was a great experience for the entire Recollective team and for me personally. We made some great connections, learned a lot and participated in some collaborative problem-solving.

Part of the experience for us was hosting a roundtable with people who participate in marketing research studies. We did this to further our ongoing conversation about what our industry is like for them. Our goal is to create a more open dialogue between our side of the industry and theirs; the hope being that we can work towards some foundational shifts in our thinking and find some solutions to our shared problems with quality and trust.

First, as a reminder, this wouldn’t have been possible without the help and support of our industry. While Jessica Broome and I conceptualized this project and brought it to life, we never would have been able to do it without help… lots and lots of help! I’d like to give a personal thank-you to:

To bring you up to speed – we first conducted an online community among people who participated in multiple kinds of research. To read the initial blog posting about those findings, please follow this link.

Next, we ran a 1,500-person quantitative study to further explore and validate what we learned in Phase 1, and also to test a few theories we came up with. Our thinking was that we might want to consider or profile things like participants’ creativity levels, empathy levels and learning styles as we develop our questionnaires and think through how we recruit for different kinds of methodologies. We were fortunate enough to present these findings at IIeX. If you missed our workshop but would like to see the presentation, you can read through our findings here.

The roundtable discussion, with actual research participants, proved to be as fruitful as the first two phases. We had six participants, recruited through a variety of sources and incentivized by Tango Card. Each participant was, again, someone who had taken part in multiple kinds of research. We learned so much from them, and definite themes have emerged across these three phases.

Some of them are:

  • They are all what we consider to be professional respondents, but there is a definite cause and effect at work: they do not feel informed or respected by the way we screen them or by the information we provide about how we select them.
  • Screening takes up too much of their time, so they feel misled and, in turn, try to ‘game’ the system.
  • They also often feel in the dark about why we ask what we do, so they try to give us the answers they think we want. This creates an environment where, at the point of screening, they fudge… not lie, but ‘fudge’. Sometimes, though, that’s because they can’t come up with accurate answers to the ‘impossible’ questions we ask.

It’s my fundamental belief that we, as an industry, already know this and just don’t know what to do about it. So, I’ll throw out a few thoughts and, hopefully, a foundation for some solutions.

If we can’t beat them, join them

  • Instead of trying to trick them with complicated, impossible-to-answer screening questions, can we be more transparent upfront about what we are looking for?
  • We know they don’t like to participate in exercises where they feel they don’t have much to add. If we told them what we need and why (as much as we can), maybe screening would feel less like a chess match and more like a conversation. We could also avoid scenarios where they think we want an expert on a topic, so they educate themselves before the research to be helpful, when we actually wanted a novice.
  • It’s also obvious that these aren’t inherently manipulative or dishonest people; they want to help. So let’s tell them, in a straightforward fashion, how they can.
  • While few things are more important to them than monetary incentives, some things are equally important: they love seeing the results. It doesn’t have to be the final report; just enough to know where they sit in the group or in the survey. We could definitely be doing more of this.
  • We need to talk to them more about us: who we are, why we need them, why the truth is important… We can do this in an automated fashion by embedding video into screeners or surveys. Let’s entertain them and help them understand us.
  • Ultimately, it comes down to respect, transparency and cooperation. These things are the foundation of all good relationships, and this is no different.

Jessica Broome will have more on this, and we’ve got Phases 4, 5 and 6 in the hopper. In the meantime, we’d love to hear from you about what you’d like to learn, so please reach out and let’s further the conversation.

4 responses to “Research on Research Respondents: What Have We Learned?”

  1. These findings really confirm something that I’ve felt during my now-long career in market research, namely that we should be more transparent about what we are trying to figure out and ask respondents for their assistance. Particularly toward the end of a focus group, it’s been my practice to “let my hair down” and say something along the lines of “Look, here’s who our client is and here’s what we are trying to figure out…” Clients tend to resist this sort of disclosure, having bought into the idea that research purity demands respondents be blind. Respondents are eager to be helpful. Why keep them guessing?

  2. Hi Kerry:

    One of the more frustrating things that has happened with surveys over the last few years is that the option for “prefer not to answer” or “don’t know” has been eliminated from many surveys, which forces responses to questions that aren’t applicable from a respondent’s point of view and, perhaps more importantly, may prove a participant isn’t a good fit for the survey. It’s frustrating on both sides. Secondly, many companies use “survey methods” to channel down on product fit, which forces choices that often lead to a bad fit or pressure to buy. Lastly, many people move through repetitive-task buying, like the grocery store, on auto-pilot and as such never see competitive profiles. It is a first-impressions game when they do make a change, often conditional upon labeling. If consumers are happy with their buying choices, they are often seeing competitive products for the first time in a survey. That puts a lot of pressure on the graphics and descriptions, and when they are images only, it’s not a true measure of intent.

    Before Greenfield Online, we had people eagerly signing up for panels and surveys without incentives. Incentives changed things, not necessarily for the worse, but they created two distinct sets of respondents: those who wanted incentives and a higher percentage of people who love or hate products. We lost many of those who wanted to be a part of the improvement process. To be fair, there needed to be a feedback process for that respondent base, but we never got that far, and we largely ignored the experienced buyers in favor of younger buyers who were more savvy at manipulating the system.

    Added to that mix today is the invasive nature of marketing (in news feeds, pop-ups on the phone, store recognition that you are in the store, etc.), which feels more like Big Brother than a way to be informed. It’s in many ways like being “touched” on a 24/7 basis. Five years ago I did a SINGLE Google search on diabetes for my dad, and I am still getting ads for diabetic supplies.

    The thing with surveys is that there is so much color behind the choices. They were great before we had metrics, but their role today has changed, and bucketing responses leaves out a lot of the pathway. Our relationships with companies should be more interactive and less in the way of one-way communication. None of our other online relationships are driven by one party. Dynamic surveys are a new concept, but they offer much value for both parties. People do want to respond when they are given an opportunity to explain, and especially when they are allowed to define what is important. The very basis of research is being open to new perspectives, but that gets lost in old tools. Time is precious to most people; so is being heard. They will invest in a process where ideas can be developed far faster than in a process where we eliminate choices. I agree with your findings and hope that collaboration can happen on processes that are more reflective of how we think and less about what we will tolerate.

  3. Amusingly, most of these problems would be resolved by face-to-face interviewing!

    Is the rush to low-cost, fast-turnaround online panel research, at what seems like increasingly questionable quality, the real issue?

Kerry Hecht

Kerry Hecht

Founder & Chief Executive Officer, Echo Qualitative Project Support