All The Reasons Your Surveys Cannot Be Trusted



Editor’s Note: Today we have the capability to capture behavioral data, both experimental and observed, at a scale and cost efficiency unimaginable just a few years ago. The concurrent growth of new thinking and learning in the fields of neuroscience, psychology, sociology, data science, and beyond certainly calls into question the efficacy and best use cases of the tools used in the past to deliver insight into human decision making. In today’s post (an admittedly provocative one!), Anouar El Haji pushes the debate forward again. Many readers will have strong opinions about the premise here, but it is an important discussion as we continue to explore the evolution of our industry from a methodological and contextual perspective.

By Anouar El Haji

Being human means being quite curious about other humans. Specifically, we’re very interested in why people do the things they do. The better we understand each other’s motivations, the better we can serve each other. We know that at the most basic level (1) beliefs and (2) preferences determine to a great extent what people do. These are the informational building blocks that shape human decision-making.

(Un)fortunately, beliefs and preferences are hidden by default. Despite all our technological advances, humanity hasn’t come close to inventing a device that would allow you to read anyone’s mind. So instead we resort to what seems to be the next best thing: simply asking people what they believe and value, which researchers call survey research.

Survey research makes sense if and only if people honestly report their beliefs and preferences. The value of survey research is directly linked to this fundamental assumption. It’s a fact of life, however, that we have the ability to misrepresent ourselves. And often there are reasons to do so. For example, your willingness to pay for a new luxury watch will probably depend on who’s asking. You might overstate the amount to impress friends, while you would downplay it to negotiate a good deal with the salesman.

Because people are free to misrepresent themselves, the question arises whether surveys can provide an accurate view of what people truly believe and value. Sadly, there’s solid evidence that surveys are unreliable and give a skewed picture. The problem is so systematic that there’s a whole body of scientific studies focused on what’s called hypothetical bias.

The root of this problem is that talk is cheap. In a survey, there are no consequences to misrepresenting yourself. The problem becomes even worse because we like to tell people what they want to hear, a tendency known as social desirability bias. The end result is that survey measurements of beliefs and preferences are often significantly biased. Compare this to making a purchase. If you buy something that you don’t want, you’re going to regret making that decision. So there’s a strong incentive to make decisions that correspond to your true beliefs and preferences. Actions speak louder than words.

This doesn’t mean that no survey question can be trusted. There is no reason to misrepresent, for example, your gender or highest completed level of education. In fact, the answers to these types of questions can be verified objectively. However, questions that require value judgments or the reporting of beliefs are susceptible to bias because these are inherently subjective.

The science of humans is the only field in which the subject matter is able to talk back. So it’s quite tempting to simply trust what people claim about themselves. This shouldn’t, however, prevent researchers from maintaining a high standard to get a reliable view of the world.


24 responses to “All The Reasons Your Surveys Cannot Be Trusted”

  1. Consumer attitudes are relative, not absolute. If samples are representative of your population, then their opinions can be used to understand their motivations, opinions and behavior, so that marketing and PR actions can affect those attitudes, opinions and behavior. We routinely used A&U studies to understand our consumers at Cover Girl and implemented effective marketing programs that predictably moved the needle. Likewise at JD Power, and later at Insights & Solutions, we measured Customer Satisfaction and Vehicle Quality in ways that proved predictive of sales. There is no reason for consumers to lie in surveys, and I have found that they try to be honest, as they are trying to help the manufacturers of the products they buy. Incentivizing those who are not your customers (e.g., paid panelists) and using their responses as predictive of those who are your customers is another issue.

  2. One could summarize this post in the phrase, “talk is cheap.” So is writing, when it’s based on opinion unsupported by fact. However, the author presents a strawman argument that doesn’t apply to any research professionals I know and respect.

    The argument may apply to b-school grads using DIY research tools who think after a course in school that they understand research. That is a problem.

    What are the holes here?

    (1) People lie. Of course they do, sometimes. However, the problem in much research is that respondents simply don’t care about what marketing people want to know and think is important. So there’s no incentive to lie. They may not remember what the question writer wants to know, but that’s different from lying and the distribution of errors is more likely to be random.

    (2) Questioners are entitled to ask any question they want. Bad questions produce trash, not answers. There’s a way to approach sensitive topics to get good data. Veterans know this; most newbies don’t.

    (3) Market experiments are inherently better than other research methods. Again, not true, as you are taking consumers out of context and extrapolating what they say/do in the experiment back to the real world. Every tool has its place. The practitioner and client need to know what is best to use in each situation.

    (4) Hyperbole in promoting a particular technique will garner followers. If that works, I’d be disappointed in the intelligence of the marketing community.

    The research profession doesn’t need hype; it needs credibility. Hype simply subtracts from that.

  3. Hi Anouar,

    Thanks for writing this post. Interestingly, the issue is both more profound and less stark than you have represented it here.

    You have described the “won’t say” issue facing traditional research very well, but perhaps more pernicious is the “can’t say” issue. That is, people often can’t tell you the true drivers of their behavior because they don’t have conscious access to that information.

    And yet, I would simultaneously argue that conscious measures are more useful than you suggest in your post. Beyond accurate reporting of demographic data, conscious measures can be very useful in revealing the whys behind behavior. In fact, in our models we consistently find that a combination of conscious and non-conscious measures is more accurate in predicting future behavior than either type of measure alone.

    I would be interested in hearing your solutions to the problem as well. How does the research industry best deal with this “can’t say/won’t say” issue?

    Aaron

  4. There may be bias at an individual level; however, if 60% of people say they believe surveys cannot be trusted (and there are no other sources of bias), then surely that is interesting to note, and to cross-reference with other data sources.

    The big problem comes when the analyst or business person believes it is really “true” on its own, and in an absolute sense. Beliefs and values are always subjective and therefore are only “true” for the holder of the belief.

  5. I don’t think it is a conscious misrepresentation in most cases, but more that people really don’t think a lot about why they buy products, or even why they like certain colors or styles more than others. We ask a lot of questions and make assertions based on a combination of answers that seemingly (and often) point to a conclusion. The danger in that is that we don’t account for the “behind the scenes” exposure that colors most people’s lives. There are simply too many perspectives and too many pitches passing before us each day for a snapshot approach to behaviors. Communication has changed dramatically over the last ten years, and the sheer volume of information makes it impossible to predict what exposures will occur, much less how they impact an individual’s value proposition.

    Most people have some factors that are non-negotiable but many more that are tipping points. Their decisions are logical up to a point but largely made at the POS and driven by emotion once the basics have been met. For that very reason there is more mission shopping.

    The result for surveys is that people can tell you what they want but not what they will buy. Even when they have bought, they have options like eBay when a product is not what they wanted or newer versions become available; so in fact you are measuring two variables at any point, not one: what they wanted and how they used it. It’s not a static point anymore. Ultimately, that makes survey data very transient.

    If you look at purchase data, especially trending, and couple that with more refined qualitative response, you can get a good idea of what isn’t working. To understand what will work, a more complex type of usability testing is quite applicable. Surveys these days still work well for understanding what has been done, and people can answer those questions just as they did in the past. Perhaps their best use is in understanding how well the employees or bots or whatever you use for customer interaction are working in terms of giving the buyer a way to understand how the product will fit their lifestyle. That may be the key to the castle.

  6. I can tell you, after working in the survey industry for over 40 years and making it a career, that surveys, when done properly, are the closest thing we have to a crystal ball and confessional. Yes, some people lie and exaggerate, but most people don’t. I have done surveys about patients with diseases and have helped get drugs approved by the FDA. I have helped clients win court cases with surveys, testified before Congressional sub-committees, and helped companies with mergers and acquisitions, all based on survey results. When surveys have good objective questions and reliable samples, they produce accurate results and can change the world.

  7. I tire of the “survey research doesn’t work” argument, based on trotting out the old boogeyman of “people don’t always tell the truth.” GreenBook: how exactly is this “pushing the debate forward again”? Of course people don’t always tell the truth; and while we’re at it, let’s be clear that another, more fundamental problem is self-knowledge, i.e. people don’t always know the real reasons they do what they do. Yet, in the real world, survey research that is designed well, executed properly and interpreted correctly continues to deliver practical, effective business insight.

    Winston Churchill is quoted as saying “Indeed it has been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time.” Perhaps it can similarly be said that survey research is the worst method for understanding the reasons for human behavior except for every other method that has been tried.

  8. Hi Anouar,
    thanks for sharing your doubts. I don’t think that surveys are a silver bullet for every research problem. However, surveys are very versatile and can help in a vast number of applications. They are proven in practice, and brands can usually rely on survey results when making important decisions. If you feel uncertain whether the mentioned biases matter, you could integrate survey data with observational data. In my experience, this helps to address some of the biases you mentioned and improves the understanding of consumers.

  9. Brian Lunde says it well. The problem more often is not lies, but that neither the survey designer nor the participant understands the most important things driving the behavior. Yes, through better design you can decrease this error. What most people, researchers included, don’t get is that unstructured/open-ended data is one of the most important aspects of this. An open-ended question can be designed well, and text analytics like OdinText can be leveraged to understand and predict real behavior far better than any number of structured survey questions. Happy to back that up with plenty of case studies and statistics, by the way…

  10. In early 20th century US, the penalty for a child using a naughty word was “having one’s mouth washed out with soap.” I propose a similar ceremony for anyone announcing that they have the ideal research technique, better than anything anyone else may have proposed.

    Research is the study of human behavior. Since humans are imperfect and subject to all sorts of exogenous forces, no research method is going to be perfect. Instead, you choose tools carefully, based on the business problem under study and the client budget. No method gets a pass on this: experiments, text analytics, big data, surveys, IDIs, focus groups. Arguably, the best research draws inferences from across methods and doesn’t rely on a single technique.

    The value of research is a subjective judgment by the client. Have they learned enough to justify the expense? The core of the research business is trust. Without it, there is no justification for any expense on research.

    Hucksters diminish trust in research. We don’t need or want to turn research into the equivalent of 1900s patent medicine.

  11. As a follow-up, I would like to reply to the comments made here and elsewhere about my post on the trustworthiness of survey research.

    I identified the following two main counterarguments:

    (i) Consumers tend not to misrepresent themselves in surveys (Charles Shillingburg, Victor Crain)

    This is simply not true. The problem is so severe that a whole stream of academic literature is devoted to it. For an overview, see: http://down.cenet.org.cn/upfile/53/201012654032130.pdf (“In all cases the results are quite clear: there is a hypothetical bias problem to be solved.”) The solution is straightforward but hardly adopted in the industry: make it real.

    As an exercise, pick a random object from your desk and ask a number of colleagues how much they would be willing to pay for it, purely for research. Immediately after they provide you with an answer, offer the object for sale for the amount they themselves mentioned. Witness how few are actually willing to pay that much for it. That’s indicative of the degree of reliability survey research is able to provide (see the sketch at the end of this comment for how this gap could be quantified).

    (ii) Survey research is good enough (Nick Tortorello, Brian Lunde, Florian Tress)

    Historically, there’s a clear trend toward more trustworthy measures as they become available. I’m not claiming that survey research is completely unreliable. I’m claiming, however, that we can do better, actually a lot better. For example, it used to be the case that website designs were tested with surveys and focus groups, which at the time were good enough. Nowadays, A/B testing is the gold standard for testing changes because it’s basically experimenting with reality itself instead of creating a hypothetical situation. The closer we can experiment with reality, the more reliable the insights.
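    To make the desk-object exercise concrete, here is a minimal sketch (in Python) of how it could be scored. Everything in it is an illustrative assumption rather than data from the literature cited above: each respondent states a willingness to pay, is then offered the item at exactly that price, and the gap between stated values and actual purchases is summarized.

```python
# Minimal sketch (illustrative only): scoring the desk-object exercise.
# All figures below are made-up assumptions, not data from any cited study.
from statistics import mean

# Each record: (stated willingness to pay in dollars,
#               whether the respondent actually bought at that price)
observations = [
    (12.0, False),
    (8.0, True),
    (15.0, False),
    (5.0, True),
    (20.0, False),
    (10.0, False),
]

stated_wtp = [wtp for wtp, _ in observations]
acceptance_rate = mean(1.0 if bought else 0.0 for _, bought in observations)

print(f"Mean stated WTP:         ${mean(stated_wtp):.2f}")
print(f"Share who actually paid: {acceptance_rate:.0%}")

# If stated WTP were truthful, an offer at exactly that price should be
# accepted by (nearly) everyone; an acceptance rate far below 100% is the
# signature of hypothetical bias the exercise is designed to expose.
```

    A fuller incentivized study would compare whole demand curves (stated versus revealed) rather than a single acceptance rate, but the comparison above captures the core logic the exercise relies on.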

  12. I am glad, Anouar, that you don’t believe all survey research is unreliable, or you should not be in the business or call yourself a researcher. I agree that we can always improve, do research better and develop more reliable techniques. For example, I am still not comfortable with cell phone research, and I still don’t believe online research is reliable in certain situations. However, as Victor Crain points out, clients pay their money and make their choices. Clients drive our business and industry, but it is our responsibility to say “no” when they demand unreliable techniques or biased wording of questions. I have left hundreds of thousands of dollars on the table when clients were demanding bad research, bad techniques, and certain results. We need to have values and ethics and reinforce our expertise, rather than be manipulated by clients. This is why our business is in trouble today: too many research firms have over-promised, under-delivered, and provided erroneous research and data. This dishonesty must stop!

  13. I’m going to be less diplomatic than Nick is in his response. You adopted a headline that “surveys are not to be trusted.” That’s a blanket statement from which you are now backing away. It should never have been made.

    The silly paper that you reference in your rebuttal is just that. Respondents react to stimuli, whether the stimulus is presented in the form of a question or an experiment. If the stimulus is biased, the results will be biased. So what else is new? The bias is just as likely in an out-of-context experiment as it is in a survey. My point remains that experienced researchers understand the bias and how to deal with it, both in designing questions and in analysis.

    The paper does not concern respondent dishonesty. There you are wrong. The paper is concerned with the gap that exists between hypothetical preferences and specific preferences. Someone might prefer to own a Porsche, but if the person can’t afford it he won’t buy one. The preference is not a lie. The question that elicits that preference is very poorly framed.

    Now let’s look at Veylinx. You use a tiny panel of 10,000 consumers. If your panel works the way most others do, only a very small proportion of that small sample actually contributes to research results.

    Is the panel representative? Of course not. The mere fact of being in a panel changes how respondents behave. “Panel effects” were first documented circa 1910.

    A cynic will say that many panel members are students, the unemployed, the retired and others short of money. Unfortunately, these people will lie to become part of a panel. I’ve seen it, documented it and obtained refunds from panel companies. I’ve also seen focus group participants lie about qualifications. That happens.

    What should be obvious is that successful business people, the highly affluent and those with active lifestyles don’t sit around waiting for panel invitations. They’re too busy. Many online surveys will miss them, but so will your methodology.

    My takeaway is that I don’t have a reason to trust you, to recommend you to a client, or to want to work with you on a project for a client. My belief is that in trying to create an adversarial relationship with researchers, you are actually hurting your company financially. And your point was what?

    1. Vic, I love and value your participation in the GreenBook Blog community; your experience and perspective add much to the dialogue. However, this comment veers dangerously close to “troll” territory. Calling it like you see it is one thing; personal attacks are another, and in a professional forum for openly debating ideas there is no room for publicly denigrating others. I ask that you please refrain from going there again, no matter how riled up you get by the topic.

      I approved the headline and Anouar’s post for the express purpose of the “shock value” of each, because I think in the era of mass behavioral data collection and live experimentation it’s a point worth making: surveys have their place just like every other tool, but that place is likely declining in importance for a variety of reasons, not least of which is that there are flaws in the method, just as there are in all methods. A few years ago all we had were variations of hammers, so everything looked like a nail. Today we have a rapidly expanding toolbox, and we should be exploring “fit for purpose” for all of these tools, which means some will grow in use and replace others.

  14. I think we can have a spirited discussion without attacking each other on a personal level. As all folks working with stats know, the value of the work is often not as point perfect as one would like when the sample frames and exposures exist within a dynamic environment.

    After 20 years in this business, I have learned that many clients want to prove their point more than they want research that meets the rigor of academic standards. There are also research companies who are more interested in cookie-cutter projects and bottom lines than in data that actually enhances understanding.

    That said, there are many researchers who truly want to find answers that have meaning. I would argue that many of today’s younger researchers are more engaged and more versed in a body of techniques than many older researchers. One of the main reasons I got into research was the idea that what we were doing was actually a legitimate way not only to answer questions with bulletproof advice but also, in its own way, to make the world a little bit better.

    That is still true but it has so much more color than it did 20 years ago and the answers come from many places, not just SPSS or SAS. If nothing else, we have learned a lot about the fickle nature of consumers (ourselves included) and a whole lot more about companies who soar and fall.

    The reality is that research is not one thing anymore nor is it limited to stats or focus groups or any other technique but instead is a living body of work that may include everything from big data to analytics to qual, A/B testing and a whole litany of other techniques as the project requires.

    The smartest researchers I know don’t assume, they inquire. They don’t defend methods, they create answers using the best methods. All techniques have drawbacks and all methods have inherent errors and sometimes the research is just wrong.

    We created a mess by doggedly sticking to techniques that weren’t representative as it became apparent that the true margin of error often lay outside of a study. That doesn’t mean you shouldn’t use research or surveys; it means that there are ways to refine them. What the poster was trying to convey is far from where this discussion has gone.

    If you are a true researcher, you have to admit that the value of research has changed dramatically within organizations. That’s not because the research was bad; it is because companies can see what is happening in real time. Researchers need to see that there are many new ways to look at things that can actually help the industry and move it forward into a new paradigm of business. Our business is predicated on unbiased understanding; how ironic is it that we should be so bad at recognizing our own biases?

    If you believe in research, then you have to believe that change is good and that all facts should be considered. So please, don’t shoot the messenger but instead think about how new and real time information can help inform and how research can move forward within a new and complex world of information.

  15. Lenny, I have no problem with lowering the temperature of the rhetoric and reminding everyone to be civil while disagreeing. I also think that Vic’s frustration reflects a deep generational difference and the continuing upheaval in our industry and business, as well as in society and the world. For example, some media people like headlines that grab attention and purposely cause debate but may not be backed by substance or evidence. Others believe that headlines should be more reflective of the substance and facts in the particular article. In addition, there has been a great loss of respect for older people and for doing the hammering of nails the old-fashioned way. Younger people don’t like to be constrained by what has occurred before them and want to do things differently and, in their eyes, better. There is a lot to be said for all points of view, while not necessarily being politically correct. Passion can be a great strength, but when it rises to zealotry, it can also obliterate other points of view. Bottom line: there needs to be more respect and better collaboration and cooperation between the old and the young, the conservative and the liberal, the Republicans and Democrats, Judeo-Christians and Muslims, social scientists and data-based consultants, etc., or we are all lost and trust is extinguished.

    1. Good points Nick and Vic, and for what it’s worth I would welcome any blog posts from you and others or getting you involved in IIeX as well. I’m going on 20 years in this industry and cut my teeth in research in the days of CATI and mail as the primary methodologies for quant, and have a profound respect for the pioneers of both yesterday and today. I’d like GreenBook to be the connection point between the best of where we have been and where we are going, and your involvement is critical to that happening. So, let me know if you ever want to get more involved; the door is always open!

  16. Nick, thank you. Well said.

    Ellen and Leonard, there’s actually nothing personal in what I said. I don’t know Anouar. I made what I thought was a simple statement that when one uses gross exaggeration to promote a point of view, some people are going to walk away.

    When one introduces a new tool of any kind, you want people to try it and adopt it. We’ve seen how interesting tools (for example, Simalto and ethnography) have languished from lack of adoption. One can make the presentation of a new tool inviting, or try for a splash and drive half the market away. That’s a choice.

    Now one could make the case that one can ignore the research community and pitch directly to CMOs. However, that doesn’t help the case. Few executives like to say they’ve made mistakes, and they have sunk investment in research staff and past projects. So what exactly is a frontal assault on their decisions and commitments going to accomplish?

    For my part, I wasn’t going to respond to Ellen’s comment. I came here to thank Nick, which I have done.

  17. Lenny, thank you for the invitation. I have been processing whether to get back into the public research fray. A lot of the debate right now borders on hysteria. I am not sure whether people are really willing to listen to each other with objective ears.

  18. By the way, when I first got into the research industry with Lou Harris, we used slide rules and an abacus to total data. It took many hours to tabulate a study. We then started computer programming with Fortran IV and COBOL, and punch cards were everywhere. Computers were the size of a small room. I have seen a generation or more of changes in the survey research business, and when changes happen too rapidly there is a tendency to kill the past and jump to the future. However, instead of getting to heaven, you languish in Purgatory! “Between two worlds: one dead, the other powerless to be born.”

  19. Thank you for your conciliatory notes, Lenny, Nick and Ellen.

    As I made clear from the very beginning of my post, I focus purely on the measurement of beliefs and preferences. That’s not coincidental. I became active in the market research space after discovering that companies still use hypothetical measures to estimate perceived value and beliefs.

    My academic background is in experimental and behavioral economics, a field that has, for very sound reasons, rejected hypothetical measures. Behavioral researchers go to great lengths to ensure that whatever they’re measuring is generalizable, and that means it needs to be as close to reality as possible. As Lenny noted, many of the techniques that behavioral researchers have been using are only now becoming feasible to apply in an industry context. At the same time, the field of experimental economics, which systematically and methodologically rejects most forms of hypothetical measures, is gaining a lot of traction among mainstream economists and marketing researchers, as reflected in the increasing number of publications that adopt this methodological framework. My post is merely a popularized summary of what has been common knowledge in this field for some time.
